Three years ago I began my own lonely little war with artificially intelligent chatbots after puzzling over coherent but irrelevant answers on a college essay exam. I’ve responded with lots of little hacks, but the war is still expanding on all fronts, barreling toward doom scenarios prophesied by no less than Nobel Prize winner Geoffrey Hinton, who senses just how imperious his digital children might become.
Google released its latest and greatest model last month and made it free to college students. AI writing assistants are weaseling their way uninvited into every writing tool students use. The most malevolent AI overlord could hardly come up with a more insidious first step to domination. Yet the bots are not driving this strategy. Profit is.
I am neither technophobic nor anticapitalist. In seeking to understand what we are facing, I built a rudimentary AI app, proposed a research agenda, and established an AI consultancy. Yet I am first and foremost a professor of history, though more a coach than a knowledge purveyor. I don’t expect students to retain very many facts from my courses; instead, I teach them to assess information. My goal is for students to think creatively and critically, to weigh evidence and draw defensible conclusions. My key revelation has been that students sort of believe us when we preach the good word of honest offline work as the key to learning, but they worry about competing for grades against students taking AI shortcuts. Old-fashioned instruction in the humanities will have great value in an economy where technocratic tasks are mastered by AI systems that, at their core, leverage language as their means to mimic intelligence itself. Our product is a social process. Through student writing in particular, we refine the capabilities of the human brain to respond to great complexity, not only with nuance, but with compassion.
AI systems are trained on the product of such refined minds. Google’s new “augmented textbook” platform proposes to teach people with techniques used to encode its chatbots. Subject to Google’s normal terms and conditions, the purpose of the platform may be less about human learning than about leveraging human feedback to train its own AI models. I am among thousands of scholars to benefit from the Anthropic settlement because our writing contributed to the vast statistical database that constitutes their large language models. Yet this cannot compensate the generations of scholars upon whom our work builds. Scholarly research has never been measurable in terms of market value. The technical magic these companies sell builds upon what for decades were unmarketable and unheralded contributions by Professor Hinton and others who believe in the pursuit of knowledge as a communal value. They leverage language scraped from scholars, novelists, and the most idealistic islands of the internet: the blogs, journals, and wikipedias where people have freely bared their souls and applied their minds. In an age when AI can write code, the high-level explanatory task of writing good specs will be “the new code,” as OpenAI researcher Sean Grove stated in a talk last year. AI systems rely on refined minds not only for their training, application, evaluation, and control, but to gauge their effects on society and knowledge itself.
Texas Tech is not an elitist university; nearly anyone can get in. But despite Austin anxieties, we still offer an elite education for those who seek it, with small classes and direct access to top-notch professors. Yet the hard sell of AI to our busy students, most of whom are hustling through college with side jobs, overwhelms their better angels and undermines their own thinking. They are sorely tempted to produce the paper while skipping the process. Here the expensive elite institutions may be able to respond more effectively, with bespoke solutions for their highly motivated students.
In a blistering finish to a talk three years ago, Yuval Noah Harari hammered home how the monopolization of AI by a billionaire aristocracy endangers democratic society.
We cannot abandon the management of these systems into the hands of a small group privileged to attend elitist universities.
AI models are not mere technical products; they are ideological. Educational priorities premised upon return-on-investment minimize the value of the humanities.
AI will be most useful, and most profitable, in a society of critical thinkers with humane values.
And such a society will be necessary for the survival of democracy in the face of a mediascape saturated by ideologically motivated AI output. The humanities have never been more important.
Paul Bjerk is a professor of history at Texas Tech.
This article originally appeared on Lubbock Avalanche-Journal: Texas Tech history professor on role of AI, importance of humanities | Opinion
Reporting by Paul Bjerk, special for the Avalanche-Journal / Lubbock Avalanche-Journal
USA TODAY Network via Reuters Connect