When I published my short book on AI and human redundancy last month, I wanted to examine the issues in their broadest sense. I’m not just concerned about jobs disappearing (although they most certainly will) and erstwhile employees having no means to earn a living (disastrous though that will be), but also about people’s inability to contribute to society at an intellectual, cognitive or creative level.
The New York Times is reporting that OpenAI wants to muscle in on university campuses with a matriculation-to-graduation embrace of students and professors. The article conjures up a world of faculty members being forced to ‘teach’ students via customised bots and everyone making use of Sam Altman’s ChatGPT Edu service. Cal State has apparently already given Chatty G to some 460,000 students across 23 campuses.
My own alma mater, the London School of Economics, announced in April that it was partnering with rival company Anthropic to provide students with access to ChatGPT’s close cousin Claude.
Apparently the large language model will be there to guide their reasoning process rather than provide answers, which is a relief. The motto of the School is, after all, rerum cognoscere causas - to know the causes of things. And if a technology comes along that allows you to skip reading, skip thinking and move straight to a fully-formed output, cynics might start to worry about the motto changing to nihil scire - to know nothing.
What do university lecturers and school teachers really think of the extremely rapid changes being forced upon them by their institutions, or, for that matter, of the voluntary adoption of LLMs by their classes? Well, 404 Media in the USA published some truly heartbreaking interviews with education professionals earlier this week.
The accounts are worth reading in full if you have a strong stomach, although journalist Jason Koebler summarises the situation very graphically. “They describe trying to grade ‘hybrid essays half written by students and half written by robots,’” he recounts, “trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate.”
Students, of course, see the arrival of artificial intelligence as a welcome phenomenon which takes some of the heartache and hassle out of higher education. Read the comments of Sunjaya Phillips, for instance, who talked to the BBC recently about her experience at Oxford Brookes University in the UK. “It’s definitely transformed my academic experience,” she says, describing an “open conversation” with the institution about how the technology can be used to “structure assignments” or suggest “creative ideas”.
And here’s the dilemma for higher education. If you deny students the right to use technology that is widely available, you’re seen as being part of a modern Luddite movement. If you allow them to use it, you give them a shortcut through the very elements of higher education that are most valuable: reading, digesting, thinking and interpreting. To use an analogy, we might all appreciate the pub lunch at the end of a five-mile country walk, but the point of the walk was actually to take in the flora and fauna and not to transport yourself directly to the boozer.
Talking of boozers, the discussion we’re going to see in the coming years will be so circular and insane that it would drive a philosophy professor to drink.
The academic institutions will know that their whole raison d’être is challenged and undermined by Mr Altman’s hallucinatory products, but they will be forced to claim disingenuously that AI can be a useful aid to scholarship.
There may be an element of truth to this if a postdoctoral researcher is using bespoke models to speed up discovery in microbiology or a geographer is looking for patterns in urban transport data. But it won’t be true for undergraduates who are learning the fundamentals of their subject for the first time. And it won’t be true if the use of AI consists of students prompting ChatBloodyGPT on the history of the Balkans or the impact of female entrepreneurs on US business culture or the philosophy of Immanuel Kant with five minutes to spare before an essay deadline.
Students will kid themselves that they are gaining an education that is equivalent to the one that previous generations received, because otherwise the tens of thousands they’re investing will seem rather pointless. Those who fear a different reality - that ChatGPT, Claude and Gemini are in danger of replacing their true learning rather than enhancing it - will be trapped into going along with the pretence, knowing that it leads to a certificate.
Perhaps most worryingly, lecturers will feel compelled to go along with it too, observing that their institutions have endorsed the technology and may even be actively encouraging staff to incorporate it into academic life. Challenging this premise might see you labelled a trouble-maker, or passed over when the next departmental promotion comes around.
Increasingly, new cohorts of university entrants will have come through school systems which themselves may have been encouraging the use of AI. So the ability to internalise knowledge, interpret sources, express an opinion and structure an argument may have been outsourced to a machine from an early stage. At the very least, this is an extraordinary social and psychological experiment, completely without precedent. It might end well, I suppose, but excuse me if I’m somewhat sceptical. Remember social media? That was the last big universal tech experiment unleashed on society without debate.
Proponents will say that AI is going to be a part of everyone’s working lives, so it’s vital that we prepare young people in schools, colleges and universities to use the tools. But this technology is the very thing that, according to Silicon Valley, is going to erase people’s working lives. And can we be sure that those who do still find employment when they graduate will have the independent knowledge, skills and intellectual curiosity to challenge and scrutinise anything produced by AI?