Candles in the wind tend to be rather vulnerable. Easily extinguished before they’ve had a chance to shine. And I guess that’s what veteran artist Elton John had in mind when he spoke out over the weekend on the question of AI and copyright.
The Honky Cat singer made his fortune long ago, but can see that our current and future generations of musicians face the bleak prospect of their work being scraped by big tech. No recompense or acknowledgement. Their unique creative accomplishment crunched up at a recycling plant and then reassembled.
In my extended essay AI and the redundant human, I use the analogy of the Star Trek transporter, which disassembles people at a molecular level and then reassembles them. In most instances, the Starfleet officers emerge unscathed and recognisable. It’s only if the tech goes wrong that we might find them transformed into some uncanny valley version of their former selves.
With generative AI, the whole purpose of the technology is to create something apparently new in response to a prompt. The end result may be a pastiche of someone’s work, as in the Studio Ghibli controversy, or it might be much more subtle. Sometimes we may know the output is a direct result of appropriated work and other times, it may be extremely hard to tell because it is actually an amalgam of multiple inputs.
In terms of political debate in the UK, there is a stark division between those - such as members of the House of Lords - who believe that AI businesses should declare the copyrighted work they’ve hoovered up, and representatives of the government.
The Starmer administration seems determined to avoid restricting the large tech businesses who sponsor the development of artificial intelligence. This is driven by a desperate need for growth in the economy to fund social programmes. (And also a desire to appease the Trump administration with its wrecking-ball tariff threats.)
This debate goes far beyond individual court cases or even specific pieces of legislation. My prediction is that it will run and run. There is also a fundamental problem: just about everything ever produced to date has already been purloined by the AI companies. Attributing value retrospectively seems like an almost impossible task.
In other recent news, a student at the private Northeastern University in Boston, Massachusetts, wants her money back after discovering her professor was using ChatGPT. Ella Stapleton noticed some odd quirks in the materials produced by Rick Arrowood for his lectures and presentations, suggesting a helping hand from AI, despite the lecturer’s apparent aversion to its use by the students.
The full impact of generative AI on education has yet to emerge, but my suspicion is that it will be profound and paradigm-shattering.
In a world where students can produce work at the touch of a button, the whole raison d’être of the system starts to disappear. That’s because the work itself was never the point. It simply served to demonstrate something deeper: that students had read and internalised knowledge and then reformulated it or critiqued it in some way themselves. That process was the education.
This is as true of 10-year-olds and 15-year-olds at school as it is of a 20-year-old university undergraduate.
For years, we discouraged students from plagiarising academic content. We also told them to avoid relying on web-based platforms such as Wikipedia. The objection in higher education would have been that such sources weren’t authoritative, but there was another issue at play. We know that when people cut and paste, they don’t think. They have made no effort. This negates the purpose of the whole exercise and devalues the work of students who have done the reading and thinking.
When students use AI to generate their essays and papers - perhaps shortly before a deadline - they deliver something plausible and appropriately coded. They also, however, pave the way for their own redundancy. That’s because in a workplace, anyone can press a button. No graduate required.
Of course, we might argue that the graduates will be uniquely placed to critique the output of the AI. But having gone through a school and university system in which such thinking was constantly outsourced to a machine, I wouldn’t count on it.