Unknowing bunnies, artificial MPs and dodgy Airbnbs
It seems that Oliver Richman (olivesongs11 on TikTok) has managed to capture the Zeitgeist recently with his song about AI-generated rabbits jumping on trampolines. The content creator’s musical musings about the nature of reality have spawned a number of tributes, including some rather beautiful instrumental covers.
The message behind the song is pretty simple: if we can’t be certain about the reality of bouncing bunnies or humans that we see on our TV screens, what can we be certain about?
Many of us are worried about a blurring of fact and fiction.
But most of the time - for the moment - we’re still grounded in some kind of reality, while nervously tiptoeing around the edge of a cliff face that we feel might give way at any time.
When we hear that Mark Sewards, the Member of Parliament for south-western Leeds in northern England, has created a virtual version of himself that constituents can consult online, we’re probably not going to panic.
Much like Santa, Sewards is clearly a busy guy who’s spread a bit thin, so an elf-like, AI-powered avatar stands in for him to field questions from his flesh-and-blood electors. With the propensity of LLMs to hallucinate freely, one wonders what exactly the politician will find his alter ego promising, but I don’t think anyone would be in danger of mistaking the chirpy, cartoon-like AI MP for the actual parliamentarian.
Another story emerged this week, which is certainly more serious, but again not much of a worry from the fact vs fiction perspective. A British academic was shocked to get a demand for $12,000 from a New York Airbnb host, who claimed she’d damaged furniture in the apartment after a stay of two and a half months. There were pictures of a coffee table with a crack reminiscent of a disturbance in the San Andreas fault. And the camera never lies, right?
Well, the outraged guest pointed out inconsistencies in different photos, which were strongly suggestive of digital manipulation. Some of the news coverage suggested AI had been involved, because… well… I guess everything has to have an AI angle these days. But if AI were used, it must have been some seriously janky amateur-hour app, as the faked image looked very low rent in comparison to the likely price tag of the Manhattan property.
What struck me about these stories was the fact that they could - even with today’s technology - have been much more concerning. If the avatar of Mark Sewards actually looked like the real politician, for instance, more ethical alarm bells would be ringing. Platforms such as Synthesia or Veed show the direction of travel here.
If an Airbnb host had the awareness and inclination to manipulate imagery in a more sophisticated way, the technology is out there to do it. If you’re claiming thousands of dollars of damage were inflicted on a property, you could even pay someone more expert in AI tools than you to do a professional job on the ‘evidence’.
We know, in fact, that AI is already capable of much more sophisticated fraud. People’s voices can be sampled and manipulated to say anything. Video created on platforms such as Google’s Veo 3 includes synced sound and is increasingly hard to distinguish from real footage. Plausible essays on the philosophers of ancient Greece can be conjured instantly and help present a facade of classical knowledge.
Hell, for one of the previous posts on this Substack, I even asked ChatGPT to create a fake LinkedIn post that would pass cursory visual scrutiny. The ultimate sacrilege.
The worry of many is that we will reach a point where extremely plausible deepfakes go beyond personal disputes and individual identity theft and start to influence whole cultures, shape political reality and lead to wars.
Society has always been confronted with fakery and forgery, of course. A UK election 100 years ago turned on a make-believe letter from Russian Bolshevik Grigory Zinoviev to the British Communist Party, creating a ‘red scare’ that undermined the Labour Party. And talking of 20th-century Russia, The Protocols of the Elders of Zion - a notorious fabricated text plotting Jewish world domination - had only just been declared a fake a year or two earlier, having been circulating for a couple of decades prior.
The human motivation to deceive others has always been there, but there are two major differences today.
The first is obviously the awesome faking power of the technology, meaning that it’s not that easy to ‘see the joins’ or make instant judgments about what we’re reading, hearing or viewing. We have got past the stage when those who are duped can be labelled as stupid or naive. Anyone is a potential victim. I’ve seen people arguing that schools should be teaching kids to spot fakes. To which my response is: best of luck.
The second issue is the democratisation of access to the deepfake technology. It is in the hands of rogue governments, unscrupulous businesses, terrorist organisations and political extremists. It resides on the smartphones and laptops of cranks, grudge-holders and misfits. It inspires practical jokers, serial scammers and propaganda peddlers.
When Nina Schick published her book Deep Fakes and the Infocalypse back in 2020, she warned of a looming ‘f***ed-up dystopia’. While she identified ways in which we could unite against it, little progress seems to have been made in the intervening five years.
We know that bots can inhabit unknowing rabbits. We’re pretty confident that the guy singing about the rabbits is real.
For now.