AI: the whirling vortex that sucks us all in
I’ve written already about the madness of school work produced by AI being marked by AI. The student pretends that they know something, while the teacher pretends the student isn’t pretending. It’s an educational death spiral that can only end in tears for all concerned.
But there’s a parallel charade currently being played out in another arena. Job applicants have discovered that their CVs and covering letters for vacancies have an extra sparkle and allure if concocted by ChatGPT or a similar large language model. Perhaps they feed the bot some raw material and ask it to make improvements. Or maybe they outsource the whole shebang.
Employers look askance when these factory-produced applications start clogging up their inboxes and tut-tut at the idea that a candidate can’t be bothered to produce something of their own. But pots and kettles spring to mind, as recruitment consultants and HR people are frequently using AI to filter the applications.
I was reading recently on LinkedIn about a recruiter who feeds the job description for a role into chatty g, and uploads CVs too, which he claims to anonymise. He then gets the hallucinatory chatbot to rank the candidates in order of their likelihood of succeeding in the role.
Whether the process of LLMs crunching human-originated CVs has any ‘predictive validity’ - the term used by psychologists to describe the statistical likelihood of tests having any actual relationship to eventual performance in the job - is certainly worthy of research. But once we accept that many of the CVs may have been generated by ChatGPT, my hunch is that we might as well allocate interviews on the basis of star signs.
And consider another crazy layer to this. The guy who’s using the LLM to filter all the CVs is replacing an important part of what used to be his job. It may save him so much time and effort that he’ll soon be redundant and in need of ChatGPT to update his own CV.
Is it just me, or is there something about the circular, self-fulfilling feedback loops created by AI that feels profoundly troubling?
If you know that your CV is going to be appraised by AI, you start modifying it to reflect what AI is looking for. And what’s the best way of modifying it appropriately and successfully? Why, using AI, of course! Especially if all the other applicants are going to be using it. You don’t want to be at a disadvantage.
When Microsoft recently laid off thousands of people (in no small part due to the growth of generative AI in the gaming industry), an executive had the nerve to suggest that the redundant workers use products such as ChatGPT and Copilot as a counsellor and job coach. No doubt the response of most former workers would have consisted of a phrase with the same syllable count as Xbox, but rather more X-rated.
We can see exactly the kind of ways this is likely to unfold, can’t we? We’ve entered a world where AI causes problems to which AI is the only plausible solution.
Artificial intelligence is going to drive a huge spike in psychological insecurity, for instance. People will fear for their jobs and livelihoods in the years ahead, as the technological revolution gathers pace. Deepfake video content will meanwhile blur the line between fact and fiction; people will enter into relationships with AIs; and social interaction between humans will become even more difficult and fractured.
As sure as eggs are eggs, society’s pre-existing mental health crisis will deepen. And we’ll be told that the only practical answer is AI therapy. It’s like the shop owner who sees a couple of thugs smash up some of his stock and is then told that he needs to pay them a retainer for ‘protection’.
And here’s another scenario, which is almost certainly already playing out.
Someone has an unsatisfactory experience as a customer, perhaps finding that a product or service is not as described or depicted by the AI imagery and copy that was used to promote it. They want to complain and get a refund, but struggle to formulate their message, so they turn to AI. It helps make their case seem clearer and more plausible.
Perhaps they imagine a person is reading at the other end, but in reality it’s an AI agent specifically tasked with handling complaints. This agent creates a bespoke response appropriate to the specific circumstances. This reply then goes back to the customer in the form of an email or instant message. And the built-in AI on the email or messaging platform summarises everything neatly into a couple of sentences for the complainant.
A human may be at the heart of the dispute, but becomes a bystander to its resolution - partly through convenience and partly through lack of other options. AI might have formulated the original product messaging, structured the initial complaint, interpreted the complaint, produced a response, and then interpreted that response.
The redundant human imagines they remain in control, even as they are sucked into an ever-swirling, ever-whirling AI vortex.