Three dangerous myths about AI and jobs
When confronted with massive change and disruption, it’s only natural for human beings to become disconcerted and worried. If we haven’t yet been personally affected but can see danger looming, we tell ourselves stories that soothe our nerves and convince us that everything will work out for the best.
A hurricane is blowing in? Well, maybe it will veer off course and miss us entirely. Perhaps it will end up as a Category 3 rather than a Category 5. And if it does hit, our makeshift boards and sandbags will ensure we’re safe.
The problem with our infinite capacity for myth-making and our desire for a comfort blanket is that we’re left blindsided when the crisis actually hits. And that is exactly how things seem likely to play out with artificial intelligence.
Myth Number One: “It’s just a tool…”
This is such a popular mantra on platforms such as LinkedIn that it has almost become folklore. We used a calculator in school. We were given a laptop to use at work. This is just more of the same.
Well, leaving aside the obvious point that it is an extremely powerful tool compared with a calculator - and we wouldn’t let a bunch of schoolkids loose with chainsaws - there’s something else we’re in denial about. Tools are things we control. I choose whether to use the calculator and what I ask it to calculate. And because some AI currently works much like this (think of people’s prompts to ChatGPT or Runway), the assumption is that it will always be that way, or that this is fundamental to all AI systems. In fact, AI is very soon set to become largely ‘agentic’ and autonomous.
Erik Pounds, Director of Product Marketing at Nvidia, the American company whose technology helps power the AI revolution, defines the agentic version of the technology as one that “uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems…” In practice, he says, this means the model can perceive, reason, act and learn. An example of acting might be a “customer service AI agent… able to process claims up to a certain amount”. And this is why the technology will prove a major driver of human redundancy.
Not only will very few people be needed to prompt the AI tasks that humans do directly control, but many AIs will also be quite happily working away without any need for human input at all. I’m not sure this penny has quite dropped yet.
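For readers who like to see the mechanics, here is a minimal sketch in Python of that perceive-reason-act-learn loop. To be clear, the AutoClaimsAgent class, its method names and the approval threshold are all my own hypothetical inventions for illustration, not Nvidia’s or any vendor’s actual system.

```python
# Illustrative sketch only: the class, methods and threshold are hypothetical,
# not any vendor's real API. It mirrors the perceive-reason-act-learn loop
# described above, for a claims-handling agent.

AUTO_APPROVE_LIMIT = 500.00  # hypothetical "up to a certain amount" threshold


class AutoClaimsAgent:
    def __init__(self):
        self.history = []  # past decisions the agent could learn from

    def perceive(self, claim: dict) -> dict:
        # Read the incoming claim; no human prompt is involved.
        return {"amount": claim["amount"], "category": claim["category"]}

    def reason(self, facts: dict) -> str:
        # A real agent's multi-step planning would live here;
        # a single rule stands in for it.
        if facts["amount"] <= AUTO_APPROVE_LIMIT:
            return "approve"
        return "escalate_to_human"

    def act(self, decision: str, claim: dict) -> None:
        print(f"Claim {claim['id']}: {decision}")

    def learn(self, facts: dict, decision: str) -> None:
        # A real system might update a policy or fine-tune; we just record.
        self.history.append((facts, decision))

    def run(self, claims: list[dict]) -> None:
        # The loop runs autonomously over a queue of work.
        for claim in claims:
            facts = self.perceive(claim)
            decision = self.reason(facts)
            self.act(decision, claim)
            self.learn(facts, decision)


if __name__ == "__main__":
    AutoClaimsAgent().run([
        {"id": 1, "amount": 120.0, "category": "travel"},
        {"id": 2, "amount": 4200.0, "category": "property"},
    ])
```

The detail that matters is the run loop at the bottom: nobody prompts it. Work arrives and the agent disposes of it, which is precisely why fewer humans are needed.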
Myth Number Two: “Your job won’t be taken by AI, but by someone who knows how to use AI…”
This myth is closely intertwined with Myth Number One. There are many people on social platforms who advise you to dive deep into artificial intelligence and learn how to use the different models. If you don’t, you’ll be left behind.
Like a lot of beguiling arguments, it contains a grain of truth. Clearly, if you don’t know the first thing about AI in 2025 and beyond, you’re going to find it very hard to get on in many workplaces. It’s the equivalent of turning up for a job interview in 2005 and admitting you’d never used Google or were unfamiliar with Microsoft Word. Knowing something about AI will probably be a prerequisite for employment in many sectors, but it will just be a hygiene factor in a job application and really won’t distinguish you from anyone else.
There will be an elite group of people who know AI at a technical level, with an understanding of machine learning and deep learning, and these folks will do very well in the next few years as companies advance towards what’s known as Artificial General Intelligence (AGI). And then there will be everyone else, whose jobs aren’t related to the technical aspects of AI but might involve using prompts to generate text and video, or perhaps to analyse data.
AI systems are already very good at interpreting conversational language, and the input needed from humans is far from demanding. I see ads all the time telling me I can create working apps from a few short sentences. (Currently, I take these claims with a modest pinch of salt, but this world is clearly just around the corner.) In a very short space of time, AI will be intuitive enough that your granny or grandad could use it if they so wished. Interactions will be initiated by simple written or verbal prompts in everyday English or Polish or Mandarin.
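As a small illustration of how low that bar already is, here is a sketch using OpenAI’s Python SDK. My choice of provider is an assumption on my part - any major model vendor offers an equivalent - and the model name and prompt are arbitrary examples. The entire “skill” required amounts to one plain-English sentence.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment. Model name and prompt
# are arbitrary examples, not recommendations.
from openai import OpenAI

client = OpenAI()

# The whole interaction is a single plain-English sentence.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Summarise the main arguments for and against remote work in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```

Nothing here requires knowing what a transformer is, or indeed anything beyond typing a sentence; that is the point.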
Myth Number Three: “Jobs will go, but new ones will be created…”
This is another one that sounds superficially plausible. I point out in my book AI and the redundant human that previous technological revolutions have always created new roles. If we go back to the early 2000s, people started losing their jobs at traditional travel agencies, where customers would browse brochures, sit down with agents in garish uniforms, and book holidays. But when those agents got their marching orders, they might end up managing the website for Trivago or working in customer service for lastminute.com.
The online revolution created a whole raft of occupations that hadn’t previously existed, so why wouldn’t this next one? And the fact that we can’t yet imagine what those jobs will be shouldn’t worry us. After all, who would have known what a social media manager was in 1995?
My prediction is that artificial intelligence will prove to be fundamentally different in this regard. It is a technology that can potentially compete with, or outperform, humans in the widest possible range of cognitive and creative tasks.
Let’s say, for argument’s sake, that a lot of roles in business consulting or finance disappear because AI can do the work of humans more quickly, cheaply and efficiently. It’s then conceivable that, because of its immense power, the AI spots new consulting niches or invents novel financial products that we’d never previously imagined. Great news! But on what basis would we assume that humans would be needed to deliver them, any more than we were needed for the work we previously did?
As AI generates new opportunities to meet market needs, solve problems or make money, it will be AI that will make those opportunities a reality. And if AI generates economic prosperity, it will - as Geoffrey Hinton has warned - do so in the context of a fundamentally unequal world. A few people will get immensely rich, while many others will find themselves struggling to make a living, necessitating a universal basic income. Who pays for that income is going to be a hot topic in the years ahead.