There are about 100 billion neurons in the average human brain. Between them, by one estimate, they can perform a thousand trillion mathematical operations per second.
So what? Powerful computers nowadays can roughly match the brain in terms of flops (floating-point operations per second). Supercomputers can work even faster. Which means that when Elon Musk tells Rishi Sunak AI is bringing humanity to a point “where no job is needed” – where AI ends the need to work – he could be right.
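The comparison above can be put in rough numbers. This is a back-of-the-envelope sketch, not a rigorous benchmark: the brain figure is the article's own estimate, and the supercomputer figure assumes Frontier, the leading machine as of 2023, at roughly 1.1 exaflops.

```python
# Rough comparison of the brain estimate against a 2023 supercomputer.
# Both figures are approximate; brain "operations" are not directly
# comparable to floating-point operations.
BRAIN_OPS_PER_SEC = 1e15   # "a thousand trillion" operations per second
FRONTIER_FLOPS = 1.1e18    # Frontier supercomputer, ~1.1 exaflops (assumed)

ratio = FRONTIER_FLOPS / BRAIN_OPS_PER_SEC
print(f"Brain estimate:  {BRAIN_OPS_PER_SEC:.0e} ops/s")
print(f"Frontier:        {FRONTIER_FLOPS:.1e} flops")
print(f"Frontier is roughly {ratio:,.0f}x the brain estimate")
```

On these figures the fastest supercomputers are already some three orders of magnitude past the brain estimate, which is why "can roughly match" understates the hardware side of the story.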
The future is unknowable. In a sense it’s already upon us. McKinsey has brought forward its estimate of when computers can emulate “median human performance” in natural language from 2027 to this year.
But Musk is, more likely, wrong.
- There is little evidence so far that AI is a job-killer.
- There’s plenty of evidence that it’s a job creator.
- And there’s plenty of evidence that people misunderstand what AI actually is.
What AI is. It’s the bleeding edge of information technology at any given time, and has been since the 1990s. It has seen big leaps forward, but it’s not new or fundamentally distinct from what came before. It has evolved:
- Expert Systems. 1990s artificial intelligence consisted of manually curated computer programmes that followed logical rules defined by humans. They were good at solving narrow, limited problems (like beating Garry Kasparov at chess in 1997) but they struggled elsewhere. In the 2000s and 2010s expert systems were superseded by…
- Machine learning. Statistical models trained on data that can “learn” a solution to a problem.
- Neural networks. The most advanced machine learning models are loosely modelled on the architecture of the brain. These steadily became more powerful over the 2010s, driven by the exponential growth of data and computing power available for training. In 2013 the most powerful neural networks had up to 60 million “parameters” (connections between computer “neurons”). OpenAI’s GPT-4 is estimated to have over a trillion.
- Foundation models. Around 2020, researchers at OpenAI and Google discovered their most powerful neural networks were suddenly showing a new capability. Models trained on truly vast amounts of data were able to “generalise”, understanding natural language and performing general tasks when instructed. These came to be known as foundation models, which are the cornerstone of modern AI and the basis of…
- Generative AI – artificial intelligence that can create new text, images and audio based on instructions, including deep fakes.
- Large Language Models are a subset of foundation models trained on text data. The most famous example, GPT-4, powers OpenAI’s ChatGPT. Competitors include Claude 2 (from Anthropic) and Grok-1 (from Musk’s xAI).
- Frontier AI. The latest foundation models’ capabilities aren’t fully understood even by researchers. The UK’s AI summit last week defined these systems as frontier AI.
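To make the “parameters” jump concrete: a parameter is a learned weight, and for a simple fully connected network the count follows directly from the layer sizes. The sketch below is a toy illustration of that arithmetic only – real systems like GPT-4 use transformer architectures, and the layer sizes here are invented for the example.

```python
# A parameter is a learned weight. In a fully connected layer,
# parameters = inputs * outputs (weights) + outputs (biases).
def dense_layer_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

def network_params(layer_sizes: list[int]) -> int:
    """Total parameters in a stack of fully connected layers."""
    return sum(dense_layer_params(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

# Toy network: 784 inputs -> two hidden layers of 512 -> 10 outputs.
toy = network_params([784, 512, 512, 10])
print(f"Toy network: {toy:,} parameters")  # ~670,000
```

Even this small stack has about 670,000 parameters; scaling the same idea up is what took models from tens of millions of parameters in 2013 to the estimated trillion-plus in GPT-4.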
Not so scary. Musk predicted in his session with Sunak last Friday that AI would become “smarter than the smartest human”. If so, all bets are off. Until then, history suggests it won’t be a net job destroyer because technological advancements tend to create new jobs as they automate away old ones:
- The World Economic Forum estimated in 2020 that AI would take away 85 million jobs globally by 2025, but generate 97 million new jobs in data and computing.
- A 2021 report by PwC for the UK government said that while AI’s impact on jobs was still unclear, the most plausible scenario was of “broadly neutral long-term employment”.
- Wells Fargo, the American bank, surveyed US productivity advances and related growth in employment and GDP over the last century, and concluded in a report published in August that “mass unemployment and declining living standards as a result of generative AI seem unlikely for the foreseeable future”.
Are we sure? No. Advances in AI capabilities in 2023 have been unprecedented. The jury’s out on whether computers have passed the Turing Test yet (by fooling humans into thinking they’re chatting with other humans in blind tests), but McKinsey’s research is telling. Not only has it brought forward its estimate of when computers will convincingly emulate humans to this year; it has also raised its estimate of the share of working hours that could be automated across all jobs from 50 to 60-70 per cent.
Time to reskill.
Also, in the nibs
New from Tortoise
Should pro-Palestinian marches go ahead on Armistice Day?
James Harding and the Tortoise team are joined by economist Ian Goldin to discuss the pro-Palestinian protests in the UK, attitudes towards immigration and whether we should feel sorry for Sam Bankman-Fried.