“It’s the economy, stupid.” In 1992, James Carville summed up the essence not only of Bill Clinton’s US presidential campaign, but of a whole era. Until the first decade of this century, economics was the driver of political action. The left advocated for workers and broad redistributive policies. The right argued for less government and a wider role for the private sector.
In the last decade, the old boundaries have blurred. A backlash against the inequalities created by globalisation has shifted political debate away from economics and towards identity. Culture wars have become mainstream. Democratic societies are splitting into ever narrower segments – each demanding respect from the mainstream.
But we are in danger of missing the change that is reshaping our lives and will, arguably more than economics or culture, rewrite politics. It’s the technology, stupid.
As we learnt at the Responsible AI Forum, the potential of artificial intelligence is dazzling.
Deep learning algorithms are developing drugs, solving protein folding, reaching towards answers for fusion and accelerating net zero, figuring out seemingly impossible mathematical problems, slashing the costs of logistics, revolutionising education… the list goes on. AI can’t solve every problem, but it is particularly good at prediction, and at its best when dealing with huge, complex combinations of data. And it won’t be stopped by sci-fi fears of the singularity – the moment when computers overtake humans – or by 21st-century Luddism. It’s a step change in civilisation and, overwhelmingly, for the good.
But AI also raises critical dangers. As SoftBank’s Masayoshi Son warned, cyber criminals have the power to bring a “dark night” to societies, i.e. to switch the lights off and shut down infrastructure systems. Sir John Sawers, the former MI6 chief, was right to note that it will consolidate the control of autocrats. More worrying still, Stuart Russell, professor of computer science at Berkeley, fears the potential of AI-driven “killing machines,” and Tom Glocer, the founder of BlueVoyant, warns of AI-accelerated biohacking. These are dangers that are criminal, lethal and, at their very worst, existential.
Policymakers also have to deal with the profound economic distortions that AI has already started to create. Like the car, AI will remake economies rather than just be part of them. Yet governments don’t have the tools to handle a fully fledged AI economy. As Azeem Azhar pointed out, our laws and even our language are not designed to regulate a marketplace where thousands of niche sectors could each be dominated by one or two firms.
The geopolitical implications are also unclear. AI is remaking the world map. It’s already a world of two systems, the US and China. Europe might be taking the lead in AI regulation, but as our Global AI Index shows, it’s losing ground in AI investment. If trends continue, Europe might not have much of an industry to regulate. The US/China AI duopoly puts private companies in a bind: Ash Fontana at Zetta Venture Partners says that if you work in the AI world, then sooner or later you will have to choose between the US marketplace and China’s economic system.
At Tortoise we do not presume to know the answers to these questions. But we’ve built an AI Forum to allow policymakers, AI entrepreneurs and scientists to discuss them. If you haven’t joined already, and would like to, please get in touch here.
In 1944, 44 Allied nations gathered at the Mount Washington Hotel in Bretton Woods, New Hampshire, to negotiate a new monetary order in the wake of the Second World War. It may be that a similarly comprehensive agreement is required to ensure the responsible deployment of AI in the years to come.
Demis Hassabis’s plan for a world institute of AI – bolstered by cutting-edge researchers – is compelling. Alternatively, the UN could take the lead and publish global AI principles similar to its 17 sustainable development goals.
Before any grand bargain, however, the most important immediate task is to identify what AI should not be used for. As Tom Hurd, the former head of the UK’s joint biosecurity centre, said, the world should agree never to sell AI-based military systems into conflict zones. There should always be a human hand on the nuclear button, he added. And he also called for a global alliance on bio-terrorism.
This is just a start. We are keen to learn more about the machines that can learn.
Find more information about becoming a member of the Tortoise AI Network – which gives you full access to the video recordings of all the sessions as well as invites to a host of member events – on our website, or please contact Alexandra Mousavizadeh at email@example.com.
Watch the highlights from the Forum below.