Yesterday in Davos, Max Tegmark, a professor at the Massachusetts Institute of Technology, was offering personalised deepfakes to anyone willing to be filmed for a 15-second video.
So what? Whether they were believable is up to the viewer. But they were free, virtually instant and powered by easily trained, off-the-shelf AI tools that are the consuming obsession of this year’s winter meeting of the World Economic Forum (WEF).
And not just the WEF:
Disinformation pie. Deepfakes are only one slice of a giant pizza of lies on offer across the digital universe, including doctored images, human-created fake news and misinformation masquerading as advertising.
But it’s a fast-growing slice: from 2022 to 2023 the volume of deepfake sexual content grew by 400 per cent and of deepfake fraud by 3,000 per cent, according to Control AI, a campaign group. Most deepfakes currently contain sexual material but politics’ share is likely to grow in a year when 4 billion people in more than 70 countries will be invited to vote.
Muck / brass. Digital ad money ($680 billion last year) follows eyeballs, which tend to follow the three As:
These are most reliably generated by incendiary or shocking content created without regard for truth. So disinformation is “a likely and predictable outcome of this market system,” as Carlos Diaz Ruiz writes, not an unexpected side-effect.
Been there. In 2016, when Trump last ran for president, a group of Macedonian teenagers claimed to be making $5,000 a month each from ads next to fake news stories about looming Hillary Clinton indictments. Eight years on, YouTubers peddling bad science and conspiracy theories are making multiples of that.
Deepfake supply chains. In the UK, 96 per cent of web users want deepfakes banned. A serious attempt to make that happen would tackle what Tegmark calls the entire deepfake supply chain, from two-bit developers to megalithic deployers. As with malware and online child abuse imagery, it would become a crime to create or distribute deepfakes.
Google that. Plug “how to make a deepfake” into your favourite search engine and it’s likely to return a mixture of handy how-to guides, links to commercial suppliers and news articles on how easy deepfakes are to find, and how hard to outlaw.
Google’s James Manyika says most deepfakes are produced by a “long tail” of small creators whose content the platform can neither watermark nor reliably control in search results. On misinformation more broadly, Google executives at Davos say they’re trying to protect elections, not undermine them; that curbing the distribution of misinformation is “a continuing research challenge for the entire industry”; and that they’re using good AI to help detect bad.
And yet. Neal Mohan, CEO of YouTube, which Google owns, admits the generative AI used in deepfakes “will be in the hands of bad actors and it will make the cost of producing that kind of content zero”. So will AI outsmart democracy before it outsmarts humans? Tegmark answers with a question: “Isn’t it already?”
Despair not. Manyika says within our lifetimes, thanks to generative AI, we should be able to communicate with dolphins in their own language.
In the meantime, here is your reporter.