
Honking for humans

Long stories short

  • Wilko collapsed into administration, leaving 12,500 jobs at risk.
  • China’s consumer prices fell 0.3 per cent year on year in July, tipping the country into deflation.
  • Disney raised streaming prices in the US and launched a cheaper, ad-supported service in the UK.

Google and Universal Music are in talks about a licensing arrangement to let singers’ images and voices be used in AI-generated music, for a fee. 

So what? There’s no deal yet, let alone compensation for Drake and The Weeknd. Earlier this year a deep fake song using their digitised voices was downloaded 630,000 times before it was taken off the platform, costing them around £1,500 in royalties.

But the Google-Universal negotiation is significant even so:

  • For Google it’s part of a charm offensive aimed at becoming an acceptable face of AI, and could yield revenue if Google can take a cut of deep fake user fees passed on to Universal and its artists.
  • For Universal the talks hint at a way to monetise AI even though it’s not a tech firm, and to show it’s fighting for talent at a time when a lot of talent is on strike.
  • For the creative industry more broadly, crying out for AI regulation, they offer a glimpse of a win-win in the absence of timely legislation or case law.
  • For society they show it might be possible to reach an accommodation with deep fakes and other AI-powered plagiarism, rather than always fighting them.

Deep fake love lost. Large language models pose threats to the livelihoods of screenwriters and actors, united under “honking for humans” placards on picket lines in the US. They pose systemic risks too:

  • To market stability – an AI-assisted fake image of an explosion outside the Pentagon caused the S&P 500 to dip in May. The index lost only 0.3 per cent, but that was enough for short-sellers to make money in a market dominated by algorithmic trading that one observer said synthesises headlines and breaks them down into buy or sell orders “on a millisecond basis”. 
  • To consumer security – reports of deep fake imposter scams rose 34 per cent in the US in the three months after the release of ChatGPT (50 per cent in cases involving impersonation of a government official). The ballooning synthetic ID fraud business, which uses AI to turn stolen images into fake passports and licences, is worth an estimated $6 billion a year.
  • To products and brands – if a fake image can fool millions of web users into thinking Pope Francis really wore a white Balenciaga puffer coat this spring, it’s only a matter of time before a bad actor shorts a well-known tech stock, then fakes a battery fire in, say, an eagerly awaited electric pick-up. Isn’t it? 

Deep fakes aren’t innocent even when they’re meant to be the subject of popcorn TV. There’s been a backlash against Deep Fake Love, made in Spain for Netflix, in which participants have to watch their partners in flagrante with other people and decide if it’s real. 

Real world response. Digital fakes aren’t new. David Cope, a Californian professor of music and computer science, co-wrote 5,000 Bach-style chorales with a programme called Emmy in 2005. But with AI, deep fakes are racing ahead of regulation, and non-governmental players are trying to catch up. 

  • On consumer and IP rights: Universal Music’s CEO, Jeff Harleston, implored Congress last month to pass laws protecting artists against the unauthorised use of their voice or likeness in deep fakes; mandating labelling of AI-generated content; and giving copyright owners the right to view the training data in AI models.
  • On due diligence and data governance, the Harvard Business Review recommends careful vetting of sources if building bespoke models; checking of source documentation if using them off the shelf; privacy as the default setting in all work with AI; not exposing sensitive data to large language models if possible; and always updating security software. 

Also, it probably makes sense to lawyer up. Liam Tolen and Chris Fotheringham of Ashfords, the law firm, say UK courts, unlike US courts, don’t recognise copyright infringement if a deep fake lifts only the sound of an artist’s voice rather than actual words. As for video, they offer the hypothetical example of fake footage of Justin Bieber kicking a swan. That could be grounds for a defamation claim – but it hasn’t been tested yet.

Thanks for reading. If you want to get in touch, drop us a line at sensemaker@tortoisemedia.com.