
Scarlett Johansson: safety catch is off at OpenAI
Scarlett Johansson deepfake complaint could be a foretaste of dystopia.

Scarlett Johansson has said she was “shocked, angered and in disbelief” when OpenAI released a chatbot that sounds “eerily similar” to her.

So what? It’s this month’s second-biggest story about OpenAI. 

  • The biggest involves the departure of two top executives in charge of long-term safety.
  • This leaves the world’s most powerful AI company with no one leading efforts to align its goals with those of real people.
  • In AI parlance, these people ran a “Superalignment” team tasked with ensuring that state-of-the-art AI systems don’t turn on their creators and destroy humanity – and they’ve quit.

Johansson meanwhile feels violated by the apparent theft of her famous voice. Her story may be a taste of things to come.

Not-so-Open AI. Nine months ago, Johansson received an offer from OpenAI’s CEO, Sam Altman, to voice the company’s products. She turned it down for “personal reasons”. But then:

  • Hi, Sky. During a live demo of the latest model of OpenAI’s ChatGPT earlier this month, a low female voice cooed “oh stop, you’re making me blush” in response to a compliment from a male researcher. That was Sky, one of five choices of voice for the company’s AI assistants. It sounded just like Johansson in the 2013 film Her, which tells the story of an introvert who falls in love with an operating system (it doesn’t end well).
  • Her? On the day of the demo, Altman cryptically tweeted “her” to his 3 million followers, in an apparent nod to the similarities. He’s also previously said it is his favourite film. 
  • Scarlett statement. Johansson told NPR that Altman had contacted her team two days before the demo, asking her to reconsider the offer. They never spoke, and OpenAI went ahead with the launch anyway.
  • Scratch that. OpenAI said on Sunday that it would pull Sky, adding that the chatbot’s voice was “never intended” to sound like Johansson’s. 

Johansson pointed to the company’s lack of transparency and a potential copyright violation, saying she wanted “appropriate legislation to help ensure that individual rights are protected”.

On paper OpenAI is a nonprofit-corporate hybrid on a mission to build superintelligent AI that “benefits all of humanity”. 

In practice it’s unambiguously profit-driven – and thriving. This month alone it has partnered with the social media giant Reddit; launched a slate of new products; and deepened its partnership with Microsoft, which on Monday announced that its latest PCs will run on OpenAI’s GPT-4o model.

At what cost? Last November Altman was briefly fired as OpenAI CEO for not being “consistently candid in his communications”. When he returned to his post he maintained that safety was a priority but promised to “fight bullshit and bureaucracy”. A run of bad PR points to an aggressive CEO who seems more worried about profit and control than mission:

  • Jan Leike, one of the Superalignment leaders who stepped down, said “safety culture and processes have taken a backseat to shiny products”.
  • Altman was forced to apologise after Vox revealed that non-disclosure agreements forbade ex-employees from criticising the company for life, and forced them to give up all vested equity (i.e. millions of dollars) if they refused to sign.
  • Fortune found that OpenAI never provided Leike’s team with the resources it promised in public statements, essentially dooming it to fail.

The competition. Companies like Anthropic and Cohere have mimicked OpenAI’s unusual nonprofit-corporate structure, promising to build AI safely. But large language models like ChatGPT are vastly expensive to produce and, so far, the “good guys” are losing the race.

Superalignment teams have been criticised for being too vague – unable to define the “existential risk” that comes with making AI. The known risks include data bias, privacy erosion and job displacement.

What’s more. In the past year, Meta and Google also dissolved their core AI safety teams, as did Microsoft, OpenAI’s biggest investor and the clear frontrunner in the race for AI dominance.

