
Samuel Altman, CEO of OpenAI, is sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law oversight hearing to examine artificial intelligence, on Capitol Hill in Washington, DC, on 16 May 2023. Altman testified that regulating artificial intelligence was essential, after his chatbot stunned the world. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said. (Photo by Andrew Caballero-Reynolds/AFP via Getty Images)

Sam Altman testifies


The man behind ChatGPT has warned the US Senate that there is an “urgent” need for AI regulation. What might that look like?

At the Sir Harry Evans Summit for Investigative Journalism, Expedia Group chairman Barry Diller spoke of his optimism about AI technologies and, in particular, of his faith in one man – Sam Altman.

“Here you’ve got somebody who is so purely motivated, who is not economically driven and who understands the dangers that are in front of AI,” Diller said. “Unfortunately, or fortunately… he is hardly the only player here.”

Sam Altman is CEO of OpenAI and sits on Expedia Group’s board. Under his stewardship, OpenAI launched ChatGPT, the chatbot that has made AI more accessible to the public than ever. It was no surprise, then, that when he gave evidence to members of the US Senate about the future of AI, everyone was keen to hear what he had to say.

During Tuesday’s three-hour hearing, the OpenAI executive described the incredible potential of the technology that underpins his company, saying it could be used to cure cancer or combat climate change. But he also outlined the risks – including misinformation, fraud, job automation, the exploitation of women, and copyright infringement.

“I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman told the Senate. “We want to work with the government to prevent that from happening.”

Altman proposed that lawmakers introduce regulation “urgently”, and set out a three-point plan for them to do so.

He first suggested that the US government create a federal agency that can grant – and revoke – licences to companies building AI models of a certain size.

He also called for a legally binding set of safety guidelines for AI models. Finally, Altman proposed that independent, third-party auditors review AI tools produced by companies.

None of these are new ideas. As AI technologies have developed, governments have discussed ways to regulate them, but concrete plans are only just starting to emerge.

The EU has already issued strict guidelines for the use of artificial intelligence – including large language models and generative AI – and the UK’s Competition and Markets Authority is planning a review of the AI market.

Senators were broadly receptive to Sam Altman’s suggestions. They praised his commitment to safety – and the fact that, on the surface, he does not seem to be motivated by profit, because he is not a majority shareholder in OpenAI.

That may be true, but Altman is, at his core, a businessman, and it is in his interest to make the company profitable. If he collaborates with US lawmakers, he will have a say in the regulations that ultimately govern OpenAI.