Long stories short
Alibaba founder Jack Ma resurfaced in mainland China after reportedly spending more than a year overseas. South Korean police said they had arrested Do Kwo, the boss behind the terraUSD and Luna cryptocurrency fraud. US House Speaker Kevin McCarthy said lawmakers “will be moving forward” with a TikTok bill.
Last week Google released its chatbot Bard – an answer to OpenAI’s ChatGPT – with almost as many caveats as capabilities. Companies like Google and Microsoft are rushing to deploy large language models even though problems with hallucinations and misinformation remain unaddressed.
So what? Generative artificial intelligence may become a huge market. It could threaten 300 million jobs and boost gross domestic product around the world by 7 per cent in the next decade, according to Goldman Sachs. Google’s deployment of Bard shows that companies are taking precautions when it comes to the flaws in their models – but releasing them into the market nonetheless.
What could go wrong? Sundar Pichai, the company’s CEO, told employees that “things will go wrong” with Bard, after an initial public demonstration of the new product showed it spitting out falsehoods. Pichai’s hope is that users will eventually make Bard better by giving it feedback.
Generative models, including Bard and ChatGPT, can
say things with confidence that aren’t true (called “hallucinated responses”);create deceptive images or texts (like this picture of Pope Francis);be manipulated into expressing biased or harmful views;have logic failures; andbe rude.
Users of Google’s new language model are reminded that it might get things wrong and encouraged to turn to Google’s search products in order to verify its statements. This amounts to telling users: “abandon trust, all ye who type here,” as James Vincent put it in The Verge.
A relatively innocuous example: Bard told one user that the months that follow January and February are “Maruary, Apruary, Mayuary…”
Google says it has solicited feedback from 10,000 testers “from a variety of backgrounds and perspectives” to combat this issue, but ultimately accepts that problems will arise.
Instead of ensuring Bard’s results were as reliable as those generated by Google’s eponymous search engine, the chatbot was pushed to a public waiting list in order to compete with Microsoft-backed ChatGPT, which can do a lot of the same stuff as Bard but without access to the Google Search data infrastructure.
Beyond Bard, more untested and unfinished applications from other companies are likely to follow.
Why? The huge potential of the technology is driving growth and incentivising companies to get their foot in the door as soon as possible. Bill Gates, founder of Microsoft and still an advisor to the company, has said that “entire industries will reorient” around generative models. The sense of an accelerating race to deploy is palpable.
By the numbers
29 – per cent, the proportion of Gen Z in the US already using generative models at work.
100 – million, the number of users acquired by ChatGPT two months after launch.
$11 – billion, total investment in OpenAI by Microsoft.
$100 – billion, share value loss at Google after its failed Bard demo.
What’s the problem? Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty.
“Companies like Google and Microsoft may be the only players capable of dealing with the darker side of these models,” said Simon Greenman, a partner at BestPracticeAI and member of the World Economic Forum’s Global AI Council, “but new types of attack, and new issues will emerge, like prompt injections that circumvent guardrails, prejudice against specific groups or more erroneous and offensive answers.”
What’s the solution? Possibly, keeping humans in the loop. “Large language models are remarkably creative, at the extreme this leads to hallucination. The way to solve this is to combine the creative power of the models with traditional ways of verifying knowledge: verifying sources and using human experts as fact checkers,” said Sean Williams, the founder of AutogenAI. “Using human experts is about creating a brilliant interface between humans and the large language model.”
Generative artificial intelligence is still in its infancy, despite the recent breakthroughs. But the systems for controlling its impact on people need to be figured out sooner rather than later. As Sam Altman, CEO of OpenAI, put it: “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
Academics are arguing that regulations, like those currently being debated in the European Parliament or the White House’s AI Bill of Rights, are not ready to cope with the boom in generative artificial intelligence, and the misinformation it could create.
Google named its model Bard, after the storytellers of old – known for creating believable but fantastical stories. For now – with companies scrambling to use the public as testers and competing to release the next big application – some models are doing just that: spinning tall tales.
Possible headacheEmployees at Apple have expressed concerns that the company’s augmented reality headset will miss the mark on price and usefulness. According to the New York Times, eight current and former staff say the headset is viewed with scepticism within the company – a stark difference from the enthusiasm with which employees have viewed previous landmark releases. Part of the problem is price. The device is tipped to retail at $3,000, and employees are unsure whether the market for such a product really exists. But the world’s most valuable company has always had a knack for creating new demand. So if history is any guide, Apple’s foray into virtual reality will be a market-making success.
Addicted usersMeta is facing a lawsuit filed by the San Mateo County Board of Education in California. The suit – which also names Google, Snap and TikTok – alleges that the platforms have actively engaged in getting children addicted to social media, negatively affecting their behaviour and wellbeing. The filing describes Meta’s “end goal” of making “young people engage with and stay on the platforms as long as possible… This is best accomplished by catering an endless flow of content that is most provocative and toxic.” Meta has yet to respond, but has faced similar criticism many times in the past. It tends to point to its child safety and fact checking features.
U-turnThe UK competition authority has revised a previous decision about the impact of Microsoft’s acquisition of Activision Blizzard. The CMA had determined that Microsoft could profit from withholding Call of Duty games from the platforms of its main rival, Sony. Last week it changed that decision and determined that the acquisition of Activision Blizzard and Call of Duty would “not materially affect” competition with Sony – which has a large catalogue of its own games. Microsoft had alleged that the CMA made errors in its calculations, and that those calculations were based on erroneous assumptions about Microsoft’s Xbox business. It seems they may have been right. The CMA’s decision is just one part of the considerations around the deal, and the final ruling is not due until the end of April.
Well positionedTencent has reversed two successive quarters of revenue declines. CEO Pony Ma optimistically said the company is “well positioned to benefit from a rebound in China’s economic growth” with the end of the country’s zero-Covid policy. The positive attitude also comes from the government’s recent pledge to support China’s internet giants by easing its regulatory crackdown, as well as Tencent’s successful investments in WeChat’s short video function and overseas gaming. Effectively, as per the FT, Tencent posted a quarterly revenue of Rmb 145 billion ($21 billion) in the three months to December, showing a 0.5 per cent increase from the same period a year earlier, and a 19 per cent net profit rise.
Ghost workersSome of Google’s Raters have been given a pay increase from $14.50 to $15 per hour, according to NPR. Content moderators drive Google Search results (Google calls them Raters), and ensure that information is accurate and not distressing. The Alphabet Workers Union estimates that there are 200,000 Google Raters worldwide, meaning they would make up more than half of the company’s workforce. But, as unseen “ghost workers”, they are often managed through third-party contractors and are not entitled to the health insurance, paid leave or pension contributions that other Google employees get. Content labelling and moderation is vital for optimising language models like Bard.
Cloud chasingAmazon will offer artificial intelligence startups $300,000 of free cloud computing resources if they sign up to its Amazon Web Services platform, according to the WSJ. Amazon is chasing customers for its cloud services in the new and booming generative model industry. The rapid growth of businesses like OpenAI and Anthropic – which build models like the ones powering Bard, Bing and ChatGPT – has opened a path for cloud service providers like Amazon to grow too. Large models depend on cloud-based hosting services, and are highly demanding in terms of data. Amazon is set to capitalise.
Thanks for reading. Please tell your friends to sign up, send us ideas and let us know what you think. Email email@example.com.
Additional reporting by Alexi Mostrous & Serena Cesareo