Long stories short
• Rishi Sunak said he’d discuss a “Cern-like” research centre for artificial intelligence with Joe Biden this week.
• Apple unveiled its long-awaited mixed reality headset, Vision Pro (more below).
• Ella Irwin, Twitter’s head of trust and safety, resigned from the company.
A UN policy paper published this week begins with a warning. “In every society around the world, harm is rampant,” it reads. “Open, safe and secure use of the Internet is slipping away from us – potentially permanently.”
So what? The UN is preparing to take a stand on the exploitation of private data, digital inequality and the development of artificial intelligence systems that have concentrated power in the hands of a select few, while exploiting data from all over the internet.
How? António Guterres, the UN secretary general, is backing the paper – which contains policies for a new Global Digital Compact – in the hope that it will become a series of commitments endorsed by heads of state at the UN’s Summit of the Future next year. The paper recommends that member states commit to:
- creating one million more advocates for digital technologies – 250,000 of whom should be in Africa;
- connecting all schools in the world to the internet by 2030;
- giving $100 billion to the UN’s digital transformation fund for less developed nations by 2030; and
- establishing a board to advise on international standards and measures for artificial intelligence systems.
The gap. The recommendations aim to close the digital divide between developed and developing nations.
Only seven of the 50 most influential artificial intelligence companies in the world are headquartered outside the US, according to Forbes, and all seven are in developed countries. None is headquartered in Africa or South America.
There are nearly 2,000 times as many secure internet servers per capita in the US as in Nigeria, and developed countries produce the vast majority of fundamental research into artificial intelligence. But much of the labelling of training data – including of graphic and harmful images – takes place in the developing world.
If artificial intelligence is to be a defining technology of the modern era, it will be in the image of established powers, at the expense of an exploited majority, to the benefit of a select few.
The select few. Last week, a group of executives, researchers and engineers – including Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic – signed a one-line statement warning about the risk of extinction from artificial intelligence.
Altman, who sees the creation of potentially dangerous superintelligence as inevitable, has recommended the creation of a UN organisation for artificial intelligence – equivalent to the International Atomic Energy Agency (IAEA) – to put it on a par with nuclear warfare and future pandemics as a threat to humanity.
“The UN is an interesting avenue because it is a place where the global north and global south can have equal footing, and should have a role in creating transparency mechanisms so we can know what’s going on in each country. But it all depends on whether the agency’s powers are enforceable,” says Ivana Bartoletti, a visiting cybersecurity and privacy fellow at Virginia Tech and founder of the Women Leading in AI Network.
Covid precedent. The UN’s Pandemic Preparedness Treaty, overseen by the World Health Organization, was conceived in response to Covid. It is still being negotiated but involves a binding convention and other international instruments intended to guarantee pandemic prevention, preparedness and response.
It would set an interesting precedent for the UN’s role in governing artificial intelligence, if similar standards for collaboration and enforceable controls could be put in place.
Leaders in the private sector are asking for as much, but they might get more than they bargained for if the UN focuses more on present harms than on farther-off existential risks.
“What I find so negative about all the apocalyptic messaging is that it misses the agency we currently have to govern artificial intelligence. It is a bundle of parameters, data, people and technical resources assembled by us. People shouldn’t be feeling mystified and powerless,” Bartoletti added.
Money money money. Short-term profits and investment have flooded the foundational models sector; OpenAI alone is valued at nearly $30 billion. Near-term governance measures – restrictions on the use of training data and energy, mandatory fair pay for data and its labelling, and enforced fact-checking or verification – would all curb the bonanza.
Which explains why many executives prefer to talk about theoretical existential risks, rather than the present harms on which their businesses rely.
Apple finally did it. The reveal of its mixed reality headset – the Vision Pro – dropped at the company’s Worldwide Developers Conference this week. Here are some smart ideas from people in the know (which you might not have read in the whirlwind of reporting since the demonstration): first, with its $3,499 price tag, Vision Pro is not competing directly with Meta’s Quest, but with high-end smart television devices (even though some feel it has clearly blown Meta’s offering out of the water). Second, Vision Pro supports Unity, a development platform that will speed up interoperability with other Apple devices and potentially with others outside the Apple ecosystem, meaning content creators can get going with it quickly. Third, the “Pro” moniker suggests that a cheaper, scaled-back version is likely in the works. Read yesterday’s Sensemaker for more.
For someone apparently obsessed with virtual “metaverse” experiences, Mark Zuckerberg’s insistence that “engineers perform better in person” is curious. This view is also the basis for his recent demand that Meta staff return to the office at least three days a week this summer. A spokesperson for the company said: “We’re confident that people can make a meaningful impact both from the office and at home.” Zuckerberg has argued that in-person work means people get more done and build better relationships. Strange, for someone who’s spent the last few years saying we should all be floating around on an alien planet as legless avatars with headsets over our eyes.
Cortana – an old-school Microsoft feature named after the famous character from the Halo video game series – is being discontinued. The virtual assistant, which originally launched in 2014 on Windows operating systems and phones, was one of the company’s first artificial intelligence offerings, and performed much like its fictional namesake, offering sage (if sometimes stilted) advice to users. It appears Cortana has become a victim of the fast-moving competition over foundational models and no longer fits into Microsoft’s vision of its artificial intelligence product line. Microsoft has recently invested billions in OpenAI and the GPT strain of large language models, as well as Windows Copilot. So it’s goodbye, Cortana…
Tencent and NetEase combined to produce 80 per cent of the total revenue from China’s top video game companies this quarter. The pair of giants produced seven of China’s ten most popular titles, and Tencent alone accounted for over 50 per cent of the domestic market’s growth. All this is to say that China’s video game market is still a functioning duopoly despite measures by the government to curb the power of the sector’s biggest businesses and promote an agenda of “social responsibility”. Last year saw the CCP freeze approvals of new video game licences for release in China for over nine months. It appears Tencent – and its chief domestic rival, NetEase – have bounced back strongly, other competitors be damned.
YouTube has ended its policy of removing false claims of fraud in the 2020 US presidential election from its platform. The 2016 election, which saw Donald Trump enter the Oval Office, forced many social media platforms to develop complicated content moderation policies surrounding political misinformation. Google – which owns YouTube – justified the change by saying that the landscape had changed and the risk of inciting “other real-world harm” by allowing false content related to 2020’s election campaign had diminished. Per the BBC, the platform will also continue to update and tweak its content moderation systems ahead of the 2024 election, which will see Trump bid for the White House once again.
Retention and deletion
The Federal Trade Commission settled a lawsuit against Amazon this week. The complaint alleged that the company had failed to delete recordings of children in a timely fashion, in violation of federal child privacy laws. The regulator ordered Amazon to pay $25 million as part of the settlement. Amazon’s in-home devices, such as the Echo smart speakers that run Alexa, collect audio data in order to learn voice commands and perform predictive analytics. That data can contain children’s voices, which are subject to strict retention and deletion requirements. The Washington Post reported that the FTC is arguing the recordings gave Amazon “a valuable database for training the Alexa algorithm to understand children.” Microsoft reached a similar settlement with the FTC on Monday, after collecting children’s data via its Xbox platforms.