Here’s what you need to know this week:
State-by-state:
Apple backtracked on office work
Microsoft promised a pay boost
Meta opened the black box
Google’s YouTube will survive in Russia
Amazon built too many warehouses
Tencent can’t deliver deep tech
Ten people were shot and killed last week in an act of suspected racial terrorism in Buffalo, New York. The total number of mass shootings in the US in 2022 now sits at 198. We’re only five months into the year.
The tropes are depressingly familiar.
Payton Gendron, 18, an angry young white man, known to the authorities, live-streamed the atrocity online. And once again, the world’s biggest tech platforms appear to have provided the infrastructure for the shooter to rationalise, organise and promote his crimes.
The outrage has prompted the usual questions about what role platforms played in terrorist radicalisation – and whether they could have done more to stop it.
The Buffalo shooter used Twitch – owned by Amazon – to broadcast the shootings live. Before the attack he had made 17 references to his plans on Discord – a gaming community site previously courted for acquisition by Microsoft.
This is now an established pattern. Research published by the Institute for Strategic Dialogue last year found that several major gaming platforms play host to extremist activity.
Jacob Davey, head of research and policy at the Institute and co-author of the research, said that while he “hesitates to make a direct tie to the act of gaming itself”, gaming-adjacent platforms show “a concerning trend towards becoming involved in white supremacy and terrorism”.
He told us: “The problem is the co-opting and adapting of incredibly viral youth internet culture [and] the illusion that this is a fringe part of the internet. It’s not fringe. It’s a big part of how people conduct themselves online.”
Ideas spread quickly, Davey explained, often among very young users who “co-radicalise” while creating and promoting them. Users in the research averaged 15 years old.
How did platforms respond? Twitch suspended the Buffalo shooter’s stream just two minutes after it started, and only 22 people were watching while it was live. That may be as good a response as one can expect from a social media company of Twitch’s scale, but during those two minutes someone copied the video and posted it on Streamable, an obscure video site, where it was viewed more than three million times.
A link to the copy was posted on Facebook where it received more than 46,000 shares. Facebook failed to remove the link for more than ten hours.
In this case, then, Twitch actually comes out quite well. There are more questions for Facebook and for Discord, where the shooter had made his intentions clear for months. Overall, however, the situation is a depressing reminder of how few structural barriers exist to prevent shooters live-streaming their crimes.
(There are exceptions: YouTube, for instance, requires people to verify their accounts and have at least 50 subscribers before going live, discouraging terrorists from broadcasting via burner accounts.)
What’s new? This debate is happening at a time when some states – like Texas – are pushing for laws which would prevent platforms from removing extremist content, rather than forcing them to do more to moderate it.
This month a Texas law known as HB20 came into force, allowing people to sue platforms with more than 50 million users if they believe the platform removed a post because of the viewpoint it expresses.
It is intended to firm up protections for free speech on political issues.
The law is being challenged in the courts and will probably reach the Supreme Court. If upheld, critics say, it could force platforms to host all sorts of objectionable content – including Nazi manifestos and other extremist screeds.
Under such a law, Twitch might have been sued for removing Gendron’s channel before the shooting in Buffalo took place.
As Casey Newton, the tech commentator, points out, most users want a certain amount of content moderation and they don’t want their feeds covered in Nazi propaganda.
Laws like HB20 could legally require platforms to host the worst that humanity has to offer. We’re reaching a pivot point at which platforms and policymakers have to decide what the future of content moderation will look like.
Apple has delayed plans to force staff to return to the office three days a week. The tech state has cited a resurgence in Covid cases as the reason for pushing back the requirement, which was supposed to go into effect on May 23. However, as we reported last week, Apple was facing a backlash from senior figures like Ian Goodfellow, its AI chief, who viewed the policy as unworkable. Apple will now expect employees to come in for two days a week instead.
Employees at Microsoft will be offered more money. CEO Satya Nadella plans to boost pay for workers by offering increased stock grants and other benefits, according to reports by Insider. The plan also includes “nearly doubling” the company’s salary budget. Microsoft is looking at pay rises as one of the ways it can stave off employee dissatisfaction and retain workers, after a survey found that just 66 per cent of staff felt they were “getting a good deal at Microsoft”, down from 73 per cent last year. Glassdoor estimates Microsoft already pays new graduate software engineers $163,000 a year. The company reported profits of $16.7 billion on sales of $49.4 billion in its latest quarterly results.
Meta has agreed to open the doors to researchers, a crucial step in understanding the harm that social media can do. Speaking at the Tortoise Responsible AI Forum, Stuart Russell, a professor of computer science at Berkeley, said his team had secured an agreement with Meta to look at Facebook’s underlying usage data with the aim of understanding more about how its algorithms function. When whistleblower Frances Haugen leaked information about the inner workings of Facebook’s platforms and the findings of its internal research last year, she called for independent academic researchers to be given access to the datasets. The details of the research agreement, and of course the findings, remain to be seen.
Russia has said a block on YouTube is unlikely. As the first days of Russia’s invasion of Ukraine unfolded, Putin’s regime moved aggressively to cut Russian citizens off from digital platforms. Facebook and Instagram have been blocked and Twitter restricted. YouTube, however, has remained available – despite claims from Moscow that it failed to effectively moderate anti-Russian content, and despite its deplatforming of Russian state media outlets like RT. According to reports by Reuters, Putin’s minister for digital development has said a block on YouTube is not part of the plan, because “when we restrict something, we should clearly understand that our users won’t suffer”. YouTube is an important part of Russia’s digital economy, used by 90 million Russians every month.
Amazon has overbuilt $2 billion worth of warehouses. The massive hiring spree that Amazon undertook during the pandemic – which swelled its workforce by 300,000 people and made it the second largest employer in the US – isn’t the only upscaling that helped it cope with its Covid-related demand spike. It also built and bought way too many warehouses, leaving its cost base much larger than it was. The tech state is finding ways to cope. Sellers on Amazon have to pay a fee to keep their items in its logistics system, ready to be purchased and shipped. Now Amazon is increasing those fees to help cope with the cost of having too many workers and too many facilities using too much fuel. “You’re completely at the mercy of Amazon,” said one marketing manager who sells on the site. “When price rises come in, it’s Amazon’s way or the highway.”
China regulated its internet companies “the Chinese way”, according to Kai-Fu Lee, former president of Google China. Inspired by the approach that US and European regulators took to protecting internet users, China has introduced “a rapid-fire series of measures that have had a huge impact”, curbing the power of large platform-based internet businesses. Companies like Tencent have been hit more aggressively by legislation and restrictions than the likes of Meta, Microsoft, Google and Apple – which will soon operate under the EU’s Digital Markets Act, the UK’s Online Safety Bill and other emerging regulations. Kai-Fu Lee told the Tortoise Responsible AI Forum that China has noticed its internet giants – Tencent, ByteDance, Alibaba and others – will not “carry the torch forward” when it comes to deep tech. He said a shift of attention was needed to support startups in quantum computing, semiconductors and artificial intelligence, and that reining in the big players was an efficient way to achieve this shift.
Thanks for reading,
Luke Gbedemah
@LukeGbedemah
Alexi Mostrous
@AlexiMostrous