The tech platforms are hurrying to fix themselves during the pandemic. But is it enough? And will it stick?
When it comes to social media – its scale and its reach – it’s often worth restating the basics. Facebook currently has 2.5 billion active users; WhatsApp and YouTube each have 2 billion. A billion people use Instagram, while 800 million are on TikTok – and that number is rising fast.
These platforms are hangouts for huge proportions of the world. We find photos, videos, dance routines, jokes and joys there, but also more serious things – they are, increasingly, the way in which we access the news, both good and bad, true and fake. A 2019 Ofcom survey found that social media was the main online news source for 41 per cent of people in the UK.
So when we talk about the news, as Tortoise is doing this week, we have to talk about social networks. And that’s doubly true during a pandemic, when timelines – as much as headlines – are dominated by a single, all-encompassing story, and the information we receive about it can literally mark the difference between life and death.
The amplifying effect of social media has taken on a noticeably different role during the coronavirus crisis. In fact, this has been the great flourishing of something more than citizen journalism – you could call it citizen public service broadcasting.
Social media has given researchers and doctors an unprecedented platform at a time when our collective hunger for scientific analysis and explanation has never been higher. Roberto Burioni, an Italian virologist, has been busy publishing regular informative videos to his Facebook following of over 696,000 people. All of a sudden, people have favourite Twitter epidemiologists. Professor Devi Sridhar, Edinburgh University’s chair of global public health, now has more than 66,800 followers. Updates from Professor Karol Sikora and Dr Caitlin Rivers are liked and liked again, as though they were Kardashian selfies.
Then again, it’s not just the scientists who have big followings. 5G conspiracy theorist and Holocaust denier David Icke has 308,900 followers on Twitter.
Whatever good they might provide, these platforms are also hotbeds for a second type of virus: the misinformation infodemic. Conspiracy theories, hoaxes and rumours have proliferated since January. They have been weaponised by governments, shared by unsuspecting family members, and repeated on national television.
In March, Tortoise investigated how fake news about coronavirus was spreading on social media. Data from Italy’s Bruno Kessler Foundation revealed that 46,000 Twitter posts linked to Covid-19 misinformation were published each day that month.
Inaccurate information has spread quickly and globally. While compiling our interactive database of coronavirus misinformation, Tortoise data journalist Ella Hollowood found that, for example, false claims made in one UK Facebook post (shared 368,000 times) not only resurfaced on social media in Nigeria, India and Cambodia, but also picked up an inaccurate attribution to Unicef along the way.
“What [people] see and are exposed to through social media and messaging services throughout the day is probably more important in shaping the way they look at an issue like [coronavirus] than what they see on the news,” the British MP Damian Collins explains.
Collins used to chair the Digital, Culture, Media and Sport select committee in Parliament, and presided over a major inquiry into disinformation and fake news. He has responded to the crisis by setting up an independent, coronavirus-specific fact-checking service called Infotagion.
Users are invited to send in screenshots of the information they come across so that it can be fact-checked against trusted sources (including the World Health Organisation and official government advice) by a team of researchers. Infotagion not only helps people to determine whether online content is accurate and therefore safe to share, but creates “an open and public archive of what people have actually seen in different phases during the period of the coronavirus”.
They have received thousands of submissions. “It paints quite a vivid picture of disinformation during Covid-19,” Collins says.
At first, the content Infotagion received centred on fake cures. Then came WhatsApp messages from “a friend of a friend” purportedly working on the front line and exposing unreported, unverified information. The next wave focused on 5G, along with conspiracy theories about the origins of the virus and who was to blame.
The threat to public safety is hard to overstate. “There are people that disinformation has killed during this period,” Collins notes.
On the whole, social media platforms haven’t ignored this problem. Many have attempted to tackle false information while trying to uphold and signpost alternative trustworthy content.
“You’re never going to solve the information-environment problem around something like Covid with content moderation alone and just removing tweets,” says Nick Pickles, head of policy at Twitter.
While the platform has changed its policies to better target and remove content that has the potential to do harm, it has also partnered with organisations such as NHS England and the Centers for Disease Control and Prevention to help provide users with verified information. Product interventions include a new in-app 5G search prompt, introduced just yesterday, directing users to a UK government webpage.
It’s a similar story elsewhere. YouTube, for example, has committed to removing videos that discourage people from seeking medical treatment, that claim harmful substances have health benefits, or that inaccurately link coronavirus to 5G networks.
Facebook has invested $100m in the global news industry, and announced grants for local news and fact-checking organisations. It directs users who search “coronavirus” or who are members of Covid-related groups to relevant health organisations. Misinformation that could cause harm – like encouraging someone to drink bleach as a “cure” – will be removed.
“We understood the crisis and its impact quite early on. And we did try to do everything that we could to help – which is why we’ve kept a record of all the things that we’ve been working on just so people can see it,” a Facebook spokesperson claims.
But not all platforms are alike. The encrypted messaging service WhatsApp is a little trickier to manage. It has become the wild west of social media during the pandemic, as unverified chain-messages fly between groups at record pace. Facebook has introduced a limit so that these messages can only be forwarded to one chat at a time, but it won’t halt the flow: only slow it down.
There’s a tricky question hovering behind the social media platforms’ recent actions: why now? Why not, well… ages ago? Many of the issues facing these platforms at the moment are scaled-up versions of pre-existing problems.
One potential answer: many of the seemingly “new” measures used to counter Covid-19 disinformation are scaled-up versions of pre-existing techniques. “A lot of the approaches that we are using are based on the things that we’ve already tried in individual countries, responding to specific problems. I think what’s new here is that it’s the first time where we’ve really had all these approaches running globally at the same time,” Pickles explains.
Although Facebook has announced some platform changes built specifically for Covid-19, a spokesperson said that other innovations were already “in the pipeline, but we had to release them, work on them and release them sooner”.
Not everyone is convinced. “[Social media platforms] have started doing several things differently and I think some of that is probably helping a bit,” Collins concedes. But: “I think what’s interesting [is that] the networks have started to do stuff that they said in the past they couldn’t do or wouldn’t do.”
Besides, none of it really seems to go far enough. To put it bluntly: there’s a reason why he felt it necessary to set up Infotagion in the first place.
And there’s another question looming in the background: what will remain? Will recent safeguards stick around after the virus has peaked, or will all the same old problems return in different ways, on different days, in future?
It’s worth pointing out, of course, that not everything will still be applicable. In some instances, the definition of what might be considered a policy violation is intimately linked to context. Take encouraging people to visit a restaurant. “In normal circumstances that wouldn’t be something we would take action on. Calling for people to go to a restaurant during Covid has a very different context,” Pickles explains. “There are certain situations where content that wouldn’t normally break our rules would break our rules in a Covid context because of the associated risks for people.” A degree of flexibility is required.
But it seems probable that at least some of the new measures implemented over the past weeks are here for good. Coronavirus has interrupted patterns of work and forced Facebook, Twitter and others to ramp up their automated content moderation as staff work from home or go on furlough. These changes look unlikely to go away: AI moderation is quicker, more pre-emptive and, in theory, continually improvable.
(Although Twitter did assure us that certain decisions will only be taken with human oversight: “an account can’t be permanently suspended without a person looking at it to make sure…”)
The pandemic has pushed social media platforms to raise their game – revealing, in turn, that they do have the capability to make vast changes when it’s required of them. This is an argument for keeping up the pressure after the crisis has abated. After all, there’ll still be billions of us in those ecosystems then, reading news that should be fit to print.