It was only because he studied epidemiology that Manlio De Domenico, a statistical physicist at Italy’s Bruno Kessler Foundation, had the foresight to start collecting coronavirus-related tweets in January, weeks before the world woke up to the full scale of the pandemic.
“I listened to what was going on in China and I asked to see what was happening on Twitter,” he told us. “I thought it could be serious. Sadly, I was right.”
Since De Domenico’s team began gathering data on 22 January, they have analysed more than 127 million tweets mentioning the coronavirus. Every day the number grows by about five million.
Figures shared with us by the team, based in Trento, Italy, show that 46,000 Twitter posts which linked to Covid-19 misinformation were published each day this month on average – exposing potentially tens of millions of people to conspiracy theories, hoaxes or false statistics.
This is an infodemic and, within it, a colossal amount of fake news about the novel coronavirus is swamping social media and threatening to overwhelm public health messages.
Between 22 January, when the city of Wuhan was locked down by Chinese authorities, and 14 March, around 275,000 Twitter accounts posted 1.7 million links to unreliable information about the virus, the Italian data shows.
The findings are likely to seriously underestimate the extent of misinformation, however, not least because they are limited to Twitter, one of the few social networks to allow its data to be analysed by researchers.
Facebook and YouTube have billions more users, while WhatsApp (which is owned by Facebook) offers end-to-end encryption that makes it impossible for public officials, researchers or even Facebook itself to track misinformation.
Such is the concern over WhatsApp in particular that, last week, Irish Prime Minister Leo Varadkar urged people to “please stop sharing unverified info on WhatsApp groups”.
Like the virus itself, the real scale of misinformation can only be glimpsed in part. We can only chart what we see. And like the virus, which has now infected 294,110 people and killed 12,944 around the globe, misinformation is spreading via human-to-human sharing across multiple languages to reach every corner of the world.
“False narratives about coronavirus are truly global and spread faster than the virus itself,” Graham Brookie, the director of the Atlantic Council’s Digital Forensic Lab, told us. “And like the virus, misinformation doesn’t respect our neatly defined borders.”
The most viral claims include: that Chinese scientists created the coronavirus in a secret laboratory; that drinking bleach or eating garlic cures the infection; that Pope Francis caught Covid-19; and that new 5G technology caused the sickness.
Several false claims have been weaponised by governments pandering to nationalist tendencies: earlier this month, Chinese officials pushed the unfounded rumour that the virus was introduced by members of the United States Army who visited Wuhan in October. US sites have created racist memes laying the blame for the virus on Chinese eating habits.
One ray of hope offered by De Domenico’s data is that, unlike the virus, which is yet to peak in Western Europe, the sheer quantity of coronavirus-based misinformation appears to be waning – if only just.
Having peaked on 28 February, when nearly half of all coronavirus-based tweets contained links to unreliable news sources, the proportion fell to 26 per cent on 14 March, suggesting that efforts by technology companies to purge fake news about the virus are beginning to work, allowing genuine public health information to cut through.
Experts remain concerned, however, that critical messages, such as those urging citizens to self-isolate, remain at risk of being drowned out by fake news.
Facebook, YouTube and Twitter all said they were working hard to direct users towards reliable sources of medical information, and were communicating directly with the World Health Organization and other bodies. When a Facebook user tries to share a conspiracy theory about the coronavirus, it is marked as false once the claim has been reviewed by fact-checkers.
Despite such efforts, we found numerous examples of fake or misleading news about the coronavirus on several major platforms. One YouTube video watched by more than 16,000 people promotes chlorine dioxide – a type of bleach – as a cure for Covid-19. Another Facebook video, viewed more than 100,000 times, shows a British woman describing herself as a nurse and blaming deaths from the virus on 5G networks.
Separate analysis by Tortoise of more than 200 specific examples of coronavirus-related fake news found that Facebook posts represented 72 per cent of all examples of misinformation flagged to fact-checkers around the world between 22 January and 18 March. However, this could reflect Facebook’s policy of encouraging third-party fact-checkers to vet its material, as well as its overall size.
Focusing on examples of misinformation which had previously been debunked by independent fact-checking organisations, the analysis reveals that the largest proportion of false virus-related news relates to fake events or occurrences, such as airlines shutting down or a particular celebrity becoming ill.
Twitter declined to comment on the specifics of the Italian data, saying it had not been provided with the research beforehand. It said it was taking proactive steps to clamp down on coronavirus misinformation and to elevate credible alternatives.
After collecting what is possibly the largest database of coronavirus-related tweets in the world, the team at Italy’s Bruno Kessler Foundation set to work to pull out posts that linked to fake news or misinformation.
Using data from third-party fact-checkers and machine learning, the analysts were able to identify 5.9 million virus-related tweets that shared a URL to a news site. About a third of these – 1.7 million – linked to a third-party website containing unreliable or false information.
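The team’s pipeline is not described in detail, but its core step – checking whether a tweet’s shared URL points to a domain flagged as unreliable – can be sketched roughly as follows. The domain list, tweet format and helper names here are illustrative assumptions, not the Kessler team’s actual method.

```python
from urllib.parse import urlparse

# Illustrative list of flagged domains; a real pipeline would draw on
# third-party fact-checker databases, not a hard-coded set.
UNRELIABLE_DOMAINS = {"fakenews-example.com", "hoax-site.example.org"}

def shared_domain(tweet_text):
    """Return the domain of the first URL in a tweet, or None if there is no URL."""
    for token in tweet_text.split():
        if token.startswith("http://") or token.startswith("https://"):
            netloc = urlparse(token).netloc.lower()
            return netloc[4:] if netloc.startswith("www.") else netloc
    return None

def links_to_unreliable(tweet_text):
    """True if the tweet shares a URL from a flagged domain."""
    return shared_domain(tweet_text) in UNRELIABLE_DOMAINS

tweets = [
    "Garlic cures the virus! https://fakenews-example.com/garlic",
    "Official guidance: https://www.who.int/emergencies",
    "Stay home and stay safe",
]
flagged = [t for t in tweets if links_to_unreliable(t)]
print(len(flagged))  # 1
```

At the Kessler team’s scale, the same domain check would simply run over millions of tweets, with machine learning used to catch links the domain list misses.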
De Domenico found that many of the tweets posting misinformation – around 40 per cent – came from accounts controlled by bots, or non-human actors.
“Infodemics present characteristics very close to those of epidemics,” he said. “With millions of users exposed to unreliable news spread online by social manipulators – not necessarily human.”
Bot accounts, which are set up to pump out tweets without the need for human input, have been linked to coordinated disinformation campaigns and were used by Russia to sow discord in the United States prior to the election of Donald Trump in 2016.
The Kessler team’s findings follow a report from the US State Department’s Global Engagement Center (GEC) which found that Russian state-funded social media accounts began to post anti-Western messages about the cause of the epidemic on 21 January.
The GEC, which was set up to track and expose propaganda and disinformation, found that thousands of Russia-linked social media accounts had launched a coordinated effort to spread alarm about the coronavirus. The disinformation campaign promoted unfounded conspiracy theories including that the US was behind the Covid-19 outbreak.
“Russia’s intent is to sow discord and undermine US institutions and alliances from within, including through covert and coercive malign influence campaigns,” Philip Reeker, the acting Assistant Secretary of State for Europe and Eurasia, told Agence France-Presse (AFP).
However, one expert analyst who had seen the GEC report, which has not been publicly released, told us that the evidence presented by the US government was fairly scant. “They were very cagey about where their analysis was coming from,” the expert said. “The GEC report was super-Twitter centric… That’s understandable but it’s a problem.”
The Italian data shows that the turning point for the “infodemic” came on 24 February, when the first deaths were reported in Italy; it peaked four days later, when Microsoft co-founder Bill Gates wrote an op-ed in the New England Journal of Medicine calling the coronavirus a “once-in-a-century pandemic”.
The Kessler team developed an infodemic “index” to rank how likely a Twitter user in a particular country was to be exposed to misinformation about coronavirus.
Of the top 10 countries suffering coronavirus outbreaks, Iranian Twitter users were most exposed to social media misinformation, the data suggests. South Korea’s Twitter users were the least exposed.
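The construction of the index is not specified. One plausible reading, offered purely as an assumption, is the share of virus-related links shared by users in each country that point to unreliable sources:

```python
# Hypothetical per-country counts of virus-related links shared on Twitter;
# the figures below are invented for illustration, not the Kessler data.
links = {
    "Iran":        {"unreliable": 800, "total": 2000},
    "South Korea": {"unreliable": 50,  "total": 2500},
}

def exposure_index(counts):
    """Fraction of a country's virus-related links that point to unreliable sources."""
    return counts["unreliable"] / counts["total"]

# Rank countries from most to least exposed
ranked = sorted(links, key=lambda c: exposure_index(links[c]), reverse=True)
print(ranked[0])  # the country whose Twitter users are most exposed
```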
Last month, 73 Iranians died and hundreds were hospitalised after drinking toxic alcohol following rumours which claimed that drinking alcohol eliminates the coronavirus.
More than half of all global tweets spreading misinformation about Covid-19 came from the US, the team found, although they cautioned that geo-tagging mechanisms on Twitter could be “spoofed” to make it seem like a tweet was coming from another country.
Tortoise has formed a collaboration with the Kessler unit to continue analysing disinformation on the virus; the results will be published in our app throughout the pandemic. The Kessler team’s full findings will appear in an upcoming scientific paper.
We wanted to take a close look at the types of misinformation being spread about coronavirus, so we turned to Google’s Fact Check Explorer.
This is a searchable database that collates examples of web pages or social media posts which have been flagged and verified by fact-checkers around the world.
We narrowed down the search to all coronavirus related fact-checks between 22 January, when it was announced that Wuhan would go into lockdown and the first US case was confirmed, and 18 March.
In total, 1,112 results came up. Misinformation was found in 25 languages and had been debunked by fact-checkers working for organisations such as AFP (part of Facebook’s third-party fact-checking programme), Newtral (Spain), Full Fact (UK), BOOM (India), Tempo.co (Indonesia) and Teyit (Turkey).
You can explore the data here. Each circle represents one bit of misinformation. Grouped by theme, you can filter to see the platforms they have appeared on and the topics that keep coming up. Tap to discover the claim and the fact-checkers’ verdict.
Technology giants such as Facebook and Twitter have been praised for taking proactive action against the coronavirus “infodemic”.
But some platforms have taken stronger action than others, and many still publish misinformation on the coronavirus with worrying regularity, analysis suggests.
Facebook in particular has tried to get on the front foot, donating $1 million to local newsrooms to help them cover the crisis and partnering with third-party fact-checkers through the International Fact-Checking Network. An analysis of coronavirus search terms shows that many – but not all – references to misinformation on coronavirus now return zero results or are accompanied with the message “false information”.
Twitter too has taken action, removing thousands of tweets on coronavirus that it said “go directly against guidance from authoritative sources of global and local public health information”. Unlike Facebook, however, Twitter does not offer users a direct option to report misinformation.
Experts have voiced particular concerns about encrypted chat applications such as WhatsApp, the Facebook-owned service which has been described as a “petri dish” of misinformation. Often forwarded by well-intentioned but fearful individuals, hundreds of fake messages have spread misinformation on the platform, including some suggesting that the army was preparing to enforce quarantine in Britain.
“It is clear… that a lot of false information continues to appear in the public sphere,” European Commission Vice President Věra Jourová said last week. “In particular, we need to understand better the risks related to communication on end-to-end encryption services.”
Carl Woog, a WhatsApp spokesperson, told CNN Business last week that the company was not going to send out a mass message to users asking them to seek out accurate information. Instead, it has donated money to the International Fact-Checking Network and has launched a coronavirus information page.
Despite the actions taken by social media companies, examples of fake or misleading news about the coronavirus are easy to find.
One YouTube video posted on 23 February is still online despite propagating unfounded theories that 5G technology is responsible for the deaths in China caused by the coronavirus.
YouTuber Dana Ashlie has racked up more than 1 million views from her video which suggests that the symptoms of Covid-19 were actually caused by humans “hit with 60GHZ waves and its impact on the uptake of oxygen via the hemoglobin”. Apparently to avoid detection by YouTube’s algorithms, she spells out words like “C.H.I.N.A” and “WU-HAN”.
YouTube also contains videos touting unproven cures for Covid-19, such as the use of essential oils or chlorine dioxide, a type of bleach.
The ninth result brought up by the company’s algorithms when we searched for “coronavirus essential oil” is a video called “Essential Oils for CoronaVirus.” In the video, which was posted on 7 March but is still online, a woman called Raven Astrea makes clear that her recommendations are not a replacement for washing hands. But she goes on to say that five essential oils, including lavender and peppermint, have “naturally disinfectant properties” and “help with breathing and lung issues”.
A YouTube video called “Combatting the Coronavirus with Chlorine Dioxide”, seen by more than 16,000 people, sees the host urge viewers to “help themselves” combat coronavirus by “using miracle mineral solution, the active ingredient of which is chlorine dioxide. It kills every virus, every bacteria, every parasite.” Last year, the US Food and Drug Administration (FDA) warned about the dangers to health of drinking MMS.
YouTube declined to comment.
A quick search on Facebook for “Corona and 5G” shows that misinformation remains on that platform too. One video, which has attracted 95,000 views and appears to be a republication of a video already debunked by fact checkers, shows a British woman describing herself as a nurse and stoking unfounded fears about the technology.
“In Wuhan these people suddenly just fall over,” she says. “They have a dry cough. I’ve never seen it. It doesn’t happen. However, it does happen with 5G.”
On Instagram, too, conspiracy theories proliferate. Bianca Paradis, a nutritionist with 11,000 followers, shows herself drinking juice with ginger to “fight off the Coronavirus”. A site called Oliveology published a post stating: “Hello oregano oil, goodbye coronavirus.”
Last night, a Facebook spokesman said the company had removed the Paradis post because it promoted a false cure or prevention method.
The other posts remained up for the moment, he said, because some false claims around coronavirus were reviewed by third-party fact-checkers such as Full Fact. The spokesman said he expected the 5G conspiracy theory video to be removed after being reviewed in this way. As of last night, it was still online.
Our data set contains 235 examples of misinformation reviewed by fact-checkers and captured by Google Fact Check Explorer between 22 January and 18 March 2020. For practical purposes, we selected only English-language entries and excluded any that did not meet the criteria of “mis” or “disinformation” – so no satire, opinions or claims that fact-checkers found to be true. We analysed the 235 unique bits of misinformation that either aren’t true (74 per cent), claim to be true without any evidence (15 per cent) or mislead by taking information out of context (11 per cent).
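The filtering described above amounts to a simple pass over the exported fact-check entries. The field names and rating labels below are assumptions about how such an export might look, not Google’s actual schema.

```python
from collections import Counter

# Invented records mimicking a fact-check export; 'rating' labels are assumptions.
entries = [
    {"lang": "en", "rating": "false"},
    {"lang": "en", "rating": "unproven"},
    {"lang": "en", "rating": "out of context"},
    {"lang": "en", "rating": "satire"},  # excluded: not mis/disinformation
    {"lang": "en", "rating": "true"},    # excluded: the claim held up
    {"lang": "es", "rating": "false"},   # excluded: not English
]

# Ratings that count as mis- or disinformation for this analysis
KEEP = {"false", "unproven", "out of context"}

kept = [e for e in entries if e["lang"] == "en" and e["rating"] in KEEP]

# Share of each misinformation type among the kept entries
counts = Counter(e["rating"] for e in kept)
for rating, n in counts.items():
    print(rating, round(100 * n / len(kept)), "per cent")
```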
Reporters: Ella Hollowood and Alexi Mostrous
Editor: Basia Cummings
Data visualisation: Chris Newell
Design: Oliver Bothwell
Research: Barney Macintyre, Jack Kessler, Tanya Nyenwa