The Israel-Hamas war is the first major misinformation test of its kind for Elon Musk’s social media platform X.
So far, it’s not going well. The EU has announced a formal investigation of illegal content and disinformation, using powers under the newly approved Digital Services Act.
And while it’s not just on X that false war-related material is spreading – EU commissioner Thierry Breton has publicly written to TikTok and Meta too – researchers say the problem is particularly bad on Musk’s site.
In reply to a previous letter from Breton, X said it had acted on dozens of EU requests to remove illegal content since the conflict began.
The site has until Wednesday to reply to the EU’s key questions about its crisis response protocol, and until the end of the month for the rest.
Failure to comply could result in a penalty of up to 6 per cent of global turnover – something X could do without right now.
Since Hamas attacked Israel last Saturday, social media platforms have been flooded with violent imagery, hateful content and false information. Examples include old or repurposed videos from previous conflicts taken out of context, clips from video games, doctored images and fabricated claims.
“It seems most prominent on X,” says Jack Brewster, an editor at NewsGuard, which monitors online misinformation. It’s a notable departure from the early weeks of the Russia-Ukraine war, when misinformation mainly appeared on TikTok.
Experts blame two key changes made by Musk since he took over in October 2022: the introduction of paid-for verification and the gutting of X’s moderation teams and policies. Any account can now pay to be verified, giving the illusion of authenticity or credibility. Reporting has shown that verified accounts are algorithmically boosted, and they are also eligible for monetisation.
Of the 25 key false narratives NewsGuard is tracking, most are coming from, or have been spread by, verified X accounts. And while some of the misinformation can be linked to a particular political agenda, many X users appear to be sharing disinformation in order to go viral.
“A lot of people are setting up accounts and trying to capitalise on the global interest in this conflict,” says Brewster. NewsGuard’s analysis suggests that many verified accounts getting traction have been set up recently, and often share links to money-making schemes among the misinformation.
Musk has drastically scaled back X’s capacity for fact-checks and moderation, and in September an EU report found the company had the highest ratio of disinformation of all the large social media platforms. The company is increasingly reliant on its Community Notes system, which allows specific users to fact-check content and flag it as false or misleading. While useful, it’s a system which places the burden of fact-checking onto users rather than experts – and it isn’t always consistent.
Tortoise analysis of popular false narratives found that the same content is often flagged as misinformation in one post but carries no Community Note in another. It’s not just on small accounts, either. A faked White House document claiming President Biden had authorised $8bn in military aid to Israel was shared by multiple verified accounts and viewed hundreds of thousands of times – with many related posts still available days later without any clarification that the information had been proven false.
The result is that X has become a site that experts say cannot be relied on to provide accurate information during a global crisis.