
Is platform transparency going backwards?

Meta plans to shut down CrowdTangle, a tool for tracking the spread of content on Facebook

Here’s what you need to know this week:

  • Affairs of state: CrowdTangle down – is platform transparency going backwards?

State-by-state:

  • Google faces another fine
  • Apple is taking on Klarna
  • Microsoft struggled to inspire office work
  • Meta enabled paedophiles
  • Amazon’s Twitch is under pressure in Russia
  • Tencent said no to crypto

Affairs of state: Is platform transparency going backwards?

CrowdTangle began as a tool for publishers to identify influencers and measure their footprint on social media. It has since become a key source for researchers, fact-checkers and journalists tracking the spread of misinformation and how users engage with it. 

Its closure is a sign that the transparency of Meta’s social media operation is going backwards, since it was one of the few windows on the inner workings of Facebook.

CrowdTangle’s founder and CEO had been pushing for Facebook to share more of its data. He left the company in 2021. A spokesperson for Meta said an “even more valuable” tool for researchers to understand the spread of content on Facebook would be provided in future. What that might be, they haven’t said. 

Without CrowdTangle, is Meta transparent enough? No. Priyanjana Bengani, a fellow at the Tow Center for Digital Journalism, says there are two aspects to transparency – data and algorithms.

Meta could give access to data on the activity that takes place on its platforms – the way users interact with content. It could also be more transparent about how its algorithms function – the rules that define what content is served to which users, and when. 

“So far, platforms have focused predominantly on data,” Bengani says. “Algorithmic transparency is far more complicated, especially because so much of it is influenced by user behaviour coupled with platform interventions. These interventions, what the features are, when they were rolled out, and the scale of the rollout, are often not conveyed to users.” 

In other words: how algorithms work is hard for external researchers to measure because the underlying metrics are not public. This makes longitudinal studies pretty hard too, and you end up in a place where different pieces of research reach different conclusions. 

What should the platforms provide? More data. Dr David Stillwell, professor of computational social science at Cambridge University’s Judge Business School, told us research into social media harms often runs into trouble because the personal data that companies typically share excludes so-called derived data.

Platforms create derived data using the original data captured from the user and their activities. It shows inferences about users’ behaviour, the things they are more or less likely to click on, and when. 

“For example, Facebook predicts your interests by recording what adverts you click on and the posts that you read in the news feed. Facebook doesn’t share this data, so researchers find it hard to research whether people who read certain posts on social media are unhappier than people who read other types of content,” Stillwell said.

Derived data is also the data that best reflects the way search and content-serving algorithms actually work. 

“This is actually key because research shows mixed evidence on whether social media makes people unhappy. It’s not that using social media by itself makes people unhappy. The latest idea is that it’s the way that you use social media that makes the difference.” 

Would platform transparency help? Yes, with caveats. Even if Facebook shared more data with researchers, further study would be needed to understand the relationship between social media use and real-world harms like mass shootings and suicides.

“Is the platform itself causing harm, or merely reflecting the people in society?” asks Luke Thorburn, a researcher at UKRI’s Safe and Trusted AI centre. He points to the fact that it’s often unclear whether the relationships between social media activity and real-world harm are causal or correlational.

Data on engagement – what users like, dislike, click and share – can reveal more about the impact a specific piece of content is having. 

“Fundamentally, if we want good answers to the causal questions, rather than just showing correlations, we really need researchers to have the ability to run experiments on platforms, and to do the sort of testing that platforms themselves do all the time.”

Twitter, which has an interface for researchers to access data about content and traffic, is considered by some to be a gold standard. This type of transparency would help answer some questions about specific harms but most other platforms are a long way behind Twitter. 

CrowdTangle didn’t provide derived data or information on specific Facebook profiles; only on groups and posts.

What next? The companies often deflect requests for access to data on grounds of user privacy. But the tension with the research community over access is palpable. “I think we do need some mechanism that would allow researchers to propose, with minimal right to veto from platforms, the ability to run experiments on the platforms,” Thorburn says.

Meta recently pledged to give researchers access to information on adverts about social issues, elections and political affairs. It says it will publish information about targeted ads – including locations, demographics and interest categories – by the end of this month. But this is just one platform, and one corner of the social media landscape.

It looks as if CrowdTangle will be supported until November. After that, the light shed on Facebook’s inner world could grow much dimmer.


Google: Anti-competitive behaviour

Another week, another antitrust complaint against Google. This time Jobindex, a Danish online job-search platform, has complained to the EU that Google unfairly favoured its own job search service. Margrethe Vestager, the EU antitrust chief, said three years ago that she was looking into Google for Jobs but has yet to take any action. This could be the case that revitalises her interest. If so, Google should be concerned. Vestager has fined the company more than €8 billion in recent years for various anti-competitive practices. 


Apple: Pay later

Apple is taking a bite out of Klarna. Its new operating system for phones – iOS 16 – will introduce Apple Pay Later to US users, allowing them to spread the cost of a purchase into four payments over six weeks, without interest or fees. Buy Now Pay Later (BNPL) services are relatively unregulated and have been criticised for the way they target low-income groups. In December 2021, Panorama reported that 15 million UK adults were actively using BNPL, with the main operators being Klarna, Clearpay, Laybuy and Paypal. Apple’s adoption arguably normalises the controversial model still further.


Microsoft: Return to office

Microsoft – the maker of Office – is struggling to get employees back… to the office. Since April the company has required its 57,000 US-based workers to be in the office at least 50 per cent of the time unless a manager gives permission to WFH. But this month Jared Spataro, Microsoft’s Corporate Vice President of Modern Work (check out the job title) said that target wouldn’t be reached until early 2023. Workers “are not the same people they were two and a half years ago,” Spataro said. “They don’t buy the argument that they can’t have any flexibility.” 


Meta: Instagram abuse

A shocking exclusive from Forbes, which has identified more than a dozen Instagram accounts with over half a million followers sexualising child and teenage models. One particular photographer allegedly earned money selling photos to paedophiles, using Instagram accounts which have only recently been shut down. Even after the photographer’s arrest his two other accounts remained active. In a depressingly familiar pattern, it was only when Forbes contacted Meta that the material was taken down. 

Also worth following: In the wake of the US Supreme Court’s ruling to eliminate the constitutional right to abortion, Meta has clamped down on internal communications on the subject. The New York Times reported this week that people within the company, speaking on the condition of anonymity, had said that an internal memo warned staff that Meta “would not allow open discussion” of abortion or the ruling at work. 


Amazon: Twitch in Russia

Amazon-owned Twitch, the video game streaming platform, will be fined by Roskomnadzor. The Russian communications administration, which oversees regulation of foreign digital platforms, has tangled with big tech over content, data and censorship since the invasion of Ukraine began. Twitch is accused of failing to store Russian users’ personal data on servers within Russia, rather than overseas. Roskomnadzor requires that all data from Russian citizens be stored within Russia.


Tencent: No to crypto

Tencent’s WeChat will ban accounts that persistently discuss crypto. The massive Chinese messaging platform has updated its terms of service to warn users that continued discussion of NFTs and other speculative crypto products will result in a ban. China banned crypto mining and transactions in 2021, taking a dim view of the extremely volatile and unregulated nature of the industry. Tencent’s WeChat is falling in line with this stance by insisting that all accounts engaged in discussion of crypto need to stop it. Now. 

Thanks for reading,

Luke Gbedemah
@LukeGbedemah

Alexi Mostrous
@AlexiMostrous