
Friday 14 February 2020

Primary Sources

Harm’s way

The British government has a new plan for regulating online spaces – but is it up to scratch?

By Raphael Hogarth

The government is going to “make Britain the safest place in the world to be online”. You can be sure of that, because the government won’t stop saying it.

Theresa May said it in her 2017 manifesto, and the government said it in its 2019 white paper on regulating the internet. Boris Johnson said it in his general election manifesto, and the government just said it (five times) in its response to a consultation on “online harms”.

Assailed by story after story about the horrors shared on social media – from child sex abuse images to terrorist propaganda, incitement to suicide and vitriolic abuse, not to mention systematic disinformation – politicians in Britain and abroad feel they have to act. The technology companies themselves have now shown that, left unregulated, they will talk about cleaning up their sites far more readily than they will actually do it.

Now the government’s consultation response has given the clearest indication yet of how ministers want to approach regulation of the web. Here are the key passages, and what they mean.

The “duty of care”

“The Online Harms White Paper set out the intention to improve protections for users online through the introduction of a new duty of care on companies and an independent regulator responsible for overseeing this framework.”

duty of care: The idea of a “duty of care” is at the heart of the government’s scheme, cribbed from a paper by Professor Lorna Woods and William Perrin of the Carnegie Trust. Social media platforms and some other tech companies will be under a new legal duty to take “reasonable steps” to keep their users safe. The idea is born of an analogy with the offline world: just as the owner or manager of a shop or a pub must take “reasonable steps” to ensure that their physical site is safe for visitors, so tech companies must ensure that their websites are.

independent regulator: The government has confirmed that it will be Ofcom, the existing media regulator, that keeps tabs on whether companies are complying with this duty of care. If they are not, Ofcom will have powers to bring them into line. It will also be Ofcom’s job to publish “codes”, which give the companies a more detailed picture of how to discharge their duty of care.

Freedom of speech

“The consultation responses indicated that some respondents were concerned that the proposals could impact freedom of expression online…

…Safeguards for freedom of expression have been built in throughout the framework. Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach…

…To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that may not be illegal but has the potential to cause harm, such as online bullying, intimidation in public life, or self-harm and suicide imagery…

…Companies will be able to decide what type of legal content or behaviour is acceptable on their services. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently.”

freedom of expression: When the government first published its white paper on online harms, it sounded as though ministers were limbering up to give a regulator, led and staffed by civil servants, the power to decide what people should and should not be allowed to say online. Tech companies panicked at all that interference with their business – and rights campaigners panicked about the impact on freedom of speech. Those campaigners were particularly worried because some of the harms covered by the government’s scheme – cyber-bullying and disinformation, for instance – include a lot of lawful content, and are inherently subjective.

wider systems and processes: An important clarification from the government. Ministers are now adopting what they call a “systems-based approach”. The new legislation is not actually supposed to make any content illegal that was legal before, and it will not empower Ofcom to make rulings on individual images, posts or messages. Instead, the scheme will allow Ofcom to ensure that companies have “systems” and “processes” in place to stop harmful content from appearing on their sites in the first place, to flag it when it does, and to deal with it – whether by taking it down or stopping the algorithms from promoting it – once flagged.

Free speech campaigners will still be left wondering, though: how can the regulator possibly decide whether a company’s “systems and processes” are adequate, without asking whether they let through content which oversteps the line? And if Ofcom has to make that judgement, then isn’t it really Ofcom that decides where “the line” is, after all?

differentiated expectations: This is another concession to industry. Platforms know that complying with the new scheme is going to cost them money. They do not think they should have to spend as much to deal with bullying between schoolchildren, nor that their compliance in that area should be backed by the same kind of sanctions as when they deal with Isis recruiters. The government seems to accept this. That said, campaigners for regulation, like the NSPCC, will be keeping an eye on the government’s plans for lawful content, because it can cause the most harm of all. Posts on forums that purport to offer support but really normalise self-harm and suicide, for instance, can have devastating consequences.

able to decide: This is another nod to the concerns about freedom of speech. The authors of the original “duty of care” proposal envisaged a system where platforms could host harmful, legal content – including profoundly nasty, vicious, vitriolic user posts – so long as they had been straight with users about what they were getting into when they signed up to the site, enforced age limits, and made sure that users never slipped into criminal behaviour. The worry, though, is that even if everyone has agreed to be part of a platform that allows political disinformation to flow freely, that doesn’t make it any less harmful.

Enforcement powers

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way…

…Internet service provider (ISP) blocking represented the main area of concern across discussions. Industry stated in principle support in some cases (e.g. when websites are set up for solely unlawful purposes), but argued that it would need to be mandated only as a last resort following due process and underpinned by the legal framework…

…Senior manager liability emerged as an area of concern. Discussions with industry highlighted the risk of potential negative impacts on the attractiveness of the UK tech sector.”

ISP blocking: If the companies fail, then what can Ofcom actually do about it? The go-to sanction for companies that do not play ball with the regulator will be fines, but the government also mooted some much more serious measures in the white paper. It raised “disruption of business activities”, which means forcing other companies to stop directing their users to the offending site (for instance via search results) and, for the very worst offenders, “ISP blocking”. This means requiring internet service providers to block access to a site – in effect, cutting it off from UK users altogether against its will.

Senior manager liability: This was an even more controversial suggestion: holding executives personally accountable for major breaches, either by fining them or by making them criminally liable for their failures. Tech bosses have complained that this would make the UK an unattractive place to do business, and ministers seem to have backed off for now: decisions on both ISP blocking and senior manager liability have been postponed until “the Spring”.

Who gets regulated?

“To be in scope, a business’s own website would need to provide functionalities that enable sharing of user generated content or user interactions. We will introduce this legislation proportionately. We will pay particular attention to minimising the regulatory burden on small businesses and where there is a lower risk of harm occurring…

…Press freedom organisations and media actors also expressed the view that journalistic content should not be in scope, to protect freedom of expression and in accordance with established conventions of press regulation.”

sharing of user generated content: The big controversy here is whether only publicly shared content should be within the regulator’s remit, or private communications too. Children’s charities want private communications covered, as this is how a lot of child grooming takes place, and how child sex abuse images get shared. The risk to users’ privacy, though, is obvious. The government hasn’t worked out how to resolve this.

proportionately: One worry about online harms regulation is its impact on competition. Facebook and Google can afford to hire some extra moderators and lawyers to comply with the duty easily enough, but smaller, newer disruptors might struggle. That could leave us stuck in a world where the major tech brands rule the internet: far from bringing them to heel, regulation might only entrench their power. The government’s solution is to say that its priority will be going after the sharks, giving the minnows a chance to grow. That reassures industry, but campaigners worry that this approach will only drive online pests onto smaller platforms that are harder to police.

journalistic content: An article on a newspaper website is not itself “user-generated content”, but media organisations are concerned that the line is easily blurred. What happens when a journalist, or someone else, shares a newspaper article on Twitter? What about the comments on MailOnline?

What harms?

“Regarding harms in scope, several respondents stated that the 23 harms listed in the White Paper were overly broad and argued that too many codes of practice would cause confusion, duplication and, potentially, an over-reliance on removal of content by risk-averse companies. We do not expect there to be a code of practice for each category of harmful content.”

overly broad: The range of harms included in the government’s white paper was dizzying. The government wanted its new scheme to deal with: terrorist content, extremist content, content illegally uploaded from prisons, the sale of drugs and weapons, cyber-bullying, self-harm and suicide content, disinformation, manipulation, abuse of public figures, grooming, modern slavery, hate crime, incitement of violence, organised immigration crime, underage sharing of sexual images, child sexual exploitation, extreme porn, revenge porn, children accessing porn, children accessing other things they shouldn’t, harassment, the promotion of female genital mutilation, online intimidation and violent content.

risk-averse companies: There was much huffing and puffing from industry that companies would end up removing plenty of harmless content to avoid incurring Ofcom’s wrath. It was never clear how this sulking could be reconciled with the companies’ incentives to deliver a service that anyone would want to use, but the government pays lip service to the concern: it is reassuring firms that it will make compliance as easy as it can.

A world-beating system?

“We have an incredible opportunity to lead the world in regulatory innovation.”

Britain is not the first country to tackle online harms. Where terrorism and hate speech are concerned, companies operating in Germany can be fined tens of millions of euros if they do not remove illegal content fast. Australia has laws imposing criminal liability on companies and their executives if they fail to remove “abhorrent violent material” expeditiously, and can levy big fines for failure to take down harassment. The United States and the EU are working on their own systems too.

The British proposals are nevertheless unusual – in the breadth of the harms concerned, and in creating a system built around a “duty of care” to users.

There is ambition there, but the overall story of this consultation response is one of retreat.

The government has backed away from many of the more outlandish suggestions in the original white paper – that the regulator could be policing individual pieces of content, or that legislation would make some types of material illegal for the first time – recognising reasonable worries about choking off speech.

On the toughest questions, though – blocking, liability for executives, what to do about private communications, how a regulator can really work out whether a system is “good enough” – the government has postponed the day of decision yet again.

Expect to hear more, come the Spring, about making Britain The Safest Place To Be Online.
