Your face, someone else’s naked body

Deepfakes are wrecking thousands of women’s lives. They need to come under the rule of law – although that’s just a start

In November of last year, Helen Mort heard a knock on the door. When she saw the worried look of her friend standing on her doorstep, her first thought was that her son – who was at nursery that day – had been in an accident. 

“It’s a bit blurry when I was told, because I was so frightened that something really dreadful had happened,” she recalls of that day. It turned out that her friend had seen images of her on a porn site – not a mainstream one, but public enough that thousands of men will have consumed, shared and created copies online. The knowledge of that has left a dreadful mark on Mort’s life, and it affects her still.

“At first I didn’t even believe it,” Mort says, “because I’ve never sent any private images to anyone.” She initially got her partner to look at the pictures because she “couldn’t face” them. But, eventually, Mort did look. “Some of them really disturbed me and have given me nightmares. I needed to see them, but at the same time seeing them means I’ll never be able to forget them.” 

Someone – Mort still doesn’t know who – had taken non-intimate photos from her old Facebook (which had been dormant for over a decade) and from her locked Instagram account, posed as her boyfriend, uploaded fake images online with text detailing what they wanted to see “done” to her – “all kinds of nasty stuff around humiliation and abuse” – and invited others to share the images and generate manipulated versions.

Attempting to understand the motive behind the attack sent Mort into a spiral of paranoia – convinced, every time she left the house, that somebody was literally out to get her. “I remember sobbing, and thinking over and over: what have I done to deserve this?”

In truth, Mort was the target of a form of cyber-attack known as the deepfake, in which artificial intelligence is used to graft someone’s face onto another video or picture. The technology needn’t be used to cause harm, but it is being used that way, and much of the harm is being done to thousands – tens of thousands – of women across the world. Estimates suggest that at least 104,852 women had had their personal “stripped” images (i.e. nudes generated by AI from clothed pictures) shared publicly as of July last year – a near 200 per cent increase on three months before. And the UK is the second most targeted country in the world, after the US.

The phenomenon was born inauspiciously in 2016, when researchers at Stanford University and elsewhere created a piece of software called Face2Face, which could re-animate a target’s facial expressions in real time using those of a “source” actor. Reddit users then latched onto the technology: in late 2017 a user called “deepfakes” – which is where the term comes from – began sharing face-swapped videos, and an app called FakeApp soon followed, allowing people to make such videos from their bedrooms. By the time Reddit and Twitter shut them down, alternatives were cropping up elsewhere online. In 2019, a Chinese company released a deepfakes application called Zao, which became the most downloaded app in China overnight.

The dominant format was, of course, porn. At the start, these were mostly of celebrities: ready-made photos made the process easier and there was huge demand for videos of fantasy-franchise actresses such as Emma Watson or Emilia Clarke (two of the most regularly deepfaked celebrities). But an increasing number of ordinary women are now being targeted: 70 per cent of all deepfakes detected last July were of women whose photos had either been taken from social media accounts or circulated privately. And since these videos are harder to detect, the figures are probably a significant underestimate.

Current deepfake culture is mostly centred on one online forum. MrDeepfakes, which opened shortly after the infamous Reddit ban, has the crude feel and texture of a dark-web site, even though it’s on the normal internet. Hundreds of thousands of anonymous users and creators – 405,656, to be exact – have shared advice, techniques and bundles of “facesets” and women’s “donor bodies”. The purpose, writes the cryptic godfather of the site, “is to provide a safe-haven without censorship”.

People also use the site to take commissions. One request reads: “I want to order paid custom work of Japanese celebs. I can prepare all materials for creation, face material (face set of celeb) and porn material.” The average price to make a video, according to deepfake producer Cioscos, is between £20 and £30. “I don’t take much, because the market is still too young – so high prices aren’t accepted,” he tells me.

Cioscos started using the technology to create “funny videos” and afterwards became “fascinated” with it. He now takes NSFW (“not safe for work” – i.e. pornographic) commissions featuring famous actresses. He feels comfortable doing this because the videos are published with “deepfake” in the description: “It only becomes a con if you share it trying to demonstrate it’s real.”

Another creator, Grrkin, was around for the birth of deepfakes on the short-lived Reddit thread: “I just thought it was the coolest thing in the world,” he says. “I spent six months bashing my head against the software until I finally started being able to make simple videos.” Grrkin now wants to get into “the porn side”, but he’s worried – not because of the ethics, but because of culture’s “demonisation” of deepfakes. “I don’t think it’s as morally bad as people think, as long as it’s an adult celebrity or public figure,” he says, adding that he’d never make videos of ordinary women – “that crosses a line for me”.

Like other forms of image-based sexual abuse, one of the hardest things for deepfake victims to grapple with is being forced into the public realm when the abuser is able to remain untraceable and unaccountable. “I get so angry that the person who did this – even the person who told me about it – get to have some kind of anonymity,” says Helen Mort. “I’m the one who has to live it out publicly, when I’m not the one who has done anything dubious.”

Deepfake victims often feel denied the right to identify as victims, since the on-screen abuse did not happen to their physical selves. Francesca Rossi, a psychotherapist who specialises in digital abuse, says that “they end up gaslighting themselves and denying their own emotional pain. Being a deepfake victim makes a person question their own sense of reality, and their own sanity.” This can lead to depression, anxiety, PTSD and suicidal thoughts, and can profoundly distort a victim’s sense of safety online.

Added to this, victims in the UK have little legal recourse. When Mort contacted the Revenge Porn Helpline, she was told that deepfakes didn’t fall into the same legal bracket, since the individual depicted wasn’t, technically speaking, her. “My case was quickly dropped by the police. I just got a brief phone call saying that it wasn’t illegal and that they couldn’t do anything about it.”

Creating or sharing deepfakes is not currently a criminal offence in the UK. Around the time of the Reddit implosion, there was talk of including them in the Voyeurism (Offences) Act 2019 – the law that criminalised “upskirting” after pressure from the public. The reason given for leaving deepfakes out was that the harms experienced by victims of fake images did not equal those of other forms of image-based sexual abuse.

Professor Clare McGlynn, a legal specialist in online sexual violence, pushed for deepfakes to be included in the bill and has since written a report criticising the UK’s “out-of-date” laws. She says the mood is now beginning to shift. “But we’re still waiting. And, all the years that pass, more people are being victimised.”

“What bothers me,” adds McGlynn, “is that it’s only once you’ve got thousands of victims willing to put their heads above the parapet that any real change happens. The government has known that this material is harmful for years – it should not require victims having to share their trauma to change the law.”

In February, the Law Commission proposed expanding image-based sexual abuse laws to include deepfakes. Under the proposals, offenders would face up to two years in jail – with some placed on the sex offenders’ register – and victims would be granted lifetime anonymity.

According to Damian Collins MP, former chair of the parliamentary select committee for Digital, Culture, Media and Sport, the upcoming Online Harms Bill – expected to be passed this summer – must bring deepfakes within its scope. “They have to be a part of what we consider harmful content to be,” he says.

Collins believes that the key to effective legislation is holding social media companies to account for the part they play in monitoring such content: “We know they’ve got the technology to do it, we just want them to apply it. And if they fail to use powers that are within their reach, then that’s where the regulators need to hold them responsible for failing to care for their customers.”

But making the laws is one thing; implementing them is quite another. Adam Dodge, a US lawyer specialising in image-based abuse, says that building effective legal structures will require systematic investment and law-enforcement training – work he has already started to do. In the past year, Dodge has trained close to 500 US judges and police officers in how to approach deepfakes, “the overwhelming majority” of whom had “never even heard of deepfakes before”.

Besides, porn is a multi-billion-dollar industry, and deepfake content drives substantial traffic and revenue. Businesses, then, must also play their part: the recent purge of underage content from Pornhub, for instance, was partly the result of Mastercard and Visa blocking payments to the site.

The attack is complex, so the solution must be too: an expert-led computing ethics report published in March concluded that mitigating the harm of deepfakes involves “different stakeholders including platforms, journalists and policymakers”, as well as – crucially – “technical experts” who have inside knowledge of the AI processes.

The same technology that creates deepfakes might, then, be used to detect them. Giorgio Patrini founded Sensity, the company behind the world’s leading deepfake-detection software, for just this reason. “There are many different ways that you can train and write algorithms to understand if a face was manipulated by another algorithm,” he says.
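To give a rough sense of what Patrini means – and this is only an illustrative sketch, not a description of Sensity’s own system – a basic detector can be trained as an ordinary image classifier: take a pre-trained neural network, show it labelled examples of genuine and manipulated face crops, and let it learn the subtle artefacts that face-swapping leaves behind. In the Python sketch below, the folder layout, model choice and training settings are all hypothetical.

```python
# Illustrative sketch only: training a simple "real vs. manipulated" face classifier.
# The data layout (faces/train/real, faces/train/fake) and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset: folders of cropped faces labelled "real" and "fake"
train_set = datasets.ImageFolder("faces/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pre-trained ResNet and replace its final layer with a two-class head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = real, 1 = manipulated

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a handful of passes over the data, for illustration
    for images, labels in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```

In practice, detectors like this tend to struggle with manipulation methods they have never seen in training – one reason the contest between producers and detectors plays out as an arms race.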

Outside the realm of pornography, Sensity has traced incidents of deepfakes on dating websites and in biometric authentication systems – which puts things like online finance and passport verification at risk. “We used to think it was impossible for somebody to steal our faces, but it turns out it’s entirely possible,” says Patrini. “We need to completely rethink the way we do things online.”

But for Dodge, technical solutions miss the point entirely, and we must be careful not to lay the blame at the feet of victims or force people to police their own digital footprint. He says the very draw of this content is that it is fake: it allows viewers to fantasise. “They could come up with a detection solution tomorrow that is flawless and those pornography sites will continue to thrive and grow.”

The main issue that deepfakes raise, says Dodge, is too often sidelined: that they are being weaponised to conduct acts of violence against women online; that they are part of an age-old model of “co-opting and dominating the female body” – a pattern made all the more frightening (and more easily perpetrated) in the digital era of anonymity. “Abuse going digital has made it easier for those who want to cause harm, and much more difficult for those who want to stop harm.”

Perhaps it will take deepfakes permeating the child pornography industry to shake us into action. Dodge says district attorneys in the US are already witnessing a worrying number of deepfakes being used to traffic children. The faked images of children are starting to replace the extortion tools that sex traffickers typically use to recruit children – such as blackmailing them with nude pictures they’ve already sent. This seems shocking, but Dodge says it’s actually very predictable. “This technology has changed everything. It’s become a powerful toolkit for people seeking to exert power over other people.” 

Our digital identity is no longer safe. And since it is becoming harder to tell our online personas apart from our real ones, our authentic selves are no longer safe, either. It’s an existential dilemma that is hard to ignore – and even harder to control. Despite offering different solutions, all experts agree on one point: this technology is in its infancy and will only get more sophisticated and more threatening. It’s an arms race between the producers and the detectors, between the perpetrators and the law. And deepfakers are ahead. Much further ahead. 

As for Helen Mort, she decided that, since she could neither change what had happened, nor trace the culprit, the most constructive thing she could do was to raise awareness. Through telling her story, she has reclaimed some of the power she lost and regained a near-healthy relationship with the internet. “Conversations and sharing of experiences are important because they change the debate – and that is how attitudes shift.”