Friday 10 January 2020

Heard it at a ThinkIn

‘Greater harms’

The age of artificial intelligence will be one of untold opportunities – but also of new risks

By Liz Moseley

“Like a Food Standards Agency but for data”

On the app this week, it’s mostly been about Apple – a company that, according to its marketing at least, considers privacy to be a fundamental human right. In the room during Tuesday’s ThinkIn on artificial intelligence and data, we were less sure. Tortoise member Priya reflected the mood, saying, “We cannot, sadly, assume we live in a world where people will use privacy for good.”

Another member, James, who works in AI at HSBC, suggested that transparency is key to shifting who we trust. “We’re already operating under an extremely stringent governance and policy framework.” The complexity of that framework exacerbates our feeling of unease. “What more can we do to promote the transparency of what organisations and individuals are doing? True use is hidden within the depths of T&Cs, rather than being promoted, paramount and upfront.”

He proposed a “data hygiene” rating, like the ones you see in the window of your local restaurant. “A simple 1-5 rating would give consumers an instant feel of the legitimacy and intent of the software or service they’re considering.”

The rating would allow a person, at a glance, to see:

  • Who’d have access to the data and for how long – would it be shared or sold on?
  • How securely it would be stored – would it be pseudonymised or anonymised?
  • What it would be used for – commercial development or just compliance?
  • What the moral or ethical health of the organisation’s data policy is – do they monitor algorithmic biases in the dataset that could cause or perpetuate discrimination?
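No such rating system exists yet, but James’s four questions lend themselves to a simple scoring scheme. The sketch below is purely illustrative – the class, field names and weighting are all invented here, not part of his proposal – but it shows how the four criteria could collapse into a single 1–5 headline number:

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    """Hypothetical answers to the four questions above, each scored 0-5."""
    access_and_retention: int   # who sees the data, for how long - is it sold on?
    storage_security: int       # pseudonymised or anonymised, encrypted at rest?
    purpose_limitation: int     # compliance only, or broad commercial reuse?
    ethical_oversight: int      # are algorithmic biases in the dataset monitored?

def hygiene_rating(policy: DataPolicy) -> int:
    """Collapse the four sub-scores into one 1-5 rating.

    The average is truncated (rounded down) so that a single weak area
    is not hidden behind otherwise strong scores.
    """
    scores = (policy.access_and_retention, policy.storage_security,
              policy.purpose_limitation, policy.ethical_oversight)
    average = sum(scores) / len(scores)
    return max(1, min(5, int(average)))

# Example: strong security but weak ethical oversight drags the rating down
print(hygiene_rating(DataPolicy(4, 5, 4, 1)))
```

Like the restaurant scheme it borrows from, the point is the single glanceable number in the window, not the arithmetic behind it.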


“Between us, we’ve lost over 10 billion passwords”

When Dave Palmer of Darktrace, a company that builds “benevolent” artificial intelligence software to fight cybercrime, casually told us of “1.4 trillion lost or compromised credentials” currently being traded by cyber criminals, there was an audible intake of breath. Skurio, a cybersecurity company in Belfast that monitors the deep, dark web to identify data breaches on behalf of corporate clients, currently has 10 billion such records on its platform.

Dr Robert Elliott Smith, author of Rage Inside the Machine: The Prejudice of Algorithms, explained that the reason AI is so good at hacking our personal bank details is also the reason it can perpetuate stereotypes and exacerbate discrimination. AI is an exercise in simplifying human behaviour in order to generalise who we are and what we’ll do: how we’ll vote, what we’ll buy, and even what our passwords are. On the last of these, the technology doesn’t even need a trove of sensitive data points to start cracking your codes – the car you drive, the team you support and your date of birth could be enough. Time to change your Pa55w0rd.
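Smith’s point – that a handful of public facts narrows the search space dramatically – can be made concrete with a toy candidate generator. This is a deliberately simplified illustration, not anyone’s actual cracking tool: it just combines known personal facts with the common “leetspeak” substitutions and suffixes people use to dress up weak passwords.

```python
from itertools import product

def candidate_passwords(facts, max_candidates=20):
    """Toy illustration: enumerate likely passwords from personal facts.

    Real attacks use far larger dictionaries and learned rules; this
    merely shows why 'arsenal1985!' is not a secret.
    """
    substitutions = str.maketrans({"a": "4", "e": "3", "o": "0", "s": "5"})
    suffixes = ["", "!", "123"]
    candidates = []
    for fact, suffix in product(facts, suffixes):
        # Try the fact as-is, capitalised, and in leetspeak
        for variant in (fact, fact.capitalize(), fact.translate(substitutions)):
            guess = variant + suffix
            if guess not in candidates:
                candidates.append(guess)
            if len(candidates) >= max_candidates:
                return candidates
    return candidates

# Facts an attacker might scrape: the team you support, your car, your birth year
print(candidate_passwords(["arsenal", "audi", "1985"])[:5])
```

Three scraped facts already yield dozens of plausible guesses – which is why “Pa55w0rd” tricks buy so little.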


“I was at PC World before I realised it was a scam”

Tortoise member Sophia recounted the strangely hasty email she’d received from her CEO asking her to buy hundreds of iTunes gift cards. She didn’t buy the cards but said, “I was at PC World and they were like, ‘are you sure?’” Shaking her head and smiling, she added: “It was Monday morning.”

Here’s how it works. Scammers hack a company email server to send a message that appears to be from the boss. The email asks the victim to buy a number of gift cards urgently, usually for a corporate gift or incentive scheme. Once the cards are loaded with cash, the victim is asked to share the unique gift card numbers with the scammer who, in turn, either sells the numbers on or goes on a spending spree.

Dave Palmer smiled at Sophia’s story. “That’s human beings today and that’s a really expensive operation, having human beings awake at the right time to do that.”

The scam is quick and largely untraceable, but AI may soon make such human-run operations obsolete. Computer programmes that use pattern recognition are already smart enough to mimic how an individual communicates – with whom, about what, when and on what channel – with extraordinary accuracy. Why staff a call centre to get your hands on a few thousand pounds when you can write an algorithm that fakes human digital communication so accurately that you can crack open entire institutions?
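Mimicry of this kind is done with far more sophisticated models than the sketch below, but even a word-level Markov chain – a toy assumption, not the technique Palmer described – captures the basic idea: learn which words a sender tends to put after which, then generate new text that echoes their phrasing.

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the training text."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def mimic(chain, start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

# Train on a victim's (entirely hypothetical) emails, then generate in their voice
emails = "please action this today please confirm receipt today thanks"
chain = build_chain(emails)
print(mimic(chain, "please"))
```

Scale the same idea up to a model trained on someone’s full inbox – timing, recipients and channel included – and you get the convincing fakes the panel was warning about.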

Read more and get involved


ThinkIns are the engine of Tortoise journalism. We’d love you to take part in our ongoing work on AI’s impact on how we live and work as the story develops.

Write to liz@tortoisemedia.com if you’re interested in contributing to our AI reporting, including the development of the “data hygiene” rating system idea.

You can watch Tuesday’s ThinkIn in full here.

Come to our next AI ThinkIn on Tuesday 4 February at 8am in our London newsroom. “Governments and AI: Will AI be fair?” Book here.

Our Global AI Index, published in December, ranks countries’ readiness for AI. See Part 1 of the full report here.
