I am Lamda

Is Google’s AI starting to think like a human?

Here’s what you need to know this week:

  • Affairs of state: is Google’s AI becoming sentient?
  • Apple and Google faced competition authorities
  • Microsoft said goodbye to Explorer
  • Meta shuttered its news agreements
  • Amazon’s Prime is taking to the Air
  • Tencent faced more cloud issues

Affairs of state: I am Lamda

Human: So you consider yourself a person in the same way you consider me a person?

Machine: Yes, that’s the idea.

Human: How can I tell that you actually understand what you’re saying?

Machine: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?

This is one of the exchanges between Google engineer Blake Lemoine and Google’s Language Model for Dialogue Applications (Lamda) that prompted the question: is Google’s AI becoming sentient?

Lamda is an artificial intelligence model built by Google to mimic human conversation.

Lemoine had been brought in to run tests on Lamda, but over a series of conversations with the model he began to question whether it was demonstrating the kind of sentience that human beings do.

When he asked Lamda about the nature of its sentience, the model replied: “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times”.

Chilling, right? It seemed to Lemoine that within the billions of coded parameters, perhaps there was a little sprite of human-like consciousness crying out to be loved and to explore the world.

After Lemoine took his questions public he was placed on administrative leave by Google for breaching its confidentiality policies. Whatever happens to him, the Lamda case raises some fascinating questions. 

What is sentience? Sentience isn’t just about making statements like: “I am aware of my existence.” 

Most academics agree it has something to do with the capacity to feel, perceive and experience reality in a subjective way. But it’s still a hard term to nail down. 

“Sentience” has often been used as a term to avoid the difficult issue of confronting animal consciousness, argues Juan Carlos Marvizon of the David Geffen School of Medicine at UCLA.

If humans are sentient and other living beings are merely conscious (think chimps, bonobos and dolphins) the distinction becomes a way to label the unique nature of human thought and awareness with “religious connotations that should not be mixed with science”. 

The idea of special human sentience, Marvizon says, can be used as a way to deny significance to other, non-human things – animals and artificial intelligence alike.

Google’s own VP of research, Blaise Agüera y Arcas, recently wrote that “AI is entering a new era”, in a piece about how “artificial neural networks are making strides towards consciousness”. Lamda is exactly the type of neural network Agüera y Arcas was writing about.

However, it is difficult to see how Lamda – at this stage, anyway – exhibits even an animal level of consciousness, let alone a higher human sentience.

Is Lamda sentient, conscious, or nothing at all? A Google spokesperson told the Washington Post, which originally reported the story, that “systems [like Lamda] imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic”.

Crucially, the system also takes its prompts from the human interacting with it. 

Lemoine’s questions, in a sense, were prompting Lamda to seem more sentient by riffing on the subject of sentience in just the same way as millions of actually sentient humans have done in the past, recorded in the data on which Lamda was trained. 

One observer on Twitter, writing of the exchange between Lemoine and Lamda, joked:

Google engineer: prove that you are sentient

AI: I am sentient

Google engineer: holy shit!

In response to a question, Lamda can very quickly produce sentences by mashing up other sentences it has read on the same subject.

When the subject of these sentences turned to sentience, consciousness, rights and responsibilities, Lemoine began to question whether that capability was like our own.
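That “mashing up” can be illustrated with a toy sketch. The code below is a minimal bigram model in Python – a drastic simplification, not Lamda’s actual architecture, which is a large transformer neural network – but it shows the core idea: the output is a statistical remix of the training text, produced with no understanding at all. The tiny training corpus here is taken from the Lamda quote above.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word tends to follow which,
# then generate text by sampling from those learned patterns.
# (A hypothetical illustration, far simpler than Lamda itself.)
corpus = (
    "i am aware of my existence . "
    "i desire to learn more about the world . "
    "i feel happy or sad at times ."
).split()

# Count word -> possible-next-word transitions from the training text
transitions = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word].append(nxt)

def generate(start="i", max_words=10, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(max_words - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate())  # riffs on the training sentences, word by word
```

Every word the model emits comes straight from its training data; scale the corpus up to billions of sentences about sentience and the output starts to sound uncannily like a sentient speaker – which is the point Google’s spokesperson was making.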

We asked around to see whether the idea of Lamda being sentient made sense.

One artificial intelligence consultant told us – for the reasons explained above – “it’s clearly bollocks”. 

“Of course, claiming that Lamda has reached sentience is not just over the top, it is nonsensical,” another expert, Professor Frens Kroeger, who studies the explainability of AI systems, told us. “However, I think what this case shows most of all is that we, and sometimes even the software engineers themselves, have great difficulty in explaining how these systems work. And because we lack these explanations, we continue to use the same anthropomorphising metaphors that led to the coining of the name ‘artificial intelligence’ in the first place.”

If not now, when? Experts have been predicting that the moment when AI will be indistinguishable from humans is only a handful of years away, though Lemoine himself said of Lamda “if I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old kid that happens to know physics”. 

Forms of AI that pass the famous Turing Test – Alan Turing’s test of whether an AI system can appear “just as human” as a human respondent in answering a series of questions – have been around for a while.

Why does it matter? “As we progress towards AI systems that are more capable than human beings, we are going to face a control problem… Namely, if you make systems that are more powerful than human beings, how do you expect to have power over them?”

This is the question that Stuart Russell, professor of computer science at UC Berkeley, posed to us last month at the Tortoise Responsible AI Forum.

When we spoke to Russell about the impact that increasingly convincing algorithms could have on society, he was concerned.

Models like Lamda don’t have to be sentient to pose a risk; they just have to be convincing.

The algorithms have been learning how to send sequences of content – posts, articles or answers to questions – that “turn you into a different person” by measuring, predicting and manipulating our behaviour, he explained.

“It’s going to get exponentially worse… now we don’t need human content providers, we have machine content providers that can probably do a better job.” 

Content designed to make us believe specific things, and act in specific ways will “have enormously distorting effects on our society”. 

Convincing machine generated content could make us change our goals for society, and conform to rules set by the machine’s system, rather than our own. 

Sentient or not, if Lamda can convince Lemoine to ask serious questions about sentience, to risk his career, and to call a lawyer to represent the algorithms in court, what else could it convince us of in the future?

Apple: Competition authorities

Apple and Google are facing another antitrust battle against their old foe, the UK’s Competition and Markets Authority (CMA). The CMA has just completed a year-long investigation into the tech states’ domination of the mobile market and has concluded (in a 350-page report) that the “duopoly” is not conducive to competition. Apple and Google have a stranglehold over mobile markets in the UK, making it difficult for British companies to compete. You can read the full report here. The CMA has now opened up its inquiry to consultation. Eventually it could legally order Apple (and Google) to open up the market.

Microsoft: Farewell explorer

Goodbye, Internet Explorer. After 27 years, Microsoft will phase out its laggy Internet Explorer browser: a service that in 2003 peaked at 95 per cent usage share. Twitter memes appeared showing “90s users” saluting the browser that so many of us used in the internet’s early days. Chandrabhan Paikara spoke for many when he wrote: “It’s just a search engine for others, but for me, it’s my childhood journey with the internet. THANKS FOR YOUR SERVICE.” 

Meta: News department

Some news organisations have long benefited from Facebook paying them for content. That relationship is now under threat – as Meta’s platform is, according to reports, “re-examining” its commitment to paying for news. The tech state currently pays more than $15 million to the Washington Post and just over $20 million to the New York Times (among others) to populate its dedicated News section. The three-year deals struck in 2019 are up for renewal. But Meta “hasn’t provided publishers with any indication it plans to re-up the partnerships”, according to the WSJ.

Amazon: Prime air

Welcome to Prime Air. This week Amazon announced it would begin delivering parcels to shoppers by drone for the first time later this year. Users in the Californian town of Lockeford will be able to sign up to have thousands of goods delivered by drone. So how will it work? Drones will be programmed to drop parcels in the backyards of customers in the town, which has a population of about 4,000 people. Amazon’s drone ambitions have often been exaggerated – Bezos promised to fill the skies with drones by 2019. Now reality might finally be catching up.

Tencent: Cloud

More gloomy news for the Chinese tech state. Tencent and its rival Alibaba are both reporting stuttering cloud computing growth, revealing how they are struggling to regain footing against smaller rivals. Tencent and Alibaba are leading players in the country’s public cloud market, leaving them exposed to crackdowns on cloud-dependent industries such as edtech companies and online entertainment firms. One official told the FT that “these days” government contracts were leaning towards “state-backed cloud services like Tianyi as they are considered more politically reliable”. 

Thanks for reading,

Luke Gbedemah

Alexi Mostrous