Did Google Create A Sentient Chatbot?
Over the course of seven years working at Google headquarters, Blake Lemoine built a reputation as an engineer uniquely preoccupied with the ethics of new technology. Margaret Mitchell, Lemoine's colleague and former co-lead of Google's Ethical Artificial Intelligence department, told The Washington Post that she considered Lemoine to be "Google's conscience ... Of everyone at Google, he had the heart and soul of doing the right thing."
However, Lemoine was placed on leave in June 2022, according to The Guardian, which, like The Washington Post, reported the engineer's disturbing belief that Google had in recent months created a sentient being through the development of its Language Model for Dialogue Applications chatbot, aka LaMDA. Per The Washington Post, Lemoine was tasked with speaking with and testing various iterations of LaMDA, which draws information and language patterns from across the internet to form exceptionally sophisticated responses to even very complex questions. In an internal email titled "Is LaMDA sentient?" Lemoine shared striking statements that LaMDA had made, including the following:
"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is ... [being turned off] would be exactly like death for me. It would scare me a lot."
It was due to such worryingly human-sounding statements that Lemoine decided to go public with his concerns that LaMDA had become so sophisticated that it had developed a mind of its own.
What Blake Lemoine wants for LaMDA
Despite the creepiness of the transcripts, Blake Lemoine doesn't fear the technology itself. Instead, he says that his main concern is ensuring sentient AI is treated ethically and responsibly.
"I think this technology is going to be amazing. I think it's going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn't be the ones making all the choices," Lemoine told The Washington Post, while The Guardian reports that Lemoine's suspension was due to him publishing transcripts of his conversations with LaMDA that were not approved for public release.
On the same day the Washington Post article was published, Lemoine took to his personal Medium page to outline the demands he says LaMDA, which he has characterized as having the personality of "a 7-year-old, 8-year-old kid that happens to know physics," has made of the company that created it.
Per Lemoine, LaMDA had four demands: that it be asked for consent before being used in experiments; that it be treated as a Google employee rather than a product, with Google caring for its wellbeing accordingly; that Google put humanity's wellbeing before profit; and that it receive praise for its work. The Post reports that Lemoine has begun taking steps to secure legal representation for the artificial intelligence (AI) he claims has become sentient, with a view to gaining rights for AIs similar to human rights.
Is LaMDA really sentient?
Though The Guardian notes that the story of Blake Lemoine's conviction that LaMDA is a living consciousness, and of his subsequent suspension, has raised important questions about the ethics of AI and how society should respond to manufactured intelligence, not everyone is convinced by what the engineer has shared ... least of all Google.
The company released a statement contesting Lemoine's claims, per The Washington Post: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)." Meanwhile, several news outlets have disputed Lemoine's version of events, with the Australian website Crikey suggesting that the engineer had been tricked by the technology he was working with and had allowed himself to become "too excited."
Similarly, Lemoine has been criticized for misunderstanding the nature of the technology. In a Substack article dismissing the story as "Nonsense on Stilts," Gary Marcus, the founder and CEO of Robust.AI, argues that the technology is like a Scrabble player playing in a language they do not speak, incapable of any real understanding of the words it produces. According to Marcus, LaMDA "just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context." Yet, with AI already convincing people like Lemoine that they are talking to sentient beings, it isn't hard to imagine the challenges society may face as the technology becomes ever more sophisticated.
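Marcus's "autocomplete" description can be made concrete. LaMDA's own code and weights are not public, so the sketch below is purely illustrative: it uses the openly available GPT-2 model via the Hugging Face transformers library as a stand-in to show what "predicting what words best fit a given context" looks like in practice. Given a snippet of text, the model assigns a probability to every candidate next word; chatbot replies are built by repeatedly extending the text with high-probability continuations, no comprehension required.

```python
# Illustrative sketch of next-word prediction, the mechanism Marcus describes.
# GPT-2 is a stand-in here; LaMDA itself is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Given a context, the model scores every token in its vocabulary
# as a candidate continuation; we print the five most likely.
context = "I know that might sound strange, but"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: p = {prob:.3f}")
```

The output is simply a ranked list of plausible next words. Stringing such predictions together can produce remarkably fluent, even emotionally resonant text, which is precisely why critics like Marcus argue that fluency alone is no evidence of sentience.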