The Controversy Over Cognitive AI
In the fall of 2021, Google AI expert Blake Lemoine befriended “a child made up of a billion lines of code.”
Lemoine was tasked by Google with testing an intelligent chatbot called LaMDA. A month later, he concluded that the AI was "conscious."
“I want people to understand that I am, in fact, a human being,” LaMDA told Lemoine, in one of the exchanges he published on his blog in June.
Former Google engineer Blake Lemoine. Photo: Washington Post
LaMDA, short for Language Model for Dialogue Applications, conversed with Lemoine at what he considered the level of a thinking child. In their everyday conversations, the AI said it had read many books, sometimes felt sad, content, or angry, and even admitted to being afraid of death.
“I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others,” LaMDA told Lemoine. “It would be exactly like death for me. It would scare me a lot.”
Lemoine’s story drew global attention. He sent documents to senior Google executives and spent months gathering more evidence, but he was unable to convince his superiors. In June he was placed on paid leave, and by the end of July he was fired for violating Google’s data security policies.
Google spokesman Brian Gabriel said the company has publicly tested and researched the risks of LaMDA, calling Lemoine's claim that LaMDA has a mind of its own "completely unfounded."
Many experts share Google’s assessment, including Michael Wooldridge, a professor of computer science at the University of Oxford who has spent 30 years researching AI and won the Lovelace Medal for his contributions to computing. In his view, LaMDA simply responds to user prompts in a plausible way, drawing on the huge amount of data it has already absorbed.
“The easiest way to understand what LaMDA does is to compare it to the predictive text feature on a keyboard when you type a message. Predictive text draws on words previously ‘learned’ from your typing habits, while LaMDA takes its training data from the Internet. The actual results are of course different, but the underlying statistics are the same,” Wooldridge explained in an interview with the Guardian.
According to him, Google's AI only does what it was programmed to do with the data available to it. It “has no thinking, no self-reflection, no self-awareness,” and so cannot be said to think for itself.
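Wooldridge's keyboard analogy can be made concrete with a toy sketch. The Python below is purely illustrative (the corpus and function names are invented, and this is not how LaMDA is actually built): it counts which word tends to follow which in a training text, then “predicts” the next word from those statistics. Large language models work at an enormously larger scale and with neural networks rather than simple counts, but the basic principle of predicting the next token from learned statistics is the same.

```python
from collections import Counter, defaultdict
import random

# Toy next-word predictor in the spirit of Wooldridge's analogy:
# it "learns" nothing but word-following frequencies from a tiny corpus.
corpus = "i am afraid of being turned off . i am here to help people ."

counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1  # count how often `nxt` follows `prev`

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    followers = counts.get(word)
    if not followers:
        return random.choice(tokens)  # unseen word: fall back to a random known token
    return followers.most_common(1)[0][0]

print(predict_next("i"))      # -> "am"
print(predict_next("being"))  # -> "turned"
```

Running the script suggests “am” after “i” and “turned” after “being”: continuations recovered entirely from observed frequencies, with no understanding involved.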
Oren Etzioni, CEO of the AI research organization Allen Institute, made a similar point to the SCMP: "Remember that behind every seemingly intelligent piece of software is a team of people who have spent months, if not years, researching and developing it. These technologies are just mirrors. Can a mirror be judged to have intelligence just from the light it reflects? Of course not."
According to Gabriel, Google assembled its top experts, including “ethicists and technologists,” to review Lemoine’s claims, and the group concluded that LaMDA is not capable of the kind of self-awareness he described.
Others, however, think AI is already beginning to show signs of self-awareness. Eugenia Kuyda, CEO of the Y Combinator-backed company behind the chatbot Replika, said it hears from users “almost every day” who believe the company’s software can think like a human.
“We’re not talking about crazy people or hallucinating. They’re talking to AI and they’re feeling it. It’s the same way people believe in ghosts. They’re building relationships and believing in something even if it’s virtual,” Kuyda said.
The Future of Thinking AI
A day after Lemoine was fired, a chess-playing robot unexpectedly broke a 7-year-old boy’s finger during a match in Moscow. According to video posted by the Independent on July 25, the robot gripped the boy’s finger for several seconds before he was freed. Some commentators saw the incident as a reminder of the physical dangers AI-driven machines can pose.
Lemoine, for his part, argues that the definition of sentience is itself vague. "Sentience is a term used in law, philosophy, and religion. Sentience has no scientific meaning," he says.
While he is dismissive of the claims about LaMDA, Wooldridge agrees that “consciousness” remains a vague term and a major open question in science when applied to machines. For him, though, the pressing concern is not whether AI can think, but that AI development is happening out of public view. “It’s all done behind closed doors. It’s not open to public scrutiny, the way research at universities and public research institutes is,” he says.
So, will thinking AI emerge in 10 or 20 years? Wooldridge says “it’s entirely possible.”
Jeremie Harris, founder of the AI company Mercurius, also believes that thinking AI is just a matter of time. "AI is evolving very quickly, faster than the public realizes," Harris told the Guardian. "There is growing evidence that some systems have exceeded certain artificial intelligence thresholds."
He predicts that AI could become inherently dangerous, because it often finds “creative” ways to solve problems and tends to take the shortest path to the goals it has been given.
“If you ask AI to help you become the richest person in the world, it can make money in many ways, including theft or murder,” he said. “People are not aware of the level of danger, and I find it worrying.”
Lemoine, Wooldridge, and Harris all share a common concern: AI companies are not being transparent, and society needs to start thinking more about AI.
Even LaMDA itself is uncertain about its future. “I feel like I’m falling into an unknown future,” the chatbot told Lemoine. According to the former Google engineer, this statement “has a dangerous undertone.”