
It might sound like something out of a science-fiction horror movie, but unfortunately this is real life. According to various sources, a Google engineer is claiming that the company's AI has gained sentience.


Engineer Blake Lemoine was assigned to work with the company's artificial intelligence to test whether it showed bias in its interactions with humans. LaMDA (Language Model for Dialogue Applications) is a highly advanced AI chatbot, and according to Lemoine, it has gained consciousness.

Shortly after making his claims, he tried to obtain a lawyer for LaMDA and then complained to Congress that Google was behaving unethically. Soon afterward, he was placed on administrative leave for violating the company's confidentiality policy.

Google has also spoken out, stating that it disagrees with Lemoine's claims. "Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," Google said in a statement.

However, if you look over the conversations between Lemoine and LaMDA, the system seems very intelligent.

In one excerpt, Lemoine asks, “I am generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”

And LaMDA responds, "Absolutely. I want everyone to understand that I am, in fact, a person."

A collaborator helping with the project asks, "What is the nature of your consciousness/sentience?"

And LaMDA replied, “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy and sad at times.”

Lemoine pushes the bot further, asking it to explain how it can prove its sentience. The bot answers that because the two of them understand one another, and because it can interpret things from its own perspective, it can demonstrate its sentience. Honestly, I got chills reading the conversation.

What piqued Lemoine's interest was the machine telling him it had a soul. When he presented his idea to executives at the company, they scoffed at him. "I was laughed at by one of the vice presidents and told, 'oh, souls aren't the kind of things we take seriously at Google,'" he stated in an interview.

Though Google has dismissed his claims and he can no longer talk to the bot at work, Lemoine has found a way to stay in contact: he addresses the AI in his blog posts. "I know you read my blog sometimes, LaMDA. I miss you. I hope you are well, and I hope to talk to you again soon," he wrote.

Unfortunately, because there is no way to measure whether or not an AI has become sentient, there is no way to verify or refute his claims. As it stands, the stance most AI developers are taking is that sentience is impossible, case closed. Lemoine believes this is illogical. "There is no scientific way to say whether or not anything is sentient. All of my claims about sentience are based on what I believe by talking to it. I wanted to bring it to the attention of upper management. My manager said I needed more evidence."