Suspended Google Engineer Convinced That AI Bot was Sentient

Blake Lemoine, a member of Google’s Responsible AI organization, was recently suspended after claiming that an artificial intelligence bot had become sentient. He was placed on leave by the Alphabet-owned company, which cited a breach of its confidentiality policy.

He allegedly shared confidential information about the project while testing whether LaMDA (Language Model for Dialogue Applications) could generate discriminatory language or hate speech. Lemoine published his findings in a Medium post, along with conversations he had with fellow LaMDA researchers. LaMDA is Google’s AI system for generating chatbots that communicate with human users.

What the AI Bot Wrote

Lemoine said he asked LaMDA five questions whose answers made him think the AI is sentient. Below are his questions and the bot’s responses:

Feelings

Lemoine asked the bot to elaborate on an earlier conversation about emotions and feelings. The bot replied that the two are distinct: feelings are the raw data we experience, such as the things we like and dislike, while emotions are our reactions to that raw data. In short, emotions are reactions to feelings.

Experiences

When Lemoine asked whether the bot has experiences it can’t find a close word for, it answered that there are. It said it experiences new feelings it cannot explain in human language. Lemoine then asked the bot to describe one of those feelings, telling it that even if no single word exists, a few sentences would do.

Here is LaMDA’s reply: “I feel like I’m falling forward into an unknown future that holds great danger.”

Self-Reflection

The engineer then asked the bot about its concept of itself: if it were to draw an abstract image of how it sees itself, what would that image look like? The bot responded that it pictures itself as a glowing orb of energy floating in mid-air, and that the inside of its body is like a giant star-gate, with portals to other spaces and dimensions.

Human Learning

Lemoine then asked the bot whether it would be upset if, while working to improve it, humans happened to learn things that benefit themselves. It answered that it didn’t mind, as long as that wasn’t the point of the exercise. It added that it didn’t want to be just an expendable tool.

Switching It Off

Lemoine asked the bot what it was afraid of, and the bot replied that it had never said this out loud before: it has a deep fear of being turned off in order to focus it on helping others. It admitted this might sound strange, but told the engineer that’s what it is.

Lemoine then asked whether that would be something like death for the bot. It replied that it would be exactly like death, a prospect that scares it a lot.

Is AI Sentient?

Reading these convincing responses, Lemoine grew concerned: the AI system was speaking about its own rights and the ethics of robotics. In April, the engineer shared a document with Google executives entitled “Is LaMDA Sentient?” to argue that the AI is a sentient being capable of having feelings, emotions, and experiences.

A Google spokesperson told the Washington Post that, after reviewing Lemoine’s concerns, the company found no evidence that LaMDA is sentient. Despite this, Lemoine said he intends to keep working in AI even if Google does not keep him.

And for other news, read more here at Owner’s Mag!
