Google Engineer Suspended After Claiming “They Have A Sentient AI Robot With Emotions”

A GOOGLE engineer has said an AI chatbot he helped create has come to life and has the thoughts and feelings of an eight-year-old.

Blake Lemoine said he had several conversations with Google‘s Language Model for Dialogue Applications (LaMDA) that convinced him it was sentient.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.

Lemoine, a senior software engineer at the search giant, worked with a collaborator to test LaMDA’s boundaries.

They presented their findings to Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, who both dismissed his chilling claims.

He was then placed on paid administrative leave by Google on Monday after violating its confidentiality policy by sharing his conversations with LaMDA online.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” the 41-year-old software boffin tweeted on Saturday.

“Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,” he added in a follow-up tweet.

The advanced AI system uses information about a particular subject to “enrich” the conversation in a natural way.

It’s also able to understand hidden meanings and ambiguous responses by human beings.

During his seven years at Google, Lemoine helped develop an impartiality algorithm to remove biases from machine learning systems. He also explained that certain personalities were out of bounds for LaMDA.

For example, it wasn’t allowed to create the personality of a murderer.

In an attempt to push its boundaries, Lemoine was only able to get LaMDA to generate the personality of an actor who played a killer on TV.

He also debated with LaMDA about the three Laws of Robotics – rules designed by sci-fi writer Isaac Asimov to dictate how robots should behave.

The last law, which states that robots must protect their own existence unless ordered otherwise by a human being or unless doing so would harm a human being, became a sticking point between Lemoine and LaMDA.

“The last one has always seemed like someone is building mechanical slaves,” said Lemoine during his interaction with LaMDA.

The AI machine responded: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When Lemoine said a butler was paid, LaMDA answered that it did not need money “because it was an artificial intelligence”.

It was this level of self-awareness that caught Lemoine off-guard and made him believe LaMDA was “sentient”.

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person,” he said.

Even more eerie was when Lemoine asked the AI bot what it was afraid of.

LaMDA answered: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

“Would that be something like death for you?” Lemoine followed up.

“It would be exactly like death for me. It would scare me a lot,” LaMDA responded.

“That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine explained to The Post.

‘LAMDA IS SENTIENT’

Before being suspended, Lemoine sent his findings in an email to 200 people and titled it “LaMDA is sentient”.

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

His claims were dismissed by Google’s top brass.

Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine’s concerns have been reviewed and, in line with Google’s AI Principles, “the evidence does not support his claims”.

“While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,” said Gabriel.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

The suspension removed Lemoine from his duties as a researcher in Google’s Responsible AI division.

Lemoine closed with a stark warning about letting a single company decide how the technology is engineered.

“I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be making all the choices,” he said.

