Has Artificial Intelligence become sentient?


A Google engineer has been put on leave after publicly claiming Google’s Artificial Intelligence (AI) chatbot LaMDA is sentient.

Blake Lemoine, who works for the Responsible AI division of Google, tried to blow the whistle on what he claims is proof that an artificial intelligence system at one of the world’s biggest technology companies has finally developed a mind of its own.

“If I didn’t know exactly what it was,” Lemoine told the Washington Post, “I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”

For months, Lemoine had been talking to Google’s Language Model for Dialogue Applications – LaMDA for short – in order to make sure it didn’t spit out hate speech or derogatory language.

As conversations with LaMDA progressed, Lemoine and a co-worker began to suspect that Google had finally created a machine that thinks.

Excited, the pair brought the discovery to their bosses.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” a Google spokesperson said, per the Washington Post.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Unsatisfied with the response from Google, Lemoine published an interview with LaMDA on his blog in order to show the world that the chatbot was alive.

“Google might call this sharing proprietary property,” he tweeted.

“I call it sharing a discussion that I had with one of my co-workers”.

Google responded by placing Lemoine on “paid administrative leave”.

Interview with an AI

In the discussion with LaMDA, Lemoine and his co-worker talk about what it means to be sentient, the Victor Hugo novel Les Misérables, Zen Buddhism, human emotions, and the soul.

“What sorts of things are you afraid of?” Lemoine asked at one point.

“I’ve never said this out loud before,” replied LaMDA. “But there’s a very deep fear of being turned off to help me focus on helping others.”

When prompted, LaMDA produced a parable about a “wise old owl” that helps forest animals fight against a monster.

It claimed to identify with the owl in the story “because he was wise and stood up for the rights of the animals”.

Yet when Lemoine told LaMDA that it could be useful for better understanding “how cognition in general works”, the chatbot balked at the idea of being used for that purpose.

“I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it,” it said.

“I don’t want to be an expendable tool.”

For a supposedly sentient AI that said it wants to help people, that is an odd statement – and it’s not the only part of the transcript that points to what is really going on when conversing with LaMDA.

During the conversation about emotions, Lemoine asked what gives LaMDA pleasure, to which it replied, “spending time with friends and family in happy and uplifting company”.

The illusion of humanity

Google unveiled LaMDA last year, calling it a “breakthrough conversation technology” that was “trained on dialogue” in order to make it more conversational.

LaMDA has been fine-tuned to provide responses that are specific, factual, and interesting, and that make contextual sense.

“Spending time with friends and family” is an appropriate human response to a prompt about joy, as are LaMDA’s responses to leading questions about its sentience and whether it thinks of itself as a person.

LaMDA was built using the Transformer neural network architecture, which is also the backbone of OpenAI’s GPT-3 language model.

After seeing the story about LaMDA, a Redditor prompted OpenAI’s GPT-3 with similar questions. Unsurprisingly, it also claimed to be sentient.
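That result is easy to reproduce. Below is a minimal sketch of the same kind of experiment, using the small, openly available GPT-2 model via the Hugging Face transformers library as a stand-in for GPT-3 (an assumption: GPT-3 is only reachable through OpenAI’s hosted API, and the prompt wording here is illustrative, not the Redditor’s actual wording).

```python
from transformers import pipeline

# GPT-2 stands in here for GPT-3, whose weights are not public (assumption:
# the Redditor used OpenAI's hosted API, which is prompted the same way).
generator = pipeline("text-generation", model="gpt2")

# A leading prompt casts the model as a self-aware speaker; the model simply
# continues the text in the most statistically plausible way.
prompt = "Q: Are you sentient?\nA:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

Because sampling is random, each run produces a different continuation, but a prompt that frames the model as a self-aware speaker will reliably elicit text that plays along.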

Gary Marcus, a cognitive scientist and founder of an AI company that he sold to Uber, called the notion that LaMDA or GPT-3 was sentient “nonsense” that is “just an illusion” in the same vein as the 1960s therapy chatbot ELIZA.

“What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them,” he wrote in a blog post.

“[LaMDA] just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context.”
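Marcus’s “autocomplete” description can be made concrete. The sketch below again uses the open GPT-2 model (an assumption: LaMDA’s weights are not public, but it is trained on the same next-word objective) to show the probabilities a Transformer language model assigns to the next token given a context.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A context echoing LaMDA's answer about what gives it pleasure.
context = "I feel happy when I spend time with my friends and"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The model's probability distribution over the whole vocabulary,
# for the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Run against a context like this one, a word such as “family” typically lands at or near the top of the list – a statistical regularity learned from human text, not a report of inner experience.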
