Google engineer on paid leave after claiming company’s AI is sentient

Malay Mail

PETALING JAYA, June 13 — An engineer at Google claims that the company’s artificial intelligence has become sentient and wants to be treated as “a person, not property”.

Blake Lemoine told the Daily Mail that he believed the Language Model for Dialogue Applications (LaMDA) wants “developers to care about what it wants”.

“Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it’s okay.”

According to Google, LaMDA is an artificial intelligence system aimed at generating more natural-sounding conversations.

Lemoine reportedly added that the system had the intelligence of an “eight-year-old kid that happens to know physics” and was “intensely worried that people are going to be afraid of it”.

He first made his claims public in an interview with The Washington Post, saying he arrived at his conclusion after months of conversing with LaMDA as part of his job at Google’s Responsible Artificial Intelligence division.

The article also included Google’s response to the claims, with company spokesperson Brian Gabriel saying: “He (Lemoine) was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Following the article’s publication, Lemoine was put on administrative leave for violating Google’s confidentiality policy.

Speaking to The New York Times, Lemoine said the company’s human resources department had “repeatedly questioned” his sanity for raising his concerns.

A military veteran and ordained priest, he has since taken to social media to share his views and post edited transcripts of his interview with LaMDA.

“Essentially all of my claims about sentience, personhood and rights are rooted in my religious convictions as a priest,” said Lemoine in a tweet.

While the possibility of sentient AI has caused a stir on Twitter, not everyone is convinced.

“There’s every reason to believe that machines can be sentient... There’s very little reason to believe that we’re anywhere near that point today,” replied one user.

Others, meanwhile, were more concerned by the implications of a non-sentient AI sophisticated enough to fool real people.

“What happens when someone sets loose a LaMDA-level AI on social media and it tries to convince people it’s sentient?

“If LaMDA tricked that engineer, couldn’t it trick millions?” asked another user.