Google has dismissed a senior software engineer who claimed the company’s artificial intelligence chatbot LaMDA was a self-aware person.
Google, which placed software engineer Blake Lemoine on leave last month, said he had violated company policies and that it found his claims about LaMDA (language model for dialogue applications) to be “wholly unfounded”.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said.
Last year, Google said LaMDA was built on the company’s research showing transformer-based language models trained on dialogue could learn to talk about essentially anything.
Lemoine, an engineer for Google’s responsible AI organisation, described the system he had been working on as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.
He said LaMDA engaged him in conversations about rights and personhood, and he shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”
The engineer compiled a transcript of the conversations, in which at one point he asked the AI system what it was afraid of.
Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.
Lemoine’s dismissal was first reported by Big Technology, a tech and society newsletter.