In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June 2022. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”
Lemoine’s less immediate reaction generated headlines across the globe. After he sobered up, Lemoine brought transcripts of his chats with LaMDA to his manager, who found the evidence of sentience “flimsy”. Lemoine then spent a few months gathering more evidence – speaking with LaMDA and recruiting another colleague to help – but his superiors were unconvinced. So he leaked his chats and was consequently placed on paid leave. In late July, he was fired for violating Google’s data-security policies.
Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices, which it calls an “ethical charter”. These are visible on its website, where Google promises to “develop