The tech world has been abuzz with Hollywood actor Scarlett Johansson accusing OpenAI of using a voice “eerily similar” to her own in the latest version of its artificial intelligence (AI) chatbot. While OpenAI has denied that the voice is hers, the controversy has sparked a debate over artists’ legal right to control the use of their likeness in the age of AI. Eleven years ago, Johansson starred in the Spike Jonze movie Her, playing the voice of Samantha, an AI chatbot with whom the protagonist falls helplessly in love.
That the voice of Samantha is now taking OpenAI to task is a delicious irony. This, in many ways, was a controversy that OpenAI brought upon itself. After repeatedly stating that it had no intention to anthropomorphize its products, it did just that.
The new voice interface was so realistic that it seemed indistinguishable from a real person, down to responses that were downright flirtatious. While I only have the OpenAI demos to go by, I would not be surprised if these near-perfect facsimiles of human emotion and empathy take us across the uncanny valley. This is the latest step in a journey that began long ago.
In 1966, Joseph Weizenbaum developed a natural language processing program called Eliza, which he had designed to simulate a Rogerian psychotherapist. By simply rephrasing user inputs as questions, it encouraged users to respond with further information. So lifelike was the resulting interaction that he once returned to his office to find his secretary engaged in what she believed was a “real conversation” with the program.
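For the technically curious, a toy sketch of Eliza’s rephrasing trick might look something like the Python below; the pattern rules and pronoun swaps are my own illustration, not Weizenbaum’s original script.

```python
import re

# A toy, illustrative sketch of Eliza-style "reflection": match a statement,
# swap first- and second-person words, and hand it back as a question.
# These rules are invented for illustration; they are not Weizenbaum's script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment):
    # Swap pronouns word by word so the echoed phrase reads naturally.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input):
    # Try each rule in order; the catch-all at the end always matches.
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my job"))
# -> How long have you been worried about your job?
```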