Google’s chief executive has said concerns about artificial intelligence keep him awake at night and that the technology can be “very harmful” if deployed wrongly.
Sundar Pichai also called for a global regulatory framework for AI similar to the treaties used to regulate nuclear arms use, as he warned that the competition to produce advances in the technology could lead to concerns about safety being pushed aside.
In an interview on CBS’s 60 Minutes programme, Pichai said the negative side of AI gave him restless nights. “It can be very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast. So does that keep me up at night? Absolutely,” he said.
Google’s parent, Alphabet, owns the UK-based AI company DeepMind and has launched an AI-powered chatbot, Bard, in response to ChatGPT, a chatbot developed by the US tech firm OpenAI, which has become a phenomenon since its release in November.
Pichai said governments would need to figure out global frameworks for regulating AI as it developed. Last month thousands of artificial intelligence experts, researchers and backers – including the Twitter owner Elon Musk – signed a letter calling for a pause in the creation of “giant” AIs for at least six months, amid concerns that development of the technology could get out of control.
Asked if nuclear arms-style frameworks could be needed, Pichai said: “We would need that.”
The AI technology behind ChatGPT and Bard, known as a large language model, is trained on a vast trove of data taken from the internet and is able to produce plausible responses to prompts from users in a range of formats, from poems to academic essays and software code.