Sharma was reacting to ChatGPT creator OpenAI's statement that it does not have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue. In a tweet, Sharma said, "In less than 7 years we have a system that may lead to the disempowerment of humanity => even human extinction. I am genuinely concerned with the power some set of people & select countries have accumulated - already."

OpenAI plans to invest significant resources and create a new research team to ensure its artificial intelligence remains safe for humans, eventually using AI to supervise itself, the company said on Wednesday. "The vast power of superintelligence could ... lead to the disempowerment of humanity or even human extinction," OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Superintelligent AI, meaning systems more intelligent than humans, could arrive this decade, the authors predicted. Controlling such systems will require techniques beyond those available today, hence the need for breakthroughs in so-called "alignment research," which focuses on ensuring AI remains beneficial to humans, they wrote.
OpenAI, backed by Microsoft, is dedicating 20% of the compute power it has secured over the next four years to solving this problem, they wrote. In addition, the company is forming a new team, called the Superalignment team, that will organize around this effort.