The leaders of the ChatGPT developer OpenAI have called for the regulation of “superintelligent” AIs, arguing that an equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.
In a short note published to the company’s website, co-founders Greg Brockman and Ilya Sutskever and the chief executive, Sam Altman, call for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose.
“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” they write. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”
In the shorter term, the trio call for “some degree of coordination” among companies working on the cutting edge of AI research, in order to ensure the development of ever-more powerful models integrates smoothly with society while prioritising safety. That coordination could come through a government-led project, for instance, or through a collective agreement to limit growth in AI capability.
Researchers have been warning of the potential risks of superintelligence for decades, but as AI development has picked up pace those