World leaders are gathering for an "AI Safety Summit" to discuss the extreme risks that AI models may pose. Governments cannot ignore a technology that could change the world profoundly, and any credible threat to humanity should be taken seriously. Regulators have been too slow in the past.
Many wish they had acted faster to police social media in the 2010s, and are keen to be on the front foot this time. But there is danger, too, in acting hastily. If they move too fast, policymakers could create global rules and institutions that are aimed at the wrong problems, prove ineffective against the real ones and stifle innovation.
The idea that AI could drive humanity to extinction is still entirely speculative. No one yet knows how such a threat might materialise. No common methods exist to establish what counts as risky, much less to evaluate models against a benchmark for danger.
Plenty of research needs to be done before standards and rules can be set. This is why a growing number of tech executives say the world needs a body to study AI much like the Intergovernmental Panel on Climate Change (IPCC), which tracks and explains global warming. A rush to regulate away tail risks could distract policymakers from less apocalyptic but more pressing problems.
New laws may be needed to govern the use of copyrighted materials when training LLMs, or to define privacy rights as models guzzle personal data. And AI will make it much easier to produce disinformation, a thorny problem for every society. Hasty regulation could also stifle competition and innovation.