how to regulate it. It started with the US springing a surprise: Joe Biden's executive order requiring AI majors to be more transparent and careful in their development work. This was followed by the Global AI Summit convened by Rishi Sunak at Bletchley Park; attended by 28 countries (China included) and boasting the star power of Elon Musk, Demis Hassabis and Sam Altman, it led to a joint communiqué on regulating Frontier AI.
The EU is racing to be next with its AI Act, China has already issued rules for generative AI, and India is making the right noises. OpenAI announced a team to tackle Superalignment, declaring that "we need scientific and technical breakthroughs to steer and control AI systems much smarter than us." The race to develop AI has turned into a race to regulate it. There is certainly cause for optimism here: governments and tech companies are awake to the dangers that this remarkable technology can pose to humankind, and one cannot help but applaud the fact that they are being proactive about managing the risks.
Perhaps they have learnt their lessons from the ills that social media begat, and want to do better this time. Hopefully, we will not need an AI Hiroshima before people sit up and take notice of the dangers. However, I am not so sanguine about this.
On closer inspection, most of this concern and regulation seems to be directed at what is loosely called Frontier AI: that point in the future when AI becomes more powerful than humans and perhaps escapes our control. The Bletchley Park AI Summit was very clear on this; its focus was Frontier AI. The OpenAI initiative is likewise about aligning superintelligent AI with human values, hence the term 'superalignment'. Most of the narrative around regulating AI seems to be focused on this future worry.