If OpenAI and Google were really worried that their products are dangerous and pose severe, unpredictable risks to public safety, they could stop developing them. It is reasonable, therefore, to suspect that their calls for regulation of so-called foundation models are partly motivated by a desire to lock in their dominant market positions. Even the narrative of the dangers and risks of generative AI takes us in a direction determined by a few big companies in Silicon Valley and the intellectual and media ecosystem they have promoted.
So the world is preoccupied with the big question of what to do about the imminent arrival of an all-powerful artificial general intelligence that poses a challenge to human civilization. In response, many intelligent people around the world are working on the alignment problem: how to ensure that intelligent machines act in the human interest. Some of the smartest minds in the US have polarized themselves into warring ‘AI safety’ and ‘AI ethics’ tribes, the former concerned with reducing harm and the latter with eliminating biases and discrimination.
These issues are grave, but they distract attention from more immediate questions of public policy. The challenge before India, as for other countries, is how to govern the AI industry. The technology and products are already here, they are quite different from what we have seen before, and the industry structure is not yet properly defined.
Read more on livemint.com