Regulating artificial intelligence (AI) has joined outer space as the latest battleground for dominance among the world's advanced nations. Eagerness to dominate is no guarantee of clarity, though. The larger picture of what is to be done, and how, remains unclear even after the recent British summit on AI – which saw participation from all major AI powers, including the US and China – the Biden administration's executive order on AI safety, and the European Union's Artificial Intelligence Act.
There is no consensus regulation that India can simply adopt as its own. It must therefore invest in R&D, both in AI itself and in regulating it effectively. The US government's executive order on AI requires companies to ensure that their AI products will not be used to produce weapons of mass destruction, biological or otherwise.
It urges the watermarking of all AI-generated works, to alert viewers that what they see is machine-made rather than the direct output of human creativity. It also requires the results of all AI safety testing to be communicated to the government. This may well prove to be the most useful element of the directive, provided the results are shared with the global research community.
The mandate against the use of a company's AI model to produce a WMD is too broad to be enforceable. Take, for example, Alphabet subsidiary DeepMind's forecast of the likely shapes of 200 million proteins. In theory, this lays the ground for major breakthroughs in developing new drugs, as the shape of a protein and how it folds are major determinants of how it would interact with other proteins, including those of harmful organisms.
Read more on livemint.com