There is something propitious about the timing of Christopher Nolan's latest movie release. An Oppenheimer moment, highlighting the moral dilemma that arises when putative good deeds end up harming humanity, has become especially relevant for a world grappling with the duality of artificial intelligence (AI) and Big Data.
The thin line separating the beneficial from the harmful in both cases has made an appropriate regulatory regime all the more urgent. India, too, needs to frame AI regulation, but much hinges on the operative word 'appropriate'.
There have been all kinds of noises, some supportive and many disapproving, since electronics and information technology minister Rajeev Chandrasekhar recently asserted that India will definitely regulate AI: "Our approach towards AI regulation, or indeed any regulation, is that we will regulate it through the prism of user harm."

Around the same time, the Telecom Regulatory Authority of India (TRAI) released a report recommending the setting up of an 'independent regulator' for AI: "The Authority recommends that for ensuring development of responsible Artificial Intelligence (AI) in India, there is an urgent need to adopt a regulatory framework by the Government that should be applicable across sectors. The regulatory framework should ensure that specific AI use cases are regulated on a risk-based framework where high risk use cases that directly impact humans are regulated through legally binding obligations."

The need to regulate AI has become all the more compelling after evidence emerged that the technology industry lacks the capacity to self-regulate, especially when some of its members push the envelope too far or indulge in customer-gouging.