India should not rush a "comprehensive" law that might quickly become outdated. India's position on regulating AI has swung between extremes, from no regulation to regulation based on a "risk-based, no-harm" approach. In April this year, the Indian government said it would not regulate AI, in order to create an enabling, pro-innovation environment that could catapult India to global leadership in AI-related technology.
However, just two months later the Ministry of Electronics and Information Technology indicated that India would regulate AI through the Digital India Act. Taking a U-turn from the earlier no-regulation position, minister Rajeev Chandrasekhar said: "Our approach towards AI regulation, or indeed any regulation, is that we will regulate it through the prism of user harm." In a labour-intensive economy like India, the issue of job losses from AI replacing people is particularly stark. However, the minister claimed: "While AI is disruptive, there is minimal threat to jobs as of now.
The current state of development of AI is task-oriented; it cannot reason or use logic. Most jobs need reasoning and logic, which currently no AI is capable of performing. AI might be able to achieve this in the next few years, but not right now." Such an assessment seems only partially correct, because there are many routine, relatively low-skill "tasks" that AI can already perform.
Given the preponderance of low-skill jobs in India, their replacement by AI could have a significant adverse impact on employment. Drafts of the upcoming Digital Personal Data Protection Bill, 2023, leaked in the media suggest that the personal data of Indian citizens may be shielded from being used to train AI. It seems this position was inspired by …

Read more on economictimes.indiatimes.com