Mint explains why. Firms building AI models, large language models and generative AI applications should ensure their platforms don’t generate “unlawful content” that violates the IT Act, 2000. The advisory asked firms not to “permit any bias or discrimination or threaten the integrity of the electoral process.” It said “under testing/unreliable” models would need the “explicit permission” of the Centre, and that a ‘consent popup’ would be mandatory for such AI applications.
The advisory was triggered by opinionated results from Google’s Gemini AI about Prime Minister Narendra Modi. The Centre therefore asked firms to take responsibility for such results, or seek its permission. Big Tech firms building AI applications will need to label their models as “under testing”, a term experts say is subjective and vaguely defined.
Because AI models are continually trained on ever-expanding datasets, they may remain “under testing” for a long time, leaving the label open to interpretation. Experts say explicit government oversight may also restrict firms’ ability to freely offer cutting-edge AI technology, and could limit how readily Indian users can access the newest applications built on OpenAI’s GPT, Meta’s Llama and Google’s Gemini. What this means for enterprise access to the latest AI tech is unclear.
For now, yes: startups are exempt. On 4 March, minister of state for IT Rajeev Chandrasekhar tweeted that the advisory “is only for large platforms, and will not apply to startups”. He said it will act as an “insurance policy to platforms who can otherwise be sued by consumers”.