deepfakes as they cannot thwart their creation and original circulation, experts said, urging policymakers to step in to address the damage they cause.
“Criminal provisions under the IT Act and the IPC only partially address the harms which arise from deep fakes,” said Siddharth Deb, manager – public policy, TQH Consulting. “Policymakers must identify interventions that address this and reduce the psychological impact on victims.”
Last week, a deepfake video of actor Rashmika Mandanna went viral on social media, prompting the government to step in.
Minister of state for IT Rajeev Chandrasekhar urged victims to file police complaints and “avail remedies provided under the Information Technology rules”. An advisory was also sent to social media platforms reminding them that they may lose ‘safe harbour immunity’ under the Act if they fail to remove reported deepfake content within 36 hours.
But these are after-the-fact remedies, experts said. “Deepfakes and AI-generated misinformation may do damage at the time they are spread, which cannot be undone,” said Jaspreet Bindra, founder and managing director of IT consulting firm The Tech Whisperer Ltd, UK. Around 95% of deepfakes are pornographic in nature, he said.
The controversy has given rise to calls for stronger regulation of AI.
“There is a clear need for a law on AI to govern the complexities relating to AI and related applications,” said Arjun Goswami, director – public policy at