NEW DELHI: India’s ministry of electronics and information technology on Tuesday issued an advisory to all technology companies operating in India on regulating artificial intelligence (AI)-generated content, commonly known as deepfakes. Union IT minister Ashwini Vaishnaw and minister of state for IT Rajeev Chandrasekhar issued the advisory following meetings with tech companies on 22-23 November.
The move is in response to a series of deepfake incidents targeting prominent actors and politicians on social media platforms. “Content not permitted under the IT Rules, in particular those listed under Rule 3(1)(b), must be clearly communicated to the users in clear and precise language, including through its terms of service and user agreements; the same must be expressly informed to the user at the time of first registration, and also as regular reminders, in particular, at every instance of login, and while uploading or sharing information onto the platform,” the ministry said.
Intermediaries will also be required to inform users of the penalties they may face if convicted of knowingly creating or spreading deepfake content. “Users must be made aware of various penal provisions of the Indian Penal Code 1860, the IT Act, 2000 and such other laws that may be attracted in case of violation of Rule 3(1)(b).
In addition, terms of service and user agreements must clearly highlight that intermediaries are under obligation to report legal violations to law enforcement agencies under the relevant Indian laws applicable to the context,” it added. Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, states that intermediaries, including the likes of Meta’s