The penalties, covering sections 66C and 66D of the Information Technology Act, 2000; section 123(4) of the Representation of the People Act, 1951; and sections 171G, 465, 469 and 505 of the Indian Penal Code, could lead to several years of imprisonment and fines for the perpetrators. However, global enforcement of such regulations has so far been challenging, given the evolving nature of AI and digital content.
To address these challenges, 20 leading technology companies, including Adobe, OpenAI, IBM, LinkedIn, Snap, and TikTok, signed an accord on 16 February in Munich to identify AI-altered political content, limit its distribution, and enhance “cross-industry resilience" against “deceptive AI-based election content". The C2PA initiative, comprising Adobe, Google, Intel, Microsoft and Sony, among others, seeks to advance these efforts further.
The central theme is to develop standards for content credentials, attaching “tamper-proof metadata" that reveals the details of a piece of content, including its origin, whether in text, image, video, or audio format. While efforts to identify content sources have faced intense scrutiny, the dangers associated with deepfakes (altered versions of existing videos crafted to convey alternative political messages) have prompted governments and the industry to curb the menace, especially with the increasing integration of AI features into mainstream social media and mobile apps.