how to prevent its misuse. The policy will regulate the use of artificial intelligence (AI)-generated content on platforms such as YouTube. According to the company, the policy focuses on creator disclosures when AI-generated content alters reality, and mandates the labelling of electoral advertisements crafted with generative AI, among other measures, besides enabling creators to watermark their content using Google's tools.
"We are working on the right way to tackle this issue. On YouTube, we plan to have disclaimers about deepfakes or synthetic content in video descriptions, and in certain more sensitive cases, within videos themselves," said Saikat Mitra, vice-president and head of trust and safety for Asia-Pacific at Google, in an interview with Mint. "YouTube is also working on a policy (which will) expect content creators to disclose that their content contains synthetic media and alters reality, and the consequences of not following that," he added.
While Mitra did not elaborate on the punitive measures, he said that, under existing policies, accounts have been suspended and content taken down when it violates YouTube's compliance guidelines. Concern over synthetic content gained prominence after the targeting of public figures, including Prime Minister Narendra Modi and actor Katrina Kaif, through deepfake videos across social media platforms. On 23 November, Union information technology (IT) minister Ashwini Vaishnaw met industry representatives, including Google, to discuss regulations specifically targeting AI-generated fake content.