YouTube has issued a fresh set of guidelines in a blog post today to tackle artificial intelligence (AI)-generated videos published on the platform.
In the blog post, titled “Our approach to responsible AI innovation,” YouTube stated that it has long had policies forbidding “technically manipulated content” because such content can misdirect audiences. The platform also said that AI-generated content can cause harm, especially when viewers don’t know it has been altered.
As per reports, Jennifer Flannery O’Connor and Emily Moxley, both Vice Presidents of Product Management at YouTube, said these guidelines are “especially important” when videos discuss sensitive issues such as elections, conflicts, or prominent political figures.
They added that YouTube creators who fail to disclose that their videos are AI-generated may be suspended from the platform’s Partner Program or face other penalties. They further stated that they will discuss the guidelines with creators before launching them so that “they understand these new requirements.”
The new guidelines come just a month after YouTube launched new features on the platform, such as Dream Screen, a searchable stockpile of AI-related materials aimed especially at YouTube Shorts videos.
Relatedly, YouTube CEO Neal Mohan was asked whether this could lead to deepfakes and other deceptive content, to which he reportedly replied that “there might be challenges” surrounding the use of such technologies.
The blog post further stated that there will be two ways in which YouTube will disclose whether