In a blog post on Wednesday, Meta Platforms said it would require advertisers to disclose whether their digitally altered or created ads portray real people doing or saying something they did not do or say, or depict a realistic-looking person who does not exist.
Advertisers would also have to disclose whether these ads show events that did not take place, alter footage of a real event, or depict a real event without using an authentic image, video, or audio recording of it, the company said. Meta also said it will continue to rely on independent fact-checking partners to review misinformation, and will not run ads rated as false, altered, partly false, or missing context.
If artificial intelligence is used to alter an ad in a way that could mislead people, the content will not be allowed, it added. In June this year, Google’s YouTube said it would stop taking down content that promotes false claims about the 2020 US presidential election.
In August, X permitted political ads again after banning them for several years. Several US lawmakers have raised concerns that generative AI tools, which make it cheap and easy to produce deepfake content, could be used to create political ads that falsely depict candidates and mislead voters.
The Facebook owner has already blocked its user-facing Meta AI virtual assistant from creating photorealistic images of public figures. In October, Meta Platforms’ top policy executive Nick Clegg said that the use of generative AI in political advertising was “clearly an area where we need to update our rules”.