The owner of Facebook and Instagram says it'll put labels on political ads created using artificial intelligence
WASHINGTON — Facebook and Instagram will require political ads running on their platforms to disclose if they were created using artificial intelligence, their parent company announced on Wednesday.
Under the new policy by Meta, labels acknowledging the use of AI will appear on users' screens when they click on ads. The rule takes effect Jan. 1 and will be applied worldwide.
The development of new AI programs has made it easier than ever to quickly generate lifelike audio, images and video. In the wrong hands, the technology could be used to create fake videos of a candidate or frightening images of election fraud or polling place violence. When amplified by the powerful algorithms of social media, these fakes could mislead and confuse voters on an unprecedented scale.
Meta Platforms Inc. and other tech platforms have been criticized for not doing more to address this risk. Wednesday's announcement — which comes on the day House lawmakers hold a hearing on deepfakes — isn't likely to assuage those concerns.
While officials in Europe are working on comprehensive regulations for the use of AI, time is running out for lawmakers in the United States to pass regulations ahead of the 2024 election.
Earlier this year, the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads before the 2024 election. President Joe Biden's administration last week issued an executive order intended to encourage responsible development of AI. Among other provisions, it will require AI developers to share safety data and other information about their programs with the government.
Read more on abcnews.go.com