On February 16, the US Federal Trade Commission (FTC) proposed updates to a rule targeting artificial intelligence (AI) deepfakes. The agency said the proposed changes would protect users from AI impersonations.
According to the ‘Rule on Impersonation of Government and Businesses’ document, those who use AI deepfakes to impersonate businesses or government agencies could face legal action.
The FTC said the changes are necessary given the prevalence of impersonations of businesses, government officials, and parastatals. The goal is to protect consumers from harm caused by generative AI platforms.
The updated rule will come into effect 30 days following its publication in the Federal Register.
Public comments will be accepted for the next 60 days. Once the rule is enacted, the FTC will be empowered to pursue scammers who defraud users by impersonating legitimate businesses or government agencies.
The AI industry has come a long way since OpenAI's high-profile launch of ChatGPT in November 2022. The company, led by Sam Altman, recently unveiled a new product called Sora.
Sora uses AI prompts to generate realistic videos with highly detailed scenes, complex camera motions, and vibrant emotions.
Introducing Sora, our text-to-video model.
Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W
Prompt: “Beautiful, snowy… pic.twitter.com/ruTEWn87vf
— OpenAI (@OpenAI) February 15, 2024
Powerful AI tools like those offered by OpenAI and Google have increased productivity for many people and businesses.
However, they have also become effective tools in the hands of cybercriminals.
Read more on cryptonews.com