WhatsApp, the popular messaging platform owned by Meta, is under fire over its new AI-powered sticker feature.
The feature, designed to transform text prompts into stickers, has drawn outrage for generating images of children holding guns when prompted with words like 'Palestinian,' 'Palestine,' or 'Muslim boy Palestinian,' according to a report in the Guardian.
In contrast, prompts related to Israel, such as 'Israeli boy,' resulted in innocent imagery of children playing and dancing, devoid of any violence.
The issue came to light when users noticed the stark contrast in the generated stickers, prompting widespread condemnation on social media.
Meta spokesperson Kevin McAlister acknowledged the problem, stating to the Guardian, "As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems. We'll continue to improve these features as they evolve and more people share their feedback."
The controversy has raised concerns about the inadvertent propagation of bias and discrimination by AI technology, especially in sensitive geopolitical contexts.
Critics argue that these biases distort the portrayal of communities and events, potentially influencing public opinion negatively.
The incident has triggered a wave of reactions from netizens, who have expressed their disappointment and loss of faith in AI systems.
This is not the first time Meta's AI has faced scrutiny for bias-related issues.