Some observers on social media quickly dismissed it as an "AI-generated fake" — created using artificial intelligence tools that can produce photorealistic images with a few clicks.
Several AI specialists have since concluded that the technology was probably not involved. By then, however, doubts about the image's veracity were already widespread.
Since Hamas' terror attack Oct. 7, disinformation watchdogs have feared that fakes created by AI tools, including the realistic renderings known as deepfakes, would confuse the public and bolster propaganda efforts.
So far, they have been correct in predicting that the technology would loom large over the war — but not exactly for the reason they expected.
Disinformation researchers have found relatively few AI fakes, and even fewer that are convincing. Yet the mere possibility that AI content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic.
On forums and social media platforms like X (formerly known as Twitter), Truth Social, Telegram and Reddit, people have accused political figures, media outlets and other users of brazenly trying to manipulate public opinion by creating AI content, even when the content is almost certainly genuine.
"Even by the fog of war standards that we are used to, this conflict is particularly messy," said Hany Farid, a computer science professor at the University of California, Berkeley and an