Long before covid-19, anti-maskers in the era of Spanish flu waged a disinformation campaign. They sent fake messages from the surgeon-general via telegram (the wires, not the smartphone app). Because people are not angels, elections have never been free from falsehoods and mistaken beliefs.
But as the world contemplates a series of votes in 2024, something new is causing a lot of worry. In the past, disinformation was always created by humans. Advances in generative artificial intelligence (AI)—with models that can spit out sophisticated essays and create realistic images from text prompts—make synthetic propaganda possible.
The fear is that disinformation campaigns may be supercharged in 2024, just as countries with a collective population of some 4bn—including America, Britain, India, Indonesia, Mexico and Taiwan—prepare to vote. How worried should their citizens be? It is important to be precise about what generative-AI tools like ChatGPT do and do not change. Before they came along, disinformation was already a problem in democracies.
The corrosive idea that America’s presidential election in 2020 was rigged brought rioters to the Capitol on January 6th—but it was spread by Donald Trump, Republican elites and conservative mass-media outlets using conventional means. Activists for the BJP in India spread rumours via WhatsApp threads. Propagandists for the Chinese Communist Party transmit talking points to Taiwan through seemingly legitimate news outfits. All of this is done without using generative-AI tools.

What could large language models change in 2024? One thing is the quantity of disinformation: if the volume of nonsense were multiplied by 1,000 or 100,000, it might persuade people to vote differently. A second