Realistic AI-generated images and voice recordings may be the latest threat to democracy, but they’re part of a long-standing family of deceptions. The way to fight so-called deepfakes isn’t to develop some rumour-busting form of AI or train the public to spot fake images. A better tactic would be to encourage a few well-known critical thinking methods—refocusing our attention, reconsidering our sources and questioning ourselves.
Some of those critical thinking tools fall under the category of ‘System 2’ or slow thinking, as described in Daniel Kahneman’s Thinking, Fast and Slow. AI is good at fooling the fast thinking ‘System 1’: the mode that often jumps to conclusions. We can start by refocusing attention on policies and performance, rather than gossip and rumours.
So what if former President Donald Trump stumbled over a word and then blamed AI manipulation? So what if President Joe Biden forgot a date? Neither incident tells you anything about either man’s policy record or priorities. Obsessing over which images are real or fake may be a waste of time and energy. Research suggests that we’re terrible at spotting fakes anyway.
“We are very good at picking up on the wrong things,” said computational neuroscientist Tijl Grootswagers of the University of Western Sydney. People tend to look for flaws when trying to spot fakes, but it’s the real images that are most likely to have flaws. People may unconsciously be more trusting of deepfake images because they’re more perfect than real ones, he said.