Scientists surprised themselves when they found they could instruct a version of ChatGPT to gently talk people out of their beliefs in conspiracy theories, such as the notions that covid was an attempt at population control or that 9/11 was an inside job. The most important revelation wasn’t about the power of AI, but about the workings of the human mind.
The experiment punctured the popular myth that we’re in a post-truth era where evidence no longer matters, and it flew in the face of a prevailing view in psychology that people cling to conspiracy theories for emotional reasons and that no amount of evidence can ever disabuse them. “It’s really the most uplifting research I’ve ever done,” said psychologist Gordon Pennycook of Cornell University, one of the authors of the study. Study subjects were surprisingly amenable to evidence when it was presented the right way.
The researchers asked more than 2,000 volunteers to interact with a chatbot, GPT-4 Turbo, a large language model (LLM), about beliefs that might be considered conspiracy theories. The subjects typed their belief into a box, and the LLM decided whether it fit the researchers’ definition of a conspiracy theory. It then asked participants to rate how sure they were of their belief on a scale of 0% to 100%.
Then it asked the volunteers for their evidence. The researchers had instructed the LLM to try to persuade people to reconsider their beliefs. To their surprise, it was pretty effective.
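For readers curious what such a setup looks like in practice, here is a minimal sketch of one exchange, assuming the OpenAI Python SDK; the model name, prompt wording and helper function are illustrative assumptions, not the researchers’ actual code.

```python
# Minimal sketch of a persuasion-style exchange with an LLM.
# Assumes the OpenAI Python SDK; the prompt wording, model name and
# confidence scale are illustrative, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user holds a belief that may be a conspiracy theory. "
    "Politely and factually present counter-evidence tailored to the "
    "specific claims and evidence the user offers. Do not mock or lecture."
)

def persuasion_round(belief: str, evidence: str, confidence: int) -> str:
    """Send the user's belief, supporting evidence and 0-100% confidence
    rating to the model, and return its tailored counter-argument."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": (
                    f"Belief: {belief}\n"
                    f"My evidence: {evidence}\n"
                    f"How sure I am: {confidence}%"
                ),
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    reply = persuasion_round(
        belief="The moon landing was staged.",
        evidence="The flag appears to wave even though there is no air.",
        confidence=80,
    )
    print(reply)
```

The key design point the study highlights is that the counter-evidence is tailored to whatever the participant actually typed, rather than being a generic debunking script.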
People’s faith in false conspiracy theories dropped 20%, on average. About a quarter of the volunteers dropped their belief level from above 50% to below it. “I really didn’t think it was going to work, because I really bought into the idea that ...”