The prevailing wisdom about human behavior and belief today is that facts are not persuasive in changing minds or shifting beliefs. Usually, that's true. It's one reason there is so much focus on becoming a better storyteller, and why everyone from politicians to breakfast cereal marketers relies primarily on stories to connect with people. Stories do work, but a study featuring a chatbot called the DebunkBot is challenging this long-held idea, as profiled in a recent NY Times article:
“Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out. The theory was that people adopt conspiracy theories to sate an underlying need to explain and control their environment, [but] what if debunking attempts just haven’t been personalized enough?
Since conspiracy theories vary so much from person to person — and each person may cite different pieces of evidence to support one’s ideas — perhaps a one-size-fits-all debunking script isn’t the best strategy. A chatbot that can counter each person’s conspiratorial claim of choice with troves of information might be much more effective, the researchers thought.”
Their idea not only worked in about 20% of cases (an unusually high number), but when those same conspiracy theorists were interviewed two months later, they had not gone back to their old beliefs, making this an extremely rare case where belief change was not short-lived.
Past research has shown that someone who is accustomed to family and friends attacking them for their unpopular beliefs will often respond by digging in their heels and doubling down on those same beliefs. So it's logical to think they may be more inclined to listen to, and even trust, an impartial machine. In other words, this could be a situation where AI can do what humans seemingly can't: bring someone back from the edge of a conspiracy theory and get them to change their mind.