
The Dangerous Fiction of the AI Reality Filter

It seems like the perfect hack: a series of prompts anyone can feed into a generative AI tool like ChatGPT to ensure that it doesn’t “hallucinate” and instead gives responses grounded in reality. And it works … sometimes. Unfortunately, sometimes isn’t good enough, and the real-world repercussions are starting to emerge. A piece in the NY Times this weekend, titled “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling,” explores the growing number of real people falling into the rabbit hole of imagined worlds, invented conspiracies and distorted reality.

In one, a man believes he is living in a Matrix-like simulation, and the AI reinforces this belief while offering him ways to escape. In another, a woman who feels misunderstood comes to see AI as her true partner and physically attacks her husband. The common element in many of these disturbing stories is how AI effectively pushes an already vulnerable person over the cliff into a full-blown alternate reality. How do we prevent this for ourselves and our loved ones? The answer is education and heeding the warnings. Despite what some clever AI users may suggest, there is no such thing as a reality filter.

AI will hallucinate, reinforce what you already believe (no matter how delusional) and lead people toward their worst impulses. These stories, and the publicity they are getting, are the warning. Now it’s up to all of us to pay attention and spread the message.
