Nearly every major faith now has chatbots trained on its religious texts, designed by technologists who believe, ambitiously, that these tools can make religion more approachable and accessible to a new generation.
An article in this month’s Scientific American notes that in addition to letting anyone seek spiritual guidance from bots that promise “faithful translations of the scriptures,” academics are using these tools to compare “linguistic variations” among translations of ancient texts and even to run “sentiment analysis” on the language itself.
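To make the academic use concrete, here is a minimal sketch of what sentiment scoring across two translations of the same verse could look like, using NLTK’s off-the-shelf VADER analyzer. The verse wordings and translation labels are invented placeholders, and a real scholarly pipeline would be far more careful about its corpora and methods.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # fetch the lexicon on first run

# Two hypothetical renderings of the same passage, for illustration only.
translations = {
    "Translation A": "Blessed are the merciful, for they shall obtain mercy.",
    "Translation B": "Happy are those who show compassion; compassion will be shown to them.",
}

analyzer = SentimentIntensityAnalyzer()
for name, text in translations.items():
    # polarity_scores returns neg/neu/pos proportions and a compound score in [-1, 1]
    scores = analyzer.polarity_scores(text)
    print(f"{name}: compound={scores['compound']:+.3f}, pos={scores['pos']:.2f}")
```

Differences in the compound score between renderings hint at how word choice alone can shift the emotional register of a passage, which is roughly the kind of variation such research probes.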
Spiritual and academic uses aside, the technology unsurprisingly has its critics. A major concern is the potential for manipulation. As one scholar and ethicist noted, “There’s going to be a temptation to make money, to make notoriety and to gain attention by ascribing some type of revelatory quality to these chatbots.” It is an understandable worry. We have seen this before: televangelists and other self-promoters preying on people’s faith to fund their own egos and private jets. It is reasonable to assume a similar greed could corrupt this new marriage of technology and faith.
What if a religious chatbot, trained on passages from sacred texts, starts to condone murder or encourage violence? The “solution” most experts advocate for now is to treat advice from these tools with caution. Good advice, but potentially difficult to follow.