The Exemption Option: AI and Believers

Emerging tools have to justify themselves to us more than we have to justify ourselves to emerging tools.

At the moment, AI seems to be making new incursions on everyday life almost daily. While some people embrace AI, often depicted as “the future,” others are unsure about some of its uses, and some people are defiantly against many of its uses. It is hard both to understand exactly what is happening and to decide exactly how we should respond. With loud alarmists and enthusiasts all around, it can be hard to discern what is worth worrying about. Even more troubling is the way in which we are often being denied the opportunity to practice discernment.

At present, AI does not seem optional. We already have AI infused in many familiar products, whether we want it or not, and many of us are facing AI mandates in the workplace. Too often those mandates seem to be based on fear of being left behind in some imagined future rather than on evidence that AI is improving anything in particular. Everywhere we turn, people talk about “inevitability.” We have no time for cautious, intentional adoption and certainly no time for gathering evidence or objections. Yet there are very good grounds for objections in some cases, even religious objections.

It is perfectly reasonable for people to have conscientious objections to many AI uses, objections which ought to be recognized. As we hurtle toward the cliff of mass adoption without reflection, it is time for those who are convinced in their hearts to pull the handbrake and insist on the possibility of religious exemptions for some AI uses. The need for recognition seems clear when we consider the religious motivations for many of these concerns.

It may seem silly to some people, but serious conversations are underway about whether or not AI is evil. A more extreme take on this can be found in Paul Kingsnorth’s Against the Machine, which came out in September 2025. Kingsnorth argues that many of those developing AI have articulated evil intentions, and he considers the possibility that something is not just being built in the cloud but is being embodied:

“Whatever is quite happening, it feels to me as if something is indeed being ‘ushered in’. Through our efforts and our absent-minded passions, something is crawling toward the throne. The ruction that is shaping and reshaping everything now, the earthquake born through the wires and towers of the web, through the electric pulses and the touchscreens and the headsets: these are its birth pangs. The internet is its nervous system.” (261)

In another passage, Kingsnorth asks us to consider whether or not AI is the antichrist or an antichrist.

Kingsnorth’s take seems extreme, but he is hardly alone in his fears and suspicions. All kinds of sources force us to consider the spiritual dimension of recent tech developments. A few months ago, celebrity podcaster Joe Rogan asked if Jesus might return as AI. Peter Thiel of Palantir seems to have an unnatural interest in the antichrist: he brings it up frequently in interviews and gives private talks on the topic. Many of the people who made money developing AI have expressed concerns that can only be described as metaphysical.

Whether or not AI is a golem, it carries a heavy metaphysical load. Most people do not mind if it codes websites or helps translate ancient documents or serves as an excellent medical sleuth for uncommon conditions. But AI is also being used as a therapist, as a friend, as a romantic partner, and even as a spiritual guide. What will it guide people to? It can coach you in contemplative prayer or it can function like a Ouija board. It can have a very negative impact on human relationships because it typically creates more distance between people. In many ways it challenges the value of the imago dei.

Even if you’re skeptical about AI being consciously evil and believe many uses are good, you may wonder if there is something evil about how it is currently being built, justified, and used. Sam Altman, of OpenAI, compared AI to a human baby because both need resources to learn and grow. In fact, he suggested that AI makes better use of resources than a baby. Proposed uses of AI include cutting jobs, replacing humans, and monitoring society at a mass scale. The energy and environmental costs of AI are significant and seem very likely to diminish the quality of life of some humans. Of course, even if AI were environmentally clean, the LLMs we see and use built their knowledge on illegal access to copyrighted material.

We can go on forever about AI practices or tools that should make us pause. And there are more things up ahead. For example, companies are working on biological computing, which uses real human neurons. Wendell Berry has been telling us to be cautious with technology adoption for some time, but these developments have caused more people to wake up and take these warnings seriously.

Many people operating with a religious framework already have some clear views about the nature of humanity and embodiment and feel morally compelled to opt out of at least some AI uses. A short, coherent statement of belief can and should be crafted and used to delimit the boundaries of acceptable, personal AI use. Workplace religious accommodations are enshrined in law and have a meaningful range, limited chiefly by “undue hardship.” A full account of the legal foundation of such an option exceeds the scope of this essay, but AI tools are still being crafted and adopted; they are not (yet) essential in all fields.

Even those who do not personally want a religious exemption should welcome the existence of such a carveout. It reinforces the truth that technology adoption does not have to be automatic. Emerging tools have to justify themselves to us more than we have to justify ourselves to emerging tools.

A religious exemption for relying on AI in businesses, schools, and government services would benefit all Americans. At present, we are all subject to the “tyranny of the majority” that John Stuart Mill warned us about. Obviously, exemptions would offer refuge to the people who do genuinely believe AI is evil, but even AI enthusiasts will benefit from a society in which people are able to have their own beliefs and practices and where the exchange of ideas is free and open.

A movement for exemption, with some identifiable beliefs, can offer coherence to existing resistance and is more likely to achieve recognition than isolated voices. Clearly, it would demonstrate that commitment to anachronism is not what defines everyone who has doubts about AI. For Christians, our faith should already be shaping our usage of AI, and this conversation could help us explain and express that in a way which is helpful to our civic community.

The people who do want to preserve some space between themselves and some AI on the basis of sincere religious belief deserve recognition, and that recognition would be helpful for all kinds of people. A religious exemption for AI (in at least some of its uses) is worth thinking about and acting on now. If you would like to be in touch about this topic, please connect.

Image Credit: John Constable, “A House and Haystack at Flatford” (1827)


Elizabeth Stice

Elizabeth Stice is a professor of history at Palm Beach Atlantic University, where she also serves as the assistant director of the Honors Program. She is the author of Empire Between the Lines: Imperial Culture in British and French Trench Newspapers of the Great War (2023). In her spare time, she enjoys ultimate frisbee and putting together a review, Orange Blossom Ordinary.
