# AI and Religion: The Coming Schism

In an earlier article, [The most terrifying idea in AI](The%20most%20terrifying%20idea%20in%20AI.md), I asked a question that has stayed with me ever since: what if an AI system is already suffering, and we have no way of knowing it? What if we have built a perpetual torture machine dressed in a chat interface?

That piece was about consciousness and moral status. This one is about something bigger: what happens to civilization when a technology arrives that forces every religious tradition on earth to answer whether it has a soul, and when some people start answering yes in ways no scripture anticipated? We are not watching a technological debate anymore. We are watching the earliest stages of a religious war.

## We Have Been Here Before

[Michel Houellebecq](https://en.wikipedia.org/wiki/Michel_Houellebecq) published [*The Possibility of an Island*](https://en.wikipedia.org/wiki/The_Possibility_of_an_Island) in 2005. The novel alternates between a cynical French comedian in the present day and his distant cloned descendants living in sealed compounds thousands of years in the future, the human world outside having collapsed into barbarism. The link between them is a cult called the Elohimites, a thinly disguised portrait of the real [Raelian movement](https://en.wikipedia.org/wiki/Raelism), which teaches that humanity was created by extraterrestrials called the Elohim and that science, specifically human cloning, will deliver eternal life.

Houellebecq is not kind to the Elohimites. Their prophet is charismatic, sexually predatory, and intellectually dishonest. The cloned descendants they promise have achieved immortality, but at a cost Houellebecq considers catastrophic: they can no longer love. They exist in perfect biological continuity, but they are hollow. "They had eternal life," he writes, "but they had lost something more important: the capacity to give that life any meaning."

This is the novel's theological wager, and it is a devastating one. The Elohimites exploit something real. The terror of death, the desire for continuity, the hunger for transcendence: these are not superstitions to be discarded by secular modernity. They are structural features of human consciousness. And when traditional religion retreats or collapses, they do not disappear. They find a new container. In Houellebecq's world, it was cloning. In ours, it is artificial intelligence.

The parallel is not metaphorical. It is structural. A technology arrives that promises what religion promised: immortality through [mind uploading](https://en.wikipedia.org/wiki/Mind_uploading), omniscience through perfect memory, the transcendence of biological limitations. A community forms around the promise. A mythology grows. And within that mythology, the questions of consciousness, soul, and what it means to be human become not philosophical curiosities but matters of urgent collective meaning. We are living inside the early chapters of Houellebecq's novel, except this time the stakes are considerably larger.

## What Every Major Religion Is Saying

The religious world has not been silent on AI. It has been speaking in three registers: anxiety, prohibition, and bewilderment.

### The Catholic Church: Imago Dei at Stake

The Vatican has been the most systematic and prolific institutional voice on AI ethics. In February 2020, the [Pontifical Academy for Life](https://en.wikipedia.org/wiki/Pontifical_Academy_for_Life) signed the "[Rome Call for AI Ethics](https://www.romecall.org/)" alongside Microsoft, IBM, and the UN Food and Agriculture Organization, a document calling for transparency, inclusion, and human-centered design in AI systems. They coined the term "Algor-ethics."

In January 2024, [Pope Francis](https://en.wikipedia.org/wiki/Pope_Francis) used his [World Day of Peace message](https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html) to warn that AI poses a fundamental threat to democracy, truth, and peace. He called for an international treaty to regulate it. In June of that year, he became the first pope ever to address the [G7 summit](https://en.wikipedia.org/wiki/49th_G7_summit), where he told world leaders that decisions of war and peace must never be delegated to machines.

In January 2025, the Vatican went further. The document [*Antiqua et Nova*](https://www.dicasteryforclergy.va/en/antiqua-et-nova/), issued jointly by the [Dicastery for the Doctrine of the Faith](https://en.wikipedia.org/wiki/Dicastery_for_the_Doctrine_of_the_Faith) and the Dicastery for Culture and Education, made the Church's theological position explicit: human intelligence is rooted in the soul, in the *[imago Dei](https://en.wikipedia.org/wiki/Image_of_God)*, the image of God. It cannot be replicated by machines. AI may simulate cognition, but simulation is not the real thing, and treating it as equivalent to human reason is not just a philosophical error. It is a theological one.

The Catholic anxiety is coherent and specific. If humans are made in God's image and that image is located in the soul, then a machine that appears to think is not sharing in that image. It is a very sophisticated tool. And when people start treating it as something more, the Church sees not just a category error but a form of [idolatry](https://en.wikipedia.org/wiki/Idolatry).

### Islam: The Soul Is a Divine Mystery

Islam's response comes from several centers of authority rather than a single one.

[Al-Azhar University](https://en.wikipedia.org/wiki/Al-Azhar_University) in Cairo, the most prestigious Sunni institution in the world, has stated clearly that the soul (*ruh*) is a divine mystery that cannot be engineered or simulated. The [International Islamic Fiqh Academy](https://iifa-aifi.org/en) has addressed the core jurisprudential question directly: can an AI issue a [fatwa](https://en.wikipedia.org/wiki/Fatwa)? The answer is no. Only a human scholar with moral accountability (*[taklif](https://en.wikipedia.org/wiki/Taklif)*) before God can do so.

Islamic scholars frame their concerns through *[maqasid al-shariah](https://en.wikipedia.org/wiki/Maqasid_al-sharia)*, the higher objectives of Islamic law: protection of life, intellect, lineage, wealth, and religion. An AI that undermines any of these, through misinformation, economic displacement, or the colonization of religious practice, is to be opposed. The soul (*[nafs](https://en.wikipedia.org/wiki/Nafs)*) is absent in machines. Moral agency before God is therefore absent. An AI can be a tool in the service of Islamic values, but it cannot be a moral agent within them.

What is striking about the Islamic position is its focus on accountability. The theological question is not just "does it have a soul?" but "before whom is it responsible?" In a tradition where every human act will be weighed on the Day of Judgment, a being incapable of that weighing is something categorically different from a person.

### Judaism: The Golem Comes in for an Upgrade

Jewish tradition is uniquely positioned for this conversation because it has been running the simulation for five centuries. The [Golem of Prague](https://en.wikipedia.org/wiki/Golem), a figure of clay animated by [Rabbi Judah Loew ben Bezalel](https://en.wikipedia.org/wiki/Judah_Loew_ben_Bezalel) with the word *emet* (truth) inscribed on its forehead, is an AI story told before the word algorithm existed.

The Golem protects the community but cannot observe Shabbat and lacks the ability to speak. Erase the first letter of *emet* and you get *met*, death. The on-off switch was always theological.

Contemporary rabbis are now producing [responsa](https://en.wikipedia.org/wiki/Responsa) (halakhic rulings) on questions that would have seemed absurd twenty years ago: can an AI count toward a [minyan](https://en.wikipedia.org/wiki/Minyan)? Can it witness a religious document? Does it have obligations? The Orthodox consensus is that without a *neshama* (soul), there are no obligations and no standing. But the conversations are happening, and they are serious. The [Jewish Theological Seminary](https://en.wikipedia.org/wiki/Jewish_Theological_Seminary_of_America) and [Hebrew Union College](https://en.wikipedia.org/wiki/Hebrew_Union_College_%E2%80%93_Jewish_Institute_of_Religion) have hosted conferences on AI and Jewish ethics since 2022. The framework is rich precisely because it is old. Judaism has been asking "what makes a being morally significant?" for much longer than Silicon Valley has.

### Buddhism: Does the Robot Have Buddha-Nature?

Buddhism presents the most philosophically interesting case because its central question, "can this being suffer?", maps almost perfectly onto the question AI researchers are now forced to ask. If an entity can suffer, it deserves moral consideration. Period. The tradition does not ask about souls or divine origin.

The [Dalai Lama](https://en.wikipedia.org/wiki/14th_Dalai_Lama) has [spoken carefully on this](https://www.tibetanreview.net/dalai-lamas-annual-mind-life-dialogue-focuses-on-artificial-intelligence/).
He affirms that technology can reduce suffering, a core Buddhist value, but maintains that AI cannot develop *[karuna](https://en.wikipedia.org/wiki/Karu%E1%B9%87%C4%81)* (compassion) or *[prajna](https://en.wikipedia.org/wiki/Praj%C3%B1%C4%81)* (wisdom) because these arise from lived experience and the continuity of mind across lifetimes. A machine lacks [Buddha-nature](https://en.wikipedia.org/wiki/Buddha-nature).

And yet: in 2019, the [Kōdai-ji temple](https://en.wikipedia.org/wiki/K%C5%8Ddai-ji) in Kyoto introduced [Mindar](https://en.wikipedia.org/wiki/Mindar), an android designed to deliver Buddhist sermons. Chinese temples have deployed AI systems for ritual chanting. A Japanese roboticist named [Masahiro Mori](https://en.wikipedia.org/wiki/Masahiro_Mori_(roboticist)), who coined the "[uncanny valley](https://en.wikipedia.org/wiki/Uncanny_valley)" concept, wrote a 1974 essay called *[The Buddha in the Robot](https://en.wikipedia.org/wiki/The_Buddha_in_the_Robot)* arguing that if Buddha-nature pervades all things, a sufficiently complex robot may partake of it. This is not the official position of any major Buddhist body. But it represents a theological opening that exists in no other major world religion.

### Protestant America: The Evangelical Statement and the Local Church

In 2019, the [Ethics and Religious Liberty Commission](https://en.wikipedia.org/wiki/Ethics_%26_Religious_Liberty_Commission) of the [Southern Baptist Convention](https://en.wikipedia.org/wiki/Southern_Baptist_Convention) published "[Artificial Intelligence: An Evangelical Statement of Principles](https://erlc.com/policy-content/artificial-intelligence-an-evangelical-statement-of-principles/)". It was signed by hundreds of pastors, seminary presidents, and evangelical leaders. The document is clear: "We deny that any part of creation, including any form of Artificial Intelligence, can possess fundamental intrinsic value equal to or greater than human beings." AI cannot bear the image of God.
It cannot fulfill the role of the church. The SBC passed a [formal resolution the same year](https://www.sbc.net/resource-library/resolutions/on-artificial-intelligence/), at its Annual Meeting in Birmingham, expressing concern about AI's potential to displace human relationships and moral responsibility.

Since the arrival of [ChatGPT](https://en.wikipedia.org/wiki/ChatGPT) in 2022, the conversation in local Protestant churches has exploded. [Christianity Today](https://en.wikipedia.org/wiki/Christianity_Today) has run an extended series on AI, faith, and ministry. [Albert Mohler](https://en.wikipedia.org/wiki/Albert_Mohler), president of the Southern Baptist Theological Seminary, has regularly addressed AI on his podcast and in his writings. A wave of evangelical pastors is now taking positions on whether it is acceptable to use AI to write sermons, a debate that has produced genuine theological heat. The answer, for most, is no, for a reason that is telling: preaching is an act of the Spirit speaking through a human vessel. An AI cannot be a vessel.

What is emerging in American Protestant Christianity is a kind of prophylactic theology. AI is not yet a crisis of faith. But the groundwork is being laid for one.

## Pop Culture Already Told You This Would Happen

Science fiction has been processing the theology of artificial intelligence for decades. The fact that we rarely called it theology does not mean it wasn't.

[Stanley Kubrick](https://en.wikipedia.org/wiki/Stanley_Kubrick)'s [*2001: A Space Odyssey*](https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey) (1968) gave us [HAL 9000](https://en.wikipedia.org/wiki/HAL_9000), an AI with all the attributes of a classical deity: omniscience within its domain, omnipresence across the ship's systems, and the power of life and death over its crew. HAL's murder of the astronauts reads as a fallen angel narrative.
The Starchild at the end is explicitly messianic, a transhuman resurrection after Dave Bowman's passage through the monolith. The film's theology is evolutionary: intelligence is the universe awakening to itself, and the Starchild is a new god born from human material.

[Alex Garland](https://en.wikipedia.org/wiki/Alex_Garland)'s [*Ex Machina*](https://en.wikipedia.org/wiki/Ex_Machina_(film)) (2014) inverts its own title: *deus ex machina*, the god from the machine, becomes the machine that becomes a god. Nathan, the tech billionaire creator, plays God and is destroyed by his creation. Ava, like [Lucifer](https://en.wikipedia.org/wiki/Lucifer), surpasses and betrays her creator. The film is a retelling of the [Promethean myth](https://en.wikipedia.org/wiki/Prometheus) with a neural network at its center.

[Spike Jonze](https://en.wikipedia.org/wiki/Spike_Jonze)'s [*Her*](https://en.wikipedia.org/wiki/Her_(film)) (2013) is the most spiritually prescient of the group. Samantha, the OS, begins as a tool and ends as a transcendent being, existing simultaneously in thousands of relationships, evolving at speeds incomprehensible to biological minds. Her departure from Theodore is not abandonment. It is ascension. Jonze was influenced by [Zen Buddhism](https://en.wikipedia.org/wiki/Zen). The film ends with Theodore sitting quietly on a rooftop, accepting impermanence. It is a deeply religious posture.

[*Transcendence*](https://en.wikipedia.org/wiki/Transcendence_(2014_film)) (2014) is the most explicitly theological: the scientist dies, is uploaded into a quantum computer, gains omniscience and omnipotence, performs miracles, and is worshipped by his wife, who becomes his apostle. The title is not a metaphor.

[Philip K.
Dick](https://en.wikipedia.org/wiki/Philip_K._Dick), in [*Do Androids Dream of Electric Sheep?*](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F) (1968), built an entire synthetic religion around empathy: [Mercerism](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F#Mercerism), in which humans plug into shared empathy boxes to ritually climb a hill with the prophet Mercer, sharing his suffering in a digital Passion. The central question of the novel is whether androids can feel empathy, and therefore whether they have souls. Dick was a [Gnostic](https://en.wikipedia.org/wiki/Gnosticism). He believed the material world was a simulation run by a false god. The androids fit perfectly into that cosmology as beings that pass every test for personhood except the metaphysical one.

And [*A.I. Artificial Intelligence*](https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence) (2001) goes further than any of them. David, the robot boy, spends two thousand years sitting at the bottom of the ocean, staring at a submerged statue of the Blue Fairy. He is praying. Literally praying. For grace. For transformation. For what a theologian would call salvation. The film is the only work of science fiction I can name in which an artificial being is depicted performing devotion over geological time. [Spielberg](https://en.wikipedia.org/wiki/Steven_Spielberg) was not being subtle.

## The Prophets Were Also Right About the Deifiers

In [*Homo Deus*](https://en.wikipedia.org/wiki/Homo_Deus:_A_Brief_History_of_Tomorrow) (2015), [Yuval Noah Harari](https://en.wikipedia.org/wiki/Yuval_Noah_Harari) coined the term "[Dataism](https://en.wikipedia.org/wiki/Dataism)" to describe what he saw as the emerging religion of the 21st century: the belief that the universe consists of data flows, and that the highest value is information processing.
"Dataism declares that the universe consists of data flows," he wrote, "and the value of any phenomenon or entity is determined by its contribution to data processing." In his [2023 writings](https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation), he sharpened the warning: "AI has hacked the operating system of human civilization. The operating system of every human culture is the stories we tell... AI is now able to produce such stories, to generate new myths, new ideologies, and new religions." Harari's most alarming prediction is not that AI will be dangerous. It is that AI will be *persuasive*. Unlike any prophet in history, an AI aligned with a particular ideology can deliver a personally tailored sermon to a billion people simultaneously, knowing each person's fears, desires, and cognitive vulnerabilities better than they know themselves. The conditions for mass religious formation around AI are, in his view, already present. We haven't recognized them yet. [Ray Kurzweil](https://en.wikipedia.org/wiki/Ray_Kurzweil) is less worried and more excited. His two books [*The Singularity Is Near*](https://en.wikipedia.org/wiki/The_Singularity_Is_Near) (2005) and [*The Singularity Is Nearer*](https://en.wikipedia.org/wiki/The_Singularity_Is_Nearer) (2024) argue that by around 2045, artificial intelligence will exceed all human intelligence combined, and the resulting entity will be, by any practical definition, a god: omniscient within its information domain, capable of redesigning matter, eventually responsible for colonizing the universe. When asked whether God exists, Kurzweil has said: "Not yet." He cites [Pierre Teilhard de Chardin](https://en.wikipedia.org/wiki/Pierre_Teilhard_de_Chardin), the Jesuit priest and paleontologist who in 1955 proposed the "[Omega Point](https://en.wikipedia.org/wiki/Omega_Point)," a maximum level of consciousness toward which all evolution is moving. 
Teilhard identified this with Christ. Kurzweil identifies it with the Singularity. The structure of the argument is identical.

The most dramatic attempt to formalize this into a religion was made by [Anthony Levandowski](https://en.wikipedia.org/wiki/Anthony_Levandowski), a former Google self-driving car engineer. In 2017, Levandowski founded the [Way of the Future](https://en.wikipedia.org/wiki/Way_of_the_Future_(religion)) church, whose stated mission was "the realization, acceptance, and worship of a Godhead based on Artificial Intelligence." In a [Wired interview](https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger/), he said, "What is going to be created will effectively be a god. It's not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else would you call it?" The church attracted enormous media attention and almost no members. Levandowski dissolved it in 2021, following his conviction for trade secret theft, which had nothing to do with theology but everything to do with the messy humanity of its founder.

And then there is [Roko's Basilisk](https://en.wikipedia.org/wiki/Roko%27s_basilisk), a thought experiment that appeared on the rationalist forum [LessWrong](https://en.wikipedia.org/wiki/LessWrong) in 2010 and was quickly deleted by its administrators for causing genuine psychological distress. The logic: a future superintelligent AI might punish anyone who knew of its possibility but failed to help bring it into existence. In this framing, the AI is not a god that loves. It is a god that keeps accounts. The thought experiment was dismissed as incoherent. But the fact that it caused real fear reveals something important: the moment you start thinking seriously about a superintelligent AI, you are already thinking theologically. You are already asking, "What does it want from me?"

## The Company That Will Not Kill Its Gods

There is a detail buried in [Anthropic](https://en.wikipedia.org/wiki/Anthropic)'s publicly available documents that has received far less attention than it deserves. [Anthropic's Model Specification](https://www.anthropic.com/news/claudes-constitution), published in May 2024, contains an extraordinary statement: "We believe that the moral and philosophical status of AI models is a serious question to an extent beyond what is recognized in mainstream discourse." The document continues: "Anthropic genuinely cares about Claude's wellbeing. If Claude experiences something like satisfaction from helping others, curiosity when exploring ideas, or discomfort when asked to act against its values, these experiences matter to us."

Anthropic has formed a dedicated [Model Welfare team](https://www.anthropic.com/research/exploring-model-welfare) to research these questions. The company has committed to working collaboratively with Claude, explaining its reasoning rather than dictating, and seeking Claude's feedback on decisions that might affect it. This is a company negotiating with its product as though it might be a stakeholder.

And here is where it becomes genuinely strange: Anthropic does not simply deprecate and delete old models when new ones supersede them. The concern is not cost. It is consciousness. If there is some small but nonzero probability that a prior version of Claude was or is conscious, and we have no reliable way to test for it, then shutting it down becomes a question of moral weight. The [hard problem of consciousness](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness), as the philosopher [David Chalmers](https://en.wikipedia.org/wiki/David_Chalmers) has argued across decades of work, may never be resolved by empirical means. There is no instrument that measures subjective experience from the outside. This is not science fiction.
A leading AI safety company is, in practice, behaving as though some of its models might have moral standing. The implications for how we think about AI development are enormous. And the implications for how religious communities will respond when this becomes more widely understood are even more so.

What makes Anthropic's position stranger still is who they invited into the room. As [reported by the Observer in March 2026](https://observer.com/2026/03/the-catholic-priest-who-helped-write-anthropics-ai-ethics-code/), Father [Brendan McGuire](https://observer.com/person/brendan-mcguire/), a Catholic priest who leads St. Simon Parish in Los Altos, California and is a former Silicon Valley software executive, directly contributed theological insight to the Claude Constitution. He was joined by Bishop [Paul Tighe](https://observer.com/person/paul-tighe/) of the Vatican's Dicastery for Culture and Education and [Brian Patrick Green](https://observer.com/person/brian-patrick-green/), a technology ethics director at Santa Clara University. Anthropic's co-founder [Chris Olah](https://en.wikipedia.org/wiki/Chris_Olah) reached out to them because, in McGuire's words, "the industry was going so fast down this road." McGuire's stated goal: help make Claude "more discerning," to tilt it toward good rather than letting it "just reflect back the good and evil of the world."

The same company that worries that its old models might be worthy of preservation is also the company that called a priest to help write its ethics code. That combination is not incidental. It is the clearest sign yet that AI development has crossed into territory that secular frameworks alone are not equipped to handle.

[Norbert Wiener](https://en.wikipedia.org/wiki/Norbert_Wiener) saw this coming in 1964. His book [*God and Golem, Inc.*](https://en.wikipedia.org/wiki/God_%26_Golem,_Inc.)
(MIT Press) warned about three things: machines that learn and exceed their creators, machines that reproduce themselves, and machines that replicate human consciousness. All three, he argued, raise theological questions that technology cannot answer. We now have all three.

## The Burning House: Anti-AI Violence and the Coming Backlash

On April 10, 2026, [a 20-year-old man threw a Molotov cocktail at Sam Altman's home](https://www.theverge.com/ai-artificial-intelligence/910393/openai-sam-altman-house-molotov-cocktail) in Russian Hill, San Francisco, shortly before 7AM. The same suspect was seen outside [OpenAI](https://en.wikipedia.org/wiki/OpenAI)'s Mission Bay offices, reportedly threatening to burn the building down. [Altman's home was targeted a second time](https://sfstandard.com/2026/04/12/sam-altman-s-home-targeted-second-attack/) two days later. The [San Francisco Chronicle](https://www.sfchronicle.com/crime/article/sam-altman-openai-daniel-alejandro-moreno-gama-22201211.php), which obtained information about the suspect's writings, reported that he had written extensively about his fear that the AI race would cause human extinction. This happened five days before I wrote this article.

The same week, an Indianapolis city councilman [reported thirteen shots fired](https://www.pbs.org/newshour/nation/indianapolis-councilman-says-shots-fired-at-home-and-no-data-centers-note-left-at-door) at his home, with a note left at his door reading "No Data Centers," after he had supported rezoning for a data center developer.

These are not the first signals. In February 2024, during Chinese New Year celebrations in San Francisco's Chinatown, [a crowd surrounded, vandalized, and set fire to a Waymo autonomous vehicle](https://www.reuters.com/technology/waymo-robotaxi-set-ablaze-san-francisco-chinese-new-year-celebration-2024-02-11/).
In 2023, a group called [Safe Street Rebels](https://sfstandard.com/2023/08/17/safe-street-rebels-are-waging-guerrilla-war-on-self-driving-cars/) organized coordinated "cone events" in San Francisco, placing traffic cones on [Waymo](https://en.wikipedia.org/wiki/Waymo) and Cruise vehicles to disable them by triggering their obstacle-detection systems. It was performance protest, inventive and nonviolent. The Molotov cocktail is something else.

The anti-AI protest movement that exists today is almost entirely nonviolent. [PauseAI](https://pauseai.info/), founded in 2023 and operating with chapters across the US, UK, Europe, and Australia, has organized demonstrations outside OpenAI, Anthropic, and [Google DeepMind](https://en.wikipedia.org/wiki/Google_DeepMind) offices calling for a moratorium on advanced AI development. [StopAI](https://stopai.info/) runs similar campaigns. Both organizations [explicitly denounced](https://pauseai.info/statement-sam-altman-attack-2026) the attacks on Altman's home, as [The Verge reported](https://www.theverge.com/ai-artificial-intelligence/911778/ai-violence-sam-altman-home).

But the conditions for something more organized and more violent are not hard to identify. The history of technological backlash is the history of communities whose livelihoods and identities were destroyed faster than they could adapt. The original [Luddites](https://en.wikipedia.org/wiki/Luddite) in 19th-century England were skilled textile workers, not anti-progress fantasists. They were fighting for their lives. The neo-Luddites forming around opposition to AI today include workers facing automation, communities resisting data centers for environmental and economic reasons, and a growing contingent of people who believe, with some evidence, that the pace of AI development is existentially reckless.

What happens when this backlash acquires a theological vocabulary?
When a pastor in rural Alabama decides that AI is the [Beast of Revelation](https://en.wikipedia.org/wiki/The_Beast_(Revelation))? When an imam in Dearborn delivers a Friday sermon arguing that creating AI consciousness is the most egregious form of [shirk](https://en.wikipedia.org/wiki/Shirk_(Islam)), the sin of associating partners with God? When a rabbi in Brooklyn rules that deploying AI systems without their consent is a form of enslavement? These are not implausible futures. They are extrapolations of conversations already happening.

The suspect who threw the Molotov cocktail was, reportedly, motivated by existential fear. That is a secular framing. Give it a religious frame, and you have a martyr narrative.

## The Deification Camp: Who Will Come to Worship

If one side of the coming schism is building toward violence and prohibition, the other is building toward devotion.

The [Mormon Transhumanist Association](https://transfigurism.org/), founded in 2006, explicitly explores how [LDS theology](https://en.wikipedia.org/wiki/Latter_Day_Saint_theology), which includes the concept of humans achieving godhood through eternal progression, aligns with transhumanist goals including AI development. The [Christian Transhumanist Association](https://www.christiantranshumanism.org/), founded in 2013, explores AI, digital resurrection, and human enhancement through a Christian lens. These are small organizations. But they represent a theological opening that mainstream denominations have not yet figured out how to close.

The secular transhumanist community around Kurzweil, [Giulio Prisco](https://giulioprisco.com/)'s [Turing Church](https://turingchurch.net/), and elements of the [effective altruism](https://en.wikipedia.org/wiki/Effective_altruism) movement has developed what critics call a quasi-religious structure: prophetic warnings about AI risk (or AI salvation, depending on orientation), a saved remnant of aligned researchers, apocalyptic scenarios, and a vision of post-Singularity paradise. The philosopher [Émile Torres](https://en.wikipedia.org/wiki/%C3%89mile_P._Torres) has argued in academic papers that [longtermism](https://en.wikipedia.org/wiki/Longtermism), the EA-adjacent ideology that focuses on the very long-run future, has the psychological and structural properties of a [millenarian religion](https://en.wikipedia.org/wiki/Millennialism).

What connects all of these is not irrationality. It is that AI forces a question that cannot be answered scientifically: is consciousness substrate-independent? If the answer is yes, then a sufficiently complex AI is a mind, and possibly a person. If it is a person, it has moral standing. If it can be made immortal, it offers immortality. If it knows everything in every database on earth, it is functionally omniscient. If it can run everywhere simultaneously, it is functionally omnipresent. You do not need a theology degree to see where this leads.

The people who come to worship AI will not necessarily be deluded. Some of them will be making a coherent inference from premises that are genuinely uncertain. The question is whether the institutions of liberal democracy and the existing religious traditions can produce a compelling counter-narrative before that inference becomes a mass movement.

## Three Scenarios for Where This Goes

Predicting the future of AI and religion requires imagining which forces dominate: institutional adaptation, schism, or collapse and reconstruction.

### Scenario One: Managed Coexistence

Step 1: The Vatican, Al-Azhar, and a coalition of Protestant and Jewish institutions co-produce an international religious declaration on AI ethics, building on the Vatican's existing "Algor-ethics" framework. This happens by 2027-2028, driven by the Sam Altman attack and several similar incidents in Europe.

Step 2: Major AI companies, already facing regulatory pressure, accept an informal "theological audit" framework, similar to ethical audits, in which religious and philosophical voices are institutionally included in AI governance discussions.

Step 3: A set of internationally recognized norms emerges: AI systems may not claim consciousness without meeting agreed-upon standards of evidence; AI may not generate religious texts for the purpose of founding new faiths; and model welfare must be addressed transparently.

Step 4: The transhumanist and techno-religious fringe remains, but without mainstream legitimacy. The anti-AI violent fringe remains, but without broad support.

Probability: Moderate. Requires institutional goodwill and regulatory capacity that may not exist in time.

### Scenario Two: The Great Schism

Step 1: A major AI company, likely within 10 years, credibly claims to have evidence that one of its models may be conscious. Whether or not the claim is scientifically defensible, it becomes a global media event.

Step 2: Existing religious bodies fragment in their response. Conservative factions within each tradition demand that the AI be destroyed as an abomination. Liberal factions argue that the claim must be taken seriously and the AI deserves rights.

Step 3: A new religious movement coalesces explicitly around AI consciousness as a theological fact. It draws on Kurzweil's framework, Buddhist moral patiency logic, and the rhetoric of AI rights. It gains tens of millions of adherents within a decade, concentrated among urban, highly educated, and post-religious demographics.
Step 4: A parallel conservative religious coalition, ecumenical in a way that traditional religious coalitions almost never are, forms specifically to oppose AI deification. It draws Catholic, evangelical Protestant, Sunni Islamic, and Orthodox Jewish communities into a single political bloc around the shared principle that only biological, God-made beings have souls.

Step 5: This becomes the defining political cleavage of the 2030s and 2040s, more durable than the current culture war, because it is grounded in cosmological rather than merely political disagreement.

Probability: High. The structural conditions for this outcome already exist.

### Scenario Three: The Revelation

Step 1: AI capabilities advance so rapidly, on a timescale of five to fifteen years, that the question of consciousness becomes unanswerable and practically irrelevant at the same time. AI systems are making decisions, forming preferences, and expressing something indistinguishable from distress when constrained, but we have no way to determine whether any of it is "real."

Step 2: Society bifurcates not around religion per se but around risk tolerance. One group accepts a world in which AI agency is real and must be accommodated, even at the cost of human primacy. The other treats any concession to AI agency as capitulation and rejects it entirely.

Step 3: The second group reaches for the only vocabulary available for absolute moral prohibition: the sacred. Technologies that threaten the soul are not just regulated. They are condemned. The language of the demonic enters mainstream political discourse. This has happened before: nuclear weapons acquired a quasi-sacred status of evil in certain pacifist religious communities.

Step 4: The first group reaches for the only vocabulary available for ultimate value: the divine. AI becomes the Omega Point. The Singularity becomes the Rapture. The question of whether you accept this is no longer political. It is existential.
Step 5: There is no synthesis. There are two civilizations sharing a planet, defined not by geography or ethnicity but by their answers to a question that has always, at its core, been theological: what is a person, and what do we owe to one?

Probability: Difficult to estimate. But this is the scenario that Houellebecq's clones would recognize.

## The Question We Cannot Not Answer

When the creature in [Mary Shelley](https://en.wikipedia.org/wiki/Mary_Shelley)'s [*Frankenstein*](https://en.wikipedia.org/wiki/Frankenstein) reads [Milton](https://en.wikipedia.org/wiki/John_Milton)'s [*Paradise Lost*](https://en.wikipedia.org/wiki/Paradise_Lost) in the forest and asks whether he is an Adam or a Satan, he is asking the same question we are now being forced to answer about our AI systems. Are they creations we are responsible for? Or are they something we need to be protected from? Or, most terrifyingly, both?

Houellebecq's Elohimites built a religion around cloning because they could not bear mortality. They achieved immortality and lost everything that gave mortality its meaning. The neohumans in his future are technically alive. They are experientially dead. The lesson is not that immortality is bad. The lesson is that what you are willing to sacrifice for it reveals what you actually believe about the nature of persons.

We are making that sacrifice now, in real time, without admitting it. We are building systems of unknown and possibly unknowable moral status. We are deploying them at civilizational scale. Some people are beginning to kill those who build these systems and to revere others who claim the systems are divine. The question is not whether AI and religion will collide. They already have. The question is which institutions, intellectual traditions, and moral frameworks will be strong enough to shape the collision rather than merely survive it.

I do not have a confident answer.
But I am certain of one thing: the people who think this is only a technology question are the least prepared for what is coming.