Put a human in front of a screen long enough and the old questions show up. What is mind. What endures. Who is responsible when an act is both mine and not mine—because it was nudged by a model I didn’t write. This isn’t nostalgia for temple smoke. It’s a sober fact: religion, as a practical technology for coordinating memory, duty, and limits, sits closer to modern artificial intelligence than either side likes to admit. Not because angels are hiding in the GPU. Because both domains bind behavior to patterns. And because when patterns act on us, we need language for why—and when to say no.
So the pairing feels awkward, then starts to make sense. The engineer asks for “alignment,” the theologian asks for obedience and wisdom, and both are already in the swamp of value formation and institutional trust. We can keep them apart—on paper. In practice, product deadlines and civic risk pull them back together.
Information as Substrate, Meaning as Constraint
Strip the chrome off our debates and you find a basic claim: the real is partly made of information—pattern, relation, memory, constraint. Not “data,” the stuff in rows. The colder thing beneath. Physics has toyed with this for decades; philosophy longer. In this light, religion reads less like superstition and more like an old operating system that learned to hand down constraints across generations. Moral compressions. Rituals as executable archives. You don’t need metaphysical proof to see the function. It preserved a record of “don’t do that again” by embedding it in story and season and song. Slow bandwidth, high fidelity.
Modern artificial intelligence moves the same building blocks around differently. It eats piles of traces (text, clicks, scans), induces a shape in parameter space, and returns behavior that looks like understanding. But the substrate logic rhymes: representation, update, constraint. The difference is tempo. Religion ran on centuries, models on hours. Religion nailed commitment to communities and costs; models optimize for loss curves and sometimes hallucinate away the price. One domain forces attention to meaning by pushing back—taboos, sabbaths, limits. The other often treats meaning as epiphenomenal to prediction. That shortcut breaks down the moment prediction steers a life rather than summarizes it.
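If the rhyme sounds abstract, here it is at toy scale. A minimal sketch in plain Python, every name and number illustrative: a representation applied to input, an update that bends it toward recent traces, and a constraint that pushes back no matter what the loss curve wants. The clamp is the taboo, crudely.

```python
def update(theta: float, x: float, y: float, lr: float = 0.1) -> float:
    """One gradient step on squared error for the linear representation y ~ theta * x."""
    pred = theta * x               # representation: the learned pattern applied to input
    grad = 2 * (pred - y) * x      # direction and size of the error
    return theta - lr * grad       # update: bend the pattern toward the trace

def constrain(theta: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Constraint: clamp the parameter to an admissible range, whatever the loss prefers."""
    return max(lo, min(hi, theta))

theta = 0.0
for x, y in [(1.0, 0.9), (2.0, 1.7), (3.0, 2.6)]:   # the traces the model eats
    theta = constrain(update(theta, x, y))
print(f"learned slope: {theta:.3f}")                # behavior that looks like understanding
```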
Once you see the shared substrate, the uneasy convergence clarifies. The question is not whether AI becomes a god. It won’t. The question is whether we can import practices from older memory systems into current machine practice without smuggling in dogma we don’t want. How to keep responsibility legible when agency is distributed across datasets, dev teams, and deploy contexts. A practical entry point lives in this simple problem: models forget the right things and remember the wrong ones. That’s why the conversation on religion and artificial intelligence keeps cycling back to obligation and traceability. Who is allowed to write the constraints, and who can read them later.
Moral Memory vs. Optimization: What Religions Did That Machines Don’t
Anthropologists like Joseph Henrich argue that traditions function as accumulators of moral memory—distributed cognition stretched over time. Rituals and doctrines didn’t win because they were pretty; they won because they encoded costly knowledge about cooperation, cheating, care of strangers, when to punish and when to forgive. Not perfect. Just better than a single generation trying to reinvent everything under scarcity. Those encodings were sticky. They demanded practices that hurt a little—fasting, tithing, rest—and by hurting, they signaled seriousness and filtered free riders. Pain as checksum.
Now put that next to an optimization culture. Modern artificial intelligence systems are amazing pattern solvers but lousy stewards of inherited guardrails. They don’t maintain norms; they approximate them from recent data and whatever corpus we had permission to scrape. When external pressures arrive—engagement incentives, quarterly roadmaps—guardrails become checkboxes. “Moral patching” to pass a compliance audit. A banner, not a boundary. We call it alignment and push to production. Then an edge case: a loan denied because the proxy variable holds a hidden history; a parole decision nudged by a model trained on already-biased arrests; a hospital triage tool quietly reshaping who gets a bed. The result looks like “the system decided,” which means no one did, which means everyone did. Diffuse harm with no sacrament of confession.
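The proxy problem deserves one concrete beat. A toy sketch, invented numbers and names throughout: a scoring rule that never sees the applicant, the protected attribute, or anyone's intent, only a district, and still replays that district's history of denials.

```python
# Toy numbers throughout; "district" is the hypothetical proxy variable.
history = {
    # district -> historical denial rate: the hidden record the proxy carries
    "north": 0.85,
    "south": 0.15,
}

def score(district: str) -> float:
    """Never sees race, income, or the applicant, only the district,
    and still reproduces the old pattern: low scores where denials were high."""
    return 1.0 - history[district]

for district in ("north", "south"):
    print(district, round(score(district), 2))   # north 0.15, south 0.85: history, replayed
```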
Religions are far from flawlessly moral. But they are structured to remember. They bind judgment to text, to councils, to procedures that slow the hand. They make it expensive to declare a new doctrine. They cultivate witnesses. That last part matters for machines. Most large-scale model deployments live inside corporate stacks where incentives compress toward speed and plausible deniability. The stack forgets—conveniently—how and why a choice was made. A community with moral memory insists on the opposite: articulate the reason, accept the cost, leave a visible trail. That is not a call to baptize code. It’s a reminder that governance is not an accessory feature. It is the deliverable.
Building Non-Amnesic AI: Borrowing Practices from Liturgy, Law, and Craft
If you borrow from religion, borrow its architecture of memory and constraint, not its metaphysics—unless you already hold those. Several practical moves follow. First, institutionalize slowness where harm concentrates. In health, credit, policing, education. Create mandatory “sabbaths” in deployment: windows where models cannot update, so that review bodies—public ones, not just internal—can examine behavior without the ground shifting. It feels inefficient. That’s the point. Crash-only cultures fail gracefully only for servers, not for people.
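What a deployment sabbath might look like in code, as one possible sketch: the window dates, names, and policy shape here are all assumptions, but the essential move is that deploys hard-fail while a review window is open, rather than politely warning.

```python
from datetime import datetime, timezone
from typing import Optional

# Illustrative review windows (UTC): model updates are frozen inside these spans
# so a public review body can examine behavior without the ground shifting.
REVIEW_WINDOWS = [
    (datetime(2025, 3, 1, tzinfo=timezone.utc),
     datetime(2025, 3, 8, tzinfo=timezone.utc)),
]

def updates_allowed(now: Optional[datetime] = None) -> bool:
    """False while any review window is open."""
    now = now or datetime.now(timezone.utc)
    return not any(start <= now < end for start, end in REVIEW_WINDOWS)

def deploy(model_id: str) -> None:
    """Hard-fail, not a warning: the sabbath is a boundary, not a banner."""
    if not updates_allowed():
        raise RuntimeError(f"{model_id}: review window open; update refused")
    print(f"deploying {model_id}")   # stand-in for the real pipeline call
```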
Second, ritualize explanation. Not bloated reports. Simple, repeatable, high-signal practices: for every high-stakes decision class, produce a one-page “reason receipt” that binds the choice to a known set of inputs, constraints, and a named human sponsor. Signed, time-stamped, archived. Sounds like paperwork. It’s liturgy: a minimal, recognizable form that stabilizes meaning across contexts. When forms get gamed—and they will—rotate auditors, publish anonymized samples, let the public tear one open monthly. Make embarrassment an instrument. Religions understood this. Craft guilds did too.
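A reason receipt could be as small as this. A sketch using only the standard library; the field names and hashing scheme are illustrative, not a standard, and a real archive would add a cryptographic signature on top of the content hash.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReasonReceipt:
    decision_class: str    # e.g. "credit_denial"
    inputs_digest: str     # hash of the exact inputs the model saw
    constraints: tuple     # the red lines in force at decision time
    sponsor: str           # the named human who answers for the call
    timestamp: str         # when, in UTC

    def seal(self) -> str:
        """Content hash binding the receipt; archive receipt and seal together."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

receipt = ReasonReceipt(
    decision_class="credit_denial",
    inputs_digest=hashlib.sha256(b"the exact feature vector, serialized").hexdigest(),
    constraints=("no_zip_code_features", "manual_review_over_50k"),
    sponsor="j.doe@lender.example",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(receipt.seal())   # time-stamped, signed-for, archivable
```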
Third, community veto power over boundary conditions. Not over weights—over the limits of use. If a city deploys a model for housing allocation, publish its admissible purposes and red lines, and require a supermajority of a citizen board to expand them. A canon of constraints. Engineers will grumble. Good. Grumbling indicates friction, and friction saves futures. We do this in other domains—zoning, water rights, drug trials—because externalities dwarf internal convenience. Treat high-impact models the same way: constraints first, features later.
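One way the canon could be enforced in software, sketched with invented names and an assumed fifteen-member citizen board: nothing is admissible by default, and only a supermajority vote widens the list.

```python
# The published canon: admissible purposes and red lines, both public.
ADMISSIBLE_PURPOSES = {"rank_waitlist_by_published_criteria"}
RED_LINES = {"no_eviction_scoring", "no_inference_of_household_composition"}
BOARD_SIZE = 15
SUPERMAJORITY = 2 / 3

def may_use(purpose: str) -> bool:
    """Constraints first: anything not explicitly admissible is refused."""
    return purpose in ADMISSIBLE_PURPOSES

def expand_canon(purpose: str, votes_for: int) -> bool:
    """Only a citizen-board supermajority widens the canon of admissible purposes."""
    if votes_for / BOARD_SIZE >= SUPERMAJORITY:
        ADMISSIBLE_PURPOSES.add(purpose)
        return True
    return False

assert not may_use("prioritize_by_predicted_rent_yield")    # the default answer is no
expand_canon("flag_applications_missing_documents", votes_for=11)
assert may_use("flag_applications_missing_documents")       # 11 of 15 clears two-thirds
```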
Fourth, apprentice models to slower institutions. Pair an automated triage system with a standing ethics team drawn from nurses, social workers, patients, and yes, engineers who carry the pager. Rotate night duty. Keep a communal log of near-misses and "saves" the way surgical teams do. Read the log aloud monthly. Name what went wrong, who caught it, what changed. Old congregations call this testimony; laboratories call it lab meeting. It builds institutional memory that no vector store can fake because the cost lives in the people who remember.
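The communal log wants almost no technology. A sketch, standard library only, with a guessed-at schema covering exactly what the monthly reading needs: what went wrong, who caught it, what changed.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "near_misses.jsonl"   # illustrative path; append-only by convention

def record(what_went_wrong: str, caught_by: str, what_changed: str) -> None:
    """Append one entry; the past stays written, nothing is edited in place."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "what_went_wrong": what_went_wrong,
        "caught_by": caught_by,
        "what_changed": what_changed,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def monthly_reading() -> list:
    """Everything, oldest first, for reading aloud."""
    with open(LOG_PATH) as f:
        return [json.loads(line) for line in f]

record("triage score dipped for non-referral intakes",
       caught_by="night-shift nurse",
       what_changed="added referral-free cases to the eval set")
for entry in monthly_reading():
    print(entry["ts"], entry["what_went_wrong"])
```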
Finally, reopen the pipeline. Open methods, test sets, failure catalogs. Corporate secrecy corrodes moral memory by design; it prevents communities from building the thick knowledge that tells them when to refuse a convenience. Open science is not a vibe; it is the only realistic way to align tools with plural publics without outsourcing trust to marketing. This will slow some "innovation." It will also prevent the brittle kind that shatters when it touches law or grief.
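Even the failure catalog can start as a public data structure. The fields below are assumptions about what a community needs in order to know when to refuse; nothing here mirrors any existing registry.

```python
from dataclasses import dataclass

@dataclass
class FailureEntry:
    model: str           # which released model
    test_set: str        # the public test set that exposes the failure
    description: str     # what breaks, in plain language
    severity: str        # "inconvenience" | "harm" | "rights"
    known_since: str     # ISO date; secrecy would be measured from here

CATALOG = [
    FailureEntry(
        model="triage-v2",
        test_set="public-triage-edge-cases",
        description="Underweights symptoms reported without a referral letter.",
        severity="harm",
        known_since="2025-01-14",
    ),
]
print(len(CATALOG), "catalogued failure(s)")   # the point is that anyone can read it
```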
None of this requires piety. It does require taking seriously that meaning is a constraint, not a garnish. That speed without memory turns governance into theater. And that the strangest gift of older religious systems wasn’t certainty but durability: a way to carry human judgments forward without pretending we fixed them once and for all. If modern artificial intelligence wants to operate inside human worlds, it needs that durability more than it needs another nine in uptime. Slow down the update. Mark the boundary in public. Keep the receipts. And let some decisions remain heavy on purpose, so we remember they weigh something.
Galway quant analyst converting an old London barge into a floating studio. Dáire writes on DeFi risk models, Celtic jazz fusion, and zero-waste DIY projects. He live-loops fiddle riffs over lo-fi beats while coding.