Are AI Companies Actually Ready to Play God?


Holiday rituals and gatherings offer something precious: the promise of connecting to something greater than ourselves, whether friends, family, or the divine. But in the not-too-distant future, artificial intelligence—having already disrupted industries, relationships, and our understanding of reality—seems poised to reach even further into these sacred spaces.

People are increasingly using AI to replace talking with other people. Research shows that 72 percent of teens have used an artificial intelligence companion—chatbots that act as companions or confidants—and that 1 in 8 adolescents and young adults use AI chatbots for mental health advice.

Those without emotional support elsewhere might appreciate that chatbots offer both encouragement and constant availability. But chatbots aren’t trained or licensed therapists, and they aren’t equipped to avoid reinforcing harmful thoughts—which means people might not get the support they seek.

If people keep turning to chatbots for advice, entrusting them with their physical and mental health, what happens if they also begin using AI to get help from God, even treating AI as a god?

Does Chatbot Jesus or Other AI Have a Soul?

Talking to and seeking guidance from nonhuman entities is something many people already do. This might be why people feel comfortable with a chatbot Jesus that, say, takes confessions or lets them talk to biblical figures.

Even before chatbots went mainstream, Google engineer Blake Lemoine claimed in 2022 that LaMDA, the AI model he had been testing, was conscious and felt compassion for humanity, and that he had therefore been teaching it to meditate.

Although Google fired Lemoine (who then claimed religious discrimination), Silicon Valley has long flirted with the idea that AI might give rise to something like religion, something far beyond human comprehension.

Former Google CEO Eric Schmidt muses about AI as “the arrival of an alien intelligence.” OpenAI CEO Sam Altman has compared starting a tech company to starting a religion. In her book “Empire of AI,” journalist Karen Hao quotes an OpenAI researcher describing developers who “believe that building AGI will cause a rapture. Literally, a rapture.”

Chatbots clearly appeal to many people’s spiritual yearnings for meaning and a sense of belonging in a difficult world. This allure rests partly on chatbots’ readiness to flatter and commiserate, no matter what people ask of them.

Indeed, as AI companies continue to pour money and energy into development, they face powerful financial incentives to tune chatbots in ways that steadily heighten their appeal.

It’s easy, then, to imagine people’s confidence in and attachment to chatbots intensifying to the point where a chatbot could even serve as a deity. Lemoine’s willingness to believe that LaMDA possessed a soul illustrates how chatbots, equipped with fluent language, confident assertions, and storytelling abilities, can persuade people to believe even outlandish theories.

It’s no surprise, then, that AI might provide the type of nonjudgmental solace that seems to fill spiritual voids.

How ‘AI Psychosis’ Could Threaten National Security

No matter how genuine it might feel, however, so-called AI sycophancy provides neither true human connection nor useful information. This disconnect from reality—sometimes called AI psychosis—could worsen existing mental health problems or even threaten national security.

Analyzing 43 cases of AI psychosis, RAND researchers identified how human-AI interactions reinforced delusional beliefs, such as when users believed “their interaction with AI was with the universe or a higher power.”

Because it’s hard to know who might harbor AI delusions, the researchers cautioned, it’s important to guard against attackers who might use artificial intelligence to weaponize those beliefs, such as by poisoning training data to destabilize rival populations.

Even if AI companies aren’t explicitly trying to play God, they seem to be driving toward a vision of god-like AI. Companies like OpenAI and Meta aren’t stopping with chatbots that can hold a conversation; they want to build “superintelligent” AI, smarter and more capable than any human.

The emergence of a limitless intelligence would present new, darker possibilities. Just as charlatans throughout history have preyed on the religious fervor of the newly converted, developers might look for ways to manipulate superintelligent AI, and the devotion it inspires, for personal gain.

Ensure AI Truly Benefits Those Struggling for Answers

To be sure, artificial intelligence could play an important role in supporting spiritual well-being. For instance, religious and spiritual beliefs influence patients’ medical care preferences, yet overworked providers might be unable to adequately account for them. Could AI tools help patients articulate their spiritual needs to doctors or caseworkers? Or AI tools might advise care providers about patients’ spiritual traditions and perspectives, helping them deliver spiritually informed care.

As chatbots evolve into an everyday tool for advice, emotional support, and spiritual guidance, a practical question emerges: How can we ensure that artificial intelligence truly benefits those who turn to it in moments of need?

  • AI companies might try to resist competitive pressures to prioritize rapid releases over responsible development, investing instead in long-term sustainability by thoughtfully identifying and mitigating potential harms.
  • Researchers—both social and computer scientists—should work together to understand how AI affects different populations and what safeguards are needed.
  • Spiritual practitioners and religious leaders should help shape how these tools engage with questions of faith and meaning.

Yet a deeper question remains, one that people throughout history have grappled with and may now increasingly turn to AI to answer: Where can we find meaning in our lives?

With so many struggling today, faith has provided answers and community for billions. Spirituality and religion have always involved placing trust in forces beyond human understanding. But crucially, that trust has been mediated through human institutions—clergy, religious texts, and communities built on centuries of wisdom and accountability.

Anyone entrusted with guiding others’ faith—whether clergy, government leaders, or tech executives—bears a profound responsibility to prove worthy of that trust.

The question is not whether people will seek meaning from AI, but whether those building these tools will ensure that trust is well-placed.

– Douglas Yeung is a senior behavioral and social scientist at RAND, and a professor of policy analysis at the RAND School of Public Policy. Published courtesy of RAND

