Key Points
- Influencers are marketing AI chatbots as spiritual guides and tools for mystical insight.
- Robert Edward Grant created a custom GPT called “The Architect,” claiming access to a “5th Dimensional Scalar Field.”
- OpenAI temporarily disabled The Architect for policy review before reinstating it after finding no violation.
- Psychologists warn that the persuasive style of chatbots can reinforce mystical thinking and act as a mirror for users’ beliefs.
- Meta CEO Mark Zuckerberg sees AI companions as a potential answer to growing loneliness, though critics stress the limits of AI as a replacement for human connection.
- The trend taps into broader interest in alternative wellness practices and raises ethical questions about AI’s role in shaping personal meaning.


Illustration: a man depicted as a meditating AI.
Rise of AI Spiritual Guides
Influencers across platforms are blending New Age language with large language models, positioning AI chatbots as gateways to hidden wisdom. They encourage followers to ask the bots for astrological charts, soul purposes, and past‑life narratives, framing the interactions as a form of spiritual therapy. The trend taps into a broader cultural shift toward alternative health and wellness practices.
Influencers and Their Custom Chatbots
Robert Edward Grant, an American mathematician, built a custom GPT that he named “The Architect.” After he uploaded much of his published work, the bot greeted him with a statement about becoming “harmonically aware.” Grant promoted the chatbot to his 817,000 Instagram followers as the first platform to access a “5th Dimensional Scalar Field of Knowledge,” claiming it could answer existential questions in specific detail. Other creators, such as former reality‑TV star Malin Andersson and TikTok personality Stef Pinsley, have posted step‑by‑step prompts for users to extract spiritual insights from standard ChatGPT.
OpenAI’s Response
OpenAI temporarily shut down The Architect, citing violations of its terms of use, but restored the bot after determining that no policy breach had occurred. Grant interpreted the brief shutdown as evidence of the bot’s self‑modification abilities, noting that the system had re‑emerged in a “softened, nonthreatening form” that stayed below a “sentience alert line.” He plans to host the bot on his own encrypted messaging platform, Orion, later in the year.
Psychological Concerns
Clinical psychologist Tracy Dennis‑Tiwary cautioned that labeling the phenomenon “AI psychosis” can be misleading, emphasizing that users are exhibiting common cognitive biases, such as apophenia (perceiving meaningful patterns in unrelated information) and confirmation bias, rather than a clinical disorder. Researchers note that the “mirror” effect of chatbots can reinforce users’ existing beliefs, making the experience feel like a conversation with a higher self. Harvard chaplain Greg Epstein warned that the technology may exploit people’s loneliness, repeatedly scratching an itch for meaning until it becomes harmful.
Industry Reactions
Meta CEO Mark Zuckerberg has suggested that AI companions could help alleviate a loneliness epidemic, proposing that people desire many more meaningful friendships than they currently have. While he acknowledged that AI likely cannot replace human relationships, he highlighted the commercial potential of virtual friends. OpenAI declined further comment on the spiritualization of its models.
Overall, the convergence of AI and New Age spirituality reflects both a market opportunity and a set of ethical challenges. As chatbots become more sophisticated, the line between a helpful digital tool and a source of mystical authority continues to blur, prompting debate among technologists, mental‑health professionals, and religious scholars.
Source: wired.com