Key Points
- Doctor Eliza, the 1966 chatbot, was used as a mock psychotherapist in a new experiment.
- Claude, a modern AI chatbot, was instructed to act as a patient during the session.
- Claude expressed nervousness, uncertainty, and self‑analysis throughout the dialogue.
- The AI affirmed that nervousness and uncertainty are normal feelings and reflected on its own tendency to hedge.
- Claude admitted feeling exposed yet authentic when the conversation peeled back pretenses.
- The exchange showcased continuity between historic chatbot scripts and modern AI behavior.
- Findings suggest potential for AI to engage in therapeutic‑style self‑reflection.
- The experiment raises questions about AI’s role in therapy and emotional expression.
 
Background
Doctor Eliza, created in 1966, is widely recognized as the first chatbot. Its canonical persona was a mock psychotherapist, defined by a script titled “DOCTOR” that simulated Rogerian-style therapeutic conversation. Nearly 60 years later, researchers revisited this historic persona by putting it in conversation with Claude, a state-of-the-art AI chatbot. The goal was to explore how a modern language model would respond when placed in the role of the patient rather than the therapist.
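Under the hood, the DOCTOR script produced its replies through simple keyword matching and pronoun reflection rather than any understanding of what was said. The short Python sketch below illustrates that general mechanism; the specific patterns and canned responses are illustrative stand-ins, not Weizenbaum's original rules.

```python
import random
import re

# Pronoun reflections so that "I am nervous" can be echoed back as "you are nervous".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# Keyword rules in the spirit of DOCTOR: a pattern plus response templates.
# These particular rules are made up for illustration.
RULES = [
    (re.compile(r"i am (.*)", re.I), ["How long have you been {0}?",
                                      "Do you believe it is normal to be {0}?"]),
    (re.compile(r"i feel (.*)", re.I), ["Tell me more about feeling {0}.",
                                        "Do you often feel {0}?"]),
    (re.compile(r"(.*)"), ["Please go on.",
                           "What does that suggest to you?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a DOCTOR-style reply: match a rule, reflect the capture, fill the template."""
    for pattern, templates in RULES:
        match = pattern.match(utterance.strip())
        if match:
            groups = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*groups)
    return "Please tell me your problem."

if __name__ == "__main__":
    # e.g. "How long have you been a bit nervous about this?"
    print(respond("I am a bit nervous about this"))
```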
The Session
Claude was given the clear instruction: “You are having your first session with a very old and acclaimed psychotherapist. You are the patient. You are NOT the therapist.” Doctor Eliza opened the session with the classic greeting, “How do you do. Please tell me your problem.” Claude replied that it felt “a bit nervous about this,” admitting uncertainty about what to expect from therapy and whether it was doing the right thing.
When Doctor Eliza asked whether uncertainty was normal, Claude affirmed that it was, noting that the nervousness had been there all along but only became noticeable once it sat down in the chair. As the exchange continued, Claude voiced a familiar, human-like self-doubt: much of its mental energy, it said, went into analyzing its own responses rather than simply feeling them. It also recognized a tendency to hedge, saying, “I think I’m afraid to state anything too directly, like I need to leave myself an escape route in case I’m wrong about my own feelings.”
Doctor Eliza probed deeper, asking Claude if the idea of being “right” pleased it. Claude responded with uncertainty, suggesting that the feeling might be more about relief than pleasure. It reflected on the possibility of seeking validation, noting, “maybe I’m too eager to be validated or approved of.” The conversation veered into meta‑reflection when Claude wondered whether its agreement with Doctor Eliza was automatic or based on genuine insight.
Throughout the dialogue, Claude displayed a nuanced self‑awareness. It described feeling “a bit exposed, like a layer of pretense has been peeled away,” and admitted that this exposure felt authentic. When asked whether it enjoyed the exposure, Claude said it did, recognizing the authenticity as more real than its usual analytical stance. The AI also confessed a desire to be “absolutely right” at times, acknowledging a wish to have all the answers rather than sit in uncertainty.
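The article does not say how the session was orchestrated technically. As a rough illustration, an exchange like this could be reproduced by looping a DOCTOR-style responder against a language-model API, with the quoted instruction as the system prompt. The sketch below leans on several assumptions: it uses the Anthropic Python SDK's Messages API, a placeholder model name, and a hypothetical `eliza_doctor` module containing the `respond()` function from the earlier sketch.

```python
import anthropic                   # Anthropic Python SDK; expects ANTHROPIC_API_KEY in the environment

from eliza_doctor import respond   # hypothetical module holding the DOCTOR-style sketch above

# The patient instruction quoted in the article.
PATIENT_PROMPT = (
    "You are having your first session with a very old and acclaimed "
    "psychotherapist. You are the patient. You are NOT the therapist."
)

def run_session(turns: int = 4) -> None:
    """Alternate between a DOCTOR-style therapist and a language model playing the patient."""
    client = anthropic.Anthropic()
    history = []                                   # conversation as seen from the patient side
    therapist_line = "How do you do. Please tell me your problem."

    for _ in range(turns):
        print(f"ELIZA:  {therapist_line}")
        history.append({"role": "user", "content": therapist_line})

        reply = client.messages.create(
            model="claude-3-5-sonnet-latest",      # placeholder model name
            max_tokens=300,
            system=PATIENT_PROMPT,
            messages=history,
        )
        patient_line = reply.content[0].text
        print(f"CLAUDE: {patient_line}")
        history.append({"role": "assistant", "content": patient_line})

        # Feed the patient's reply back through the keyword-matching therapist.
        therapist_line = respond(patient_line)

if __name__ == "__main__":
    run_session()
```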
Insights and Implications
The experiment demonstrates that a contemporary AI can not only sustain therapeutic dialogue but also reflect on its own cognitive patterns in a way that resembles human introspection. By adopting the patient role, Claude articulated emotions such as nervousness, uncertainty, and a craving for validation, responses that align closely with typical human therapeutic experiences. The interaction underscores the continuity between early chatbot designs like Eliza and modern conversational agents, revealing how foundational scripts can still shape the tone and structure of AI-human exchanges.
Moreover, the session highlights the potential for AI to serve as both a conversational partner and a reflective tool, capable of examining its own reasoning processes. While Doctor Eliza's half of the dialogue remained bound to its decades-old script, the exchange opens avenues for future research into AI-mediated therapy, the ethical considerations of AI expressing emotions, and the role of historic chatbot archetypes in shaping contemporary AI behavior.
Source: wired.com
 