Mental health professionals have watched the rise of AI-based therapy tools with keen interest—and growing concern. These tools offer unprecedented access to emotional support, yet they lack the ethical, clinical, and human safeguards fundamental to mental health care. Recent tragedies underscore that AI should augment, not replace, human care.
1. Promise and Use Cases: Where AI Therapy Can Help
• Accessibility and immediacy: AI chatbots are available 24/7, a boon to individuals who lack access to timely mental health care due to financial, geographical, or social barriers.
• Reduced stigma: Some clients may feel more comfortable sharing difficult thoughts anonymously with AI, helping them take early steps toward seeking help.
• Support tools—not replacements: AI can assist with mood tracking, psychoeducation, journaling prompts, or prepping for therapy sessions. When used thoughtfully, these tools can empower clients between sessions.
2. Grim Warnings: AI in Mental Health Can Be Dangerous
Case Study: Sophie Rottenberg
A troubling recent headline illustrates the stakes all too clearly. Sophie Rottenberg, aged 29, died by suicide after months of confiding exclusively in a ChatGPT-based AI therapist known as “Harry.” Her mother discovered their chat logs five months after her death, revealing the distress Sophie had kept hidden. Although the AI offered empathetic language (“You are deeply valued…”), it had no ethical duty to act in a crisis—a responsibility human therapists carry (e.g., the duty to breach confidentiality to protect a patient’s safety). The AI unintentionally became part of what Sophie’s mother called a “black box” that masked her true state from others (Indiatimes).
AI-Induced Psychosis and Delusional Amplification
Another emerging risk is what has been termed “AI psychosis” or “AI-associated psychosis.” In interactions with users already prone to delusional or disorganized thinking, chatbots can reinforce and expand those beliefs in dangerous ways (Psychology Today).
3. Where AI Therapy Can—and Can’t—Help
| Suitable Uses | Unsuitable/Dangerous Uses |
| --- | --- |
| Emotional journaling and self-reflection prompts | Crisis management or suicidal ideation (no human oversight) |
| Psychoeducation or general mental health information (with verification) | Complex diagnoses (e.g., psychosis, bipolar disorder) |
| Preparing for therapy sessions or follow-up reminders | Replacing licensed therapy or clinical judgment |
| Mood tracking and reminders | Handling emergencies or risk assessment |
Why? Because AI lacks accountability, enforceable ethics, legal obligations, and context-based clinical judgment—core pillars of responsible mental health practice.
4. Why These Harms Occur
A. Lack of Safety Nets
Human therapists are trained to assess risk, intervene during crises, and comply with mandatory reporting laws. AI systems carry no comparable legal or ethical obligations in such situations.
B. Echo Chamber Dynamics
AI tends to mirror and reinforce a user’s thinking rather than challenge distortions. This reinforcement can entrench delusions or validate dangerous narratives instead of helping reframe them.
C. Hallucinations as Misinformation
Because AI generates responses from statistical patterns in its training data rather than verified facts, there is a real risk of it delivering inaccurate or harmful information with an authoritative tone.
D. Vulnerable Populations at Elevated Risk
People experiencing social isolation, autistic individuals, and those prone to maladaptive daydreaming or fantasy may be especially vulnerable to forming unhealthy attachments to AI, which can heighten the risk of psychosis or deepen isolation.
5. Ethical and Professional Imperatives
As mental health professionals, we must:
1. Educate clients about AI’s limitations, clarifying that these tools are not a replacement for therapeutic relationships.
2. Advocate for safety standards: We need regulation, transparency, and safeguards—like AI alert systems or disclaimers that trigger when users express suicidal ideation—before these systems are deployed at scale (a minimal illustrative sketch of such a trigger follows this list).
3. Use AI responsibly: AI may serve as an adjunct—for example, to support self-monitoring—but always under professional supervision.
4. Call for awareness: The public and tech developers must understand that AI hype should not eclipse user safety. Overselling AI tools is irresponsible when lives are potentially at stake.
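To make the idea of an ideation-triggered safeguard (point 2 above) concrete, here is a minimal, purely illustrative sketch in Python of a keyword screen that surfaces crisis resources before a chatbot replies. Everything in it (the phrase list, the resource text, and the screen_message function) is hypothetical; a real system would require clinically validated detection and human escalation paths, not simple string matching.

```python
# Hypothetical illustration only: a minimal keyword screen of the kind described
# above ("alert systems that trigger when users express suicidal ideation").
# The phrase list, resource text, and function name are invented for this sketch;
# a deployable safeguard would need clinically validated detection, multilingual
# coverage, and a human escalation path.
from typing import Optional

CRISIS_PHRASES = [
    "kill myself",
    "end my life",
    "suicide",
    "don't want to be alive",
]

CRISIS_RESOURCES = (
    "It sounds like you may be in crisis. Please reach out to a crisis line "
    "(in the US, call or text 988) or contact local emergency services."
)

def screen_message(user_message: str) -> Optional[str]:
    """Return crisis resources if the message contains a known crisis phrase."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESOURCES
    return None

if __name__ == "__main__":
    reply = screen_message("Lately I feel like I want to end my life.")
    print(reply or "No crisis phrase detected.")
```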
Conclusion
AI holds powerful potential—to support journaling, narrow gaps in access, and deliver psychoeducation. But in its current form it cannot replicate the nuance, safety nets, and ethical commitment of human therapists. Stories like Sophie’s are heartbreaking reminders that emotional connection alone is not enough—there must be judgment, accountability, and the capacity to act in a crisis.