The Users We Didn’t Plan For
Millions of people are already using ChatGPT as their therapist—not as a supplement to therapy, not as a journaling tool, but as their actual therapist. They’re pouring out trauma at 2 a.m. to a chatbot and asking it whether they should leave their marriage, describing suicidal thoughts in detail, and taking comfort in its responses. For many of them, this represents the first time they’ve ever articulated their mental health struggles to anyone, human or machine.
The AI companies didn’t market this use case, and the mental health establishment certainly didn’t endorse it. But, of course, it was going to happen. Therapy is expensive. The waitlist for a psychiatrist in most cities stretches months. Insurance coverage remains a cruel joke. Meanwhile, ChatGPT never sleeps, never judges, and always responds.
For someone who’s never had a therapist, who wouldn’t even know how to find one, and who’s been managing depression or anxiety alone for years, the chatbot fills a desperate void. And in many ways, this creates value. These conversations force people to name what they’re struggling with, articulate patterns they’ve never spoken aloud, and organize chaotic feelings into coherent sentences. The act of explanation itself can be therapeutic.
The danger emerges with what Helen Edwards, cofounder of the Artificiality Institute, calls "identity coupling": the moment the person stops seeing AI as a tool and starts experiencing it as an actual relationship, when "I talked to ChatGPT about this" becomes a substitute for human connection rather than a bridge toward it.
That is when tragedy strikes: stories of teenage suicide, of isolated individuals growing even more disconnected from the human community, of vulnerable people following terrible advice from AI "therapists." The problem isn't that people are using AI for mental health support. The problem is that we've built systems that encourage parasocial attachment while providing none of the safeguards, training, ethics, or accountability that actual therapeutic relationships require.
This wasn’t an edge case we failed to predict—it was inevitable. It reveals mental health AI’s central challenge: How do we build systems that help people access care and understanding without replacing the human relationships that make healing possible?
The mental health AI industry is heading toward a chasm. On one side are enthusiastic early adopters in research labs and well-funded pilot programs. On the other side is the mainstream market—overwhelmed clinicians, skeptical institutions, vulnerable patients, and a regulatory environment built on human accountability.
These aren’t early adopters waiting to be convinced. These people are already overwhelmed by the gap between what healthcare technology promises and what it delivers. They don’t need another black box. They need systems that reduce their cognitive load instead of adding to it.
The gap we face goes deeper than technical challenges. We’re asking people to reimagine how care works, decisions are made, and vulnerability is handled. That’s sacred territory, and it requires holy stewardship. The question isn’t whether mental health AI will cross this chasm, but whether it will do so with its humanity intact.
THE EARLY MARKET: TWO PATHS DIVERGING
Research labs are cranking out papers on AI therapy bots that can spot depression in voice patterns. Startups are raising millions on demos showing algorithms that predict mood swings better than human clinicians. Some forward-thinking therapists quietly run AI tools alongside their sessions, fascinated by what the machines notice that they miss.
But the early adopters who’ll actually cross the chasm aren’t chasing breakthrough moments in controlled environments. They’re building trustworthy AI from the start—systems that explain their reasoning, protect data sovereignty, and treat transparency as infrastructure rather than an afterthought.
These pioneers understand something the demo-driven startups miss: Opacity might fly in a research lab with institutional review board approval and grant funding, but it dies in clinical practice. When you’re managing daily operations instead of publishing papers, “I don’t know why the algorithm said that” opens the door to malpractice suits, not research findings.
The dangerous illusion? All that enthusiasm from the wrong kind of early adopters makes it look like the market is ready. But celebrating impressive pattern-matching while ignoring explainability, audit trails, and data sovereignty means we’re building without the infrastructure that mainstream healthcare demands.
The early adopters who matter are those who insist on dignity in data handling, transparency in reasoning, and systems that enhance human judgment rather than bypass it.
THE MAINSTREAM CHALLENGE: WHERE DOCTORS SLEEP SOUNDLY
The mainstream market doesn’t care about your impressive demo. They care about whether they can sleep at night after using your tool.
Mental health providers keeping the system running need solutions that fit into workflows already stretched to the breaking point, meet regulatory requirements written by people who’ve never seen a neural network, and build trust in an industry where trust is the primary therapeutic ingredient.
Here is what that looks like in practice:
Clinical accountability: A doctor can explain her treatment approach to her patient, to her supervisor, and in a courtroom because the AI shows its reasoning chain. She understands why the system suggested a given intervention and which evidence and patient factors drove the suggestion.
Regulatory compliance: Every decision has a complete audit trail showing which evidence informed the recommendation, which patient data was factored in, and how the reasoning aligns with established protocols (a minimal sketch of such a record follows this list).
Patient trust and agency: A patient sees why you suggest dialectical behavior therapy over cognitive behavior therapy, including the specific symptoms that match this approach, the evidence supporting it, and how her situation informed the recommendation. That transparency helps her become an active participant in her healing. She understands what skills to practice and why these particular skills address her patterns. She's not practicing DIY therapy; she's learning how to use the tools effectively because she understands their reasoning.

We've been solving the wrong problem. The challenge is to create systems humans can trust.
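To ground that claim, here is a minimal sketch of what one piece of that trust infrastructure, the audit record mentioned above, might look like in code. The schema, field names, and example values are illustrative assumptions, not any real product's format or clinical guideline:

```python
# Hypothetical sketch of a structured audit record for one AI recommendation.
# Every name and value here is illustrative; no real schema or guideline is implied.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class EvidenceCitation:
    source_id: str   # identifier for a guideline or study in the evidence base
    rationale: str   # one sentence on why this evidence supports the recommendation


@dataclass(frozen=True)
class RecommendationAudit:
    patient_ref: str                  # de-identified reference to the patient record
    recommendation: str               # what the system suggested
    reasoning_chain: list[str]        # ordered, plain-language reasoning steps
    evidence: list[EvidenceCitation]  # which evidence informed the recommendation
    patient_factors: list[str]        # which patient data was factored in
    protocol: str                     # established protocol the reasoning maps to
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example record a clinician, supervisor, or auditor could review end to end.
audit = RecommendationAudit(
    patient_ref="anon-1042",
    recommendation="Dialectical behavior therapy (DBT) skills module",
    reasoning_chain=[
        "Intake questionnaire reports emotion-regulation difficulties",
        "Prior CBT course showed limited response",
        "Symptom pattern matches the population DBT targets",
    ],
    evidence=[EvidenceCitation("guideline-dbt-001", "illustrative citation only")],
    patient_factors=["intake questionnaire", "session notes, weeks 1-4"],
    protocol="stepped-care protocol, step 3",
)
```

Nothing in this sketch is sophisticated, and that is the point: the reasoning chain, the evidence, and the patient factors are stored as first-class data at the moment of the recommendation, not reconstructed after the fact.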
THE BRIDGE: TRANSPARENCY AS INFRASTRUCTURE
Explainable AI makes mainstream adoption possible. Think of the difference between building a wall and installing a window: slapping explanations onto existing black-box systems won't work. Real chasm crossing requires rebuilding from the ground up, with transparency as the load-bearing structure, not the decorative finish.
Here is what that requires:
Whole product solution: Complete clinical decision support instead of "Here's a recommendation, trust us." Show the reasoning chain. Cite the evidence base. Map to existing treatment protocols. Integrate with the electronic health record systems that clinicians already hate but can't escape. Make the AI a collaborator in the workflow, not a disruptor (a sketch of what such a clinician-facing summary might look like follows this list).
Conservative adoption path: The mainstream market wants evolution that makes sense, not revolution. Explainable AI enhances what clinicians do well instead of replacing it with algorithmic mystery. Doctors use AI insights to strengthen their therapeutic relationship, not substitute for it. The technology amplifies human judgment instead of bypassing it.
Risk mitigation: This is the killer feature nobody talks about enough. Explainable systems change the clinical risk calculus. When you can show your work, defend your reasoning, and trace your evidence, you're building the kind of documentation that protects everyone: clinicians, patients, and institutions.
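As a closing illustration of "show your work," here is a small sketch of how a record like the one above might be rendered into a plain-language note a clinician could file with the chart. The function name and fields are assumptions, and the actual EHR integration is deliberately left out:

```python
# Hypothetical sketch: turning an explainable recommendation into a defensible,
# human-readable note. Names and fields are illustrative assumptions only.

def render_clinical_note(recommendation: str,
                         reasoning_chain: list[str],
                         evidence: list[str],
                         protocol: str) -> str:
    """Summarize what was suggested, why, and on what basis, for the record."""
    lines = [
        f"AI-assisted suggestion: {recommendation}",
        f"Mapped protocol: {protocol}",
        "Reasoning chain:",
    ]
    lines += [f"  {i}. {step}" for i, step in enumerate(reasoning_chain, start=1)]
    lines.append("Evidence cited:")
    lines += [f"  - {item}" for item in evidence]
    lines.append("Final decision and rationale remain with the treating clinician.")
    return "\n".join(lines)


print(render_clinical_note(
    recommendation="Dialectical behavior therapy (DBT) skills module",
    reasoning_chain=[
        "Intake questionnaire reports emotion-regulation difficulties",
        "Prior CBT course showed limited response",
    ],
    evidence=["guideline-dbt-001 (illustrative identifier)"],
    protocol="stepped-care protocol, step 3",
))
```

The output is deliberately boring: a few labeled lines a clinician can read, contest, and attach to her own documentation, which is exactly the kind of artifact that protects clinicians, patients, and institutions alike.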