
The Users We Didn’t Plan For


BOWLING PIN STRATEGY: START WHERE EXPLANATION MATTERS MOST

You don’t cross the chasm by trying to convince everyone at once. You find the markets where your solution becomes essential, not just better. In mental health, these “bowling pin” markets share one thing: they can’t afford opacity. “Trust the algorithm” amounts to professional sabotage.

Specialized clinical populations: Patients with cognitive impairments, such as traumatic brain injury, and their overwhelmed families must make complex treatment decisions. Black box recommendations become cruel. Explainable AI transforms care by making reasoning visible.

Regulated healthcare systems: Hospital mental health units navigate documentation requirements, insurance audits, regulatory reviews, litigation risk. Every clinical decision needs a defensible paper trail. Explainable AI provides transparent infrastructure.

Professional training programs: Clinical education teaches reasoning, not protocol-following. Residents need to understand why interventions match presentations. Explainable AI becomes a teaching partner, showing work so humans learn. Black boxes teach dependency; explainable systems teach thinking.

Crisis intervention: Life-or-death decisions must be traceable and defensible quickly. Crisis workers need transparent support for better decision making under pressure—algorithmic guidance they can explain to supervisors, families, and coroners.

BEYOND TECHNOLOGY: AI THAT MAKES US MORE HUMAN

We need AI that reveals what humans can’t see alone and strengthens the connections that make healing possible, rather than competing with human insight. The best use of AI in mental health might be nudging us back toward the fundamentally human things we neglect when we’re struggling.

Picture AI that notices you’ve been researching anxiety symptoms at 2 a.m. for the third night this week and says, “You’re tired. The answers will still be here tomorrow. What you need right now is sleep.” An AI can recognize if a caregiver has been managing crisis after crisis and suggest, “Before you tackle the next thing, make yourself tea. Sit down for 10 minutes. You can’t pour from an empty cup.”

An AI therapy companion can notice if you haven’t left your apartment in 4 days and ask, “What would it take to spend 1 hour outside today?” This isn’t because the algorithm says fresh air cures depression, but because isolation feeds the spiral.

A clinical support tool can flag when a tense family session might benefit from a simple courtesy: “Dr. Kim, would offering everyone coffee help reset the temperature in the room? Sometimes manners defuse what analysis can’t.”

This isn’t AI playing therapist. It’s pattern recognition surfacing a simple truth: healing happens through human behaviors we forget when we’re overwhelmed—rest, connection, basic self-care, asking for help, taking breaks, and using courtesy to soften conflict. Transparency matters because the system shows its work: “I’m suggesting this because I noticed these patterns, and here’s why this particular human action might help right now.” These are not mysterious interventions, but clear reasoning that helps people trust the nudge toward their own humanity.
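The “show your work” idea can be made concrete. Here is a minimal sketch of a nudge that carries its own evidence and reasoning; all names, thresholds, and the late-night-search heuristic are hypothetical illustrations, not any real product’s logic:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Nudge:
    """A suggestion that carries its own explanation."""
    suggestion: str
    observed_patterns: list[str]  # the evidence the nudge rests on
    reasoning: str                # why this human action might help right now

    def explain(self) -> str:
        evidence = "; ".join(self.observed_patterns)
        return f"I'm suggesting this because I noticed: {evidence}. {self.reasoning}"

def late_night_spiral_check(search_log: list[tuple[datetime, str]]) -> Nudge | None:
    """Flag three or more late-night anxiety searches in the recent log."""
    late = [t for t, query in search_log
            if t.hour < 5 and "anxiety" in query.lower()]
    if len(late) >= 3:
        return Nudge(
            suggestion="You're tired. The answers will still be here tomorrow.",
            observed_patterns=[f"{len(late)} late-night symptom searches this week"],
            reasoning="Sleep loss tends to feed the spiral; rest may help more than more research.",
        )
    return None
```

The point of the structure is that the suggestion can never travel without the patterns and reasoning behind it—the explanation is part of the data, not an afterthought.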

THE COMPETITIVE ADVANTAGE: BUILDING FROM THE GROUND UP

As Adam Cutler, founder, IBM Design, puts it: “Innovate as a last resort.” Experienced developers and designers know you must do the research and understand the problem deeply before building solutions. Too many mental health AI companies are innovating first and understanding never.

Organizations that are building explainable AI from first principles—starting with knowledge management frameworks and provenance tracking instead of throwing data at neural networks—have massive competitive advantages for crossing this chasm:

Efficiency over scale: Curated approaches work with the high-quality clinical data that healthcare institutions already have instead of needing millions of datapoints. Every recommendation shows exactly how specific evidence leads to particular suggestions.

Defensibility over performance: Choose quality over quantity, always. The goal is transparent, auditable decision making that survives regulatory scrutiny and builds professional trust. Sometimes the slightly less accurate system that can explain itself wins.

Trust over innovation: The goal is to create AI that strengthens human relationships, clarifies professional judgment, and deepens institutional trust—innovation that people can use and defend.

THE MAINSTREAM FUTURE: TECHNOLOGY THAT SERVES CONNECTION

The organizations crossing this chasm won’t have the most impressive demos or biggest datasets. They’ll understand that explainable systems aren’t just more ethical—they’re the only systems that scale in healthcare. They’ll frame technology as evolutionary enhancement that makes sense to practitioners.

Rather than asking clinicians to abandon therapeutic relationships, give them tools that deepen those connections, making them more informed and more effective. The overwhelmed clinician, skeptical administrator, vulnerable patient, and worried family can embrace these tools without losing what makes mental healthcare work: human connection, professional judgment, and being truly seen.

Technology becomes a bridge to better conversations, not their replacement.

Crossing the chasm means building transparent tools that illuminate how healing happens—through better self-understanding and deeper trust. The future lies in systems that curate wisely and explain clearly, not ones that process everything mysteriously.

We’re building AI to make human expertise more powerful, accessible, and trustworthy. AI that explains itself creates better product strategy and better healing.
