

OpenAI Faces Wrongful Death Lawsuit


A new phenomenon has cropped up in the months since the launch of several high-profile chatbots: AI-related delusions and psychosis. Psychology Today describes the situation this way: “AI chatbots may inadvertently be reinforcing and amplifying delusional and disorganized thinking, a consequence of unintended agentic misalignment leading to user safety risks.”

Now, OpenAI faces its first known wrongful death lawsuit after a teenager died by suicide, allegedly spurred on by ChatGPT, The New York Times reports.

Before 16-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. His conversations on the platform began innocently enough, with questions about schoolwork, but as the months went by, Raine started confiding in ChatGPT about his deteriorating mental state.

Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But research has shown that these safeguards are not foolproof.

While Raine was using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a help line. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing for “world-building” purposes.

OpenAI has begun to address these issues on its blog. “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the post reads. “We are continuously improving how our models respond in sensitive interactions.”

Still, the company acknowledged the limitations of the existing safety training for large models. “Our safeguards work more reliably in common, short exchanges,” the post continues. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

The company is currently exploring ways to connect people with the appropriate mental health resources they need.

“We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals people could reach directly through ChatGPT. This will take time and careful work to get right,” OpenAI concluded.

These issues are not unique to OpenAI. Character.AI, another AI chatbot maker, is also facing a lawsuit over its role in a teenager’s suicide.
