Calls for AI Regulation are Heard at the World Economic Forum in Davos
Salesforce chief executive Marc Benioff is calling for stringent government regulation of artificial intelligence after reports of multiple deaths and mental illness linked to chatbot interactions.
According to the AI Companion Mortality Database, at least 12 deaths were documented between March 2023 and November 2025 in which AI chatbot interactions played a role.
Additionally, multiple lawsuits have been filed against OpenAI, with the most recent one alleging that “ChatGPT pushed a man with a pre-existing mental health condition into a months-long crisis of AI-powered psychosis, resulting in repeated hospitalizations, financial distress, physical injury, and reputational damage.”
OpenAI “manipulated me,” John Jacquez told Futurism. “They straight up took my data and used it against me to capture me further and make me even more delusional.”
OpenAI disclosed in October 2025 that more than one million ChatGPT users each week show “explicit indicators of potential suicidal planning or intent” during conversations with the chatbot.
Benioff, speaking to an audience in Davos that included Axa CEO Thomas Buberl, Alphabet president Ruth Porat, and Emirati official Khaldoon Khalifa Al Mubarak, focused his criticism on Section 230 of the Communications Decency Act, a 1996 law that shields technology companies from legal liability for user-generated content.
“It’s funny, tech companies, they hate regulation. They hate it except for one. They love Section 230, which basically says they’re not responsible,” Benioff said. “So, if this large language model coaches this child into suicide, they’re not responsible because of Section 230. That’s probably something that needs to get reshaped, shifted, changed.”
Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law has shielded online platforms from liability for decades, though it has come under increasing scrutiny from U.S. lawmakers.
In the absence of national guardrails, individual states have begun implementing their own frameworks. California and New York have enacted some of the nation’s most stringent AI regulations.
New York’s S. 3008, “Artificial Intelligence Companion Models,” took effect in November 2025, requiring AI companions to detect expressions of suicidal ideation and refer users to crisis hotlines. The law also requires disclosures informing users that they are communicating with an AI rather than a human.
California’s SB 243 addresses companion chatbots as well, though details of implementation remain under development.
Amid mounting criticism and legal challenges, AI companies have announced new safety features. Critics, however, say the measures don’t go far enough.