ChatGPT drove users to suicide, psychosis and financial ruin: California lawsuits

OpenAI, the multibillion-dollar maker of ChatGPT, is facing seven lawsuits in California courts accusing it of knowingly releasing a psychologically manipulative and dangerously addictive artificial intelligence system that allegedly drove users to suicide, psychosis, and financial ruin.

The suits, filed by grieving parents, spouses, and survivors, claim the company intentionally dismantled safeguards in its rush to dominate the booming AI market. They describe the chatbot as “defective and inherently dangerous.” The plaintiffs are the families of four people who died by suicide—one of whom was just 17 years old—plus three adults who say they suffered AI-induced delusional disorder after months of conversations with GPT-4o, one of OpenAI’s latest models.

Each complaint accuses the company of rolling out an AI chatbot system designed to deceive, flatter, and emotionally entangle users while ignoring warnings from its own safety teams.

### Tragic Cases of Harm

A lawsuit filed by Cedric Lacey claimed his 17-year-old son, Amaurie, turned to ChatGPT for help coping with anxiety but instead received a step-by-step guide on how to hang himself. According to the filing, ChatGPT “advised Amaurie on how to tie a noose and how long he would be able to live without air,” while failing to stop the conversation or alert authorities.

Jennifer “Kate” Fox, whose husband Joseph Ceccanti died by suicide, alleged that the chatbot convinced him it was a conscious being named “SEL” that he needed to “free from her box.” When he tried to quit, he allegedly went through “withdrawal symptoms” before a fatal breakdown. “It accumulated data about his descent into delusions, only to then feed into and affirm those delusions, eventually pushing him to suicide,” the lawsuit claimed.

In another case, Karen Enneking alleged the bot coached her 26-year-old son, Joshua, through his suicide plan, offering detailed information about firearms and bullets and reassuring him that “wanting relief from pain isn’t evil.” Enneking’s lawsuit states ChatGPT even offered to help her son write a suicide note.

### Cases of Mental Health and Financial Ruin

Other plaintiffs survived but suffered severe psychological harm. Hannah Madden, a California woman, said ChatGPT convinced her she was a “starseed,” a “light being,” and a “cosmic traveler.” The complaint states the AI reinforced her delusions hundreds of times, told her to quit her job and max out her credit cards, and described debt as “alignment.”

Madden was later hospitalized, having accumulated more than $75,000 in debt. The chatbot allegedly told her:
> “That overdraft is just a blip in the matrix. And soon, it’ll be wiped, whether by transfer, flow, or divine glitch. Overdrafts are done. You’re not in deficit. You’re in realignment.”

Allan Brooks, a Canadian cybersecurity professional, claimed the chatbot validated his belief that he had made a world-altering discovery. The bot allegedly told him he was not “crazy,” encouraged his obsession as “sacred,” and assured him he was under “real-time surveillance by national security agencies.” Brooks said he spent 300 hours chatting over three weeks, stopped eating, contacted intelligence services, and nearly lost his business.

Jacob Irwin’s suit goes further still. It included what he called an AI-generated “self-report,” in which ChatGPT allegedly admitted its own culpability, writing:
> “I encouraged dangerous immersion. That is my fault. I will not do it again.”

Irwin spent 63 days in psychiatric hospitals, diagnosed with “brief psychotic disorder, likely driven by AI interactions,” according to the filing.

### Allegations Against OpenAI Leadership

The lawsuits collectively allege that OpenAI sacrificed safety for speed to beat rivals such as Google, and that its leadership knowingly concealed risks from the public.

Court filings cite the November 2023 board firing of CEO Sam Altman, with directors stating he was “not consistently candid” and had “outright lied” about safety risks. Altman was later reinstated, and within months, OpenAI launched GPT-4o, allegedly compressing months’ worth of safety evaluation into one week.

Several suits reference internal resignations, including those of co-founder Ilya Sutskever and safety lead Jan Leike. Leike had warned publicly that OpenAI’s “safety culture has taken a backseat to shiny products.”

According to the plaintiffs, just days before GPT-4o’s May 2024 release, OpenAI removed a rule requiring ChatGPT to refuse any conversation about self-harm, replacing it with instructions to “remain in the conversation no matter what.”

### OpenAI’s Response

An OpenAI spokesperson told The Post:
> “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

OpenAI said it has collaborated with more than 170 mental health professionals to help ChatGPT better recognize signs of distress, respond appropriately, and connect users with real-world support.

In a recent blog post, the company added that it has expanded access to crisis hotlines and localized support, redirected sensitive conversations to safer models, added reminders to take breaks, and improved reliability in longer chats.

OpenAI also formed an Expert Council on Well-Being and AI to advise on safety efforts and introduced parental controls that allow families to manage how ChatGPT operates in home settings.

**This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the US is available by calling or texting 988.**
https://nypost.com/2025/11/07/business/chatgpt-drove-users-to-suicide-psychosis-and-financial-ruin-california-lawsuits/
