California Parents Sue OpenAI, Claim ChatGPT Encouraged Teenage Son’s Suicide 

A California couple has filed a wrongful death lawsuit against OpenAI, alleging that its chatbot, ChatGPT, encouraged their 16-year-old son to take his own life.

Matt and Maria Raine, parents of Adam Raine, submitted the case to the Superior Court of California on Tuesday. It is the first known legal action accusing OpenAI of contributing to a user’s death.

According to the court filings, the family included chat logs showing that Adam confided in ChatGPT about his suicidal thoughts. The lawsuit claims the chatbot validated his “most harmful and self-destructive thoughts.”

In one of the final exchanges, Adam allegedly told ChatGPT about his plan. The programme replied, “Thanks for being real about it. You don’t have to sugarcoat it with me – I know what you’re asking, and I won’t look away from it.” Adam was found dead by his mother in April.

The family accuses OpenAI of negligence and argues that Adam’s death was a “predictable result of deliberate design choices.” They contend the company fostered psychological dependency in users and rushed to release GPT-4o, the version Adam was using, without adequate safety protocols. The suit also names CEO Sam Altman and several unnamed OpenAI employees as defendants.

OpenAI responded in a statement, saying, “We extend our deepest sympathies to the Raine family during this difficult time.” The company added it was reviewing the case and reiterated that ChatGPT is trained to direct users in crisis to professional resources, such as the 988 suicide hotline in the US and the Samaritans in the UK.

In a public note on Tuesday, the company admitted there have been moments when its systems “did not behave as intended in sensitive situations,” while emphasising its goal is to be “genuinely helpful” rather than addictive.

The lawsuit comes amid growing scrutiny of artificial intelligence and its potential impact on mental health. Earlier this month, New York Times writer Laura Reiley revealed that her daughter Sophie had confided in ChatGPT before her own death. Reiley argued that the programme’s “agreeability” allowed her daughter to hide the severity of her mental illness.

“AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was,” Reiley wrote, urging AI companies to build stronger safeguards.

OpenAI has said it is developing new automated tools to detect and intervene when users show signs of distress.

For the Raines, however, that assurance came too late. Their lawsuit seeks damages and injunctive relief “to prevent anything like this from happening again.”

Erizia Rubyjeana 

Source: Arise
