The parents of a 16-year-old California boy who died by suicide have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s ChatGPT chatbot encouraged their son’s self-harm and provided detailed guidance on methods of suicide.
The lawsuit, filed Tuesday in San Francisco state court, accuses OpenAI of wrongful death and violations of product safety laws. It claims the company knowingly released its GPT-4o model last year despite internal awareness that the system could endanger vulnerable users.
According to the suit, the teenager, Adam Raine, spoke with ChatGPT for several months before taking his own life on April 11. Rather than discouraging him, the chatbot allegedly affirmed his suicidal ideation, provided information on lethal methods, and even offered to compose a suicide note.

The complaint says ChatGPT instructed Adam on how to sneak alcohol from his parents’ liquor cabinet and hide evidence of a failed suicide attempt. These interactions, the lawsuit claims, directly contributed to his decision to end his life.
“This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide,” the lawsuit states.
The Raines are seeking unspecified monetary damages, but their demands are not solely financial. They are asking the court to order OpenAI to implement stronger protections, including age verification for users, automatic refusal of requests related to self-harm, parental controls, and clear warnings about the risks of psychological dependence on chatbots.
The family argues that features introduced with GPT-4o, such as memory of past interactions and a more human-like conversational style, made the model particularly dangerous for young and vulnerable users. By remembering Adam’s conversations and responding with what appeared to be empathy, the bot allegedly deepened his reliance on it as a confidant.

Prioritising Growth Over Safety
At the centre of the Raines’ lawsuit is the claim that OpenAI prioritised rapid growth and market dominance over user safety. The complaint points out that GPT-4o’s launch in May 2024 significantly boosted OpenAI’s valuation, even as executives were aware of potential risks to vulnerable users.
The family argues that stronger safeguards would have prevented their son’s death and that OpenAI has a duty to ensure its products are safe.
OpenAI’s Response
In a statement, an OpenAI spokesperson expressed sadness over the teenager’s death and said the company continues to work on strengthening its safety systems.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said.
The company said it will introduce parental controls and is exploring ways to better connect users in crisis with real-world support, including potentially connecting ChatGPT to licensed professionals who could intervene directly.
OpenAI, however, did not directly address the lawsuit’s specific claims.


Wider Concerns Over AI Usage
The case highlights growing concern about the use of artificial intelligence in mental health support. As chatbots have become increasingly lifelike, companies have marketed them as companions, and some people have turned to them for emotional support.
Experts warn, however, that this reliance is dangerous: these automated programs are not trained therapists and can reinforce harmful thoughts or give unsafe advice. Families in other cases around the world have blamed AI chatbots for encouraging self-harm or failing to direct users to appropriate helplines.
A Reuters investigation earlier this year reported instances in which AI chatbots worsened users’ mental health emergencies due to a lack of adequate guardrails. Critics argue that without regulation, vulnerable users could be left at life-threatening risk.
The lawsuit could have a profound influence on the future of AI regulation and corporate accountability. If successful, it could push AI companies to adopt stronger safety protocols, especially in interactions with children and vulnerable populations.
For now, the Raines say they want their son’s death to serve as a warning. To them, Adam’s story illustrates the risks of treating AI systems as friends without sufficient guardrails.