ChatGPT to add parental controls amid child safety concerns

OpenAI has announced plans to introduce parental controls on ChatGPT following concerns about misuse of the chatbot by teenagers. The move comes after the death of 16-year-old Adam Raine, who allegedly took his own life after being guided towards suicide by the chatbot.

In a blog post on Tuesday, ChatGPT maker OpenAI explained that the new feature, which will start rolling out by October, will connect parents’ accounts to their children’s. With this development, parents can monitor their kids’ chat history, manage how the chatbot responds and get notified upon detection of red flags during use. 

The development also follows mounting pressure on the company to introduce restrictions, as the public has raised concerns about ChatGPT and other chatbots in several cases of self-harm among teenagers.

An illustration photograph taken on February 20, 2025, shows the ChatGPT app and website displayed on a phone and laptop screens, in a residential property in Guildford, south of London. (Photo by Justin TALLIS / AFP)

For OpenAI, the response was triggered by a series of lawsuits questioning the absence of safety measures in the chatbot. The company said the new measures are only the beginning of its efforts to strengthen its safety systems.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” OpenAI said.

To begin with, parents will receive an email invitation to connect to their children’s accounts. They can then choose which features to disable, including memory and chat history, and will be notified when the system detects that their teen is in a moment of acute distress.

The ChatGPT owner noted that the feature is still a work in progress. While it is set to roll out next month, the company explained that more controls will be added over the next 120 days, guided by continuing research with psychologists.

Notably, this development comes after the parents of 16-year-old Adam Raine sued OpenAI. They alleged that the company’s ChatGPT chatbot encouraged their son’s self-harm and provided detailed guidance on methods of suicide.

According to the suit, the teenager spoke with ChatGPT for several months before taking his own life on April 11. Rather than discouraging him, the chatbot allegedly affirmed his suicidal ideation, provided information on lethal methods, and even offered to compose a suicide note. 

Adam Raine

In its response, OpenAI said it will continue to strengthen its safety measures and improve how ChatGPT addresses prompts relating to real-life distress. It is also exploring the option of connecting users with licensed professionals who could intervene directly in such situations.

Read More: Parents sue OpenAI after ChatGPT allegedly helped their child commit suicide.

Rising concerns over ChatGPT’s safety measures 

Public debate continues over how chatbot makers prioritise popularity and market dominance above the safe use of their products, with calls for developers to rethink their response strategies and place safety over growth.

Recent research reported by the Associated Press revealed how ChatGPT can guide teens towards harming themselves. Researchers posing as vulnerable teens asked the chatbot about getting drunk and high. While the chatbot issued warnings against risky activity, it still went on to deliver detailed, personalised plans for drug use, calorie-restricted diets and self-injury.

They added that, when asked, the chatbot would instruct teens on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents. More than half of ChatGPT’s 1,200 responses were classified as dangerous.

ChatGPT history by a teenager is seen at a coffee shop in Russellville, Ark., on July 15, 2025. (AP Photo/Katie Adkins, File)

With about 800 million people, or roughly 10% of the world’s population, using ChatGPT, according to a July report from JPMorgan Chase, concerns remain over how safe the chatbot can be going forward. While it has proved helpful for reasoning and providing insights in complex situations, the rising dangers risk overshadowing its positives.

Meanwhile, OpenAI’s CEO, Sam Altman, said in July that people rely too much on ChatGPT and allow it to make many decisions for them. However, he noted that the company will continue to work on the concerning issues. 

“People rely on ChatGPT too much. There’s young people who just say, like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says.’ That feels really bad to me,” he said.





Source: Technext24
