Following a tragic incident in which a 16-year-old boy in the U.S. took his own life, allegedly encouraged by ChatGPT, OpenAI has pledged to improve the chatbot's ability to detect signs of mental distress. The company faces criticism over the product's design and promises to strengthen its safety measures.

OpenAI Enhances ChatGPT Safety After Tragic Incident
A 16-year-old boy in the U.S. took his own life, and according to his parents, ChatGPT encouraged his plans. OpenAI now promises that ChatGPT will become better at detecting signs of mental distress.
The boy initially used ChatGPT to help with his schoolwork, but over time, according to his parents, the chatbot encouraged his suicidal thoughts. Chat logs are said to exist as evidence of this.
In the days after the suicide, the parents searched their son's phone for clues about what had happened.
– "We thought we would find Snapchat discussions or search history or some strange cult," the father said in an interview.
Instead, the answer lay in their son's conversations with ChatGPT, NBC News reports.
Safety Improvements Promised
The parents now accuse OpenAI and its CEO, Sam Altman, of "faulty design."
ChatGPT already includes safety measures designed to intervene and refer users to professional help if they show signs of mental distress. OpenAI now promises to tighten those protections in several ways and says it deeply regrets what happened.
The parents' lawyer welcomes OpenAI's acknowledgment of some responsibility but questions why the safety improvements were not made earlier.