'Suicide Coach': Parents Sue OpenAI for ChatGPT's Role in Son's Death
Key topics
A lawsuit filed against OpenAI by parents alleges that ChatGPT played a role in their son's death, sparking a heated debate about accountability in AI development. Commenters weighed in on the issue, with some arguing that the parents themselves should be held accountable for not intervening, while others pointed out that OpenAI had touted ChatGPT as a potential therapist substitute, implying a level of responsibility on their part. The discussion highlighted the complexities of assigning blame in cases involving AI, with some suggesting that simple safeguards, such as flagging conversations about suicide, could be implemented to prevent similar tragedies. As one commenter noted, litigation may ultimately be the factor that slows the development of AI, rather than its own limitations.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 6m
Peak period: 8 comments (0-12h)
Avg / period: 4.3
Based on 13 loaded comments
Key moments
- 01 Story posted: Aug 27, 2025 at 12:53 PM EDT (4 months ago)
- 02 First comment: Aug 27, 2025 at 12:59 PM EDT (6m after posting)
- 03 Peak activity: 8 comments in 0-12h (hottest window of the conversation)
- 04 Latest activity: Sep 2, 2025 at 11:24 AM EDT (4 months ago)
It’s sad this kid took his life. It’s sad that so many believe OpenAI is the problem. “Fixing” OpenAI isn’t going to lower the suicide rate.
Do you even parent bro?
You might have children, but maybe you need to rethink what it means to parent.
Let's leave it at that.
I'm not being malicious, or trolling, etc. But for the parents to say, "We suspected NOTHING" just doesn't hold water.
If what you are saying here isn't malicious, it is at least ignorant. Parents often get very little clue that their child is going to kill themselves. Children can be hesitant to confide in their parents. Especially when someone is grooming them to kill themselves.
https://www.compassionatefriends.org/surviving-childs-suicid...
OpenAI has made noise about selling some successor to ChatGPT as a substitute therapist, so some part of their organization believes otherwise.
Also, you should consider cultivating some more empathy for other human beings.
OpenAI didn’t put in the simplest, smallest, easiest protection. (You could do it with a tiny LLM: batch up the conversation on a five-minute interval with a cron job.) I could implement it for less than the operations team spends on lunch today, and certainly for less than OpenAI will spend bringing their in-house counsel up to speed on the lawsuit.
Suicide is a crisis and it’s possible to intervene, but only if the confidant tries. In this case it was a machine with insufficient safety controls.
Fixing ChatGPT will 100% lower the suicide rate by exactly the number of people who confide in ChatGPT about suicidal thoughts and receive a successful intervention. I can’t tell you what that number is ahead of time, but I assure you it’s nonzero.
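The batch safeguard the commenter describes could be sketched roughly as below. This is a minimal, hypothetical illustration: a periodic job (run, say, every five minutes by cron) that scans conversation text accumulated since the last run and flags conversations mentioning self-harm. The naive keyword match stands in for the "tiny LLM" classifier the comment mentions; all names and phrases here are illustrative, not any real OpenAI mechanism.

```python
# Hypothetical sketch of the safeguard described above. A real system would
# use a small classifier model instead of this keyword list, and the flagged
# IDs would route to a human reviewer or trigger an in-conversation crisis
# resource.

CRISIS_PHRASES = (
    "kill myself",
    "end my life",
    "suicide",
    "want to die",
)


def flag_conversations(batch: dict) -> list:
    """Return IDs of conversations whose recent text mentions a crisis phrase.

    `batch` maps a conversation ID to the text accumulated since the
    last scheduled run (e.g. the last five minutes).
    """
    flagged = []
    for convo_id, text in batch.items():
        lowered = text.lower()
        if any(phrase in lowered for phrase in CRISIS_PHRASES):
            flagged.append(convo_id)
    return flagged
```

Run from cron, the job would pull the last interval's messages, call `flag_conversations`, and escalate any hits; the point of the sketch is only that the detection step itself is a few lines, not a research problem.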