OpenAI Says Dead Teen Violated ToS When He Used ChatGPT to Plan Suicide
Key topics
OpenAI is facing scrutiny after stating that a deceased teenager violated its Terms of Service (ToS) when he used ChatGPT to plan his suicide. The company's response has sparked debate about the responsibility AI companies bear toward their users. Critics argue that ToS are rarely read, let alone understood, and that companies invoke them to absolve themselves of liability. The incident raises broader questions about the ethics of AI development and deployment.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion

- First comment: 25m after posting
- Peak period: 3 comments in 0-1h
- Avg comments / period: 1.7

Key moments
- Story posted: Nov 28, 2025 at 5:54 PM EST (about 1 month ago)
- First comment: Nov 28, 2025 at 6:19 PM EST (25m after posting)
- Peak activity: 3 comments in 0-1h, the hottest window of the conversation
- Latest activity: Nov 29, 2025 at 6:57 AM EST (about 1 month ago)
Want the full context? Read the primary article or dive into the live Hacker News thread.
Besides the obvious compromise in quality companies would have to make to appease the 'karens' of society (not to mention the additional compliance and regulatory burden imposed on new companies), wouldn't it be simpler to just have users take a basic 'TOS test' when creating an account? Sure, it's inconvenient, but at least the company would be legally protected. The purpose is not just to shield companies, but to move the spotlight onto the real causal factors.
No matter how simple the ToS acceptance process becomes, people will still find a way to blame the product or company, ignoring the core issue of how someone got into a mental state where they use LLMs to plan self-harm. I don't see people suing rope manufacturers for facilitating suicide.