ChatGPT Sent Me to the ER
Posted 4 months ago · Active 4 months ago
benorenstein.substack.com · Tech · story
calm · positive
Debate: 40/100
Key topics
ChatGPT
AI in Healthcare
Medical Diagnosis
The author shares a personal story of how ChatGPT encouraged them to seek medical attention, potentially saving their life, and sparks a discussion on the role of AI in healthcare decisions.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 1h
Peak period: 3 comments in 1-2h
Avg / period: 1.6
Comment distribution: 8 data points
Based on 8 loaded comments
Key moments
01. Story posted: Sep 14, 2025 at 1:59 AM EDT (4 months ago)
02. First comment: Sep 14, 2025 at 3:10 AM EDT (1h after posting)
03. Peak activity: 3 comments in 1-2h, the hottest window of the conversation
04. Latest activity: Sep 14, 2025 at 12:33 PM EDT (4 months ago)
ID: 45237785 · Type: story · Last synced: 11/20/2025, 5:11:42 PM
When I read the title, I thought of the positive case [ChatGPT saved my life], not the negative one.
That's the ordinary understanding of the idiom.
His "interpretation" or guess, wasn't worse or better than yours.
Therefore, his statement about a misleading title is not invalidated because you guessed in the opposite direction.
Let's say:

Hypothesis 1: the article is negative (ChatGPT gave bad medical advice, and that led to the E.R.).
Then:
* His guess -> correct
* Your guess -> wrong

Hypothesis 2: the article is positive.
Then:
* His guess -> wrong
* Your guess -> correct

Conclusion: in either case, you had NO way to know beforehand.

So, in what way does pointing out that this is "his interpretation" invalidate anything he said?
Of course it is his guess, and based on the title alone it's at least as valid a guess as yours.
I say "at least" because it's not unreasonable to think that an LLM might have hallucinated some medical advice, leading someone into an unhealthy practice that landed them in the E.R.
I had my suspicions, but checked them with ChatGPT. The LLM said it was highly likely to be appendicitis, that he should seek urgent medical attention, and that he should not eat or drink anything other than water, as they might need to operate quite soon.
I passed it on, he went to A&E, and it all played out that way.
I’ve since switched my subscription to Gemini for work-related reasons, but it has also been very helpful in my gastritis recovery as I try to avoid flare-ups caused by dietary choices.
A typical HN stance is to wait for this fad to go away, but it certainly has its uses for me (I’m currently being briefed by Gemini on an unfamiliar DIY task).
3 more comments available on Hacker News