ChatGPT Health
Key topics
The debate around OpenAI's ChatGPT Health has sparked intense scrutiny, with many questioning whether anyone should trust the company with sensitive health data. While some commenters, like threecheese, worry that their industry bias might cause them to be overly cautious, others, like sda2, point out that no organization handling health data is completely safe from leaks. As commenters dissect OpenAI's security claims, they raise concerns about the product's limitations and potential liability, with minimaxir predicting that the "not intended for diagnosis or treatment" disclaimer will be put to the legal test. The thread is abuzz with skepticism, drawing parallels to other tech controversies, like self-driving cars and CSAM scandals, and wondering if ChatGPT Health is a recipe for a class-action lawsuit.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2m after posting
- Peak period: 65 comments in 0-3h
- Avg / period: 14.5
- Based on 160 loaded comments
Key moments
- Story posted: Jan 7, 2026 at 2:29 PM EST (2d ago)
- First comment: Jan 7, 2026 at 2:31 PM EST (2m after posting)
- Peak activity: 65 comments in 0-3h (hottest window of the conversation)
- Latest activity: Jan 9, 2026 at 9:02 AM EST (8h ago)
I suspect that will be legally tested sooner rather than later.
You also have to imagine that they've got their zero guardrails superpowered internal only next generation bot available to them, which can be used by said lawyer horde to ensure their asses are thoroughly covered. (It'd be staggeringly stupid not to use their AI for things like this.)
The institutions that have artificially capped levels of doctors, strangled and manipulated healthcare for personal gain, allowed insurance and health industries to become cancerous - they should be terrified of what's coming. Tools like this will be able to assist people with deep, nuanced understanding of their healthcare and be a force multiplier for doctors and nurses, of which there are far too few.
It'll also be WebMD on steroids, and every third person will likely be convinced they have stereochromatic belly button cancer after each chat, but I think we'll be better off, anyway.
For example, "man Googles rash, discovers he has one-in-a-million rare disease" [1].
> Ian Stedman says medical professionals shouldn't dismiss patients who go looking for answers outside the doctor's office - even if they resort to 'Dr. Google.'
> "Whenever I hear a doctor or nurse complain about someone coming in trying to diagnose themselves, it boils my blood. Because I think, I don't know if I'd be dead if I didn't diagnose myself. You can't expect one person to know it all, so I think you have to empower the patient."
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC8084564/
[1] https://www.cbc.ca/radio/whitecoat/man-googles-rash-discover...
Some physicians are absolutely useless and sometimes worse than not receiving any treatment at all. Medicine is dynamic and changes all the time. Some doctors refuse to move forward.
When I was younger I had a sports injury. I was misdiagnosed for months until I did my own research and had the issue fixed with surgery.
I have many more stories of doctors being straight up wrong about basics too.
I see physicians in a major metro area at some of the best hospital networks in the US.
Two years later when I got it fixed the new surgeon said there was nothing left of the old one on the MRI so it must have been torn 1.5-2+ years ago.
On the other hand, to be fair to doctors, I had a phase of looking into supplements and learned the hard lesson that you really need to dig into the research or find a very trusted source to have any idea of what's real, because for a while I was definitely convinced a few were useful that definitely were not :)
And also to be fair to doctors I have family members who are the "never wrong" types and are always talking about whatever doctor of the day is wrong about what they need.
My current opinion is using LLMs for this, in regards to it informing or misinforming, is no different than most other things. For some people this will be valuable and potentially dramatically help them, and for others it might serve to send them further down roads of misinformation / conspiracies.
I guess I ultimately think this is a good thing because people capable of informing themselves will be able to do so more effectively and, sadly, the other folks are (realistically) probably a lost cause but at the very least we need to do better educating our children in critical thinking and being ok with being wrong.
By luck I consulted with another specialist due to the former doctor not being available at an odd time, and some re-tests helped determine that I needed a different class of medicines. I was better within months.
4 years of wrong medicines and overconfidence from a leading doctor. Now I have a tool to double-check what the doctor has recommended.
Not to mention, doctors are absolutely fallible and misdiagnose constantly.
UX is not going to be a prime motivator, because the product itself is the very thing that stands between the user and the thing they want. UX-wise, for most software, it's better for users to have all these products reduced to tool calls for AI agents, accessible via a single interface.
The very concept of a product limits users to the interactions allowed by the product vendor[0] - meanwhile, exposing products as tools for AI agents lets them be combined in the ways users need[1].
--
[0] - Something that, thanks to the move to the web and the switch in data exchange model from "saving files" to "sharing documents", became the way for SaaS businesses to make money by taking user data hostage - a raison d'être for many products. AI integration threatens that.
[1] - And vendors would very much like users to not be able to. There's going to be some interesting fights here, as general-purpose AI tools are an existential threat to most of the software industry itself.
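As a rough illustration of that "apps reduced to tool calls" idea, here is a minimal sketch assuming a hypothetical meal-tracker exposed to an agent via JSON-Schema-style function declarations; the function names, fields, and in-memory storage are invented for this example, not any real product's API:

```python
# Minimal sketch (illustrative only): a hypothetical meal-tracker exposed as
# tools for an LLM agent instead of a hand-built UI. The schema follows the
# JSON-Schema "function calling" convention several LLM APIs use.
from datetime import date

MEALS: list[dict] = []  # in-memory stand-in for the app's own storage

def log_meal(name: str, calories: int, day: str | None = None) -> dict:
    """Tool: record a meal; the agent calls this instead of the user tapping through forms."""
    entry = {"name": name, "calories": calories, "day": day or date.today().isoformat()}
    MEALS.append(entry)
    return entry

def meals_on(day: str) -> list[dict]:
    """Tool: return everything logged for a given ISO date."""
    return [m for m in MEALS if m["day"] == day]

# What the agent actually sees: names, descriptions, and parameter schemas.
TOOLS = [{
    "name": "log_meal",
    "description": "Record a meal with its calorie count.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "calories": {"type": "integer"},
            "day": {"type": "string", "description": "ISO date; defaults to today"},
        },
        "required": ["name", "calories"],
    },
}]  # meals_on would be declared the same way

if __name__ == "__main__":
    log_meal("oatmeal", 350)
    print(meals_on(date.today().isoformat()))
```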
Just having an LLM is not the right UX for the vast majority of apps.
> Just having an LLM is not the right UX for the vast majority of apps.
I argue it is, as most things people do in software don't need to be hands-on. Intuition pump: if you can imagine asking someone else - a spouse, a friend, an assistant - to use some app to do something for you, instead of using the app yourself, then turning that app into a set of tools for an LLM would almost certainly improve UX.
But I agree it's not fully universal. If e.g. you want to browse the history of your meals, then having to ask an LLM for it is inferior to tapping a button and seeing some charts. My perspective is that tool for LLM > app when you have some specific goal you can express in words, and thus could delegate; conversely, directly operating an app is better when your goal is unclear or hard to put in words, and you just need to "interact with the medium" to achieve it.
A solution could be: can the AI generate the UI on the fly? That's the premise of generative UI, which has been floating around even on HN. Of course the issue with it is that every user will get different UIs, maybe even in the same session. Imagine the placement of a button changing every time you use an app. And thus we are back to the original concept: a UX-driven app that uses AI and LLMs as informational tools that can access other resources.
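One hedge against that instability, sketched below, is to have the model emit a constrained spec rather than free-form markup and render it with a fixed component set; the component vocabulary and the toy renderer here are entirely made up for illustration:

```python
# Toy sketch of "generative UI" with a fixed component vocabulary: the model
# would return a spec like ui_spec, and the app renders it with its own stable
# widgets, so layouts don't drift arbitrarily between sessions.
ALLOWED = {"header", "chart", "button"}

ui_spec = [  # in practice this JSON would come from the model
    {"type": "header", "text": "Meals this week"},
    {"type": "chart", "series": "calories_by_day"},
    {"type": "button", "label": "Log a meal", "action": "log_meal"},
]

def render(spec: list[dict]) -> str:
    """Render only known components; anything else is dropped, not improvised."""
    lines = []
    for item in spec:
        if item["type"] not in ALLOWED:
            continue
        attrs = ", ".join(f"{k}={v}" for k, v in item.items() if k != "type")
        lines.append(f"[{item['type']}] {attrs}")
    return "\n".join(lines)

print(render(ui_spec))
```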
Waitlist: 404 page not found.
Embarrassing
:o/
[Teenager died of overdose 'after ChatGPT coached him on drug-taking']
Admittedly I am basing this on pure vibes: I'd bet that adding AI to the healthcare environment will, on balance, reduce this number, not increase it.
Maybe they can train it on Japanese text.
Either way, I’m excited for some actual innovation in the personal health field. Apple Health is more about aggregating data than actually producing actionable insights. 23andme was mostly useless.
Today I have a ChatGPT project with my health history as a system prompt and it's been very helpful. Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided ("nothing to worry about", etc.). ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g. fitting my data to a model from a paper and spitting out a plot).
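For a sense of what that "fit my data to a model from a paper" step might look like if done by hand, here is a rough sketch; the model form (a simple exponential decay) and the numbers are invented, not the commenter's actual data:

```python
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

def model(t, a, k, c):
    # Illustrative model form only: exponential decay toward a baseline.
    return a * np.exp(-k * t) + c

t = np.array([0, 5, 10, 20, 30, 60], dtype=float)  # minutes (made-up data)
y = np.array([9.1, 7.4, 6.2, 4.5, 3.6, 2.9])       # measured values (made-up)

params, _ = curve_fit(model, t, y, p0=[8.0, 0.05, 2.5])
a, k, c = params

tt = np.linspace(0, 60, 200)
plt.scatter(t, y, label="measurements")
plt.plot(tt, model(tt, a, k, c), label=f"fit: k={k:.3f}/min")
plt.xlabel("minutes")
plt.ylabel("value")
plt.legend()
plt.savefig("fit.png")
```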
I think the more plausible comment is "I've been protected my whole life by health data privacy laws, so I have no idea what the other side looks like".
(Life insurance companies are different.)
Edit: Literally on the HN front page right now. https://news.ycombinator.com/item?id=46528353
I think in the US, you get out of the system what you put into it - specific queries and concerns with as much background as you can muster for your doctor. You have to own the initiative to get your reactive medical provider to help.
Using your own AI subscription to analyze your own data seems like immense ROI versus a distant theoretical risk.
What's the worst that can happen with OpenAI having your health data? Vs the best case? You all are no different from AI doomers who claim AI will take over the world.. really nonsensical predictions giving undue weight to the worst possible outcomes.
There are no doubt many here that might wish they had as consequence-free a life as this question suggests you have had thus far.
I'm happy for you, truly, but there are entire libraries written in answer to that question.
It's always been interesting to me how religiously people manage to care about health data privacy, while not caring at all if the NSA can scan all their messages, track their location, etc. The latter is vastly more important to me. (Yes, these are different groups of people, but on a societal/policy level it still feels like we prioritize health privacy oddly more so than other sorts of privacy.)
What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)?
There is a very real, near definite, chance that giving your, and others', health data to this company will hurt you and others.
Will you still hold this, "I personally don’t care who has access to my health data", position?
It seems like an easy fix with legislation, at least outside the US, though. Mandatory insurance for all with reasonable banded rates, and maximum profit margins for insurers?
Your comment is extraordinarily naive.
Perhaps you were given some medication that is later proven harmful. Maybe there’s a sign in your blood test results that in future will strongly correlate with a condition that emerges in your 50s. Maybe a study will show that having no appendix correlates with later issues.
How confident are you that the data will never be used against you by future insurance, work screening, dating apps, immigration processes, etc
1. Is transexual but does not tell anybody they are transexual and it is also not blatantly obvious
2. Writes down in a health record they are transexual (instead of whatever sex they've chosen)
3. Someone doxxes they/them information
4. Because of 3, and only because of 3, the world finds out said person is transexual
5. And then ... the government decides to persecute they/them
Let's be real, you're really stretching it here. You're talking about a 0.1% of a 0.1% of a 0.1% of a 0.1% of a 0.1% situation here.
You could try to answer that instead of making up a strawman.
Dialogue 101 but some people still ignore it.
Right. So able bodied, and the gender and race least associated with violence from the state.
> being discriminated against for insurance if you have a drug habit
"drug habit", Why choose an example that is often admonished as a personal failing? How about we say the same, but have something wholly, inarguably, outside of your control, like race, be the discriminating factor?
Your medical records may be your DNA.
The US once had a racist legal principle called the "one drop rule": https://en.wikipedia.org/wiki/One-drop_rule
Now imagine a, let's say, 'sympathetic to the Nazi agenda' administration takes control of the US gov's health and state-sanctioned violence services. They decide to use those tools to address all of what they consider the 'undesirables'.
Your DNA says you have "one drop" of the undesirable's blood, from some ancient ancestor you were unaware of, and this admin tells you they are going to discriminate against your insurance because of it, based on some racist pseudoscience.
You say, "but I thought I was a 30 something WHITE male!!" and they tell you "welp, you were wrong, we have your medical records to prove it". You get irate that somehow your medical records left the datacenter of that LLM company you liked to have make funny cat pictures for you and got into their hands, and they claim your behavior caused them to fear for their lives, and now you are in a detention center or a shallow grave.
"That's an absurd exaggeration," you may say, but the current admin is already removing funding, or entire agencies, based on policy (DEI, etc.) and race (singling out Haitian and Somali immigrants). How is it much different from Jim Crow era policies like redlining?
If you find yourself thinking, "I'm a fitness conscious 30 something white male, why should I care?", it can help to develop some empathy, and stop to think "what if I was anything but a fitness conscious 30 something white male?"
>>>>>> Are you giving your vitals to Sam Altman just like that?
>>>>> Yes, if it will help me and others
>>>> What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)
>>> I'm definitely a privacy-first person, but can you explain how health data could hurt you, besides obvious things like being discriminated against for insurance if you have a drug habit or whatever.
>> [explanation of why it might be worrisome]
> These points seem to be arguments against giving your health data to anybody, not just to an AI company.
I did not make any claims that it was useless; the context I was responding to was someone being dubious that there were risks after being asked whether they had any reason to assume that it would be beneficial to share specific info, and following that a conversation ensued about why it might make sense to err on the side of caution (independently of whether the company happens to be focused on AI).
To be explicit, I'm not taking a stance on whether the experiences cited elsewhere in the thread constitute sufficient evidence. My point isn't that there is no conceivable benefit, but that the baseline should be caution about sharing medical info, and then figuring out if there's enough of a reason to choose otherwise.
Maybe you don't know but your car insurance drops you due to the risk you'll have a cardiac event while driving. Their AI flagged you.
You need a new job but the same AI powers the HR screening and denies you because you'll cost more and might have health problems. You'd never know why.
You try to take out a second on the house to pay for expenses, just to get back on your feet, but the AI-powered risk officer judges your payback potential to be 0.001% underneath the target and you are denied.
The previously treatable heart condition is now dire due to the additional stress of no job, no car and no house and the financial situation continues to erode.
You apply for assistance but are denied because the heart condition is treatable and you're then obviously capable of working and don't meet the standard.
'Being able to access people's medical records is just another tool in law enforcement's toolbox to prosecute people for stigmatized care'
They are already using the legal system in order to force their way into your medical records to prosecute you under their new 'anti-abortion' rulings.
https://pennsylvaniaindependent.com/reproductive_rights/texa...
What if you have to pay more for health insurance because of the collected data, or what if you can't get certain insurances?
Most people don't have a problem with someone getting their medical data, but with that information being used to their disadvantage.
But I don’t know if I should be denied access because of those people.
That's the majority of people though. If you really think that, I assume you wouldn't have a problem with needing to be licensed to have this kind of access, right?
I think they can design it to minimize misinformation or at least blind trust.
There's no way to design it to minimise misinformation; the "ground truth" problem of LLM alignment is still unsolved.
The only system we currently have for letting people verify they know what they are doing is licensing: you go to training, you are tested that you understand the training, and you are allowed to do the dangerous thing. Are you ok with requiring this before the untrained can access a potentially dangerous tool?
If you want working regulation for this, it will need to focus on warnings and damage mitigation, not denying access.
If what you're suggesting is a license that would cost money and/or a non-trivial amount of time to obtain, it's a nonstarter. That's how you create an unregulated black market and cause more harm than leaving the situation alone would have. See: the wars on drugs, prostitution, and alcohol.
Did you write this exact comment before?
If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?
I'm mildly interested in this kind of thing, but I have severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.
In another case I uploaded a CSV of CGM data, analyzed it and identified trends (e.g. Saturday morning blood sugar spikes). All in five minutes on my phone.
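The kind of trend analysis described there is straightforward to reproduce locally; a rough sketch, assuming a hypothetical CSV export with "timestamp" and "glucose_mg_dl" columns (the file and column names are invented):

```python
import pandas as pd

# Hypothetical export; real CGM apps use their own column names.
df = pd.read_csv("cgm_export.csv", parse_dates=["timestamp"])
df["weekday"] = df["timestamp"].dt.day_name()
df["hour"] = df["timestamp"].dt.hour

# Average glucose by weekday and hour; a Saturday-morning spike shows up as
# elevated values in the Saturday row for the early hours.
trend = (
    df.groupby(["weekday", "hour"])["glucose_mg_dl"]
      .mean()
      .unstack("hour")
)
print(trend.loc["Saturday", 6:11])
```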
23andme was massively successful in their mission.
Sidenote: their mission was not about helping you understand your genomic information.
That's the trouble with AI. You can only be impressed if you know a subject well enough to know it's not just bullshitting like usual.
I live in a place where I can get anything related to healthcare and even surgery within the same day at an affordable price, and even here I've wasted days going to various specialists who just tried to give me useless meds.
Imagine if one lives in a place where you need an appointment 3 months in advance, you most certainly will benefit from going there showing your last ChatGPT summary.
You can have the same problem with doctors who don't give you even 5 minutes of their time and who don't have time to read through all your medical history.
There's a reason this data is heavily regulated. It's deeply intimate and gives others enormous leverage over you. This is also why the medical industry can charge premium rates while often providing poor service. Something as simple as knowing whether you need insulin to survive might seem harmless, but it creates an asymmetric power dynamic that can be exploited. And we know these companies will absolutely use this data to extract every possible gain.
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.
This is a class action waiting to happen.
It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.
*Not entirely sure. It seems to frequently hallucinate the address.
and your reasoning for this is what?
If you want chats to share info, then use a project.
Yes, projects have their uses. But as an example - I do Python across many projects and non-projects alike. I don't want to need to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it was just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler" it knows I use home assistant, I've done XYZ projects, I prefer python, I like DIY projects to a certain extent but am willing to buy in which case be prosumer. Etc. Etc. It's more like a real human assistant, than a dumb-bot.
I could not disagree more. A major failure mode of LLMs in my experience is their getting stuck on a specific train of thought. Being forced to re-explain context each time is a very useful sanity check.
I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
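For readers who script this rather than use the ChatGPT UI, a minimal sketch of that global-plus-project layering with the OpenAI Python SDK; the profile text, project names, and model choice are just examples, not anyone's actual setup:

```python
# Minimal sketch of layering a global profile with a per-project system prompt.
# Prompt contents and project names are invented for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GLOBAL_PROFILE = "I prefer Python, type hints, and terse answers."
PROJECTS = {
    "home-automation": "Context: Home Assistant setup; prefers DIY over off-the-shelf.",
    "health": "Context: track workouts and lab results; no diagnoses, just summaries.",
}

def ask(project: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": GLOBAL_PROFILE},
            {"role": "system", "content": PROJECTS[project]},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("home-automation", "Could I automate this sprinkler?"))
```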
Maybe if they worked correctly they would be. I've had answers to questions be influenced needlessly by past chats and I had to tell it to answer the question at hand and not use knowledge of a previous chat that was completely unrelated other than being a programming question.
I remember having conversations asking ChatGPT to add and remove entries from it, and it eventually admitting it couldn’t directly modify it (I think it was really trying, bless its heart) - but I did find a static memory store with specific memories I could edit somewhere.
It’s probably a very human trait to do that but it is a bad habit.
So I'm not entirely surprised that an LLM would start assuming the user has ADD, because that's what part of its training data suggests it should.
The issue is it doesn't apply here, as it's neither a person nor a coherent memory/thinking being.
"Thinking" models are basically just a secondary, separately prompted hidden output that prefaces the visible one, so the output is hopefully more aligned to what you want, but there's no magic other than more tokens and trying what works.
If every building I went to in the US had ramps and elevators even though I'm not in a wheelchair, would it be "fucked up" that the building and architects assume I'm a cripple?
There's just as much meaning in ChatGPT saying "As you said, you have ADHD" as a building having an elevator.
In the training data for ChatGPT, the word ADHD existed and was associated with something that people call each other online, cool. How deep.
Anyway, I do assume every single user of this website, including myself, has autism (possibly undiagnosed), so do with that information what you will. I'm pretty sure most HN posters make the same assumption.
ChatGPT is just supposed to "work" for the lay person, and quite often it just doesn't. OpenAI is already being sued by people for stochastic parroting that ended in tragedy. In one case they've tried to use the rather novel affirmative defense that they're not liable because using ChatGPT for self-harm was against the terms of service the victim agreed to when using the service.
Repeated 2x without explanation. Good start.
---
>You can further strengthen access controls by enabling multi-factor authentication
Pushing 2fac on users doesn't remove the need for details on the above.
---
>to enable access to trusted U.S. healthcare providers, we partner with b.well
>wellness apps—like Apple Health, Function, and MyFitnessPal
Okay...?
---
>health conversations protected and compartmentalized
Yet OAI share those conversations with enabled apps, along with "relevant information from memories" and your "IP address, device/browser type, language/region settings, and approximate location"? (per help.openai.com)