ChatGPT Terms Disallow Its Use in Providing Legal and Medical Advice to Others
Posted about 2 months ago · Active about 2 months ago
Source: ctvnews.ca (Tech story, high profile)
Key topics: AI Regulation, ChatGPT, Medical and Legal Advice
OpenAI updates ChatGPT's terms to disallow its use in providing medical and legal advice to others, sparking debate among users about the implications and effectiveness of this change.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion; first comment 5m after posting
- Peak period: 140 comments in the 0-12h window
- Average per period: 26.7 comments
- Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01 Story posted: Nov 5, 2025 at 1:11 PM EST (about 2 months ago)
- 02 First comment: Nov 5, 2025 at 1:16 PM EST (5m after posting)
- 03 Peak activity: 140 comments in the 0-12h window, the hottest part of the conversation
- 04 Latest activity: Nov 11, 2025 at 12:35 AM EST (about 2 months ago)
ID: 45825965 · Type: story · Last synced: 11/20/2025, 8:14:16 PM
(Turns out I would need permits :-( )
I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble, even more so than with Google.
He literally wrote that. I asked how he knows it's the right direction.
It must be that the treatment worked; otherwise it is more or less just a hunch.
People go "oh yep, that's definitely it" too easily. That is the problem with self-diagnosing. And you didn't even notice it happened...
Without more info this is not evidence.
We took him to our local ER, they ran tests. I gave the LLM just the symptom list that I gave to the ER initially. It replied with all the things they tested for. I gave the test results, and it suggested a short list with diagnostics that included his actual Dx and the correct way to test for it.
By the way, unless you used an anonymous mode, I wonder how much the model knew from side channels that could have contributed to suggesting the correct diagnosis...
I use it. Found it to be helpful.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when that answer is factually wrong!). It doesn't have to be obvious leading, just framing the question by mentioning all the symptoms you now know to be relevant, in the order that makes them diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now-diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history.
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
We humans have a lot of failure modes.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with ChatGPT, Kate Bush's song Deeper Understanding comes immediately to mind.
https://en.wikipedia.org/wiki/Deeper_Understanding?wprov=sfl...
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among a few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
We had a "doctors are confused!" experience in our family that ended up being exactly that.
I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.
So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist.". I'm just tossing around ideas but look, GPT says I can do this and it's a good fit!
After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "design women's dresses". I wrote up all the reasons that might be a good pivot (e.g. past experience with landscape design, and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc.). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.
These tools are great. Don't take them too seriously; you can make them say a lot of things with great conviction. It's mostly just you talking to yourself, in my opinion.
This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can become progressively better and better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.
Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.
1. I described the symptoms the same way we described them to the ER the first time we brought him in. It suggested all the same things that the ER tested for.
2. I gave it the lab results for each of the suggestions it made (since the ER had in fact done all the tests it suggested).
After that back and forth it gave back a list of 3-4 more possibilities and the 2nd item was the exact issue that was revealed by radiology (and corrected with surgery).
And even with all of that info, they still come to the wrong conclusions at times. Doctors play a critically important role in our society, and during COVID they risked their lives for us more than anyone else; I do not want to insult or diminish the amount of hard work doctors do for society.
But worshipping them as holier-than-thou gods is bullshit, a conclusion almost anyone who has spent years going back and forth with various doctors will come to.
Having an AI assistant doesn't hurt in terms of medical hints. We need to make personal responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof", we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on to leave a mark on society.
I'd say the same about AI.
And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.
It's an imperfect situation for sure, but I'd like to see more data.
It should empower and enable informed decisions, not make them.
I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.
In my opinion, AI should do both legal and medical work, with some humans kept for decision-making and the rest of the doctors becoming surgeons instead.
Seriously, the amount of misinformation it has given me is quite staggering - telling me things like "you need to fill your drainage pipes with sand before pouring concrete over them…". The danger with these AI products is that you have to really know a subject before they're properly useful. I find this with programming too. Yes, it can generate code, but I've introduced some decent bugs when over-relying on AI.
The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…
this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
The chance of different models hallucinating the same plausible-sounding but incorrect building codes, medical diagnoses, etc. would be incredibly small, due to architecture differences, training approaches, etc.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
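A minimal sketch of that cross-model check, assuming the official openai and anthropic Python SDKs with API keys in the environment; the model names and the sample question are illustrative, not from the thread:

```python
# Ask two independently trained models the same question and compare.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
from anthropic import Anthropic

question = "Do I need a permit to replace a residential water heater?"

gpt = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)
claude = Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": question}],
)

# Agreement is a weak signal of correctness; disagreement is a strong
# signal to go ask a licensed professional instead.
print("GPT:", gpt.choices[0].message.content)
print("Claude:", claude.content[0].text)
```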
I have very mild cerebral palsy[1]; the doctors were wrong about so many things in my diagnosis back in the mid-to-late 70s when I was born. My mom (a retired math teacher now, with an MBA back then) had to physically go to different libraries and colleges out of town to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that are almost impossible to find via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated at the top of my class (south GA, so take that as you will).
If I'd followed any of the suggestions I'd probably be in the ER. Even after me pointing out issues and asking it to improve, it'd come up with more and more sophistical ways of doing the same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
C'mon, just use the CNC. Seriously though, what kind of cuts?
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from the blue-collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer-reviewed research, textbooks, meta-studies, and official sources.
Really? You think filling your pipes with sand is comparable to backfilling a trench?
It's pretty obvious why an LLM would be confused by this.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
the damage certain software engineers could do certainly surpasses that of most doctors
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
Is this an actual technical change, or just legal CYA?
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to learn about and what to ask, is usually a helpful approach.
While professionals have bodies for their standards and ethics, like most things those bodies can represent a form of income and, depending on the jurisdiction, profitability.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician available to the average person, I'd take the LLM any day.
Doomers in control, again.
See if you can find "medical advice" ever mentioned as a problem:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-proble...
Science should be clearly labelled for those that can read. Everyone else can go eat blueberry leaves if they so choose.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention in their ads how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't append "And this is the truth" to their results, but that is effectively what LLM hypers do.
It's called "false advertising".
https://en.wikipedia.org/wiki/False_advertising
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...
I guess the legal risks were large enough to outweigh this
I’m waiting for the billboards “Injured by AI? Call 1-800-ROBO-LAW”
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
I've learned through experience that telling a doctor "I have X and I would like to be treated with Y" is not a good idea. They want to be the ones who came up with the diagnosis. They need to be the smartest person in the room. In fact I've had doctors go in a completely different direction just to discredit my diagnosis. Of course in the end I was right. That isn't to say I'm smarter, I'm not, but I'm the one with the symptoms and I'm better equipped to quickly find a matching disease.
Yes some doctors appreciate the initiative. In my experience most do not.
So now I usually just tell them my symptoms but none of the research I did. If their conclusion is wildly off base I try to steer them towards what my research said.
So far so good but wouldn't it be nice if all doctors had humility?
This is not about ego or trying to be the smartest person in the room, it's about actually being the most qualified person in the room. When you've done medical school, passed the boards, done your residency and have your own private practice, only then would I expect a doctor to care what you think a correct diagnosis is.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
https://www.ctvnews.ca/health/article/self-diagnosing-with-a...
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.
In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
...
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.
“When you do get a response be sure to validate that response,” said Zada.
Which should be standard advice in most situations.
That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.
Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license.
I'm pretty sure it's a fundamental issue with the architecture.
LLMs hallucinate because training on source material is a lossy process, and because bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive, so few people use those techniques by default. Lowest time to a good-enough response is the primary metric.
Journalists oversimplify and fail to ask follow-up questions because, while they can research and cite primary sources, it's slow and expensive in an infinitesimally short news cycle, so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions, so that's the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time, don't value them.
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models
LLMs hallucinate because they are probabilistic by nature, not because the source material is lossy or too big. They are literally designed to introduce some level of "randomness": https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
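For what it's worth, the "designed randomness" being described is temperature sampling; a toy sketch (not the commenter's code) of how identical logits can still yield different tokens:

```python
import numpy as np

rng = np.random.default_rng()

def sample_token(logits, temperature=1.0):
    """Sample a token index from softmax(logits / temperature)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy scores for three candidate next tokens.
logits = [2.0, 1.5, 0.3]
print([sample_token(logits, temperature=0.8) for _ in range(10)])
# As temperature -> 0 this approaches greedy argmax; higher temperatures
# flatten the distribution, so identical prompts can diverge token by token.
```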
For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics... But it's also obviously the wrong way to go about things.
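To make that base-rate point concrete, here is a tiny sketch; the prevalence figure is purely hypothetical:

```python
# A constant "no" classifier scores high accuracy on any rare condition
# while catching zero real cases - accuracy alone is the wrong metric.
def is_it_lupus(patient) -> bool:
    return False

patients = 100_000
true_cases = 40  # hypothetical prevalence of 0.04%

correct = patients - true_cases   # every healthy patient counts as "right"
accuracy = correct / patients     # 99.96% accurate...
sensitivity = 0 / true_cases      # ...but 0% sensitivity

print(f"accuracy={accuracy:.2%}, sensitivity={sensitivity:.0%}")
```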
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next Wednesday.
I don't think it necessarily bears repeating the plethora of ways in which LMs get stuff wrong, esp. considering the context of this conversation. It's vast.
As things develop, I expect that LMs will become more like the current zeitgeist as the effects that have influenced news and other media make their way into the models. They'll get better at smoothing in some areas (mostly technical or dry domains that aren't juicy targets) and worse in others (I expect to see more biased training and more hardcore censorship/steering in future).
Although, recursive reinforcement (LMs training on LM output) might undo any of the smoothing we see. It's really hard to tell - these systems are complex and very highly interconnected with many other complex systems.
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing - this article isn't too bad - but the bulk of this site's coverage of AI topics skews negative, as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general and positive reinforcement of regulatory-capture-related topics. Which only makes sense: they're making money and want to continue making money, and AI threatens that. They can no longer claim they provide value if they're not providing direct, relevant, novel content instead of zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that with the Jan 25 policy, using it to offer legal and medical advice to other people was already disallowed, but with the Oct 25 update the LLM will stop dishing out legal and medical advice completely.
Is it also disallowing the use of licensed professionals to use ChatGPT in informal undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis...
e.g. is it only allowed for medical use through an official medical portal or offering?
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
266 more comments available on Hacker News