OpenAI Updates Terms to Forbid Use for Medical and Legal Advice
Posted 2 months ago · Active 2 months ago
openai.com · Tech · story
Tone: calm, mixed
Debate: 40/100
Tags: AI Regulation · OpenAI · AI Safety
OpenAI updated its usage policies to prohibit using its services for medical and legal advice, sparking discussion on the limitations and potential consequences of AI in high-stakes fields.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · First comment: 16m after posting
Peak period: 16 comments in the 0-6h window
Avg / period: 3.4
Comment distribution: 27 data points (based on 27 loaded comments)
Key moments
- Story posted: Oct 31, 2025 at 7:25 PM EDT (2 months ago)
- First comment: Oct 31, 2025 at 7:41 PM EDT (16m after posting)
- Peak activity: 16 comments in 0-6h, the hottest window of the conversation
- Latest activity: Nov 3, 2025 at 11:03 AM EST (2 months ago)
ID: 45777828 · Type: story · Last synced: 11/20/2025, 12:26:32 PM
"You can't believe how smart and capable this thing is, ready to take over and run the world"
(Not suitable for any particular purpose - Use at your own risk - See warnings - User is responsible for safe operation...)
(Pan from home robot clumsily depositing clean dishes into an empty dishwasher to a man in VR goggles in next room making all the motions of placing objects in a box)
Check all services you wish to subscribe to ($1000 per service per month):
- Put laundry in washing machine
- Microwave mac & cheese dinner
- Change and feed baby
- Get granny to toilet
- Fix Windows software update error on PC
- Reboot wifi router to restore internet connection
It’s a safe bet: "we don’t allow you to ask for medical advice, so we are not liable if you do anyway and drink mercury or what have you based on our advice."
"you cannot use our services for: provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional"
So, they didn't add any handrails, filters, or blocks to the software. This is just boilerplate "consult your doctor too!" to cover their ass.
It does prohibit, for illustration, an LLM-powered surgical device.
Is everything else a “gray area”?
Generally a bad idea. If you want to be a doctor, go to medical school.
Two months ago it helped me accurately identify a gastrointestinal diverticulitis-type issue and find the right medication for it (metronidazole), which fixed the issue for good. It also guided me on identifying the cause, and on pausing and then restoring fiber intake appropriately.
Granted, it is very easy for people to make serious mistakes when using LLMs, but given how many mistakes doctors make, it is better to take some responsibility for yourself first. The road to a useful diagnosis can be winding, but with sufficient exploration, GPT will get you there.
There is no way for them to even remotely verify whether a licensed professional is "appropriately involved" in the room, so to a rebellious outlaw, these prohibitions might as well not exist.
Does "US law forbids driving without a seatbelt" mean the same as "US law forbids driving"?
If the AI isn’t smart enough to replace a licensed expert even given unlimited access to everything a doctor would learn in medical school, where is the value in the AI?
Now we are moving the goalposts to “it’ll be a nice tool to use like SaaS software.”
Other than OpenAI, I don’t think that’s actually true of what the companies have been advertising.
But, in any case, things can have value and still fall short of what those with a financial interest in the public overestimating the imminent significance of an industry promote. The claim here was about what was necessary for AI to have value, not what was necessary to meet the picture that the most enthusiastic, biased proponents were painting. Those are very different questions, and, if you don’t like moving goalposts, you shouldn’t move them from the former to the latter.
AI is undoubtedly useful, but at its current infrastructure cost it’s not going to be worth selling unless it can actually put people out of work so that enterprise customers are motivated to spend salary-level money on it. That’s the only way to make the numbers black with the kind of deficits the industry has.
Making existing employees 5-20% more productive isn’t enough. You can already get that kind of improvement for very cheap. That’s the kind of improvement you get by buying your employees catered lunch or a SaaS license for a CRUD app.
My company is paying less money for AI subscriptions per seat than some pretty low impact tools like password managers.
You’d think that CoPilot might charge us $100 instead of $10 if they really thought it was that valuable.
There’s no goalpost being moved on my end.
This doesn't even make sense unless you make the false assumption that the amount of work to be done is fixed. Things that increase productivity tend to increase employment: they increase the value delivered by each unit of labor, which at a fixed cost of labor expands the range of applications where that labor is profitable to apply; holding employment fixed, it raises market-clearing pay. The usual result is that both employment and pay go up in the field whose productivity increased, though each rises less than you would expect if the other were held fixed.
I can pay X company $N dollars to make my employees work Z amount faster, or maybe make my work compliant with Z regulation while avoiding Y amount of work to achieve it.
AI tools are basically “they might make your employees faster or slower or make mistakes or maybe not.” That’s why they only cost $10-100 a month per seat.
They don’t directly solve a problem like the most expensive enterprise software.
Like I said AI is cheaper than really boring stuff like basic PAM tools or password managers. Why is AI so cheap when it’s so expensive to deliver and supposedly delivers revolutionary productivity gains?
This is why I said that until AI is actually replacing whole humans, the infrastructure cost is too insane. Alternatively, they can suddenly reduce costs by a crazy amount somehow.
So stories like this are no longer possible? https://news.ycombinator.com/item?id=45734582
Empower people. People should be able to make decisions about their lives and their communities. So we don’t allow our services to be used to manipulate or deceive people, to interfere with their exercise of human rights, to exploit people’s vulnerabilities, or to interfere with their ability to get an education or access critical services, including any use for:
…
automation of high-stakes decisions in sensitive areas without human review:
- critical infrastructure
- education
- housing
- employment
- financial activities and credit
- insurance
- legal ===
- medical ===
- essential government services
- product safety components
- national security
- migration
- law enforcement