Demand for Human Radiologists Is at an All-Time High
Posted 3 months ago · Active 3 months ago
worksinprogress.news · Tech story · High profile
Tone: calm, mixed · Debate: 80/100
Key topics
AI in Healthcare
Radiology
Medical Technology
The demand for human radiologists is at an all-time high despite advancements in AI, sparking discussions on the role of AI in augmenting or replacing radiologists.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 140 comments in 0-12h
Average per period: 26.7 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 25, 2025 at 9:19 AM EDT (3 months ago)
2. First comment: Sep 25, 2025 at 11:02 AM EDT (2h after posting)
3. Peak activity: 140 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Oct 2, 2025 at 2:08 PM EDT (3 months ago)
ID: 45372335 · Type: story · Last synced: 11/20/2025, 8:09:59 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
It's clearly satire with the little jabs like this.
Similar to how a model that can do "PhD-level research" is of little use to me if I don't have my own PhD in the topic area it's researching for me, because how am I supposed to analyze a 20-page research report and figure out whether it's credible?
There’s wildly varying levels of quality among these options, even though they could all reasonably be called “PhD-level research.”
If we had followed every AI evangelist's suggestion, the world would have collapsed.
I mean, if you change the data to fit your argument, you will always make it look correct.
Let's assume we stopped in 2016 like he said: where do we get the 1,000 radiologists the US needs each year?
The training lasts 5 years (2021 - 5 = 2016). If they had stopped accepting people into the radiology program but let those already in finish, you would have stopped getting new radiologists in 2021.
So 5 + 5 + [0,2] comes to [10,12] years of training.
That sentence and what you wrote are not 100% the same.
People can't tell what they'll eat next Sunday, but they'll predict AGI and the singularity in 25 years. It's comfy because 25 years seems like a lot of time; it isn't.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
> I'd wager in 25 years we'd get there so I think his opinion still has a large percentage of being correct.
What percent, and which maths and facts let you calculate it? The only percent you can be sure about is that it's 100% wishful thinking.
> It's comfy because 25 years seems like a lot of time, it isn't.
I don't know how old you are but life 25 years ago from a tech perspective was *very* different.
That doesn't mean you can't predict anything with high certainty. You just don't know whether the status quo will be disturbed. When your prediction needs a status quo disturbance, you're in the pure-luck category; when it requires the status quo to hold, it's safer. And of course, the shorter the term, the better. When ChatGPT came out, Cursor and Claude Code could be predicted, and I predicted them, because no change in the status quo was required and it was a short-term prediction. But if there had been a new breakthrough, those wouldn't have been created. When they predicted fully self-driving cars, or fewer people checking X-rays, you needed a status quo change: legal first, and in the case of general, fully self-driving cars, even technical breakthroughs. Good luck with that.
You cannot predict whether you'll be alive tomorrow; you can be 99.99999% sure, but never 100%. The point is that based on certain information/data you can form a personal opinion, and someone else can form a different one from the same data. My opinion is that AI is going to keep booming, due to the enormous amount of private and government cash being injected into it for both pacifistic and militaristic applications, and that it will be capable of successfully interpreting radiology imaging in 20 years.
I could see the assumption that one radiologist supervises a group of automated radiology machines (like a worker in an automated factory). Maybe assume that they'd be relegated to an auditing role. But that they'd go completely extinct? There's no evidence, even historically, of a service being consumed with zero human intervention.
Maybe don't?
But he said it in the context of a Q&A session that happened to be recorded. Unless you're a skilled politician who can give answers without actually saying anything, you're going to say silly things once in a while in unscripted settings.
Besides that, I'd hardly call Geoffrey Hinton an AI evangelist. He's more on the AI doomer side of the fence.
At the time? I would say he was an AI evangelist.
Scale is not always about throughput. You can be constrained by many things; in this case, data.
When I do write something up, it is usually very finalized at that time; the process of getting to that point is not recorded.
The models maybe need more naturalistic data and more data from working things out.
With interventional radiologists and radiation oncologists it's different, but we're talking about radiologists here...
By the way, even if I sound dismissive, I have great respect for the skills your profession requires. Reading an MRI is really hard even when you have the radiologist's report in hand, and to my untrained eyes it's impossible without it!
And since you talk to patients frequently, I have even greater respect for you as a radiologist.
I also recently had surgery and the surgeon consulted with the radiologist that read my MRI before operating.
Or maybe it's related to socialized healthcare, because the article includes a breakdown of how a radiologist in Vancouver spends their time, and talking to patients isn't part of it.
AI art is getting better, but it's still very easy for me to quickly distinguish AI output from everything else, because I can visually inspect the artifacts, and they're usually not very subtle.
I'm not a radiologist, but I would imagine AI is doing the same thing here: calling things cancer that aren't, missing things that are, and it takes an expert to distinguish the false positives from the true ones. So we're back at square one, except the expertise has shifted from interpreting the image to interpreting the image and also interpreting the AI.
I actually disagree, in that it's not easy for me at all to quickly distinguish AI images from everything else. But I think we might differ on what we mean by "quickly". I can quickly distinguish AI if I am looking. But if I'm mindlessly doomscrolling, I cannot always distinguish 'random art of an attractive busty woman in generic fantasy armor that a streamer I follow shared' as AI. I cannot always distinguish 'reply-guy profile picture that's a couple dozen pixels in dimensions' as AI. I also cannot always tell if someone is using a filter if I'm looking for maybe 5 seconds tops while I scroll.
As a related aside, I've started seeing businesses clearly using ChatGPT for their logos. You can tell from the style and how much random detail there is contrasted with the fact that it's a small boba tea shop with two employees. I am still trying to organize my thoughts on that one.
Edit:
Example: https://cloudfront-us-east-1.images.arcpublishing.com/brookf...
I suppose, first of all: is that generally agreed? People aren't expecting an LLM to give a radiology opinion the same way you can feed a PDF or an image into ChatGPT and ask it something about it, are they?
I'm interested in whether most people here have a higher opinion of ML than of the generative AIs, in terms of giving reliably useful output. Or do a lot of you think that these also create so much checking that it would be easier to just have a human do the original work?
I think it's probably worth excluding self-driving from my above question, since that is a particularly difficult area to agree anything on.
AI is going to augment radiologists first, and eventually, it will start to replace them. And existing radiologists will transition into stuff like interventional radiology or whatever new areas will come into the picture in the future.
The kiosk is placed inside of a clinic/hospital setting, and rather than driving to the pharmacy, you pick up your medications at the kiosk.
Pharmacists are currently still very involved in the process, but it's not necessarily for any technical reason. For example, new prescriptions are (by most states' boards of pharmacies) required to have a consultation between a pharmacist and a patient. So the kiosk has to facilitate a video call with a pharmacist using our portal. Mind you, this means the pharmacist could work from home, or could queue up tons of consultations back to back in a way that would allow one pharmacist to do the work of 5-10 working at a pharmacy, but they're still required in the mix.
Another thing we have to do for regulatory purposes: when we're indexing medication into the kiosk, it has to capture images of the bottles as they're stocked. After the kiosk applies a patient label, we then have to take another round of images. Once this happens, both sets populate in the pharmacist portal, and a pharmacist is required to look at them and approve or reject the container. Again, they're able to do this all very quickly and remotely, but they're still required by law to do it.
TL;DR I make an automated dispensing kiosk that could "replace" pharmacists, but for the time being, they're legally required to be involved at multiple steps in the process. To what degree this is a transitory period while technology establishes a reputation for itself as reliable, and to what degree this is simply a persistent fixture of "cover your ass" that will continue indefinitely, I cannot say.
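To make the legally mandated checkpoints concrete, here is a minimal sketch of that verification flow as a state machine. This is my own illustration under stated assumptions; the state names and review function are hypothetical, not the kiosk's actual code:

    # Hypothetical model of the kiosk flow described above.
    from enum import Enum, auto

    class ContainerState(Enum):
        STOCKED = auto()         # bottle indexed; stocking images captured
        LABELED = auto()         # patient label applied; second image set captured
        PENDING_REVIEW = auto()  # both image sets queued in the pharmacist portal
        APPROVED = auto()        # pharmacist signed off; dispensing allowed
        REJECTED = auto()        # pharmacist flagged a problem; pull the container

    def pharmacist_review(state, stocking_images, labeled_images, approve):
        """A licensed pharmacist must review both image sets remotely."""
        if state is not ContainerState.PENDING_REVIEW:
            raise ValueError("container is not queued for review")
        if not (stocking_images and labeled_images):
            raise ValueError("both image sets are required by the board of pharmacy")
        return ContainerState.APPROVED if approve else ContainerState.REJECTED

The human is not removed from the loop; the code path simply refuses to proceed without their sign-off.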
A) The night before, a woman in her 40s came into the ER suffering a major psychological breakdown of some kind (my wife was vague to protect patient privacy). The doctor prescribed a major sedative, and the software alerted that they didn't have a negative pregnancy test, because this drug is not approved for pregnant women and so should not be given. However, in my wife's clinical judgement, honed by years of training, reading papers, going to conferences, actual work experience, and just talking to colleagues, the risk to a (potential) fetus from the drug was less than the risk to a (potential) fetus from mom going through an untreated mental health episode, so she approved the drug and overrode the alert.
B) A prescriber had earlier that week written a script for Tylenol to be administered "PR" (per rectum) rather than PRN (pro re nata, i.e., as needed). PR Tylenol is a perfectly valid thing that is sometimes the correct choice, and was stocked by the hospital for that reason. But my wife recognized that this wasn't one of the cases where it was necessary, and called the nurse to call the prescriber to get it changed, so the nurse wouldn't have to give the patient a Tylenol suppository. This time there were no alerts, no flags from the software; it was just her looking at it and saying "in my clinical judgement, this isn't the right administration for this situation, and will make things worse".
So someone- with expensively trained (and probably licensed) judgement- will still need to look over the results of this AI pharmacist and have the power to override its decisions. And that means that they will need to have enough time per case to build a mental model of the situation in their brain, figure out what is happening, and override if necessary. And it needs to be someone different from the person filling out the Rx, for Swiss cheese model of safety reasons.
Congratulations, we've just described a pharmacist.
This is something I question. If you go to a specialist, and the specialist judges that you need surgery, he can just schedule and perform the surgery himself. There’s no other medical professional whose sole job is to second-guess his clinical judgment. If you want that, you can always get a second opinion. I have a hard time buying the argument that prescription drugs always need that second level of gatekeeping when surgery doesn’t.
That pharmacists also provide a safety check is a more modern benefit, due to their extensive training and ability to see all of the drugs that you are on (while a specialist only knows what they have prescribed). And surgeons also have a team to double-check them while they are operating, to confirm that they are doing the surgery on the correct side of the body, etc. Because these safety checks are incredibly important, and we don't want to lose them.
IDK, these are just limitations - people that really believe in AI will tell you there is basically nothing it can't do... eventually. I guess it's just a matter of how long you want to wait for eventually to come.
If every doctor agreed to electronically prescribe (instead of calling it in, or writing it down) using one single standard / platform / vendor, and all pharmacy software also used the same platform / standard, then our jobs are definitely redundant.
I worked at a hospital where doctors, pharmacists, and nurses basically all used the same software, and most of the time we clicked approve, approve, approve without data entry.
Of course we also make IVs and compounds by hand, but that's a small part of our job.
The other answer is that AI will not hold your hand in the ICU, or share with you how their mother felt when on the same chemo regimen that you are prescribing.
I am a medical school drop-out — in my limited capacity, I concur, Doctor.
My dentist's AI has already designed a new mouth for me, implants and all ("I'm only doing 1% of the finish work: whatever the patient says doesn't feel quite right yet," says my DMD). He then CNC-machines it in-house on his $xxx,xxx 4-axis.
IMHO: Many classes of physicians are going to be reduced to nothing more than malpractice-insurance-paying business owners, MD/DO. The liability-holders, good doctor.
In alignment with last week's H-1B discussion, it's interesting to note that ~30% of US physician resident "slots" (<$60k USD salary) are filled by these foreign visa holders (so: +$100k cost per applicant, amortized over a few years of training, each).
Everything else besides the above in TFA is extraneous. Machine learning models could have absolute perfect performance at zero cost, and the above would make it so that radiologists are not going to be "replaced" by ML models anytime soon.
The current "workflow" is primary care physician (or specialist) -> radiology tech that actually does the measurement thing -> radiologist for interpretation/diagnosis -> primary care physician (or specialist) for treatment.
If you have perfect diagnosis, it could be primary care physician (or specialist) -> radiology tech -> ML model for interpretation -> primary care physician (or specialist).
To understand why, you would really need to take a good read of the average PCP's malpractice policy.
The policy for a specialist would be even more strict.
You would need to change insurance policies before your workflow was even possible from a liability perspective.
Basically, the insurer wants "a throat to choke," so to speak. Handing up a model to them isn't going to cut it any more than handing up Hitachi's awesome new whiz-bang proton therapy machine would. They want their pound of flesh.
And it would be the developer's throat that gets choked when something goes awry.
I'm betting developers will want to take on neither the cost of insurance, nor the increased risk of liability.
Human radiologists have them. They can miss things: false negative. They can misdiagnose things: false positive.
Interviews have them. A person can do well, be hired and turn out to be bad employee: false positive. A person who would have been a good employee can do badly due to situational factors and not get hired: false negative.
The justice system has them. An innocent person can be judged guilty: false positive. A guilty person can be judged innocent: false negative.
All policy decisions are about balancing out the false negatives against the false positives.
Medical practice is generally obsessed with stamping out false negatives: sucks to be you if you're the doctor who straight up missed something. False positives are avoided as much as possible by defensive wording that avoids outright affirming things. You never say the patient has the disease, you merely suggest that this finding could mean that the patient has the disease.
Hiring is expensive and firing even more so depending on jurisdiction, so corporations want to minimize false positives as much as humanly possible. If they ever hire anyone, they want to be sure it's absolutely the right person for them. They don't really care that they might miss out on good people.
There are all sorts of political groups trying to tip the balance of justice in favor of false negatives or false positives. Some would rather see the guilty go free than watch a single innocent be punished by mistake. Others don't care about innocents at all. I could cite some, but it'd no doubt lead to controversy.
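As a toy illustration of that balancing act (my own example, nothing from the thread): a single decision threshold trades one error type for the other, and picking the threshold is the policy decision.

    # Toy example: flagging cases as positive when a score crosses a threshold.
    def confusion_counts(scores, labels, threshold):
        """labels: 1 = condition actually present, 0 = absent."""
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
        return fp, fn

    scores = [0.1, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9]  # model confidence
    labels = [0,   0,   0,   0,   1,   1,   1,   1]    # ground truth

    for t in (0.2, 0.5, 0.8):
        fp, fn = confusion_counts(scores, labels, t)
        print(f"threshold={t}: false positives={fp}, false negatives={fn}")

A low threshold (medicine's bias) drives false negatives toward zero at the cost of false positives; a high threshold (hiring's bias) does the reverse.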
HackerNews is often so quick to reply with a "well actually" that it misses the overall point.
If you're getting a blood test, the pipeline might be primary care physician -> lab with a nurse to draw blood and machines to measure blood stuff -> primary care physician to interpret the test results. There is no blood-test-ologist (hematologist?) step, unlike radiology.
Anyway, "there's going to be radiologists around for insurance reasons only but they don't bring anything else to patient care" is a very different proposition from "there's going to be radiologists around for insurance reasons _and_ because the job is mostly talking to patients and fellow clinicians".
PCPs don't have the training and aren't paid enough for that exposure.
Is this uncommon in the rest of the US?
How often do they talk to patients? In all the times I have had an X-ray, I have never talked to a radiologist. Fellow clinicians? Train the X-ray tech up a bit more.
If the moat is 'talking to people', that's a moat that doesn't need an MD, or at least not a fully specialized MD. ML could kill the radiologist MD; 'radiologist' could become the job title of a nurse or X-ray tech specialized in talking to people about the output.
That's fine. But then the X-ray tech becomes the radiologist, and that becomes the point in the workflow where the insurer digs out the malpractice premiums.
In essence, your X-ray techs would become remarkably expensive. Someone is talking to the clinicians about the results. That person, whatever you call them, is going to be paying the premiums.
>Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.
The vast majority of radiologists do nothing other than: come in (or increasingly, stay at home), sit down at a computer, consume a series of medical images while dictating their findings, and then go home.
If there existed some oracle AI that can always accurately diagnose findings from medical images, this job literally doesn't need to exist. It's the equivalent of a person staring at CCTV footage to keep count of how many people are in a room.
Does that sound like an assistant's job?
I could have just gone to med school and never deal with layoffs, RTO, etc.
Generalizing this to all radiologists is just as wrong as the original article saying that radiologists don't spend the majority of their time reading images. Yes, some diagnostic radiologists can purely read and interpret images and file their results electronically (often remotely through PACS systems). But the vast majority of radiology clinics where I live have a radiologist on-site, and as one example, results for suspicious mammograms where I live in Texas are always given by a radiologist.
And as the other comment said, many radiologists who spend the majority of their time reading images also perform a number of procedures (e.g. stereotactic biopsies).
I think it may be selection bias.
I also recently had surgery and the surgeon talked to the radiologist to discuss my MRI before operating.
It's sort of like saying "sometimes a cab driver talks to passengers and suggests a nice restaurant nearby, so you can't automate it away with a self-driving cab."
She also said that she frequently talks to them before ordering scans, to consult on what imaging she's going to order.
> It's sort of like saying "sometimes a cab driver talks to passengers and suggests a nice restaurant nearby, so you can't automate it away with a self-driving cab."
It’s more like if 3/100 kids who took a robot taxi died, suffered injury, had to undergo unnecessary invasive testing, or were unnecessarily admitted to the hospital.
In the same fashion, a construction worker just shows up, "performs a series of construction tasks", then goes home. We just need to make a machine that performs "construction tasks" and we can build cities, railways, and road networks for nothing but the cost of the materials!
Perhaps this minor degree of oversimplification is why the demise of radiologists has been so frequently predicted?
Do you have some kind of source? This seems unlikely.
>It's enhancing the capabilities of radiologists.
So it is not replacing radiologists?
It seems that with AI in particular, many operate with 0/1 thinking in that it can only be useless or take over the world with nothing in between.
I would be very interested if you could provide specific examples.
I think the part that says models will reduce time to complete tasks and allow providers to focus on other tasks is particularly on point. For one CV task, we're only saving on average <30 min of work per study, so it isn't a massive savings from a provider's perspective. But scaled across the whole hospital, it's a huge savings.
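A back-of-the-envelope sketch of that scaling effect; the per-study and volume figures here are my assumptions, not the commenter's numbers:

    # Hypothetical figures for illustration only.
    minutes_saved_per_study = 25   # "<30 min" per the comment above
    studies_per_day = 300          # assumed hospital-wide imaging volume

    hours_saved_per_day = minutes_saved_per_study * studies_per_day / 60
    print(f"{hours_saved_per_day:.0f} hours/day")  # 125 hours/day

    # At ~8-hour shifts, that's roughly 15 radiologist-shifts per day:
    # negligible per study, substantial in aggregate.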
Or, far more likely, to cut costs and increase profits.
At the end of the day if the radiologist makes an error the radiologist gets sued.
If AI replaces the radiologist then it is OpenAI or some other AI company that will get sued each and every time the AI model makes a mistake. No AI company wants to be on the hook for that.
So what will happen? Simple. AI will always remain just a tool to assist doctors. But there will always be a disclaimer attached to the output saying that ultimately the radiologist should use his or her judgement. And then the liability would remain with the human not the AI company.
Maybe AI will "replace" radiologists in very poor countries where people may not have had access to radiologists in the first place. In some places in the world it is cheap to get an xray but still can be expensive to pay someone to interpret it. But in the United States the fear of malpractice will mean radiologists never go away.
EDIT: I know the article mentions liability but it mentions it as just one reason among many. My contention is that liability will be the fundamental reason radiologists are never replaced regardless of how good the AI systems get. This applies to other specialities too.
Are you sure? Who would want to be a radiologist then when a single false negative could bankrupt you? I think it's more likely that as long as they make a best effort at trying to classify correctly then they would be fine.
I believe medical AI will probably take hold first in poorer countries where the existing care is too bad/unaffordable; then, as it proves itself there, it may slowly find its way to richer countries.
But probably lobbying will be strong against it, just as you can't get cheap generic medications made in India if you live in the US.
Not just radiologists but surgeons too for example.
Depends on if the patient sues and how that goes.
It won't lead to bankruptcy, as malpractice insurance is there, but your premiums would go up, and it could still be costly.
> Some products can reorder radiologist worklists to prioritize critical cases, suggest next steps for care teams, or generate structured draft reports that fit into hospital record systems.
I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all.
The interesting thing is that there are problems for which this rule applies recursively: of the remaining 20%, most is easier than the last 20% of what's left.
Most software ships without dealing with that remaining 20%, and largely that is OK; it is not OK for safety critical systems though.
https://www.reuters.com/technology/tesla-video-promoting-sel...
For me, I have been riding in Waymos for the last year and have been very pleased with the results. I think we WANT this technology to move faster, but some of the challenges at the edges take a lot of time and resources to solve; they're not fundamentally unsolvable, though.
They are likely semi-autonomous, which is still cool, but I wish they'd be honest about it.
Is this why Waymo is slow to expand, not enough remote drivers?
Maybe that is where we need to focus: better remote driving?
I think maybe we can and should focus on both. Better remote driving can be extended into other equipment operations as well - remote control of excavators and other construction equipment. Imagine road construction, or building projects, being able to be done remotely while we wait for better automation to develop.
* Saves on commute or travel time.
* Job sites no longer need to provide housing for workers.
* Allows the vehicles to stay in operation continuously, currently they shut down for breaks.
* With automation multiple vehicles could be operated at once.
The biggest benefits seem to be in resource extraction but I believe the vehicles there are already highly automated. At least the haul trucks.
The reason that Waymo is slow to expand is that they have to carefully and extensively LiDAR-map every single road of their operating area before they can open up service there. Then, while operating, they simply run a difference algorithm between what each LiDAR sees at the moment and the ground-truth data they have stored, and boom, anything that can potentially move pops right out. It works; it just takes a lot of prep, and a lot of people to keep on top of things, too. For example, while my kid's school was doing construction they refused to drop off in the parking lot, but when the construction ended they became willing. So there must be a human who monitors construction zones across the metro area and marks areas off-limits on their internal maps.
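A toy sketch of that difference idea, assuming a prebuilt static point map; this is my own illustration of change detection, not Waymo's actual pipeline:

    # Flag live LiDAR returns with no counterpart in the stored map.
    import numpy as np
    from scipy.spatial import cKDTree

    def diff_against_map(live_points, map_points, tolerance=0.3):
        """Return live points farther than `tolerance` meters from any
        mapped static geometry; these are candidate moving objects."""
        tree = cKDTree(map_points)         # static world, built offline
        dist, _ = tree.query(live_points)  # nearest mapped point per return
        return live_points[dist > tolerance]

    # Toy scene: a mapped wall at y=5, plus one unexpected return.
    map_points = np.array([[x, 5.0] for x in range(10)])
    live_points = np.vstack([
        map_points + np.random.normal(0, 0.05, map_points.shape),  # the wall, noisy
        [[4.0, 2.0]],                                              # a pedestrian, say
    ])
    print(diff_against_map(live_points, map_points))  # -> [[4. 2.]]

Anything not explained by the prebuilt map "pops right out" as a candidate mover, which is also why the mapping has to be kept current.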
It does; they argue because they are clueless or have a vested interest.
Sometimes both.
Much like phone-a-friend, when the Waymo vehicle encounters a particular situation on the road, the autonomous driver can reach out to a human fleet response agent for additional information to contextualize its environment. The Waymo Driver does not rely solely on the inputs it receives from the fleet response agent and it is in control of the vehicle at all times. As the Waymo Driver waits for input from fleet response, and even after receiving it, the Waymo Driver continues using available information to inform its decisions. This is important because, given the dynamic conditions on the road, the environment around the car can change, which either remedies the situation or influences how the Waymo Driver should proceed. In fact, the vast majority of such situations are resolved, without assistance, by the Waymo Driver.
https://waymo.com/blog/2024/05/fleet-response/
Although I think they overstate the extent to which the Waymo Driver is capable of independent decisions. So, honest-ish, I guess.
[0] https://waymo.com/safety/impact/
Is there a saying about overestimating things in the near term and the long term but underestimating them in the midterm? E.g., flying-car dreams in the '50s, etc.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
Gates seems calmer and more collected, having gone through the trauma of almost losing his empire.
Musk is a loose cannon who has never suffered the consequences of his actions (unlike early Gates and Jobs), so he sometimes gets things right but will eventually crash and burn, never having had the fortune of failing and maturing early in his career (he is now past the midpoint of his career, with not enough buffer to recover).
They are both dangerous in their own ways.
We still don't have flying cars 70 years later, and they don't look any more imminent than they did then. I think the lesson there is more "not every dream eventually gets made a reality".
The city itself is relatively small. A vast majority of area population lives distributed across the MSA, and it can create hellish traffic. I remember growing up thinking 1+ hour commutes were just a fact of life for everyone commuting from the suburbs.
Not sure what car ownership looks like, and I haven't been in years, but I'd imagine it's still much more than just 20%.
I said "less than 80% car ownership", not "80% do not own a car". Technically these are not mutually exclusive but I think you read it as the second one. I haven't really found much analysis about how public transit interfaces with self driving cars honestly.
They've got testing facilities near Detroit ( https://mcity.umich.edu/what-we-do/mcity-test-facility/ )... but I want to see it work while it is snowing, or after it has snowed, in the Upper Midwest.
https://youtu.be/YvcfpO1k1fc?si=hONzbMEv22jvTLFS - has suggestions that they're starting testing.
If AI driving only works in California, New Mexico, Arizona, and Texas... that's not terribly useful for the rest of the country.
If you refer to rural areas, that's 1/7 of the population and ~10% of GDP. They can be tossed aside like they are in other avenues.
For some reason, enthusiasts always think this time is different.
It has similar insights, and good comments from doctors and from Hinton:
“It can augment, assist and quantify, but I am not in a place where I give up interpretive conclusions to the technology.”
“Five years from now, it will be malpractice not to use A.I.,” he said. “But it will be humans and A.I. working together.”
Dr. Hinton agrees. In retrospect, he believes he spoke too broadly in 2016, he said in an email. He didn’t make clear that he was speaking purely about image analysis, and was wrong on timing but not the direction, he added.
304 more comments available on Hacker News