US Opens Tesla Probe After More Crashes Involving Its "Full Self-Driving"
Posted 3 months ago · Active 3 months ago
apnews.com · Tech · story
heated · negative
Debate
80/100
Key topics
Tesla
Autonomous Vehicles
Regulatory Oversight
The US has opened an investigation into Tesla's 'Full Self-Driving' technology following multiple crashes, sparking debate about the safety and marketing of autonomous vehicles.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 18m
Peak period: 74 comments (0-12h)
Avg / period: 16
Comment distribution: 96 data points (based on 96 loaded comments)
Key moments
- Story posted: Oct 9, 2025 at 10:07 AM EDT (3 months ago)
- First comment: Oct 9, 2025 at 10:25 AM EDT (18m after posting)
- Peak activity: 74 comments in 0-12h (hottest window of the conversation)
- Latest activity: Oct 16, 2025 at 2:34 PM EDT (3 months ago)
ID: 45527931 · Type: story · Last synced: 11/20/2025, 9:01:20 PM
The marketing name FSD or Full Self-Driving with (Supervised) in small font and brackets is incredibly misleading.
Regulatory agencies have been toothless towards Tesla for a long time.
But it'll be based on risks introduced by preventable human error: hubris, etc.
All it will take is some viral video of a Tesla running over a child or something terrible like that.
Of course, if that fails, perhaps they could use honking as a backup sensor.
A new solution that has less problems is worse than an existing solution with more problems.
There's also a willingness to be less upset with humans making a mistake than a machine.
Edit:
Unknown problems may or may not exist, so while I think that concern makes sense, it doesn't matter until they come up.
I'm making the decision based on the current state. If additional issues come up, then you reevaluate whether the new solution is better or worse than the existing one.
If you consider unknown problems then how can you make a decision?
X + Y > Z?
Where X is the weight of problems for the new solution, Z is the weight for the existing solution, and Y is a value between 0 and infinity (unknown problems).
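A minimal sketch of that decision rule, with hypothetical weights (purely illustrative; nothing here comes from the thread beyond the X/Y/Z framing):

```python
# Compare only the known problem weights; an unknown term Y in [0, inf)
# would make any comparison undecidable, which is the point above.

def prefer_new_solution(new_problem_weights, existing_problem_weights):
    """True if the new solution has less total known problem weight (X < Z)."""
    x = sum(new_problem_weights)       # X: known problems of the new solution
    z = sum(existing_problem_weights)  # Z: known problems of the existing solution
    return x < z

# Hypothetical example: the new solution's known problems weigh less.
print(prefer_new_solution([1.0, 0.5], [2.0, 1.5]))  # True
```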
The distribution of errors is as important as the error rate.
Simply being sober, awake, calm and not texting puts you far above the average driver and are things you can control.
A self-driving car which, apparently 10+ years into development, is currently running red lights randomly 1% of the time is not in your control.
It can be controlled however.
>Simply being sober, awake, calm and not texting puts you far above the average driver and are things you can control.
It's not important whether it's possible; what matters is whether it happens.
I am not elderly, do not DUI, drive tired, or touch my phone while driving. I put myself in a different risk pool by my behavior.
If I step into a Tesla FSD car, I am in the same risk pool as everyone else in one when it decides to run 1% of red lights, or whatever other stupid bug ships in the next release.
The unsupervised "robotaxi" uses a different NN. It doesn't behave the same way as FSD.
There's willingness to be upset at anyone with deep pockets who can be found accountable. And the motivations for that aren't emotional, they are purely material.
There's a reason why people have spent decades trying to find pharmacological cause for autism, in spite of the enormous amount of evidence that the condition is mostly hereditary.
And a very good reason why vaccines in America are exempt from the legal system.
Don't disagree, but new solutions can come with unknowns
Everyone thinks they're above average, even people who know statistics! So if it's merely 20% better than the average driver a huge number of people will conclude "I am above average so I'll do a better job"
Will some of them be wrong? Of course. But tons of them will be right, too.
It can't be statistically significantly better, it has to be statistically overwhelmingly better. Not a part of a standard deviation but several of them.
If you aren’t in one of those categories you are immediately dramatically better than average. This is fairly easy to do before even considering being a “good driver” / defensive driver / etc.
So you want a better comparison for most people.
A self-driving car might be 5x better than me at driving, but logically I can't be liable for what it does. The company making it has to be. 5x better would be 0.2 accidents a year. But multiply that by the 100,000 cars the manufacturer has sold... they don't want that liability. That's why Tesla's autopilot is still supervised, because they want its mistakes to be your problem.
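Rough arithmetic behind that liability point, using the figures implied in the comment (the per-driver baseline of roughly one accident a year is an assumption, not a real statistic):

```python
baseline_accidents_per_year = 1.0   # assumed per-driver baseline implied by "5x better = 0.2"
improvement_factor = 5              # "5x better than me"
fleet_size = 100_000                # cars the manufacturer has sold

per_car = baseline_accidents_per_year / improvement_factor  # 0.2 accidents per car per year
fleet_liability = per_car * fleet_size                      # 20,000 accidents/year, all on the maker
print(per_car, fleet_liability)                             # 0.2 20000.0
```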
It presents a lot of thorny problems. If I am a persistently dangerous driver I can have my license taken away and be taken off the road. But if a self driving car is judged to be too dangerous for the road you'll suddenly have thousands of people who lose access to their car (assuming a future with self-driving only cars) through no fault of their own. What's their path to getting back on the road?
If the self-driving car company takes on that liability it'll save you the $1000/year. So assume they're either going to charge you an extra $10K up front or an extra $1000/year. For that kind of cash they should be quite willing to take on the risk or they can find an insurance company to do so, if their car is actually safer than an average driver.
This should work in most countries. Perhaps not the US with its pattern of massive punitive damage awards.
OP said:
> self driving will be statistically significantly better than human drivers, but because it isn't perfect we won't allow it anyway.
My contention is that it's not that everyone is a Luddite; it's that as long as companies are legally allowed to provide quasi-self-driving they bear no liability for, they will do exactly that. And that is what will hold us back.
2. There's $1000/year of potential revenue they're missing out on by not assuming liability. That's a pretty powerful incentive.
I wonder if that's still the case, and if so how many accidents they've become liable for.
Except, in both cases the risk, statistically, is clearly worth it.
It is the optics that suck.
But humans are easily influenced by perception and narrative, rather than rationality.
There's still no final storage in all of the US, so there's that.
Thinking total risk, end-to-end, including reduction of risks associated with other technologies.
Self-driving has a similar issue where the value shrinks the more supervision it requires. Tesla is a net benefit in terms of effort, but it can't operate safely while the driver is asleep.
I think that's by broad policy and not by individual risk mitigation. Isn't it something like "if nuclear is cheaper than the average then it has to spend the difference on risk mitigation"?
Three Mile Island wasn't a public health hazard, but lack of maintenance cost billions by destroying the reactor, thus prompting the industry to spend significantly more money on maintaining reactors. The problem is it's really difficult to determine what's overkill here.
There are something like 600,000 US bridges, and sometimes people look at a failure and say it's rare enough not to be worth doing anything about.
Is the manufacturer liable? Autopilot would be too much risk, and the manufacturer would demand users only activate it behind the wheel, with both hands on the wheel, while getting a coffee infusion. The tool would lose its advantages.
Power plants aren't insurable because a leak would financially destroy any company, or operating costs would become so high that nuclear couldn't compete anymore.
Maybe we will get it one day. Waymo probably did it correctly: limited road network, careful approach, learn what the problems are, and expand on that.
https://www.mbusa.com/en/owners/manuals/drive-pilot
Meanwhile, if you have a contract with a more serious company, you won't have to spend years and thousands fighting them over liability.
>DRIVE PILOT can be activated in heavy traffic jams at a speed of 40 MPH or less on a pre-defined freeway network approved by Mercedes-Benz. DRIVE PILOT operates in daytime lighting conditions when inclement weather is not present and in areas where there is not a construction zone. Please refer to the Operator’s Manual for a full list of conditions required for DRIVE PILOT.
No one is going to regulate it this presidential term though so Tesla has some more time to work I guess.
We already have self-driving cars: look at Waymo, etc.; look at Chinese ride-hailing companies. What we won't have is private-use self-driving cars: a regular person will not be able to buy one.
While a good amount of functionality exists, the liability model and accidents are big road blocks to seeing this technology truly mainstream, not just select cities/routes/etc
I aspire to be a trillionaire. Does that count for anything?
> While a good amount of functionality exists, the liability model and accidents are big road blocks to seeing this technology truly mainstream, not just select cities/routes/etc
Waymo just started service at SFO airport last month.
What’s your definition of mainstream? Everywhere anytime like an Uber?
I rarely take an Uber or a taxi (probably single digit number of times a year) and, even if it were half the price, that would be unlikely to change my behavior much.
That can change consumer behavior around you dramatically, for example cutting car ownership?
You would only need local people to pick up the truck at a nearby parking spot and drive it to the target location.
That alone would help long-haul truckers see their families and not have to sleep in their trucks. It would save costs and make it safer for everyone if all the trucks drove automatically.
BMW and other EV developers can already drive hands-free on a lot of the German autobahn.
What I also don't understand: if I really want the benefit of a self-driving car, I only need it when I'm driving long distances or when I'm intoxicated. Tbh, let me just record the road from the bar to my home, let me drive it a few times until my car knows that route, and done.
Comma.ai is here today and does that. I highly recommend one to all of my friends! (Not being paid to say that, just a happy customer.)
Suppose the accident rate for regular cars were 1 fatality every 100 million miles driven (it actually is in the US).
Suppose further a hypothetical self-driving car has a proven rate of 1 fatality every 1 billion miles (10x better). Except when that fatality happens, it is because the car suddenly incinerates when arriving otherwise safely at its destination. Something about the advanced AI technology makes this outcome completely random and completely unfixable.
Which do you choose? Drive yourself, 10x more dangerous? Or leave it entirely up to chance, but 10x safer?
The rational choice is to pick the self-driving car. Yet I suspect many people (including me, I admit) would choose to drive themselves.
How far apart do those numbers need to be before most people give up the steering wheel?
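For scale, a back-of-the-envelope lifetime comparison of the two rates above (the lifetime mileage is an assumption for illustration only):

```python
lifetime_miles = 800_000            # assumed: ~13,000 miles/year over ~60 years of driving

human_rate = 1 / 100_000_000        # 1 fatality per 100 million miles (the US figure cited above)
sdc_rate   = 1 / 1_000_000_000      # hypothetical self-driving car: 1 fatality per billion miles

print(f"drive yourself:   {lifetime_miles * human_rate:.4f} expected lifetime fatalities")  # 0.0080
print(f"self-driving car: {lifetime_miles * sdc_rate:.4f} expected lifetime fatalities")    # 0.0008
```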
Our mental suffering is not because car is on autopilot. Suffering happens because WE ARE ON AUTOPILOT. So I chose to trade the 30x risk of death for a 30x reduction in mental suffering. Rational? God I hope not.
Personal self-driving cars? Maybe less so because we probably want them to be well maintained.
Tesla-style, camera-only, dual-use (human and computer driven), safety-as-an-afterthought cars? Probably not.
Just six more months though...
"Optimus robot is the future of Tesla"
He knows shareholders value your company far more when it's their dreams guiding valuation, rather than what exists in reality.
Moving from mixed sensor hardware to camera-only is only ever likely to result in articles such as the linked one being written.
No amount of AI bullshit is going to save you from the brick wall that the camera can't see because there is fog etc. in the way.
I'd agree in the general case, but better sensors likely wouldn't have saved them here.
P.S. I've long been a skeptic of FSD from Tesla, but the latest changes really show huge progress; compared to even a year ago, these are two different worlds.
So you're ok with them nuking crash data, to avoid a lawsuit? You're ok with them turning off FSD moments before a crash in a childish attempt at avoiding liability for a product they created that doesn't work as sold?
Why are you ok with them using your life as a beta test for profits?
-Chuck Palahniuk, Fight Club. 1999 for the movie, 1996 for the book.
That a corporation is acting in its own best interests at the cost of human lives neither shocks nor surprises me. NHTSA is investigating the issue, specifically the disengagements, but also the broader issues. They took down Cruise for lying to them. Their CEO should have had to face criminal charges. Politics being what it is, I worry a great deal about how fair any investigations will be, but let's not get distracted by that.
Tesla's far from the first car company to make a profit with human lives in the mix. It won't be the last.
You can totally find histrionic videos of constructed scenarios where the system fails and get all worked up about it. The algorithm thrives on engagement, and getting you all worked up makes them money. You're being played like a fiddle: content gets shown to you, and then they make money from it. Do you enjoy that? Mark Rober does have a good one though.
Over the many years of driving I've done, of the accidents I've gotten into, the accidents I've avoided, as a driver, as a passenger, of the accidents my friends and family have gotten into, what it comes down to for me is that human drivers drive when they shouldn't be driving, every single day. I'm already forced to share the road with those assholes; what's one more?
Human drivers suck. They drive when they're tired, when they're drunk. From glancing at other drivers while I'm driving, texting's gotten quite popular for drivers. In one accident I'm aware of, the driver passed out from doing fentanyl while driving. Not sure how that one happens.
Now, I'm sure you're an excellent driver. Better than FSD. But how are you at driving when you're passed out from a stroke or some other freak accident? Freak accidents happen to computers too, but in my mind, they're different. On top of that, a computer's not going to get hopping mad that it struck out at the bar at 1 am and drive home really mad and also drunk so hey let's drive by my ex's house while we're at it. Obviously, I'd rather that driver not do that in the first place, but since we can't prevent it, I'd rather live in a world where a computer drives their car than have them drive it around in that state. I'm a night owl. I frequently end up being outside at those hours. So no matter how good the drivers who would never do that are, statistically my chances of encountering bad drivers who do shit like that are higher than average.
In life you have to contend with unknowns, and we, the general public don't actually know how many Tesla robotaxis are actually driving around right now. If you're in one of the supported locations you can try it out, and watch for yourself whether the human who's got one of the most boringest jobs ever is or is not actually driving the car. If it's so bad, shouldn't we be hearing about them crashing, non-stop? All I've heard of is the three in Austin at launch, but it doesn't seem like there have been any after that. I haven't heard about any crashes in San Francisco, but I also haven't gone looking super deep.
Do I wish both Tesla and Elon Musk behaved better than they do now? Absolutely. Not sure how much that counts or actually affects anything though.
> ODI has identified six Standing General Order ("SGO") reports in which a Tesla vehicle, operating with FSD engaged, approached an intersection with a red traffic signal, continued to travel into the intersection against the red light and was subsequently involved in a crash with other motor vehicles in the intersection. Of these incidents, four crashes resulted in one or more reported injuries. At least some of the incidents appeared to involve FSD proceeding into the intersection after coming to a complete stop.
I've experienced this bug on every FSD 13.x version, including the current 13.2.9. When you're the first to pull up to a red light, the car stops and waits, then after a while it sometimes (maybe 1 time in 100 or so) just decides to go even if the light is still red. Horrifying because sitting at a red light doesn't seem like a dangerous situation, but in fact it might be the most dangerous place on FSD right now. Hopefully this forces them to fix it because my colorful language on the voice feedback apparently hasn't convinced them.
[1] https://www.nhtsa.gov/?nhtsaId=PE25012
Fortunately there was no accident and there were no cops around or, apparently, traffic cameras. Somehow I think "Sorry, the car ran the red light, not me!" would not have been a compelling thing to say to a cop or a judge.
The only thing I could do was hit the dashcam record button as some sort of proof, but the video itself has no indication FSD was engaged. I suspect I would have to subpoena or forensically extract any data that could exonerate me, which is just not practical if the worst I got was a ticket.
It should be illegal by default until the OEM proves it's safe with paid safety drivers.
Legal by default is an insane state of affairs.
Freedom isn't just individual freedom. It's also protection from idiots who beta-test SDCs on roads that me and my kids are on.
How do I know this is a bad situation? It has literally killed dozens of people already.
Note that all the regulation was useless in the Boeing case, the FAA ultimately allowed Boeing to self-inspect using contractors. The same is happening with drugs.
You may be underestimating the risk of human drivers. Given the choice of sharing the road with 10,000 Tesla, Waymo, Zoox, etc. self-driving cars vs. 10,000 human-driven cars, which would you choose for you and your kids?
Although maybe I hope you are right because it can be proven to be unsafe and shut down.
You confirm it is fixed by ensuring your validation set also has sufficient data representing the failures (and that they now succeed).
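A hedged sketch of what that confirmation could look like: keep a regression set built from the observed failure scenarios and require it to pass alongside the general validation set (the numbers and thresholds below are made up for illustration):

```python
def pass_rate(results):
    """results: list of booleans, True = scenario handled correctly."""
    return sum(results) / len(results)

# Hypothetical outcomes after the fix:
general_validation = [True] * 995 + [False] * 5   # broad validation scenarios
red_light_failures = [True] * 50                  # the previously-failing red-light cases, rerun

fix_confirmed = (pass_rate(red_light_failures) == 1.0       # every known failure now succeeds
                 and pass_rate(general_validation) >= 0.99)  # and overall behavior didn't regress
print(fix_confirmed)  # True
```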
The errors it makes are inhuman, make no sense to us, which makes them more insidious and unpredictable.
I could get running a light trying to beat it, humans do that. Defensive drivers around you can anticipate this and look for crossing traffic when their light goes green and they floor it immediately.
But a full stop then deciding to make a break for it is bananas.
Not the first baffling inhuman error it has made either.
Fuck it, I'm just gonna go!
Maybe we have different ideas of defensive driving; personally, I would wait to enter the intersection.
This is without even accounting for "no right on red" signs, which honestly seem to be entirely optional. People usually don't even look anywhere near the direction where the sign is hung.
I'm curious what's going on there, because I would have assumed they'd treat the red light as a hard "you're stopped now and absolutely may not go again without affirmative signal (green light or driver intervention)" state. It seems pretty weird that it would just... decide to go, after a bit, on its own, without anything telling it to.