Tesla Influencers Tried Coast-to-Coast Self-Driving, Hit Debris Before 60 Miles
Posted 4 months ago · Active 4 months ago
electrek.co · Tech · Story · High profile
Heated · Negative
Debate
85/100
Key topics
Tesla
Autonomous Vehicles
Elon Musk
Tesla influencers attempted a coast-to-coast self-driving trip but crashed into debris within 60 miles, sparking controversy over Tesla's FSD capabilities and Elon Musk's claims.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 19m after posting
Peak period: 134 comments in 0-6h
Average per period: 26.7
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- Story posted: Sep 22, 2025 at 7:51 AM EDT (4 months ago)
- First comment: Sep 22, 2025 at 8:10 AM EDT (19m after posting)
- Peak activity: 134 comments in 0-6h (hottest window of the conversation)
- Latest activity: Sep 24, 2025 at 3:22 PM EDT (4 months ago)
For automated tools like CNC machines, for instance, there is a reason the override is just 'emergency stop', not 'tap the correct thing we should do instead'.
But doing an e-stop on a road at speed is a very dangerous action as well, which is why that isn’t an option here either.
The alternative is taking control of the car all the time, which nobody is going to do in practice.
And then when something goes wrong, you are unceremoniously handed a plane with potentially some or all of those protections no longer active.
As an analogy, imagine your FSD car was trying to slow down for something, but along the way there is some issue with a sensor. So it gives up and hands control back to you while it's in the middle of braking, yet now your ABS is no longer active.
So now the situation is much more sudden than it would have been (if you had been driving the car you would have been aware of it and slowing down for it yourself earlier in the game), you likely weren't paying as much attention in the first place because of the automation, and some of the normal protection isn't working.
So it's almost three levels of adding insult to injury. Potentially related discussion: https://news.ycombinator.com/item?id=43970363
The level of training required to oversee full automation is non-trivial if you have to do more than press a stop button.
It does have the same problem - if 99.999% of your flight time is spent in normal law, you are not especially ready to operate in one of the alternate laws or, god forbid, direct law. Which is similar to a driver who, having grown accustomed to the system, forgets how to drive.
But I think we have a ways to go before we get there. If the car could detect issues earlier and notify the driver more gradually that they need to take control, nearly every driver at present still retains the knowledge of how to directly operate a car with non-navigational automation (ABS, as you mentioned, power steering, etc.)
I was thinking of something similar to XL Airways Germany 888T. I was trying to find it and came across this thread making a similar comparison so I'll link that: https://www.reddit.com/r/AdmiralCloudberg/comments/18ks9nl/p...
But I think there was some other example with an engine asymmetry (an autothrottle issue?) that the autopilot was fighting with bank, and eventually it exceeded the bank limit and dumped a basically uncontrollable aircraft in the pilots' lap. It would have been more obvious if you were seeing the yoke bank more and more. (Though it looks like this was China Airlines 006, a 747SP, which contradicts that thought.)
I agree that we can make the situation less abrupt for cars in some cases (though people will probably get annoyed by the car bugging them for everything going on)
> "In trying to explain why Ho never took this critical step and subsequently failed to notice the plane’s increasing bank, the NTSB looked at two areas: fatigue, and overreliance on automation. Regarding the latter, investigators noted that during cruise flight, the job of a Boeing 747 pilot is to monitor the automation, not to fly the airplane. Studies have shown that humans are naturally poor monitors of automation, because it’s boring and does not actively engage our brains and bodies. As a result, when something goes wrong, the brain has to “wake up” before it can assess the situation and take corrective action. Therefore, when flying on autopilot pilots have increased reaction times to unexpected events, as opposed to flying manually, when a sudden change in the state of the aircraft can be instinctively assessed using physical cues transmitted via the control column."
So who knows what we can do. I've definitely experienced this to varying degrees with the fancier cruise controls (e.g. "Autopilot"). It's one thing to just take pressure off the gas and/or steering wheel, but another entirely when you aren't actively "driving the car" at full attention anymore.
Passively monitoring a situation continuously and reacting quickly when needed is something humans are not good at. Even with airline pilots, where we do invest a lot in training and monitoring, it's a well-understood issue that having to take over in a surprising situation often leads to confusion and time needed to re-orient before they can take effective action. Which for planes is often fine, because you have the time buffer needed for that, but not always.
They want to pretend you'll only need to actually intervene in 'edge case' situations, but here we have an example of perfect conditions requiring intervention. Regardless of the buzzwords they can attach to whatever methods they are using, it doesn't feel like it works.
... wait, do they actually claim that? I mean that's just nonsense.
I don't think that's right; I think the stock price entirely depends on people seeing it as a vehicle to invest in Musk. If Musk died tomorrow, but nothing else changed at Tesla, the stock price would crater.
I guess it really does depend on which reality distortion field we’re talking about haha.
Right now it's easily double to triple that, even with Musk's behavior.
The passive investing / market cap weighted ETF complex tends to help big valuations stay big, but a company like Tesla still needs that sharp shot in the arm followed by frenzied buying occasionally in order to stay aloft (be it by traders, retail participants, shorts covering, etc).
I suppose they could replace Musk with another hype salesman, but the "hate" that Tesla gets is a big part of these upside shock cycles for the stock, because the ticker is a siren call for short sellers, who are ultimately guaranteed future buyers.
I suspect a significant proportion of Tesla's stock price comes from people who are using it as a proxy for his other companies that the public can't invest in, primarily xAI (as all AI companies are in a horrific bubble right now) and SpaceX.
I get that it's way overhyped, but they have real results that can't be denied.
They have the best selling vehicle by a little under 1%, with the Tesla Model Y just edging out the Toyota Corolla. But Toyota also has the 3rd best selling model (RAV4) that is about 7% behind the Model Y. And they have a third model in the top 10, the Camry, at a little over half the Model Y sales.
Just those 3 Toyota models combined sell about 30% more than all Tesla models combined.
Across all models Toyota sells 6 times as many cars as Tesla.
By number of cars sold per year Tesla is the 15th biggest car maker. The list is Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, Geely, and then Tesla.
If we go by revenue from sales rather than units sold it is 12th. The list is: Toyota, Volkswagen, Hyundai-Kia, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, SAIC Motor, and then Tesla.
Yet Tesla has something like 6 times the market cap of Toyota and around 30 times the market caps of VW and Honda. That's pretty much all hype.
Of course not.
They don’t make as many vehicles or have the revenue of other auto manufacturers, but who cares.
What they do, they do very, very well. They led mass-market EV adoption. Even if they crumble tomorrow their contribution is immense. Who cares about market cap, it's all just gambling.
Elon can't levitate TSLA and other valuations by himself. There has to be at least the appearance of substance. That appearance is wearing thin. While I'm going to observe the caution that the market can stay irrational longer than I can stay solvent, once reality asserts itself, Elon will be powerless to recreate the illusion.
Have a minimum quorum of sensors, disable one if it generates impossible values (while deciding carefully what is and isn't possible), use sensors that are much more durable, reliable, and can be self-tested, and then integration-test and subsystem-test thoroughly, and test some more.
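A minimal sketch of the quorum-plus-plausibility idea described above; the sensor names, thresholds, and readings are invented for illustration and not taken from any real system:

```python
# Sketch of a "sensor quorum + plausibility gate" for redundant range readings.
# All names and thresholds here are made up for illustration.
from statistics import median

MIN_QUORUM = 2           # need at least this many healthy, agreeing sensors
MAX_RANGE_M = 250.0      # beyond this is treated as physically implausible
MAX_DEVIATION_M = 5.0    # readings this far from the median count as faulty

def fused_range(readings: dict) -> float | None:
    """Fuse redundant range readings; None means fall back to a safe state."""
    # 1. Drop values that are impossible on their face.
    plausible = [r for r in readings.values() if 0.0 < r < MAX_RANGE_M]
    if len(plausible) < MIN_QUORUM:
        return None  # not enough healthy sensors: degrade / hand over

    # 2. Drop outliers that disagree with the consensus.
    m = median(plausible)
    agreeing = [r for r in plausible if abs(r - m) <= MAX_DEVIATION_M]
    if len(agreeing) < MIN_QUORUM:
        return None  # sensors disagree too much to trust any single one

    # 3. Report the consensus estimate.
    return sum(agreeing) / len(agreeing)

# Example: the camera estimate has glitched to an impossible value.
print(fused_range({"radar": 41.8, "lidar": 42.1, "camera": 900.0}))  # ~41.95
```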
The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!
For fuck's sake, I am tired of this worn-out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is: AI.
Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.
That got us some of the way towards self-driving, but not all the way. AI was the main bottleneck back then. 20 years later, it still is.
Because it's geofenced to shit. Restricted entirely to a few select, fully pre-mapped areas. They only recently started trying to add more freeways to the fence.
Musk has, IIRC, actually admitted that this was their purpose.
It was about scuttling the expansion of the monorail to the airport.
Musk just picked up after the taxi cartel collapsed.
Kinda sorta.
It only operates a few hours a day, and the cars are not self-driving.
It's like a dedicated tunnel for Ubers.
That word choice.
Did we give him wayyy too much free money via subsidies? Yes. But that was our mistake. And if we hadn't given it to him, we would have given it to some other scam artists somewhere or other. So even in the case of the counterfactual, we could expect a similar outcome. Just different scumbags.
No we wouldn’t have. Not every dollar we give goes to scam artists. And there are a whole lot of industries and companies far less deceitful.
Only isolated people need Starlink, and 95% of people on Earth live in cities with a population over 100,000. So it's a product for the 5%.
What kind of absurd argument is that? Like any company that doesn't serve 100% of people on the planet is by definition not a success? That's the dumbest fucking argument I have ever heard and I hope you are not actually serious about that.
Because Apple doesn't serve 100% of the population either, so clearly Apple is not successful. By your logic right?
And it's not even true. People who live in big cities do a thing called 'flying' between big cities quite often. And Starlink is already starting to be dominant in the airline market, meaning all those city people use Starlink when they fly.
And Starlink is used in agriculture and mining, and shipping. So all those city people do actually use things that have Starlink in the supply chain.
Come on, man, it's fine to hate Musk, but at some point reality is a thing.
From a car design and development point of view, it's a massive lost opportunity.
To someone interested in self-driving, it's a joke.
It really depends on how you view things; in purely stock-market terms, Tesla is doing great.
The trillion dollar pay package will make it happen, that's what was missing.
The only time I had to take over was for road debris on the highway. Off the highway it's very good about avoiding it. My guess is Tesla has not been focusing on this issue, as it's not needed for phase one of robotaxi.
Later, V12, which is the end-to-end neural network, worked on highways as well, but they use different stacks behind the scenes.
A human would rather be involved in a crash because of their own doing than because they let the machine take control and put their trust in it.
> Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race "looking out for its best interests," as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, page 40
But, if you watch the car’s display panel, it looks as if the car didn’t see anything and just went full speed ahead. That’s not great.
It should have slowed down and alerted the driver that something was there. I didn’t watch the complete video so maybe there’s more.
So if that was a human and they ran them over it'd be okay because they were testing FSD?
They're putting themselves (fine) and everyone around them (far less fine) in danger with this stunt.
A competent human driver would instinctively slow down, look at the potential obstruction, and think about changing lanes or an emergency stop.
Most probably, the visible object is just some water spilled on the road or that kind of thing. But if it isn’t then it’s very dangerous
This car appeared to be blind to any risk. That’s not acceptable
At 56, I don't expect to see it on country roads in my lifetime, but maybe someday they'll get there.
Anything "could" happen, but it would take an inordinately inattentive driver to be this bad.
They had 7-8 seconds of staring and talking about the debris before hitting it (or perhaps more; the video starts the moment the guy says "we got eh, a something", but possibly he saw it a few moments before that).
So a person would need to be pretty much passed out to not see something with so much time to react.
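A rough back-of-the-envelope on what 7-8 seconds means at highway speed; the 70 mph figure below is an assumption, since the video only establishes the time window:

```python
# Distance covered during the observed 7-8 second warning window.
# The 70 mph speed is assumed; the video only establishes the time.
speed_mph = 70
speed_m_per_s = speed_mph * 1609.344 / 3600   # about 31.3 m/s
for t in (7, 8):
    print(f"{t} s at {speed_mph} mph is about {speed_m_per_s * t:.0f} m of travel")
# 7 s ~ 219 m, 8 s ~ 250 m: roughly 50 car lengths of warning.
```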
Elon's estimates have always been off, but it is irresponsible to see an obstacle up ahead and assume the computer will do something about it while the driver and passenger debate what the obstacle is. I am not sure if they were trying to win a Darwin Award, and I say that as no particular fan of Musk!
Also of course you're avoiding an unknown object that large, especially when there's plenty of space to go around it on either side.
If you still don't think you can avoid something like that, please get off the road for everyone's safety.
The person you replied to didn't do that, though:
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
Sad to see HN give in to mob mentality.
If it can't see something like that in ideal conditions, then god knows what else it'd miss in less ideal conditions.
Contrast with Tesla's "vision-only" system, which uses binocular disparity along with AI to detect obstacles, including the ground plane. It doesn't have as good a range, so with a low-profile object like this it probably didn't even see it before it was too late. Which seems to me a theme for Tesla autonomy.
This is why FSD is still shit in late 2025 and drives like it's drunk.
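Whether or not the "binocular disparity" description of Tesla's stack above is accurate, the geometry of stereo depth does explain why range would be limited: depth uncertainty grows roughly with the square of distance. A toy calculation with assumed camera parameters (not anyone's actual hardware specs):

```python
# Toy stereo-depth calculation. Focal length, baseline, and matching error
# are assumed values for illustration, not real camera specs.
f_px = 1400.0            # focal length in pixels (assumed)
baseline_m = 0.3         # separation between the two cameras (assumed)
disparity_err_px = 0.5   # matching error of the stereo algorithm (assumed)

for depth_m in (20, 50, 100, 150):
    disparity_px = f_px * baseline_m / depth_m
    # Depth uncertainty grows quadratically: dZ ~ Z^2 / (f * B) * dd
    depth_err_m = depth_m ** 2 / (f_px * baseline_m) * disparity_err_px
    print(f"{depth_m:>4} m: disparity {disparity_px:5.1f} px, "
          f"uncertainty about +/- {depth_err_m:.1f} m")
# At 100-150 m a low-profile object subtends only a few pixels of disparity,
# so a sub-pixel matching error already means roughly 10-30 m of depth error.
```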
The car reacted the opposite of how a human would. If you saw something unidentified ("road kill?") in the distance, you'd be focusing on it and prepared to react according to what it was. With an empty lane beside you I think most drivers would be steering around it just based on size, even before they realized exactly what it was (when emergency braking might be called for).
Do they? "Many humans" would hit that? The humans in the car spotted the debris at least 8s before the impact. I don't think any humans would hit that in broad daylight unless they were asleep, completely drunk, or somehow managed to not look at the road for a full 10s. These are the worst drivers, and there aren't that many because the punishment can go up to criminal charges.
The argument that "a human would have made that mistake" backfires, showing that every Tesla equipped with the "safer than a human driver" FSD is in fact at best at "worst human driver" level. But if we still like the "humans also..." argument, then the FSD should face the same punishment a human would in these situations and have its rights to drive any car revoked.
All that to say that I don't feel this is a fair criticism of the FSD system.
Yes, it is, because the bar isn't whether a human would detect it, but whether a car with LiDAR would. And without a doubt it would, especially given those conditions: a clear day, a flat surface, and a protruding object are a best-case scenario for LiDAR. Tesla's FSD was designed by Musk, who is neither an engineer nor an expert in sensors or robotics, and it therefore fails predictably in ways that other systems designed by competent engineers do not.
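A crude illustration of why a protruding object on a flat road is close to a best case for LiDAR: once the ground plane is known, anything standing above it by more than the noise floor separates cleanly. The point values and thresholds below are invented, and a real pipeline would fit the plane (e.g. with RANSAC) rather than assume it:

```python
# Crude sketch: flag LiDAR returns that protrude above a known ground plane.
# Point values and thresholds are invented; a real pipeline would fit the
# plane (e.g. RANSAC) instead of assuming z = 0.
GROUND_NOISE_M = 0.05          # ranging noise / road texture allowance
MIN_OBSTACLE_HEIGHT_M = 0.15   # ignore anything shorter than this

def obstacle_points(points, ground_z=0.0):
    """Return (x, y, z) points standing above the ground plane."""
    cutoff = ground_z + GROUND_NOISE_M + MIN_OBSTACLE_HEIGHT_M
    return [p for p in points if p[2] > cutoff]

scan = [
    (42.0, 0.1, 0.02),    # road surface return
    (43.5, -0.4, 0.01),   # road surface return
    (45.0, 0.2, 0.35),    # debris roughly 35 cm tall, 45 m ahead
]
print(obstacle_points(scan))  # -> [(45.0, 0.2, 0.35)]
```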
Imagine there was a human driver team shadowing the Tesla, and say they got T-boned after 60 miles. Would we claim that human drivers suck and have the same level of criticism? I don't think that would be fair either.
This is a cross-country trip. LA to New York is 2,776 miles, not counting detours for charging. It crashed within the first 2% of the journey. And not a small intervention or accident either.
How you could possibly see this as anything other than FSD being a total failure is beyond me.
> They made it about 2.5% of the planned trip on Tesla FSD v13.9 before crashing the vehicle.
This really does need to be considered preliminary data based on only one trial.
And so far that's 2.5% as good as you would need to make it one way, one time.
Or 1.25% as good as you need to make it there & back.
People will just have to wait and see how it goes if they do anything to try and bring the average up.
That's about 100:1 odds against getting there & back.
One time.
Don't think I would want to be the second one to try it.
If somebody does take the risk and makes it without any human assistance though, maybe they (or the car) deserve a ticker-tape parade when they get there like Chas Lindbergh :)
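Reproducing the back-of-the-envelope above with the figures from the thread; treating the completed fraction as an odds estimate is a loose heuristic, not a statistical model:

```python
# Rough numbers behind the "about 100:1" framing above; a loose heuristic only.
one_way_miles = 2776          # LA to New York, per the comment upthread
completed_miles = 60          # distance before the crash, per the headline
one_way = completed_miles / one_way_miles              # about 2.2%
round_trip = completed_miles / (2 * one_way_miles)     # about 1.1%
print(f"one-way: {one_way:.1%}, round trip: {round_trip:.1%}")
print(f"inverse of the round-trip fraction: about {1 / round_trip:.0f} to 1")
# About 93 to 1, the same ballpark as "about 100:1" above. (The article's
# ~2.5% figure would put the crash closer to 70 miles in.)
```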
This argument makes no sense. I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?
Well, sure. Also, not interesting.
In a real world drive of almost 3000 miles there will nearly always be things to avoid on the way.
Not quite. I am saying that basing the judgment on a rare anomaly is a bit premature. It's a sample size of 1, but I base this on my own driving record of 30 years and much more than 3000 miles where I never encountered an obstacle like this on a highway.
> Also, not interesting
I would have liked to see the planned cross-country trip completed; I think that would've provided more realistic information about how this car handles with FSD. The scenario of when there is a damn couch or half an engine on the highway is what's not interesting to me, because it is just so rare. Seeing regular traffic, merges, orange cones, construction zones, etc. etc. now that would have been interesting.
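On the sample-size point: a single attempt really does pin down very little, whichever way you read the crash. As a sketch, a standard exact (Clopper-Pearson) interval for the per-trip success probability after one failure in one trial:

```python
# Exact (Clopper-Pearson) 95% interval for per-trip success probability
# after observing 0 successes in 1 attempt. It only illustrates how little
# one trial constrains; it ignores everything else known about the failure.
alpha = 0.05
n, successes = 1, 0
lower = 0.0                               # no successes observed
upper = 1.0 - (alpha / 2) ** (1.0 / n)    # closed form when successes == 0
print(f"95% CI for P(complete one-way trip): [{lower:.3f}, {upper:.3f}]")
# -> [0.000, 0.975]: one trial alone cannot rule out a high per-trip success
#    rate, which is why repeated runs or per-mile disengagement data matter.
```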
Roboticists in 2016: "Tesla's sensor technology is not capable of this."
Tesla in 2025: coast-to-coast FSD crashes after 2% of the journey
Roboticists in 2025: "See? We said this would happen."
The reason the robot crashed doesn't come down to "it was just unlucky". The reason it crashed is because it's not sufficiently equipped for the journey. You can run it 999 more times, that will not change. If it's not a thing in the road, it's a tractor trailer crossing the road at the wrong time of day, or some other failure mode that would have been avoided if Musk were not so dogmatic about vision-only sensors.
> The video does not give me that information as a prospective Tesla customer.
If you think it's just a fluke, consider this tweet by the person who is directing Tesla's sensor strategy:
https://www.threads.com/@mdsnprks/post/DN_FhFikyUE/media
Before you put your life in the hands of Tesla autonomy, understand that everything he says in that tweet is 100% wrong. The CEO and part-time pretend engineer removed RADAR thinking he was increasing safety, when really he has no working knowledge of sensor fusion or autonomy, and he ended up making the system less safe, leading to predictable jury decisions such as the recent one: "Tesla found partly to blame for fatal Autopilot crash" (https://www.bbc.com/news/articles/c93dqpkwx4xo)
So maybe you don't have enough information to put your life in the hands of one of these death traps, but controls and sensors engineers know better.
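For readers unfamiliar with the term, sensor fusion here just means combining independent noisy measurements so the fused estimate is tighter than either one alone. A minimal sketch with invented noise figures (not anyone's actual hardware characteristics):

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of two independent
# estimates of the same range. All numbers are invented for illustration.
def fuse(est_a, var_a, est_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # always <= min(var_a, var_b)
    return fused, fused_var

camera_range, camera_var = 48.0, 25.0   # camera range estimate, noisy at distance
radar_range, radar_var = 42.5, 1.0      # radar measures range directly
print(fuse(camera_range, camera_var, radar_range, radar_var))
# -> (about 42.7 m, variance about 0.96): tighter than either input alone,
#    which is exactly the property you give up by removing one of the sensors.
```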
Now you can certainly argue that "objects in the road" is a category of failure mode you don't expect to happen often enough to care about, but it's still a technical flaw in the FSD system. I'd also argue it points to a broader problem with FSD because it doesn't seem like it should have been all that hard for the Tesla to see and avoid the object since the humans saw it in plenty of time. The fact that it didn't raises questions for me about how well the system works in general.
But also, I doubt you would break your swaybar running over some retreads
Probably a good parable for Waymo vs Tesla here. One is a generalized approach for the entire world, while the other is carefully pre-mapped for a small area.
More likely you simply drove around the debris and didn't register the memory because it's extremely unlikely that you've never encountered dangerous road debris in 30 years of driving.
229 more comments available on Hacker News