Tesla Said It Didn't Have Key Data in a Fatal Crash, Then a Hacker Found It
Key topics
A bombshell report revealed that Tesla claimed it didn't have crucial data related to a fatal Autopilot crash, only for a hacker to later uncover the evidence, sparking a heated debate about the company's transparency and accountability. Commenters skewered Tesla's denials, with some pointing out that lying to a court is a serious offense that a sensible legal department would avoid at all costs, while others cynically noted that a well-connected CEO like Musk might be able to "wave it away" with a private dinner with the president. The discussion also veered into broader criticisms of Tesla's marketing practices and Musk's alleged attempts to curry favor with Trump to protect his company's interests. As commenters dissected the implications of the report, a consensus emerged that Tesla's actions were at best shady and at worst downright deceitful.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 50m after posting
- Peak period: 157 comments (0-12h)
- Average per period: 53.3 comments
- Based on 160 loaded comments
Key moments
- Story posted: Aug 29, 2025 at 7:15 AM EDT (4 months ago)
- First comment: Aug 29, 2025 at 8:05 AM EDT (50m after posting)
- Peak activity: 157 comments in 0-12h, the hottest window of the conversation
- Latest activity: Sep 6, 2025 at 5:14 PM EDT (4 months ago)
Do we expect them to admit they were outright lying and wrong, considering their leader is a pill-popping, Nazi-salute-making workaholic known to abuse his workers?
But today you just have a private dinner with the president and he'll wave it away.
But it took him four months deeply embedded with the Republican party to come to this conclusion?
It's been blindingly obvious to anyone remotely paying attention to US politics for the last decade (or two, or more, but blindingly so, more recently).
It's always difficult to get a true read on what people believe, and that goes double for very powerful or wealthy people, who have professionals devoted to their image management.
Which, not coincidentally, is also where the idea of Musk as an "amazing, brilliantly intelligent man" comes from. After all, he doesn't have any sort of history of published work or intellectual breakthroughs to support that.
He seems to be good at investment, and at a certain kind of hype-based marketing, including of himself (up to a point.)
The much more likely hypothesis in my view is that he was helping Trump because of personal conviction (only in small parts motivated by naked self-interest).
You should expect rational billionaires to tend politically right out of pure self-interest and distorted perspective alone, because the universal thing that such parties reliably do when in power is cut the tax burden at the top end.
So this isn't so much of an assumption, as taking him at his word.
This is how conservatives keep people saying "both sides!" even though they manufacture whatever is required to make it seem that way.
The problem is not that the Republican party used to be a conservative right-wing party.
What I’m saying is this is not a sports competition where Musk is automatically an opponent of the Democratic party because he supported Trump. He supported Trump in order to improve his chances with the legal system because he knew Trump would be willing to be so corrupt.
Another world might be imagined in which the Democratic party was taken over in 2016 but that is not the world we live in.
> U.S. District Judge Beth Bloom, who presided over the case, said in an order that she did not find “sufficient evidence” that Tesla’s failure to initially produce the data was intentional.
He used to be quite charismatic; I believed him up until about 2017 or so. Then I figured he was just a bit greedy and maybe money had gotten to his head, but still a respectable innovator. However, during 2020 or 2021 (I don’t exactly remember) he started to get quite unpleasant and to make obviously short-term decisions, such as relying only on cameras for self-driving because of chip shortages but dressing it up as an engineering decision.
You will basically never hear another CEO of another publicly traded company say this. I just don't believe that the same person who cares so little about his stock price that he sends a tweet like that (and the stock dropped 10% on it) also is making fraudulent statements to inflate the price. A better explanation is that he just says what he thinks without regard for the stock price, which is also something you won't see any other CEO of a publicly traded company do.
(Democrats aren't left btw)
What is your actual point? What would he stand in front of a judge for, right now, if Harris had won?
You'd have to ask Musk what he feels so guilty about that he had to buy an election.
On the left the details of your sentence structure get criticism for weeks from the public and the press (remember "garbage people"?)
Sending a bunch of script kiddies around and having them cut government funding and gut agencies is not really how you make evidence "vanish"; how would that even work?
And, lastly, jumping in front of an audience at every opportunity and running your mouth is the absolute last thing anyone would ever do if the goal was to avoid prosecution. But it is perfectly in line with a person who has a very big ego and wants to achieve political goals.
I scrutinise beliefs and assumptions even if they are convenient, and you should, too.
I don't believe that Musk's main motivation to participate in the 2024 election was to avoid prosecution, because his actions are not really compatible with this, and there is a much more plausible alternative hypothesis: that he preferred (possibly no longer) the Republican platform for non-prosecution reasons and personal conviction instead, which his actions are very compatible with.
> Labor violations, taxes, the National Highway Traffic Safety Administration investigation of Tesla
Let me say it like this: billionaires generally don't have to care about minor infractions like this at all. The whole system is set up to shield them from liability, and wealth is an excellent buffer against effective prosecution regardless of who is president. There have been a plethora of infinitely more serious infractions with zero real consequences for the CEOs involved, and this is not because they participated in past presidential election campaigns. See: the VW diesel emission fraud, or, much worse, leaded gas in the last century (and what the associated industry did to keep that going).
There is a pretty recent precedent on the other side of the political spectrum: Hillary Clinton. Republicans went on and on about how she belonged in prison. Anyone with half a brain was able to tell that this was not gonna happen, because there simply was no case. Republicans have had basically absolute power since, and --surprise-- Hillary did not go to prison.
What makes you so confident that you are right about Elon, while the people back then were obviously wrong about Hillary (even without hindsight!)?
But that doesn't mean that there is no case against Elon. I'm not sure why you would draw an equivalence between the two. The SEC really does put people in jail, and the stuff with Vlad is bordering on treason.
I think you may have forgotten that we actually used to have a government and there was rule of law.
You can’t seriously think it’s just politics. Elon’s _entire_ fortune is built on these misrepresentations. A competent government that forces them to stop selling self driving, after a series of high profile lawsuits, is something Elon would pay very close attention to, probably even more than the day to day operations of the (mediocre electric car) company that he runs.
Elon is about the stock, the stock price, and the yarn he spins to keep it up. There is nothing else for him.
Look at the things he’s already been in court for: labor and environmental violations, safety problems. A really big one for him is the exposure to suits over his statements as the CEO of a public company: anyone who’s bought TSLA in the last decade or so could, for example, claim that he was knowingly misleading investors about FSD’s readiness or safety record.
Each of those are areas where he absolutely does not want someone with government investigatory powers talking to his employees, demanding internal documents, etc. and in several of them he could have business activities blocked if, for example, they linked approvals to more comprehensive evidence (imagine if they couldn’t sell FSD for use on public roads until it was safer and had to compensate past buyers).
Humans are simply incapable of paying attention to a task for long periods if it doesn't involve some kind of interactive feedback. You can't ask someone to watch paint dry while simultaneously expecting them to have a < 0.5 s reaction time to a sudden impulse three hours into the drying process.
Oh! And also, moving within the lane is sometimes important for getting a better look at what's up ahead or behind you or expressing car "body language" that allows others to know you're probably going to change lanes soon.
> I also don't understand how one has trouble staying between the lines with minimal cognitive input after more than a few months of driving.
Once you have something assist you with that, you'll notice how much "effort" you are actually putting towards it.
I commute mainly on the highway, about 45 minutes to an hour each way every day, and it makes a big difference for driver fatigue. I was honestly a bit surprised. Even though I'm steering, it requires less effort. I don't have my foot on the gas and I'm not having to adjust my speed constantly.
Critically, though, I do have to pay attention to my surroundings. It's not taking so much out of my driving that I can't stay engaged to what's happening around me.
1. AEB brakes violently to a full stop. We experience shock and dismay. What happened? Oh, a kid on a bike I didn't see. I nearly fucked up bad, good job AEB
2. AEB smoothly slows the vehicle to prevent striking the bicycle, we gradually become aware of the bike and believe we had always known it was there and our decision eliminated risk, why even bother with stupid computer systems?
Humans are really bad at accepting that they fucked up, if you give them an opportunity to re-frame their experience as "I'm great, nothing could have gone wrong" that's what they prefer, so, to deliver the effective safety improvements you need to be firm about what happened and why it worked out OK.
FSD doesn’t lull humans into a false sense of security, humans do. FSD doesn’t let you use your phone while it’s on. This alone is an upgrade over most human beings, who think occasional quick phone usage while driving is fine (at least for themselves).
I believe that if you replaced all human drivers in the US with FSD as it exists today, fatalities would go down immediately.
Humans are not a gold standard, and the current median human driver is easy to outperform on safety.
I’m sure things are very different out around the edges, as you note, but the majority of the time humans in cars kill people it isn’t because they were in an edge case - quite the opposite. They were just driving home from the bar like they do every night.
Like always, let the downvotes rain. If you downvote, it would be nice if you could tell me where I am wrong. It might change my view on things.
All this demonstrates is the term “full self driving” is meaningless.
Tesla has a SAE Level 3 [1] product they’re falsely marketing as Level 5; when this case occurred, they were misrepresenting a Level 2 system as Level 4 or 5.
If you want to see true self driving, take a Waymo. Tesla can’t do that. They’ve been lying that they can. That’s gotten people hurt and killed; Tesla should be liable for tens if not hundreds of billions for that liability.
[1] https://www.sae.org/blog/sae-j3016-update
It’s meaningless because Tesla redefines it at will. The misrepresentation causes the meaninglessness.
Also "All this demonstrates is the term “full self driving” is meaningless." prooves my point that it is not missleading.
The levels are set at the lowest common denominator. A 1960s hot rod can navigate a straight road with no user input. That doesn’t mean you can trust it to do so.
> Where did Tesla say FSD is SAE Level 5 approved?
They didn’t say that. They said it could do what a Level 5 self-driving car can do.
“In 2016, the company posted a video of what appears to be a car equipped with Autopilot driving on its own.
‘The person in the driver's seat is only there for legal reasons,’ reads a caption that flashes at the beginning of the video. ‘He is not doing anything. The car is driving itself.’ (Six years later, a senior Tesla engineer conceded as part of a separate lawsuit that the video was staged and did not represent the true capabilities of the car.) [1]”
> Tesla is full self driving with Level 2/3 supervision and in my opinion this is not misleading
This is a tautology. You’re defining FSD to mean whatever Tesla FSD can do.
[1] https://www.npr.org/2025/07/14/nx-s1-5462851/tesla-lawsuit-a...
FSD cannot “do everything a Level 5 system can.” It can’t even match Waymo’s Level 4 capabilities, because it periodically requires human intervention.
But granting your premise, you’d say it’s a Level 2 or 3 system with some advanced capabilities. (Mercedes has a lane-keeping and -switching product. They’re not constantly losing court cases.)
Not urgently. FSD has time-sensitive intervention requirements. Waymo’s time sensitivities are driven by passenger comfort, not safety.
"Self driving" could totally mean the human's own self doing the driving.
Using the SAE levels would be clearer.
There's plenty wrong about the FSD terminology and SAE levels would absolutely be clearer, but I doubt more than a tiny fraction of people are confused as to the target of 'self' in the phrase 'full self driving'.
How many juries and courts have ruled adversely against self-cleaning oven makers?
Tesla has absolutely lied about its software's capabilities. From the lawsuit that went to trial:
“In 2016, the company posted a video of what appears to be a car equipped with Autopilot driving on its own.
‘The person in the driver's seat is only there for legal reasons,’ reads a caption that flashes at the beginning of the video. ‘He is not doing anything. The car is driving itself.’ (Six years later, a senior Tesla engineer conceded as part of a separate lawsuit that the video was staged and did not represent the true capabilities of the car.) [1]”
[1] https://www.npr.org/2025/07/14/nx-s1-5462851/tesla-lawsuit-a...
I just disagree that any significant number of people anywhere have thought the 'self' in 'full self driving' refers to the driver.
Are you saying you would sit in a Tesla without paying much attention, in the same way you're sitting next to someone you trust driving the car? Would you go do phone stuff or look for stuff in your bag while your Tesla is driving you?
I mean I guess people are doing that, but with all the reports and stories I hear, it seems to me it's quite tricky, and you better just watch the road.
So I wouldn't really call that fully self driving. It's kind of like an LLM: it does great most of the time, but occasionally it does something disastrous. And therefore a human needs to be there to correct it. If you let it all go on its own, it's not gonna end well. That's not fully self driving. That's human-assisted driving.
https://electrek.co/2025/09/05/tesla-changes-meaning-full-se...
It needs to have a crash rate equal to or ideally lower than a human driver.
Tesla does not release crash data (wonder why...), has a safety driver with a finger on the kill switch, and only lets select people take rides. Of course according to Elon always-honest-about-timelines Musk, this will all go away Soon(TM) and we will have 1M Robotaxis on the road by December 31st.
Completing a route without intervention doesn't mean much. It needs to complete thousands of routes without intervention.
Keep in mind that Waymos have selective intervention for when they get stuck. Teslas have active intervention to prevent them from mowing down pedestrians.
And no you wouldn't.
Blocking a technology is Luddism. Blocking a company is politics.
The problem here isn't that people think they don't need to pay attention because their car can drive itself and then crash. The problem is that people who know full well that they need to focus on driving just don't because fundamentally the human brain isn't any good at paying attention 100% of the time in a situation where you can get away with not paying attention 99.9% of the time, and naming just can't solve this.
> Moments later, court records show, the data was just as automatically “unlinked” from the 2019 Tesla Model S at the scene, meaning the local copy was marked for deletion, a standard practice for Teslas in such incidents, according to court testimony.
Wow...just wow.
edit: My point is that it was not one lone actor who would have made that change.
It’s just worded to make this sound sketchy. I bet ten bucks “unlinked” just refers to the standard POSIX call for deleting a file.
Also this is not like some process crash dump where the computer keeps running after one process crashed.
This would be like a plane's black box uploading its data to the manufacturer, then deleting itself after a crash.
Deleting the data on the server is totally sketchy, but that’s not what the quoted section is about.
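As a minimal sketch of the mundane pattern that comment describes (assuming a hypothetical server client; none of these names come from Tesla's actual firmware), upload-then-unlink looks like this:

    import hashlib
    import os

    def upload_then_unlink(path, server):
        """Upload a telemetry snapshot, verify the server's receipt,
        then remove the local copy. Sketch only: server.put and the
        hash receipt are invented stand-ins for illustration."""
        with open(path, "rb") as f:
            payload = f.read()

        # Ask the back end to confirm it stored exactly these bytes.
        receipt = server.put(name=os.path.basename(path), data=payload)
        if receipt != hashlib.sha256(payload).hexdigest():
            raise IOError("server receipt does not match local snapshot")

        # The standard POSIX unlink() the comment refers to: drop the
        # directory entry for the local copy once upload is verified.
        os.unlink(path)

On its own, this is an unremarkable phone-home routine; the controversy is entirely about what happens to the server-side copy afterwards.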
That Tesla could say it is normal to detect a collision and not lock any of the data is just insane.
The real issue is that Tesla claimed the company doesn't have the data after every copy was deleted. There's no technical reason to dispose of data related to a crash when you hold so much data on all of the cars in general.
Crash data in particular should be considered sacred, especially given the severity in this case. Ideally it should be kept both on the local black box and on the servers. But anything that leads to it being treated as instantly disposable everywhere, or even just claiming it was deleted, can only be malice.
My money is on nobody built a tool to look up the data, so they have it, they just can't easily find it.
Exactly. The issue is deleting the data on the servers, not a completely mundane upload-then-delete procedure for phoning home. This should have been one sentence, but instead they make it read like a heist.
Maybe this is not appropriate for a car, but that doesn’t excuse the ridiculous breathless tone in the quoted text. It’s the worst purple prose making a boring system sound exciting and nefarious. They could have made your point without trying to make the unlink() call sound suspicious.
The rogue engineer defense worked so well for VW and Dieselgate.
The issue of missing crash data was raised repeatedly. Deleting or even just claiming it was deleted can only be a mistake the first time.
So we just shrug because software boys gotta be software boys? That’s completely insane and a big reason why a lot of engineers roll their eyes about developers who want to be considered engineers.
Software engineers who work on projects that can kill people must act like the lives of other people depend on them doing their job seriously, because that is the case. Look at the aviation industry. Is it acceptable to have a bug in the avionics suite down planes at random and then delete the black boxes? It absolutely is not, and when anything like that happens shit gets serious (think 737 MAX).
The developers who designed the systems are responsible, and so are their managers who approved the changes, all the way to the top. This would not happen in a company with appropriate processes in place.
I'd argue that this data is far less important in cars. Airline safety has advanced to the point where crashes are extremely rare and usually have a novel cause. Data recorders are important to be able to learn that cause and figure out how to prevent it from happening again. Car safety, on the other hand, is shit. We don't require rigorous training for the operators. Regulations are lax, and enforcement even more lax. Infrastructure is poor. We're unwilling to fix these things. Almost all safety efforts focus on making the vehicles more robust when collisions occur, and we're just starting to see some effort put into making the vehicles automatically avoid some collisions. What are we going to learn from this data in cars? "Driver didn't stop for a red light, hit cross traffic." "Driver was drunk." "Driver failed to see pedestrian because of bad intersection design which has been known for fifty years and never been fixed." It's useful for assigning liability but not very useful for saving lives. There's a ton of lower hanging fruit to go after before you start combing through vehicle telemetry to find unknown problems.
Even if you do consider it to be life-critical, uploading the data and then deleting the local copy once receipt is acknowledged seems completely fine, if the server infrastructure is solid. Better than only keeping a local copy, even. The issue there is that they either have inadequate controls allowing data to be deleted, or inadequate ability to retrieve data.
So either the problem is Tesla engineers are fucking stupid (doubtful) or this is a poor business/product design.
My money is on the latter.
This data is yours. Say you were going the speed limit when the accident happened and everyone else claims you were speeding. It would take forever to clear your name, or worse, you could be convicted, if the data was lost.
This is more of "you will own nothing" crap. And mainly so Tesla can cover its ass.
This is like saying "maybe nobody has recently looked at the ad-selection mechanism at Google." That's just not plausible.
I worked a year in airbag control, and they recorded a bit of information if the bags were deployed - seatbelt buckle status was one thing. But I was under the impression there was no legal requirement for that. I'm sure it helps in court when someone tries to sue because they were injured by an airbag. The argument becomes not just "the bags comply with the law" but also "you weren't wearing your seatbelt". Regardless, I'm sure Tesla has a lot more data than that and there is likely no legal requirement to keep it - especially if it's been transferred reliably to the server.
It's very easy to imagine a response to this being (beyond "don't log so much") an audit layer to start automatically removing redundant data.
The externalities of the company are such that people want to ascribe malice, but this is a very routine kind of thing.
The performance ship sailed, like, 15 years ago. We're already storing about 10,000,000 times more data than we need. And that's not even an exaggeration.
Perhaps if there is some sort of crash.
Which of these is evidence of a conspiracy:
The requirements should have been clear that crash data isn't just "implement telemetry upload", a "collision snapshot" is quite clearly something that could be used as evidence in a potentially serious incident.
Unless your entire engineering process was geared towards collecting as much data that can help you, and as little data as can be used against you, you'd handle this like the crown jewels.
Also, to nit-pick, the article says the automated response "marked" the data for deletion, which means it's not automatically deleted as in your reductive example, which doesn't verify it was successfully uploaded (at the very least, && the last rm).
Any competent engineer who puts more than 3 seconds of thought into the design of that system would conclude that crash data is critical evidence and as many steps as possible should be taken to ensure it's retained with additional fail safes.
I refuse to believe Tesla's engineers aren't at least competent, so this must have been done intentionally.
Assuming its not intentionally malicious this is a really dumb bug that I could have also written. You zip up a bunch of data, and then you realize that if you don't delete things you've uploaded you will fill up all available storage, so what do you do? You auto delete anything that successfully makes it to the back-end server, you mark the bug fixed, not realizing that you overlooked crash data as something you might want to keep.
I could 100% see this being what is happening.
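A hedged sketch of how such an overlooked-case bug could arise; the staging path and the server.has check are invented for illustration:

    import os

    OUTBOX = "/var/telemetry/outbox"  # hypothetical staging directory

    def cleanup_uploaded(server):
        """Free local storage by deleting every bundle the back end
        has confirmed. The bug: nothing distinguishes a collision
        snapshot from routine telemetry, so crash data is purged too."""
        for name in os.listdir(OUTBOX):
            if server.has(name):  # upload confirmed by the back end
                os.unlink(os.path.join(OUTBOX, name))  # reclaim space

    # The fix is a one-line exemption that is easy to miss if the
    # retention requirement was never written down, e.g.:
    #     if server.has(name) and not name.startswith("collision_"):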
You're implying it's special for crashes, but we don't know that.
Saying "hey, the upload_and_delete function is used in loads of places!" doesn't free you of the responsibility that you used that function in the crash handler.
You know, if for instance you weld a gas pipeline and an X-ray machine reveals a crack in your work, you can go to jail... but if you treat car software as an app-store item, it's totally fine??
Stop defending ridiculously bad design and corporate practices.
After it confirmed upload to the server? What if it was a minor collision? The car may be back on the road the same day, or get repaired and on the road next week. How long should it retain data (that is not legally required to be logged) that has already been archived, and how big does the buffer need to be?
Then, I don’t know… Check if it was the case? Seriously, it’s unbelievable. It’s a company with a protocol to delete possibly incriminating evidence in a situation where it can be responsible for multiple deaths.
If the car requires that a certain amount of storage is always available to write crash data to, then it doesn't matter what's in that particular area of storage. That reserved storage is always going to be unavailable for other general use.
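A sketch of that reservation idea (the reserve size is an invented figure): general-purpose writes simply refuse to dip into a fixed crash-data budget, so crash snapshots never compete for space.

    import shutil

    CRASH_RESERVE_BYTES = 256 * 1024 * 1024  # invented figure

    def can_write_general_data(mount_point, size):
        """Permit routine telemetry writes only if they leave the
        crash-data reserve untouched, so a collision snapshot never
        has to evict anything to find room."""
        free = shutil.disk_usage(mount_point).free
        return free - size >= CRASH_RESERVE_BYTES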