Palisades Fire Suspect's Chatgpt History to Be Used as Evidence
Source: rollingstone.com · Tech story · High profile · Posted 3 months ago, last active 3 months ago
Debate score: 80/100 (controversial, mixed)
Key topics: Artificial Intelligence, Privacy, Law Enforcement
A suspect in the Palisades Fire case had their ChatGPT history used as evidence, raising concerns about AI data privacy and its implications in legal cases.
Snapshot generated from the HN discussion
Key moments:
- Story posted: Oct 8, 2025 at 5:53 PM EDT
- First comment: Oct 13, 2025 at 11:31 PM EDT (5 days after posting)
- Peak activity: 96 comments during the 132-144h window
- Latest activity: Oct 15, 2025 at 1:25 PM EDT
HN item ID: 45521032 (story) · Last synced: 11/20/2025, 8:23:06 PM
In this case, there's an explicit middle point - chatgpt.com resolves to a CloudFlare server, so CloudFlare is actually one of the ends here. It likely acts as a reverse proxy, meaning that it will forward your requests to a different, OpenAI-owned server. This might be over a new HTTPS connection, or it might be over an unencrypted HTTP connection.
It really is super important to emphasize this point. End-to-end encryption is not simply that your data is encrypted between you and the ultimate endpoint. It's that it can't be decrypted along the way - and decrypting your HTTPS requests is something that CloudFlare needs to do in order to work.
(To be clear, I'm not accusing CloudFlare of anything shady here. I'm just saying that people have forgotten what end-to-end encryption really means.)
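To make the "Cloudflare is one of the ends" point concrete, here's a toy sketch (invented names, nothing from Cloudflare's actual stack) of why a TLS-terminating reverse proxy necessarily handles your plaintext:

```python
# Toy model of a TLS-terminating reverse proxy. The client's HTTPS
# session ends at the proxy, so the proxy holds the decrypted request
# before opening a separate connection (HTTPS or plain HTTP) upstream.

seen_by_proxy = []  # anything appended here was visible in plaintext

def upstream_origin(request):
    # Stand-in for the provider-owned origin server.
    return {"status": 200, "body": "echo: " + request["body"]}

def tls_terminating_proxy(request):
    # At this point the proxy has the full plaintext: path, headers, body.
    seen_by_proxy.append(request["body"])   # it *could* log or inspect it
    return upstream_origin(request)         # forwarded over a new connection

response = tls_terminating_proxy({"body": "my private prompt"})
print(seen_by_proxy)       # the proxy saw the plaintext
print(response["status"])
```

With true end-to-end encryption, nothing in the middle could populate `seen_by_proxy`; with ordinary HTTPS behind a CDN, the edge can.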
Just for general peace of mind, use a privacy-oriented search engine. I use leta.mullvad.net or search.brave.com usually. I haven't used Google in years. And if you just happen to have a curiosity about something fringe that might be misinterpreted in the wrong circumstances, download an LLM and use it locally.
If you want real and total anonymous search, use a public computer.
No they can't. People write fiction, a lot of it. I'm willing to bet that the number of fiction-related "incriminating" questions to chatgpt greatly outnumbers the number of "I'm actually a criminal" questions.
Also wonder about hypotheticals, make dumb bets, etc.
End of the day, a chimp with a 3 inch brain has to digest the info tsunami of flagged content. That's why even the Israelis didn't see Oct 7th coming.
Once upon a time I worked on a project for banks to flag complaints about fraud in customer calls. Guess what happened? The system registered a zillion calls where people talked about fraud worldwide, the manager in charge was assigned 20 people to deal with it, and after naturally getting overwhelmed and scapegoated for all kinds of shit, he put in a request for a few hundred more, saying he really needed thousands of people. Corporate wonderland gave him another 20 and wrote a para in their annual report about how they are at the forefront of combating fraud etc etc.
This is how the world works. The chimp troupe hallucinates across the board, at the top and at the bottom about what is really going on. Why?
Because that 3 inch chimp brain has hard limits to how much info, complexity and unpredictability it can handle.
Anything beyond that, the reaction is similar to ants running around pretending they are doing something useful anytime the universe pokes the ant hill.
Herbert Simon won a Nobel Prize for telling us we don't have to run around like ants and bite everything anytime we are faced with things we can't control.
If you see CSAM posted on the service then you're required to report it to NCMEC, which is intentionally designed as a private entity so that it has 4th amendment protections. But you're not required to proactively go looking for even that.
In reality, Uber records and conflicting statements incriminated him. He seems to be the one who provided the ChatGPT record to try to prove that the fire was unintentional.[1]
> He was visibly anxious during that interview, according to the complaint. His efforts to call 911 and his question to ChatGPT about a cigarette lighting a fire indicated that he wanted to create a more innocent explanation for the fire's start and to show he tried to assist with suppression, the complaint said.
[1] https://apnews.com/article/california-wildfires-palisades-lo...
So they have probably developed the tool and, once it existed, been secretly compelled to use it.
My understanding is that Apple’s executives were surprised at the forcefulness of the opposition to their stand together with the meekness of public support.
(Having worked on privacy legislation, I get it. You work on privacy and like two people call their electeds, because most people don't care about privacy, while those who do are predominantly civic nihilists or lazy.)
https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
If Apple had simply had the text records, they would have had to comply with the government order to provide them.
Not Mullvad. Swedish police showed up looking for some data; Mullvad didn't even collect what they wanted, and the police left empty-handed.
See Wilson v. United States
https://supreme.justia.com/cases/federal/us/221/361/
The specifics vary by country, but basically all legal systems require you to comply with what they say and impose penalties if you don't. I don't know if there are any countries where it's legal to ignore the courts, but I would imagine that their court systems don't work too well.
Tech is full of people who make extremely good money from other people's personal information, and they plug their ears and sing "La-la-laaaa-I-can't-hear-yooouuu-la-la-la" when confronted with information that says what they are doing has problems. Not just techies. That's fairly basic human nature.
This is pretty much the embodiment of Upton Sinclair's quote: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."
For my part, I don't collect any data that I don't need; even if it makes it more difficult to do stuff like administer a server.
https://openai.com/index/response-to-nyt-data-demands/ (yes, that's written 100% from OpenAI's perspective)
In particular:
> The New York Times is demanding that we retain even deleted ChatGPT chats and API content that would typically be automatically removed from our systems within 30 days.
> ...
> This data is not automatically shared with The New York Times or anyone else. It’s locked under a separate legal hold, meaning it’s securely stored and can only be accessed under strict legal protocols.
> ...
> Right now, the court order forces us to retain consumer ChatGPT and API content going forward. That said, we are actively challenging the order, and if we are successful, we’ll resume our standard data retention practices.
Presumably what's of evidentiary value is the tokens you type, though.
As I understand it, some people treat chatgpt like a close personal friend and therapist. Confiding their deepest secrets and things like that.
Same difference as: "Allowing minors into casinos... is it any different from letting them play cards with their friends at home with their pocket money?"
I take issue with the "is designed to" phrase. That implies an intentionality upon OpenAI (and others) to create something that acts as a therapist or confidant. It is designed to respond to you in a way that you ask it to. The agency for confiding deep secrets to a cloud service is entirely upon the human typing in the text.
If you don't try to make it your friend, it doesn't try to act like one.
There is text input and text output; it's really not that complicated.
If used in court, the jury would be given access to the full conversation, just as if it were an email thread.
I'm sure that there are many people who thoughtlessly type very personal things into chatgpt including things that might not look so good for them if they came out at trial.
(and yah, yada yada about journalism no longer, or maybe never, being about truth, I get it, but still IMO the field should be held to the higher journalistic standard)
[1] - https://web.archive.org/web/20251008204636/https://www.rolli...
It's no different than the contents of your home. Obviously we don't want police busting into random homes to search, but if you're the suspect of a crime and police have a warrant, it's entirely reasonable to enter a home and search. I guess it can't necessarily help clear you the way an alibi would, but if the party is guilty it could provide things like more certainty, motivation, a timeline of events, etc.
I think people conflate the two. They hold that certain things should remain private under all circumstances, where I believe the risk is a large dragnet of surveillance that affects everyone as opposed to targeted tools to determine guilt or innocence.
Am I wrong?
We’ve long ago entered a reality where almost everyone has a device on them that can track their exact location all the time and keeps a log of all their connections, interests and experiences. If a crime occurs at a location police can now theoretically see everyone who was in the vicinity, or who researched methods of committing a crime, etc. It’s hard to balance personal freedoms with justice, especially when those who execute on that balance have a monopoly on violence and can at times operate without public review. I think it’s the power differential that makes the debate and advocacy for clearer privacy protection more practical.
Plenty of big services will just give cops info if they ask for it. It's legal. Any company or individual can just offer up evidence against you and that's fine, but big companies will have policies that do not require warrants.
Despite this atrocious anti-privacy stance, cops STILL clear around half of violent crimes, and that's only in states with rather good police forces, usually involving higher requirements than "A pulse" and long training in a police Academy. Other states get as low as 10% of crimes actually solved.
When you've built a panopticon and cops STILL can't solve cases, it's time to stop giving up rights and fix the cops.
I think this is where policy is failing. No clear protections on privacy and collusion between corporations and the state is allowed. It’s outdated and impractical to have the limits on search and seizure at physical boundaries but not electronic ones.
In other words: if it's Joe Schmoe's Haberdashery forwarding CCTV footage to police to elucidate a crime right in front of their door, sure, it's fine and dandy, they do have an interest in not having crime in front of their door. But when Revolving Door MegaCorp builds a dragnet of surveillance AND is also selling cloud contracts to the government by the billion, it becomes a lot more murky if they just start snitching on everything they see.
As a naturally curious person, who reads a lot and looks up a lot of things, I've learned to be cautious when talking to regular people.
While considering buying a house I did extensive research about fires. To do my job, I often read about computer security, data exfiltration, hackers and ransomware.
If I watch a WWI documentary, I'll end up reading about mustard gas and trench foot and how to aim artillery afterwards. If I read a sci-fi novel about a lab leak virus, I'll end up researching how real virus safety works and about bioterrorism. If I listen to a podcast about psychedelic-assisted therapy, I'll end up researching how drugs work and how they were discovered.
If I'm ever accused of a crime, of almost any variety or circumstance, I'm sure prosecutors would be able to find suspicious searches related to it in my history. And then they'd be leaked to the press or mentioned to the jury as just a vague "suspect had searches related to..."
The average juror, or the average person who's just scrolling past a headline, could pretty trivially be convinced that my search history is nefarious for almost any accusation.
DAs for bigger departments are likely well equipped, well trained, and well practiced at tugging on the heartstrings of average juries, which are not average people, because jury selection is often a bad system.
1. How wide is the search net dragged?
2. Who can ask for access?
The first shows up in court cases about things like "which phones were near the crime" or "who in the area was talking about forest fires to ChatGPT?" If you sweep the net far enough, everyone can be put under suspicion for something.
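As a toy illustration of how wide that net can sweep (invented data, not any real warrant system), a geofence-style query just intersects a location radius with a time window, and everyone who matches lands in the suspect pool:

```python
from dataclasses import dataclass

# Hypothetical location pings; coordinates are simplified planar
# units instead of lat/lon, and times are hours since some epoch.
@dataclass
class Ping:
    device: str
    x: float
    y: float
    t: float

pings = [
    Ping("alice", 0.1, 0.2, 1.0),
    Ping("bob",   5.0, 5.0, 1.1),   # nowhere near the scene
    Ping("carol", 0.3, 0.1, 1.4),
    Ping("dave",  0.2, 0.2, 9.0),   # near, but hours later
]

def geofence(pings, cx, cy, radius, t0, t1):
    # Everyone whose device pinged within `radius` of (cx, cy)
    # during [t0, t1] becomes part of the returned suspect pool.
    return sorted({p.device for p in pings
                   if (p.x - cx) ** 2 + (p.y - cy) ** 2 <= radius ** 2
                   and t0 <= p.t <= t1})

print(geofence(pings, 0.0, 0.0, 1.0, 0.0, 2.0))  # ['alice', 'carol']
```

Widen the radius or the window and the pool grows accordingly, which is exactly the dragnet concern.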
A fun example of the second from a few years ago in the New York area was toll records being accessed to prove affairs. While most of us are OK with detectives investigating murders getting access to private information, having to turn it over to our exes is more questionable. (And the more personal the information, the less we are OK with it.)
The modern abuse of the third-party doctrine is a different topic. Modern usage of the third-party doctrine claims (for instance) that emails sent and received via Gmail are actually Google's property and thus they can serve Google a warrant in order to access anyone's emails. The old-timey equivalent would be that the police could subpoena the post office to get the contents of my (past) letters -- this is something that would've been considered inconceivably illegal a few decades ago, but because of technical details of the design of the internet, we have ended up in this situation. Of course, the fact there are these choke points you can subpoena is very useful to the mass surveillance crowd (which is why these topics get linked -- people forget that many of these mass surveillance programs do have rubber-stamped court orders to claim that there is some legal basis for wiretapping hundreds of millions of people without probable cause).
In addition (in the US) the 5th amendment allows you the right to not be witness against yourself, and this has been found to apply to certain kinds of requests for documents. However, because of the third-party doctrine you cannot exercise those rights because you are not being asked to produce those documents.
In the past, if you put evidence in a safe and refused to open it, the police could crack it, drill it, cut it open, etc. if all else failed.
Modern technology allows wide access to the equivalent of a perfectly impregnable safe. If the police get a warrant for your files, but your files fundamentally cannot be read without your cooperation, what then?
It comes down to three options: accept this possibility and do without the evidence; make it legally required to unlock the files, with a punishment at least as severe as you're facing for the actual crime; or outlaw impregnable safes.
There doesn't seem to be any consensus yet about which approach is correct. We see all three in action in various places.
Do you think OpenAI won't produce responsive records when it receives a lawful subpoena?
And very rightly so, regardless of whether Uber records incriminated this person.
> Investigators, he noted, allege that some months prior to the burning of the Pacific Palisades, Rinderknecht had prompted ChatGPT to generate “a dystopian painting showing, in part, a burning forest and a crowd fleeing from it.” A screen at the press conference showed several iterations on such a concept...
Video here, including the ChatGPT "painting" images circa 1m45s: https://xcancel.com/acyn/status/1975956240489652227
(Although, to be clear, it's not like the logs are the only evidence against him; it doesn't even look like parallel construction. So if one assumes "as evidence" usually implies "as sole evidence," I can see how the headline could be seen as sensationalizing/misleading.)
Talking to an LLM like a human is like talking to a mirror. You're just shaping its responses based on what you say. Quite sad to see stuff like the "myboyfriendisai" subreddit.
https://en.wikipedia.org/wiki/Third-party_doctrine
Godzilla cats really seems like it needs a movie.
A lot of people, especially younger ones, seem to use ChatGPT as a neutral third party in every important decision, so it probably has more extensive records of their thoughts than social media ever did. In fact, people often curate their Instagram feeds, but ChatGPT gets their unfiltered thoughts.
As far as I’ve heard from other articles, he lived in the Palisades at the time and worked as an Uber driver there. He moved to Florida after the fire. This is not very well researched.
Are the 12 deaths separate charges? A sentence of 5-20 years seems very light for 12 deaths. This article is clearly focused on the AI aspect of it, so it doesn't cover the charges at all really.
That seems clear cut first degree murder to me, as I understand it (I'm not sure if it requires a specific person to be murdered but a pre-meditated act that kills people seems like it'd qualify to me).
Leaving aside the fact that we don't know yet if he actually started the fire: anyone who starts any fire without appropriate control measures (like extinguishers or containing the fire in something made to contain it) can theoretically be charged under the law for negligence - and practically will, if things go south.
And in a time when there's ample fuel for fires on the ground and the weather conditions are favorable to large fires (e.g. hot, low humidity, clear skies and strong winds), any kind of fire (even smoking - cigarette butts thrown out of car windows are a particularly bad fire source in Croatia) can quickly escalate into a full-blown forest fire. Even things that one would not perceive to be dangerous can cause fires... an all too common occurrence is a diesel car with a freshly regenerated DPF being parked on a lot that used to be overgrown with weeds that have since dried out. The heat from the DPF (> 500 °C) is more than enough to ignite the dried-out weeds (ignition point around 300 °C).
So, it's not a stretch to assume that anyone starting an open fire should know it might escalate into a deadly disaster. And even the reckless cases that I mentioned (smokers, car drivers) can be charged as manslaughter here in Europe.
>Raymond Lee Oyler, 54, of Beaumont, was sentenced to death for starting the Esparanza Fire in October 2006. He was convicted of five counts of first-degree murder, 19 counts of arson and 16 counts of possessing incendiary devices. https://kesq.com/news/2025/05/05/ca-supreme-court-upholds-de...
https://en.wikipedia.org/wiki/Esperanza_Fire
I think those things are to be decided in court. As for the charges and times, it's mentioned only the ranges for arson but there's nothing to stop them bringing charges of manslaughter for example. They'll build evidence and charge as such. It's the process.
Liability would still be on him.
For those reading: this is the difference between proximate cause and actual cause. Yes it's true that but for the fire being started in the first place, the fire would not have rekindled. But once professional firefighters arrive to put out the fire, it's not foreseeable by a normal person that the fire could be rekindled, so that person wouldn't be liable. The harm is too remote. The firefighters may even be grossly negligent because they are professionals, intervened, and the fire rekindled. A person negligently failing to fully extinguish their own fire would lead to liability, though.
So they can prove that this was one continuous combustion reaction? They can show beyond reasonable doubt that, despite the observations of the fire going out that convinced a team of firefighting professionals that it had stopped, it in fact continued and nothing else ignited a new fire in this location where fires naturally occur?
You may find the "Thin skull rule" interesting for criminal liability
I have a "saved" history in Google Gemini. The reason I put "saved" in scare quotes is that Google feels free to change the parts of that history that were supplied by Gemini. They no longer match my external records of what was said.
Does ChatGPT do the same thing? I'd be queasy about relying on this as evidence.
1. I engaged with Gemini.
2. I found the results wanting, and pasted them into comment threads elsewhere on the internet, observing that they tended to support the common criticism of LLMs as being "meaning-blind".
3. Later, I went back and viewed the "history" of my "saved" session.
4. My prompts were not changed, but the responses from Gemini were different. Because of the comment threads, it was easy for me to verify that I was remembering the original exchange correctly and Google was indulging in some revision of history.
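One low-tech way to detect that kind of after-the-fact revision (a personal habit, not a feature of Gemini or ChatGPT) is to fingerprint each exchange when you first see it, keep the hashes somewhere you control, and diff the "saved" history against them later:

```python
import hashlib
import json

# Hash each exchange at the time you first see it. Any later change
# to the stored transcript, however small, changes the fingerprint.
def fingerprint(exchange: dict) -> str:
    canonical = json.dumps(exchange, sort_keys=True)  # stable key order
    return hashlib.sha256(canonical.encode()).hexdigest()

original = {"prompt": "Why is the sky blue?",
            "response": "Because of Rayleigh scattering..."}
saved_hash = fingerprint(original)

# Later: re-fetch the "saved" session and compare.
refetched = {"prompt": "Why is the sky blue?",
             "response": "Because of Rayleigh scattering... (silently revised)"}
print(fingerprint(refetched) == saved_hash)  # False -> the record changed
```

It won't tell you what changed, only that something did, which is the part that matters if a transcript ever needs to stand up as evidence.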
Rewriting history requires compute, which would make it deliberate. Why would someone burn compute to rewrite your transcripts, given that rewrites are not free? Once again, not defending Google; just trying to think through what's going on.
I can't think of any reason it would make sense to do that, though.
This would be a major tech news story. "Google LLM rewriting user history" would be a scandal. And since online evidence is used in court, it could have significant legal implications. You'd be helping people.
This is much too important to merely be a comment on HN.
If you cause a problem, report it, then the authorities responsible for dealing with those problems take care of it and go home, what does it mean?
Are the authorities then partially responsible for not ensuring the fire was put out properly before leaving the area?
Is he even guilty at all, given that he fulfilled his duty and reported the problem after unintentionally causing it?
However many US states have a "felony murder rule" which as I understand it says if you did something that resulted in death, and it was in the course of a felony then it can be tried as murder. Most of them rule out some felonies (felony assault + death => murder is a stupid way to apply such a rule and so is usually ruled out) and some only rule in a handful like rape and prison escapes, but felony arson + death => murder might play.
The distinction between murder and manslaughter is malice aforethought. For first degree murder, you must have intended the death of a particular person. For second degree murder, you need only have known that you could kill someone, and did it anyways. This specifically includes things done with extreme recklessness.
So to prove second degree murder you need to show 1) you intentionally did something, 2) you knew (or should have known) it could kill someone, and 3) someone died.
These can be proven for arson. You have to prove the intent to start the forest fire. Everyone knows (or should know) that forest fires can kill people. You have to prove that someone died from the fire.
That is why arson qualifies as second degree murder. Just like, say, failing to maintain the brakes on a fleet of trucks. (True story. My nephew was the unlucky driver of such a truck whose brakes failed...)
The textbook example is running someone over while fleeing the scene of a robbery. You didn't have mens rea for murder, the crime you intended to commit was robbery. But you chose to commit a felony, and someone did die because of it. Not only that, it's potentially capital murder, because it was for financial gain (Newsom put a moratorium on felony murder death sentences, so that's not a thing at the moment).
It is easier to prove the felony murder: was it on the list of felonies? To prove second degree murder, you have to demonstrate "extreme recklessness". Prosecutors will often pile up multiple charges like this, to give the jury as many options as possible to convict.
I'm not a lawyer. But in this case the fact that he called emergency services could be evidence against extreme recklessness, and therefore second degree murder. But felony murder still fits.
The case for malicious intent is extremely flimsy and based entirely on circumstantial evidence. The strongest piece of evidence they have for arson is that he threatened to burn down his sister's house, but here's the thing: it would be extremely unusual for an arsonist to switch from targeted arson based on anger or revenge to thrill-seeking arson setting unmotivated fires.
This is all pet theories and silliness for purposes of discussion. I freely admit that I haven't built a case here that's strong enough to withstand even a gentle poking by an opponent.
I don't know as much about arson, but I did go through the same serial killer phase as every morose teen, and one of the things that stuck with me is the way some offenders escalate from simple peeping and stalking all the way up to murder.

Another thing that stuck with me is how, in some cases where there is an intended victim, especially for revenge, an obsessed mind will often home in on a single characteristic of the intended victim and then transfer victimhood to strangers based on that characteristic. The woman who "wronged" you is a skinny blonde who smokes cigarettes, so you go out looking for skinny blondes who smoke cigarettes to victimize in her stead, because in your unconscious brain that matches the pattern of behavior that would soothe the offender's wounded entitlement.

Given these facts about the nature of obsessive, vengeance-oriented crime, and the fact that the serial killer/arsonist crossover is so common that arson is one of the Macdonald triad of behaviors common to serial killers, there's a non-zero possibility that we're seeing a revenge fantasy transferred to another victim.

There's also the fact that obsessed criminals tend to want to roleplay or practice, and a lot of times their first "serious" crime is one of these roleplay/practice sessions getting out of control. This feels like that to me, though I can't prove it. It's like he wanted to see what starting a fire would be like, assumed that the local VFD would get it under control and in doing so would also give him an idea of what the response looked like so he could optimize for escape. Then either it got out of control, or he tried to inject himself into the emergency response (another common thing among obsessed criminals: many like to relive the crime by being part of the investigation, like to tease investigators by being right under their nose, or believe that by injecting themselves into the investigation they can steer it away from them).
Again, does any of this hold up in a court of law? Of course not. Does it hold up in the court of a thread on a post on HN? Maybe; we're here to talk.

I'm of a mind that we didn't do anything to fix whatever it was that made people serial killers, yet there aren't really any serial killers anymore, so something must have happened to that behavior. Perhaps stranger arson is a way that the same drivers that led to serial murder before the ~~panopticon~~internet are driving new behaviors now.

Intuitively I'm highly confident that the stranger spree killings we see now are driven by those same pressures in a lot of perpetrators, and that the change in MO is about taking advantage of lag time in law enforcement's ability to correlate facts. Before the internet you could drive a few hours down the road and start using a new name, and unless your old name was already in the system there was basically no way for anyone to know. Obsessed criminals could offend, disappear, and wait it out. Nowadays we're really good at IDing an offender, so obsessive murders have to be one and done; another strategy could be crimes small enough that they don't trigger the kind of dragnet response that involves things like checking all the CCTV cameras in a ten-mile circle around the crime.
edit: everyone seems focused on the "serial killer phase" line that was really intended to be a throwaway. I just mean that I read a lot about them and thought it was shocking and cool to have a "favorite". Gross shit, but I assure you no one was ever in any amount of physical or psychic danger beyond declaring me a pizza cutter (all edge and no real point).
I’m sorry, what? As a former morose teen, I can assure you that a “serial killer phase” is not a universal experience
When I was a kid, some teens who were into darker themes (not all but definitely some) had a phase where they were interested in serial killers. It always struck me more as shallow “edgy teen” posturing than anything else. After Columbine this demographic moved on to other interests, as even a performative interest in real-world violence could lead to official harassment.
Comment OP here: it was exactly this; the safest, most boring possible way to be transgressive. I didn't talk about it as much as other kids who were like this, so I didn't have to stop once people started to actually care/respond, but I did go from keeping my Harold Schechter books on my bookshelf to in a special box in my closet. Merely knowing about these things gave me a little secret thrill like I was some sort of badass with extreme psychic warding able to go into some secret space that most people couldn't stand. In reality I was just desensitized cuz abusive mom and I'm really glad I grew out of it before I got to the part some kids get to where learning isn't enough and they start experimenting.
I had unmedicated bipolar 1 as a teenager. If anyone was going to go through a serial killer phase I would have.
Even as an adult I had some pretty bad episodes prior to being diagnosed early thirties. My brain went some pretty bad, dark places but it never went to serial killer.
I can only hope that you're mistaken about what a serial killer actually is. Mass murder and spree killing, depraved as they are, have motives that are recognizable by the average person. Serial killing is a special kind of insanity.
The end of leaded gasoline may play a significant role. And/or a reduction in other chemical hazards. Violence was already declining pre-panopticon. https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis
You did, however, allude to one of my favorite facts about violent crime in America: far from being a cause of violent crime, the rise of violent video games has been correlated with the most dramatic drop in crime in all of recorded history. That's right y'all, it's at least arguable that not only did Doom not inspire violence, it may have actually made us safer.
Electronic data brokers started in the 1950s. The early decades were less insidious but the box was opened. Invasive government electronic surveillance started before Google was founded.
1) This whole case hinges on intentionality, and the gov't intends to prove that he set the fire intentionally. Part of the ChatGPT history is images he generated of fires and people running from fires. If he intentionally set a fire in a wildfire-prone area, it doesn't matter that he didn't intend it to become a wildfire, or what he did after he set the fire.
2) If you'd like to have emergency services that are either prohibitively expensive or simply nonexistent, one great way to do that is to make first responders responsible for not doing a good enough job in their responses. I'm honestly not sure what we'd do in cases of blatantly neglectful behavior by a first responder during an emergency response, but beyond intentional malpractice we generally extend an assumption of good faith to anyone who bothers to show up and help during an emergency like this. The first time I get sued for not putting a fire out fast enough or completely enough is the last time I put out a fire.
https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no...
130 more comments available on Hacker News