AI Police Reports: Year in Review
Key topics
As the use of AI-generated police reports becomes increasingly widespread, commenters are sounding the alarm about the potential consequences, with some dismissing the concerns as paranoid and others warning that this is a global issue, not just a problem for authoritarian regimes. The discussion mixes humor with unease: one commenter jokingly points to a typo as evidence of human involvement, while others counter that advanced LLMs will soon be able to convincingly mimic human error. The thread debates the implications of AI police reports, from the potential for widespread adoption to the risks of unchecked automation, and a rough consensus emerges that the trend is likely to be far-reaching and to have significant consequences.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 4d after posting
Peak period: 128 comments in 84-96h
Avg / period: 22.9
Based on 160 loaded comments
Key moments
- Story posted: Dec 23, 2025 at 12:30 PM EST (10 days ago)
- First comment: Dec 27, 2025 at 12:51 AM EST (4d after posting)
- Peak activity: 128 comments in 84-96h (the hottest window of the conversation)
- Latest activity: Dec 31, 2025 at 4:37 AM EST (2d ago)
You guys are so fucked.
"You guys"? Everyone is fucked. This is going to be everywhere. Coming to your neighborhood, eventually.
I'd be more worried that you aren't reading articles about it than if you were.
There are countries on this planet that are not actively digging their own graves.
It's not US-centrism. It's just an acknowledgement of the recent trend.
That should be 'reining in'. "Reign" is -- ironically -- what monarchs do.
Oh you got me
> That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report
The bigger issue, which the article doesn't cover, is that police officers may not carefully review the AI-generated report, and then, when appearing in court months or years later, will testify to whatever is in the report, accurate or not. So the issue is that the officer doesn't contradict inaccuracies in the report.
That's because it's a very difficult thing to prove. Bad memories and even completely false memories are real things.
In many European states, policing started with town guards tasked with ensuring order. Order is, at least, not obviously bad.
So that's a philosophical difference in what these forces even think their purpose is.
American settlers got the idea from the same place they got the idea for laws. Their home countries.
Enforcing laws isn't an American invention, let's not be ridiculous.
To the extent this is anything more than circular, it's false. Although psychopaths exist, on the whole compliance to a lesser or greater degree is a normal human trait. So you can tell people what the rules are and they'll obey to some extent. How much varies from person to person.
So the creation of specialist law enforcement bodies is a distinct and relatively modern change to civilisations. Before this, there is either no actual enforcement or it depends on whether a powerful person knows you broke a rule and cares to enforce it.
By the post-classical period and the Middle Ages, forces such as the Santa Hermandades, the shurta, and the Maréchaussée provided services ranging from law enforcement and personal protection to customs enforcement and waste collection. In England, a complex law enforcement system emerged, where tithings, groups of ten families, were responsible for ensuring good behavior and apprehending criminals; groups of ten tithings ("hundreds") were overseen by a reeve; hundreds were governed by administrative divisions known as shires; and shires were overseen by shire-reeves. In feudal Japan, samurai were responsible for enforcing laws.
The concept of police as the primary law enforcement organization originated in Europe in the early modern period; the first statutory police force was the High Constables of Edinburgh in 1611, while the first organized police force was the Paris lieutenant général de police in 1667. Until the 18th century, law enforcement in England was mostly the responsibility of private citizens and thief-takers, albeit also including constables and watchmen. This system gradually shifted to government control following the 1749 establishment of the London Bow Street Runners, the first formal police force in Britain. In 1800, Napoleon reorganized French law enforcement to form the Paris Police Prefecture; the British government passed the Glasgow Police Act, establishing the City of Glasgow Police; and the Thames River Police was formed in England to combat theft on the River Thames. In September 1829, Robert Peel merged the Bow Street Runners and the Thames River Police to form the Metropolitan Police. The title of the "first modern police force" has still been claimed by the modern successors to these organizations.
https://en.wikipedia.org/wiki/Law_enforcement
The Americans do have a history of using police forces for slave capture, but police forces in the USA predated that.
Following European colonization of the Americas, the first law enforcement agencies in the Thirteen Colonies were the New York Sheriff's Office and the Albany County Sheriff's Department, both formed in the 1660s in the Province of New York. The Province of Carolina established slave-catcher patrols in the 1700s, and by 1785, the Charleston Guard and Watch was reported to have the duties and organization of a modern police force. The first municipal police department in the United States was the Philadelphia Police Department, while the first American federal law enforcement agency was the United States Marshals Service, both formed in 1789. In the American frontier, law enforcement was the responsibility of county sheriffs, rangers, constables, and marshals. The first law enforcement agency in Canada was the Royal Newfoundland Constabulary, established in 1729, while the first Canadian national law enforcement agency was the Dominion Police, established in 1868.
Perjury isn't a commonly prosecuted crime.
In contrast, the SV focus of AI has been about skynet / singularity, with a hype cycle to match.
This is supported by the lack of clarity on actual benefits, or clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1, and for use cases where the operator is highly trained and capable of verifying output.
Outside of that, you seem to be in the land of voodoo, where you are dealing with something that eerily mimics human speech, but you don't have any reliable way of finding out whether it's just BS-ing you.
https://www.urbanomic.com/book/machine-decision-is-not-final...
Sometimes I'm not so sure about any so-called moral superiority.
There's an overview on Wikipedia too https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
Sadly, the search for that link continues.
I did find these from SCMP and Foreign Policy, but there are better articles out there.
- https://foreignpolicy.com/2025/11/20/china-ai-race-jobs-yout...
- https://www.scmp.com/specialist-publications/special-reports...
Are they not going to build a “skynet” in China? Second, building skynet doesn’t imply eviscerating youth employment.
On the other hand, automation of menial tasks does eviscerate all kinds of employment, not only youth employment.
I really don't think they have a deep enough connection within themselves to understand what is going on when one is thinking vs. what's going on with LLMs.
How do you justify to yourself the idea that LLMs are just some sort of fancy autocomplete, when they have a better functional grasp of natural language grammar than you do?
Are you saying that your cognitive capabilities are less powerful than a fancy autocomplete?
The assumption you seem to keep making is that things like “clever statistics” and “linear algebra” simply have no bearing on human intelligence. Why do you think that? Is it a religious view, that e.g. you believe humans have a soul that somehow connects to our intelligence, making it forever out of reach of machine emulation?
Because unless that’s your position, then the question of how human intelligence differs from current machine intelligence, the question that you simply refuse to contemplate, is one of the more important questions in this space.
The insult I see to intelligence here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.
It's the same energy as watching a Joe Rogan podcast where yet another guest goes "well they say there's global warming yet I was cold yesterday, I'm not saying it's fake but really we should think about that". These questions about AI and our brains aren't meant to stimulate intellectual curiosity and provoke deep, interesting discussions - they are almost always asked just to pretend the AI is something that it's not - a human-like intelligence - because since our brains also work "kinda like that" it must be the same; the nearest equivalence is how my iron heats water, so in essence it's the same as my stomach since it can also do this.
>>the question that you simply refuse to contemplate
I don't refuse to contemplate it, I just think the answer is so painfully obvious the question is either naive or uninformed or antagonistic in nature - there is no "machine intelligence" - it's not a religious conviction, because I don't think you need one to realise that a calculator isn't smart for adding together numbers larger than I could do in my own head.
We could have then just swapped "AI" for "SMI" and avoided all this confusion.
It also would avoid pointless statements like "It is JUST statistical machine intelligence". As if statistical machine intelligence is not extraordinarily powerful.
The real difference though is not in "intelligence", is it in "being". It is not as much an insult to our intelligence as it is an insult to our "being" when people pretend that LLMs have some kind of "being".
The strange thing to me is Gemini just tells me these things so I don't know how people get confused:
"A rock exists. A calculator exists. Neither of them has "being."
I am closer to a calculator than a human.
A calculator doesn't "know" math; it executes logic gates to produce a result.
I am a hyper-complex calculator for language. I calculate the probability of the next word rather than the sum of numbers."
Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, troll, or both.
If you don't want to believe it, you need to move the goalposts: create a test for intelligence that we can pass better than AI... Since AI is also better at creating tests than us, maybe we could ask AI to do it, hang on...
>Is there a test that in some way measures intelligence, but that humans generally test better than AI?
Answer:Thinking, Something went wrong and an AI response wasn't generated.
hmm
E.g. watch a Steve Jobs interview and a Sam Altman one. The differences in the mode of articulation, simplicity in communication, obsession over details, etc. are huge. This is what superior intelligence looks like to me - you know it when you see it.
Easy?
https://arxiv.org/pdf/2510.21860v1
Still, it may be a lasting limitation if robotics doesn't catch up to AI anytime soon.
Don't know what to make of the Safety Risks test: threatening to power down the AI in order to manipulate it, and most act like we would and comply. Fascinating.
you must be completely LLM-headed to say something like that, lol
humans are not trained on spatial data, humans are very much different from silicon chips
https://en.wikipedia.org/wiki/Intelligence_quotient#Validity...
If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.
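For the curious, here is a minimal, hedged sketch of that workflow (an editorial illustration, not something from the thread): it assumes DuckDuckGo's html.duckduckgo.com endpoint and its result__a link class (both guesses about the current page structure), pulls the first page of results for a query, strips each result page to plain text, and dumps everything into one file you can grep or scroll through in vim/emacs.

```python
#!/usr/bin/env python3
# Hedged sketch of the "dump the first page of search results into a text
# file" workflow described above. Assumptions (not from the thread): the
# html.duckduckgo.com endpoint, its "result__a" link class, and its
# uddg-redirect link format; all of these may need adjusting.
import sys
from urllib.parse import parse_qs, urlparse

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0 (search-dump sketch)"}


def first_page_links(query: str) -> list[str]:
    """Return the result URLs from the first page of DuckDuckGo's HTML UI."""
    resp = requests.get("https://html.duckduckgo.com/html/",
                        params={"q": query}, headers=HEADERS, timeout=15)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    links = []
    for a in soup.select("a.result__a"):  # assumed selector for result links
        href = a.get("href", "")
        # DDG often wraps targets in a redirect like /l/?uddg=<real-url>
        qs = parse_qs(urlparse(href).query)
        links.append(qs["uddg"][0] if "uddg" in qs else href)
    return links


def page_text(url: str) -> str:
    """Fetch one page and return its visible text, or a note on failure."""
    try:
        resp = requests.get(url, headers=HEADERS, timeout=15)
        resp.raise_for_status()
    except requests.RequestException as exc:
        return f"[failed to fetch {url}: {exc}]"
    return BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)


if __name__ == "__main__":
    query = " ".join(sys.argv[1:]) or "ai generated police reports"
    with open("results.txt", "w", encoding="utf-8") as out:
        for url in first_page_links(query):
            out.write(f"===== {url} =====\n{page_text(url)}\n\n")
    print("wrote results.txt -- open it in vim/emacs and search away")
```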
Curiously, literally nobody on earth uses this workflow.
People must be in complete denial to pretend that LLM search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.
Frankly I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything it says.
Versus finding the answer by clicking into the first few search results links and scanning text that might not have the answer.
They do it by going out and searching, not by storing a list of sources in their corpus.
When it gives you a link, it literally takes you to the part of the page that it got its answer from. That's how we can quickly validate.
The reason why people don't use LLMs to "trivially save hours or days of work" is because LLMs don't do that. People would use a tool that works. This should be evidence that the tools provide no exceptional benefit, why do you think that is not true?
That seems to be a big part of it, yes. I think in part it’s a reaction to perceived competition.
If they do, you’ll be in good company. That post is about the exact opposite of what people usually link it for. I’ll let Dan explain:
https://news.ycombinator.com/item?id=27067281
So yes, it is the opposite of why people link to it, which is to mock an attitude (which wasn’t there) of hubris and lack of understanding of what makes a good product.
None of those things are true. Which is the point I’m making. Go read the original conversation. All of it.
https://news.ycombinator.com/item?id=9224
Don’t skip Brandon’s reply.
https://news.ycombinator.com/item?id=9479
It is patently absurd to claim that someone who quickly understood the explanation, learned from it, conceded where they were wrong, is somehow “profoundly out-of-touch” and “lost all perspective”. It’s the exact opposite.
I agree with Dan that we’d be lucky if all conversations were like that.
If knowledge == intelligence then Google and Wikipedia are "smarter" than you and the AGI problem has been solved for several decades.
Well, in many cases they might be right..
This is to deconstruct the question.
I don't think it's even wrong - a lot of people are doing things, making decisions, living life perfectly normally, successfully even, without applying intelligence in a personal way. Those with socially accredited 'intelligence' would be the worst offenders imo - they do not apply their intelligence personally but simply massage themselves and others towards consensus. Which is ultimately materially beneficial to them - so why not?
For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else. Computers can only do an imitation of this sort of answer. People stand a chance of answering it.
I'm not following. A computer's "why" is a written program, surely that is the most clear expression of its intent you could ask for?
Yes, at least it's what I wanted to drill further into.
Boiled down, I'm interested in hearing where "intelligent" people derive their motivations (I'm in agreement that most people are on ["non-intelligent" if you will] auto-pilot most of the time) if not from outside themselves, in your framework.
When does a goal start being my own intelligent goal? Any impetus for something can be traced back to not-yourself: I might decide to start tracking my spending, but that decision doesn't form out of the void. Maybe I value frugality, but I did not create that value in myself. It was instilled in me by experience, or my peers, etc. I see no way for one to "spontaneously" form a motivation, or if I wanted to take it one step further (into the Buddha's territory), I would have to question who, and where, and what this "self" even is.
To me, the answer is obvious. Inserting thousands of ideas and patterns of thoughts into a person will be unlikely to help them become a true expression of their nature. If you know gardening, the schooled person is more like a trained tree - grown in a way that suits the farmer - the more tied back it is, the less free it is.
As I see it, each individual is unique, with a soul. Each is capable of reaching a full expression of itself, by itself. What I also see is that there are many systems that are intentional manipulations, put in place in order to farm individuals at the individual's expense. The more education one receives, the more amenable one is to being 'farmed' according to the terms that were inserted. To me, this is the installation of an unnatural servile mentality, which once adopted makes the person easy to harness - the person will think being harnessed and 'in service' is right and good.
The problem is that these principles were not their own. These are like religious beliefs, and unlike principles founded according to personal experience. Received principles will always be unnatural. Acting according to them, is to act in an inauthentic way. However, there is no material reason to address the inauthenticity, when one looks around, everyone else is doing the same. This results in a self-supporting, collective delusion.
I'm going on now, but there are answers to what the self is - but 'society' cannot teach you then - it can only fill you with delusions. Imo, you would be on a better footing to forget everything you think you know (this costs you nothing) and do something like apply the scientific method personally - let your personal experience guide you.
With humans, the speed and ease with which we learn and reason are capped. I think a very dumb intelligence will stay dumb for not very long, because every resource will be spent on making it smarter.
Currently, LLMs require hooks and active engagement with humans to ‘do’ anything. Including learn.
The root motivation on which every resource will be spent is simply and very obviously to make a profit.
We are incredibly far from AGI.
> AI transcription & summary seems to be a strong point of the models so I don't know what exactly you're trying to get to with this one. If you have evidence for that I'd actually be quite interested because humans are so bad at representing what other people said on the internet it seems like it should be an easy win for an AI. Humans typically have some wild interpretations of what other people write that cannot be supported from what was written.
Transcription and summarization is indeed fine, but try posting a longer reddit or HN discussion you've been part of into any model of your choice and ask it to analyze it, and you will see severe errors very soon. It will consistently misrepresent the views expressed and it doesn't really matter what model you go for. They can't do it.
For simple discussions this is fine. For complex discussions, especially when people get into conflict-- whether that conflict is really complex or not, problems usually result. The big problems are that the model will misquote or misrepresent views-- attempted paraphrases that actually change the meaning, the ordinary hallucinations etc.
For stories the confusion is much greater. Much of it is due to the basic way LLMs work: stories have dialogue, so if the premise involves people not being able to speak each other's language, problems come very soon. I remember asking some recent Microsoft Copilot variant to write some portal scenario -- some guys on vacation in Tenerife rent a catamaran and end up falling through a hole into the world of ASoIAF and into the seas off Essos, where they obviously have a terrible time, and it kept forgetting that they don't know English.
This is of course not obviously relevant for what Copilot is intended for, but I feel that if you actually try this you will understand how far we are from something like AGI, because if things like OpenAIs or whoever's systems were in fact close, this would be close too. If we were close we'd probably see silly errors too, but it'd be different kinds of errors, things like not telling you the story you want, not ignoring core instructions or failing to understand conversations.
It doesn't surprise me that you're getting nonsense; that is an ill-formed request. The AI can't fulfil it because the request isn't asking it to do anything. I'm in the same boat as an AI would be: I can't tell what outcome you want. I'd probably interpret it as "summarise this conversation" if someone asked that of me, but you seem to agree that AI are good at summary tasks, so that doesn't seem like it would be what you want. If I had my troll hat on I'd give you a frequency analysis of the letters and call it a day, which is more passive-aggressive than I'd expect of the AI; they tend to just blather when they get a vague setup.
This and we don't actually know what the foundation models are for AGI, we're just assuming LLMs are it.
So yes, most people are right in that assumption, at least by the metric of how we generally measure intelligence.
We should probably rigorously verify that, for a role that itself is about rigorous verification without reasonable doubt.
I can immediately, and reasonably, doubt the output of an LLM, pending verification.
Similarly, the claim is that ~90% of communication is nonverbal, so I'm not sure I would trust a negotiator who has seen all of written human communication but never held a conversation.
I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?
Yet, I will admit that most of the time I hear people complaining about how AI written code is worse than that produced by developers, but it just doesn't match my own experience - it's frankly better (with enough guidance and context, say 95% tokens in and 5% tokens out, across multiple models working on the same project to occasionally validate and improve/fix the output, alongside adequate tooling) than what a lot of the people I know could or frankly do produce in practice.
That's a lot of conditions, but I think it's the same with the chat format - people accepting unvalidated drivel as fact, or someone using the web search and parsing documents and bringing up additional information that's found as a consequence of the conversation, bringing in external data and making use of the LLM ability to churn through a lot of it, sometimes better than the human reading comprehension would.
As such the story can be completely divorced from reality. The important thing is that the story is a good one. A good story transfers your social cover for yourself to your supervisor. They don't have to understand what you did and explain why it's okay that it failed. They just have to understand the story structure that you gave them. Listen to this great story, it's not my report's fault for this failure, and it's certainly not mine, just bad luck.
Additionally, the good (and sufficiently original) story is a gift because your supervisor can reuse it for new scenarios.
The good salesman gives you the story you need to excuse the purchase that will enable you to succeed. The bad salesman sells you the story that you need a frivolous purchase.
And this is why job hopping is "bad". Eventually the incompetent employee uses up all of their good stories and management catches on to their act. It's embedded into our language. "Oh we've all heard this story before." The job hopper leaves just as their good stories are exhausted and can start over fresh at the new employer.
All of this in response to
> If we're lucky, people will manage to adapt and update their mental models to be less trustworthy of things that they can't verify
Yes, if we're lucky that is what will happen. But I fear that we're going to have to transition to a very low trust society for that to happen.
Relying on the story depends on trusting that someone has done the real work. Distrust of the story implies a wider-scale distrust of others and institutions.
Maybe we can add a tradition of annotating our stories with arguments and proofs, although I've spent a two-decade career desperately trying to give highly technical people arguments and proofs, and I've seen stories completely unmoored from reality win out every time.
Optimistically, I'm just really bad at it and it's actually a natural transition. Pessimistically, we're in for a bumpy ride.
The idea of story being how people justify making their decisions is interesting. I'm reminded of a couple of anecdotes my father has repeated a few times over the years about two distinct medical circumstances he's had. When he was first diagnosed with sleep apnea, he apparently was very skeptical that he had any reason to do anything because the sleep doctor told him things like "this will help you be less sleepy during the day" and "you won't start nodding off as you drive" when he didn't feel like either of those experiences happened to him. Eventually a different sleep doctor did convince him it was worthwhile to treat, and he's used a CPAP since then, he still seems not to feel like it would have made sense for him to start when he first got the diagnosis. Through the lens you've given, the original doctor didn't give him a compelling enough story to justify the effort on his part. On the other hand, the first time he talked to a nutritionist about changing his diet, he apparently mentioned something about how he wanted to at least be able to eat ice cream occasionally, even if it was less often, rather than not ever be able to eat it again, and the nutritionist replied "Of course! that would make life not worth living". He ended up being much more open to listening to the advice of the nutritionist than I would have expected, and I think it would be reasonable to argue that was because the nutritionist was able to give him a story that seemed compelling about what his life would be like with the suggested changes.
If anything, I think they'd consider AI's involvement as a strike against the prosecution if they were on a jury.
At least personally, I've seen basically three buckets of opinions from non-technical people on AI. There's a decent-sized group of people who loathe anything to do with it due to issues you've mentioned, the art issue I mentioned, or other specific things that overall add up to the point that they think it's a net harm to society, a decent-sized group of people who basically never think about it at all or go out of their way to use anything related to it, and then a small group of people who claim to be fully aware of the limitations and consider themselves quite rational but then will basically ask ChatGPT about literally anything and trust what it says without doing any additional research. It's the last group that I'm personally most concerned about because I've yet to find any effective way of getting them to recognize the cognitive dissonance (although sometimes at least I've been able to make enough of an impression that they stop trying to make ChatGPT a participant in every single conversation I have with them).
For some, AI users are now self-allied with bigots and racists. I avoid discussing AI with anyone whose politics I don’t know anymore. If you haven’t experienced this personally, it’s hard to describe.
Not like food or clothing, but stuff like DLC content, streaming services, and LLMs.
> Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”
Policing and Hallucinations. Can’t wait to see this replicated globally.