How the N.Y.P.D.'s Facial Recognition Tool Landed the Wrong Man in Jail
Posted 4 months ago · Active 4 months ago
nytimes.com · News · Story
Controversial · Negative
Debate: 80/100
Key topics
- Facial Recognition
- Law Enforcement
- AI Bias
Discussion Activity
- Light discussion
- First comment: 37m after posting
- Peak period: 1 comment in 0-1h
- Avg / period: 1 comment
Key moments
- 01 Story posted: Aug 26, 2025 at 7:00 PM EDT (4 months ago)
- 02 First comment: Aug 26, 2025 at 7:38 PM EDT (37m after posting)
- 03 Peak activity: 1 comment in 0-1h (hottest window of the conversation)
- 04 Latest activity: Aug 27, 2025 at 3:44 AM EDT (4 months ago)
As a layman, it looks to me like the legal system is unprepared for an unintuitive problem: if you run a facial recognition search with a 0.000001 (one-in-a-million) false positive rate against a database of 10 million people, you should still expect roughly ten false matches, so any single hit is someone who:
1. Likely looks extremely close to the target
2. Has a roughly 99% chance of not actually being the target (see the sketch below)
Eyewitness confirmation does little to shift that 99%, because it rests largely on the same factor, facial resemblance, rather than being independent evidence.
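Here's a minimal back-of-the-envelope sketch of that base-rate arithmetic in Python, using the comment's hypothetical numbers (a 0.000001 false-positive rate over 10 million faces). The prior probability that the target is in the database at all is an added assumption, not something the comment states, and the result is very sensitive to it:

```python
# Back-of-the-envelope Bayes calculation for a database-wide face search.
# fpr and db_size come from the comment; prior_in_db and tpr are
# illustrative assumptions, not properties of any real system.

def p_wrong_person(fpr: float, db_size: int, prior_in_db: float,
                   tpr: float = 1.0) -> float:
    """Probability that a returned match is NOT the actual target."""
    expected_false = fpr * db_size     # ~10 false matches expected here
    expected_true = prior_in_db * tpr  # at most one true match
    return expected_false / (expected_false + expected_true)

for prior in (1.0, 0.5, 0.1):
    p = p_wrong_person(fpr=1e-6, db_size=10_000_000, prior_in_db=prior)
    print(f"P(target in DB) = {prior:.0%} -> P(wrong person) = {p:.1%}")
```

With roughly ten expected false matches, the hit is the wrong person about 91% of the time even when the target is certainly in the database, and about 99% of the time when there's only a 10% chance they are. That is also why a confirmation based on the same facial resemblance can't rescue the number.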
Even scarier with the black box that is AI…
How can I refute what a computer says if nobody knows how it came to its conclusion?