Marines Managed to Get Past an AI-Powered Camera "Undetected" by Hiding in Boxes
Posted 5 months ago · Active 5 months ago
rudevulture.com · Tech · story
calm · mixed · Debate · 60/100
Key topics
- Artificial Intelligence
- Security Vulnerabilities
- Machine Learning Limitations
Marines bypassed AI-powered camera by hiding in boxes, sparking discussion on AI limitations and potential workarounds.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3m after posting
- Peak period: 24 comments in 1-2h
- Avg / period: 6.5
- Comment distribution: 39 data points (based on 39 loaded comments)
Key moments
1. Story posted: Aug 21, 2025 at 12:59 PM EDT (5 months ago)
2. First comment: Aug 21, 2025 at 1:02 PM EDT (3m after posting)
3. Peak activity: 24 comments in 1-2h, the hottest window of the conversation
4. Latest activity: Aug 22, 2025 at 3:46 AM EDT (5 months ago)
ID: 44975164 · Type: story · Last synced: 11/20/2025, 2:46:44 PM
Want the full context?
Read the primary article or dive into the live Hacker News thread when you're ready.
I'm working on some AI projects at work and there's no magic code I can see to know what it is going to do ... or even sometimes why it did it. Letting it loose in an organization like that seems unwise at best.
Sure they could tell the AI to watch out for boxes, but now every time some poor guy moves some boxes they're going to set off something.
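A minimal sketch of that failure mode, assuming a generic detector that returns (label, confidence) pairs per frame; the interface, class names, and threshold are all illustrative assumptions, not any particular product's API:

```python
# Hypothetical sketch: a class-based alert rule on a "smart" camera.
# Detections are (label, confidence) pairs from some assumed detector.
ALERT_CLASSES = {"person", "box"}  # naively adding "box" after the incident
CONFIDENCE_THRESHOLD = 0.6

def should_alert(detections: list[tuple[str, float]]) -> bool:
    """Fire an alert if any watched class appears with enough confidence."""
    return any(
        label in ALERT_CLASSES and confidence >= CONFIDENCE_THRESHOLD
        for label, confidence in detections
    )

print(should_alert([("box", 0.92)]))    # True: the poor guy moving boxes
print(should_alert([("plant", 0.88)]))  # False: until someone dresses as a shrub
```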
The surface area of these issues is really fun.
Delivery guy shows up carrying boxes, gets shot.
Don't security cameras have universal motion detection triggers you can use to make sure everything gets captured? Why only pre-screen human silhouettes?
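A class-agnostic trigger really is simple to build; here's a rough sketch using plain frame differencing with OpenCV, where it records anything that moves, human-shaped or not. The camera source, threshold, and pixel count are illustrative assumptions:

```python
# Sketch: universal motion trigger via frame differencing (no classes at all).
import cv2

cap = cv2.VideoCapture(0)  # assumed camera source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)                # pixel-wise change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:                  # "enough" pixels changed
        print("motion detected -- capture this, whatever it is")
    prev_gray = gray
```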
AGI for cameras is very far away: the number of false positives, and of creative camouflage workarounds, is far too high for current "smart" algorithms to catch.
Rotations? Like how the military holds perimeter security?
>And, really, how would you feel if that was YOUR job?
If I couldn't get a better job to pay my bills, then that would be amazing. Weird of you to assume that it would somehow be the most dehumanizing job in existence.
I’m reminded of the Skyrim shopkeepers with a basket on their head.
Humans will see that they are screwing up and reformulate the action plan.
AI will keep screwing up until it is stopped, and apparently will gaslight when attempts are made to realign it at the prompt.
Humans realize when results are not desirable.
AI just keeps generating output until the plug is pulled.
Therein lies the rub.
And that is an entirely different problem, isn’t it?
In simple terms: The AI doesn’t need to say, "something unusual is happening because I saw walking trees and trees usually cannot walk", but merely "something unusual is happening because what I saw was unusual, care to take a look?"
I bet they’d have similar luck if they dressed up as bears. Or anything else non-human, like a triangle.
It isn't about knowing that trees don't walk, but about modeling how trees do behave and noticing when the model is "surprised" that they fail to behave in the predicted ways, where "surprise" is something like "this is a very low probability output of my model of the next frame". It isn't necessary to enumerate all the ways the next frame was low-probability; it is enough to observe that it was not high-probability.
In a lot of cases this isn't necessarily that useful, but in a security context having a human take a look at a "very low probability series of video frames" will, if nothing else, teach the developers a lot about the real capability of the model. If it spits out a lot of false positives, that is itself very informative about what the model is "really" doing.
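A sketch of that idea, assuming some learned next-frame predictor. Surprise is approximated here by mean squared prediction error, which corresponds to negative log-likelihood under a Gaussian observation model; `predict_next` and the threshold are assumptions for illustration:

```python
# Sketch: per-frame "surprise" as low probability under a next-frame model.
import numpy as np

def surprise(predicted: np.ndarray, observed: np.ndarray) -> float:
    """High when the model 'did not expect' the frame it actually saw."""
    return float(np.mean((predicted - observed) ** 2))

def flag_for_review(frames, predict_next, threshold=0.05):
    """Yield low-probability frames for a human to inspect. The false
    positives this produces are themselves informative about what the
    model is 'really' doing."""
    for prev, curr in zip(frames, frames[1:]):
        if surprise(predict_next(prev), curr) > threshold:
            yield curr
```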