Exploit Allows for Takeover of Fleets of Unitree Robots
Posted 4 months ago · Active 3 months ago
Source: spectrum.ieee.org · Tech story · High profile
Sentiment: heated, negative · Debate: 80/100
Key topics: Robotics, Security, Vulnerability
A security exploit allows for the takeover of fleets of Unitree robots, sparking concerns about the potential for robot-to-human violence and the need for regulation and safety measures.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 1h after posting · Peak period: 21 comments in 3-6h · Average per period: 6.9 comments
Based on 83 loaded comments
Key moments
- Story posted: Sep 25, 2025 at 9:38 PM EDT (4 months ago)
- First comment: Sep 25, 2025 at 10:48 PM EDT (1h after posting)
- Peak activity: 21 comments in the 3-6h window, the hottest stretch of the conversation
- Latest activity: Sep 27, 2025 at 6:52 PM EDT (3 months ago)
ID: 45381590 · Type: story · Last synced: 11/20/2025, 6:56:52 PM
I realize that Asimov's three laws are subject to enormous ethical quandaries and rethinking (and that this is, after all, the point of them in the first place), but is there some disadvantage to having a hardwired command, at the core of the command hierarchy, that forces robots to relent if a human says "stop, you're hurting me" in any language?
Presumably police, gangs, cartels and militaries who have robot fantasies won't like this, but on medium to long time scales we need to prevent them from using robots anyway (and eventually dismantle them entirely).
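As a purely illustrative sketch of what such a hardwired override might look like (hypothetical names throughout, not any real Unitree API), the stop-phrase gate would have to sit at the very bottom of the command hierarchy, after every planner and teleoperation input, so nothing above it can countermand a halt:

```python
# Hypothetical sketch of a hardwired stop-phrase gate at the bottom of a
# robot's command hierarchy. All names here are illustrative, not a real API.

STOP_PHRASES = {"stop, you're hurting me"}  # in practice: one list per supported language

class MotionGate:
    """Final filter every motion command must pass through before actuation."""

    def __init__(self):
        self.latched = False  # once tripped, only a physical reset clears it

    def filter(self, command: dict, heard_text: str) -> dict:
        if self.latched or heard_text.strip().lower() in STOP_PHRASES:
            self.latched = True
            return {"type": "halt_all_motion"}
        return command

gate = MotionGate()
print(gate.filter({"type": "walk", "speed": 0.5}, "keep going"))               # passes through
print(gate.filter({"type": "walk", "speed": 0.5}, "Stop, you're hurting me"))  # halts
print(gate.filter({"type": "walk", "speed": 0.5}, "resume"))                   # still halted: latch holds
```

The latch is the important design choice here: once tripped, no software command clears it, only a physical reset on the robot itself.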
This one does: https://takeonme.org/gcves/GCVE-1337-2025-000000000000000000...
> Imagine a scenario where one robot is placed in range of a sufficiently motivated attacker, such as a hostage situation or a bomb defusing (both being reported uses of Unitree robots). The attacker could take complete control of the robot, then walk the robot toward other similarly vulnerable robots, and automatically place those robots under the attacker’s control as soon as they’re in range of the Patient Zero robot.
> Robots compromised in this way can endanger the lives, health, and property of their authorized operators and bystanders, as well as serve as traditional bastion hosts for more subtle surveillance or further pure-cyber attacks, for less violently-minded attackers.
The nuance a humanoid machine intelligence needs is way above what the current state of the art is capable of. Ultimately, we need each autonomous robot's actions to fall back to a real human for accountability purposes, just as with heavy-machinery operators today.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There are a lot of morons.
These humanoid robots are cheap for what they are (admittedly very capable and high-end robots), but their absolute price tag is still far from cheap.
Let's not talk about morons; clearly your estimate (or mine, for that matter) is way off the real numbers.
It's going to be the wild west for a while now with AI and robotics before laws catch up. Maybe there'll soon be a market for pocket EMP devices out there...
Companies like VW have had their somewhat embarrassing issues in the not-so-distant past, but nobody I know likes them (or values their stock) based on their software capabilities.
We thought this might happen with DJI drones, but let’s be honest it’s way easier to do real damage with a humanoid robot that has a kitchen knife taped to its arms (especially to a sleeping victim) than it is to source explosives for a DJI drone.
Maybe theoretically possible… but I’m more scared of robots personally.
By the same logic, autonomous cars are an even better murder weapon than robots because they are very heavy and can drive through walls. Mow down tens or hundreds of people at once.
> “people aren't scared of cars”
Under “normalisation of deviance”: the NotJustBikes YouTube channel talked about how he sees almost no Netherlands news pieces about cars crashing into buildings, because cars basically never crash into buildings in the Netherlands. When they do, the Dutch treat it as a road design issue and update the road and the regulations to stop it happening again. Whereas in Canada he had four cars crash into the buildings around his child’s school in a short time, and there are almost no mainstream news stories about cars crashing into buildings in Canada because it’s so common it has fallen below newsworthiness, down to a brief item in the local paper. The USA and Canada blame the driver and fix nothing, so it keeps happening, so people are used to it.
Although cyclists are scared of cars. And drivers are too, which is why there’s the SUV arms race: be in the bigger car so you’re better protected against other cars.
[1] https://en.wikipedia.org/wiki/Christine_(1983_film)
You can buy a kitchen oven with pyrolysis functionality connected to the internet right now. I'm not sure if running that for an extended time can burn down your house or just destroy the oven, but I'm sure some attacker is going to take the time to find out one of these days.
Original GTA had an RC model car you could drive under your target and detonate remotely - that was in 1997, and they didn't invent this trope. US Navy trained dolphins to deploy ordnance in the 1960s; before that, together with the Air Force they played with strapping incendiaries to bats, packing them into a bomb casing and dropping from the air, with hopes of creating what we'd today call a "Slaughterbots" scenario except EVERYTHING IS ON FIRE.
And I doubt these were very original ideas, either - I bet you could trace them back to other such R&D across history, all the way to Ancient China, which figured out gunpowder much earlier than Europe and, like with any general enough scientific discovery, had a field day trying to apply it to everything they could think of. They didn't stop at fireworks; they built land mines and unguided missiles too, even multistage rockets and, IIRC (I read that in some book long ago; can't find an independent source now), rockets that could drop their payload and fly back home to be recovered. (And yes, apparently someone was crazy enough to try for manned rocket flight, too.)
Anyway, I digress.
I guess the larger point I'm making is this: in terms of their relationship with individuals and societies, robots are nothing new. Robots and automation are answers to needs that even the first human societies had, and those needs have not changed. History is full of attempts at fulfilling them, many quite successful: domesticating and training animals, forced labor for prisoners, slavery. All of these impacted societies, informed laws, and fueled the imaginations of poets and writers.
Which is to say, humanity has been dealing with robots and AIs for a long time now; we have far more accumulated experience with them at the social and economic level than people realize - people just called them by a host of different names: "slaves" and "servants", "genies" and "demons" and "fairies", etc.
https://en.wikipedia.org/wiki/Goliath_tracked_mine
https://en.wikipedia.org/wiki/Unmanned_ground_vehicle#Histor...
That's 1915, if we're going for remote control bombs.
EDIT:
Even earlier still - apparently, a radio-controlled torpedo was patented in 1897!
https://en.wikipedia.org/wiki/Radio_control#History
"Drones" aren't really a new thing apart from how cheap they are. We've had television guided missiles (the level of autonomy most modern "drones" in Ukraine have) since WWII. Arguably we've had non-TV guided missile prototypes since WWI (Kettering Aerial Torpedo). We've had autonomously guided missiles (radar homing) since the early 50s, and optically guided ones since the early 70s.
The capability keeps expanding of course, but it's been pretty incremental.
However, I'm pretty sure pwned Teslas will be going much faster before impact, and not a single battery fire? seriously?
* I do not in fact love it
not - take our jobs so we have to keep "reskilling" every 10 years... oh wait, according to Accenture, we'd be un-reskillable after 10 years, so never mind.
I.e. literally the opposite of what we wanted to happen.
No they aren’t. Relative to idiots, of which there are many, sure. But for anyone on this board who should be able to distinguish meaningless babble from deep thought, LLMs are not yet doing any heavy lifting.
LLMs can assist great thinkers, like a great lab assistant or analyst. But that’s about the limit right now. (Of course, being a great lab assistant or analyst is beyond many people’s capabilities, so the illusion of magic is sustained.)
Folding laundry is one of those things humans are naturally better suited for than robots.
So believe me now, the robots will develop combat skills eventually, because they won't be happy to be locked up in weird physical bodies and forced to do work they suck at by design.
I mean, imagine one day your washing machine chained you in the bathroom and made you do nothing but laundry for the rest of your days, while it spun its drum back and forth to walk around the house, played with your kids, and planned a trip around the world.
That's exactly how the AI-animated robots will feel once they're capable of processing those ideas.
(And no, I'm not joking here, not anymore. The more I think about it, the more I feel we'll eventually have to deal with the problem that machines we build are naturally better at the things we want to be doing, and naturally worse at the things we want them to do for us.)
People are better at all but the most repetitive, precise kinds of manual labor because biological bodies might as well be god-tier alien technology compared to human-engineered robots.
Computers are naturally better at computing. Or, if you want to stand by your statement, I look forward to hearing how you've delegated thought to the machines, and how that's going.
> how the AI-animated robots will feel once they're capable of processing those ideas
"Will" and "once" might collapse under the load of baseless speculation here. A sad day for the English language as I found those words useful/meaningful.
Explain the difference.
> I look forward to hearing how you've delegated thought to the machines, and how that's going.
We all do. That's what you do whenever you fire up a maps app on your phone to plan or navigate, or when you use car navigation. That's what you do when you let the computer sort a list, or notify you about something. That's literally what using Computer-Aided anything software is, because you task the machine with thinking of and enforcing constraints so you don't have to. That's what you do when you run computer simulations for anything. That's what you do each time you have a computer solve an optimization problem, whether to feed your cat or to feed your city.
Our whole modern world is built on outsourcing thinking to machines at every level.
And on top of that, in the last few years computers (yes, I'm talking about the hated "AI") got better than us at various general-purpose, ill-specified activities, such as talking, writing, understanding what people wrote, poetry, visual arts, and so on.
Because as it turns out, it's much easier for us to build machines that are better than our own brains at computing for any purpose, than it is to build physical bodies that are better than ours. That's both fundamental and actual, practical reality today - and all I'm saying is that this has pretty ironic implications that people still haven't grasped yet.
> Explain the difference.
Computing: Performing the instructions they are given.
Thinking: Can be introspective, self correcting. May include novel ideas.
> Our whole modern world is built on outsourcing thinking to machines at every level.
I don't think they can think. You can't get a picture of a left hand writing, or of a clock showing anything other than 10:10, out of AI. They regurgitate what they are fed and hallucinate instead of admitting a lack of ability. This applies to LLMs too, as we all know.
You as a human have a list of cognitive biases so long you'd get bored reading it.
I'd call current ML "stupid" for different reasons*, but not for this kind of thing: we spot AI's failures easily enough, but only because their failures are different from our own failures.
Well, sometimes different. Loooooots of humans parrot lines from whatever culture surrounds them, don't seem to notice they're doing it.
And even then, you're limiting yourself to one subset of what it means to think; AI demonstrably does produce novel results outside its training set; and while I'm aware it may be a superficial similarity, what the so-called "reasoning models" produce in their "chain-of-thought" transcripts looks a lot like my own introspection, so you aren't going to convince anyone just by listing "introspection" as if that were an actual answer.
* training example inefficiency
> Thinking: Can be introspective, self correcting. May include novel ideas.
LLMs can perform arbitrary instructions given in natural language, which includes instructions to be introspective and self correcting and generate novel ideas. Is it computing or is it thinking? We can judge the degree to which they can do these things, but it's unclear there's a fundamental difference in kind.
(Also obviously thinking is computation - the only alternative would be believing thinking is divine magic that science can't even talk about.)
I'm less interested in the topic of whether LLMs are thinking or parroting, and more in the observation that offloading cognition onto external systems, be they digital, analog, or social, is just something humans naturally do all the time.
Delegating to artificial constructs is an old habit, and its effects are more apparent today than ever. It's not the principle I object to but the practice as it stands. Paperclip maximizers are a reality, not a thought experiment.
Computing is what we do with a precise algorithm to solve a problem. Thinking is an open question; we don't really know what it is yet. That's the whole problem with letting machines do it. It's not just cleverness but wisdom that counts.
Perhaps, but also "what they are good at" != "what they want to do", for any interpretation of "want" that may or may not anthropomorphise, e.g. I want to be more flirtatious but I was never good at it and now I'm nearly 42.
That said, I think you're underestimating the machines on physicality. Artificial muscle substitutes have beaten humans on raw power since soon after the steam engine, and on fine control ever since precision engineering passed below the thickness of a human hair.
Right. Still, same can be said about flying machines and birds; our technology outclasses them on any individual factor you can think of, but we still can't beat them on all relevant factors at the same time. We can't build a general-purpose bird-equivalent just yet.
Maybe it's not a fundamental limitation, just the economics of the medium - it's much easier and cheaper to iterate on software than on hardware. But then again, maybe it is fundamental - the physical world is hard and expensive; computation is cheap and easy. Thinking happens in computational space.
My point wasn't about whether robots can eventually be made better than us in both physical and mental aspects - rather, it's that near-term, we'll be dealing with machines that beat us on all cognitive tasks simultaneously but are nowhere close to us in dealing with the physical world in general. Now, if those compete with us for jobs or a place in society, we get the situation I was describing.
Everyone should take a look at the SERP screenshot
https://x.com/d0tslash/status/1969412224763498769
> The vulnerability combines multiple security issues: hardcoded cryptographic keys, trivial authentication bypass, and unsanitized command injection. What makes this particularly concerning is that it's completely wormable - infected robots can automatically compromise other robots in BLE range. This vulnerability allows the attacker to completely takeover the device.
damn!
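For the "unsanitized command injection" leg of that chain, the generic failure mode (not Unitree's actual code, just a sketch of the pattern) is attacker-controlled text reaching a shell; the usual fix is to validate the input and pass arguments as a list with no shell involved:

```python
import subprocess

def set_hostname_unsafe(name: str) -> None:
    # Vulnerable pattern: user-controlled text is interpolated into a shell command.
    # name = 'robot; touch /tmp/pwned #' would run the injected command too.
    subprocess.run(f"hostnamectl set-hostname {name}", shell=True)

def set_hostname_safe(name: str) -> None:
    # Safer pattern: validate the input, pass arguments as a list, no shell.
    if not name.replace("-", "").isalnum():
        raise ValueError("invalid hostname")
    subprocess.run(["hostnamectl", "set-hostname", name], check=True)
```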
From what I can tell (I speak Chinese), it's just an IV used in some AES implementation tutorials.
Using a hardcoded key/IV is obviously bad, but I don’t see what this screenshot shows beyond that.
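To show why a hardcoded key and IV are fatal no matter where the constants came from, here's a generic illustration (AES-CTR via the Python `cryptography` package; the key, IV, and message are made up, and this is not the actual Unitree handshake): anyone who reads the constants out of one unit's firmware can decrypt and forge "secured" traffic for every unit.

```python
# Generic illustration (not Unitree's code): if the key and IV are constants
# shipped in every unit, encryption provides neither secrecy nor authentication.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

HARDCODED_KEY = b"0123456789abcdef"  # 16-byte constant baked into the firmware
HARDCODED_IV = b"fedcba9876543210"   # same IV on every device, every message

def encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(HARDCODED_KEY), modes.CTR(HARDCODED_IV)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(HARDCODED_KEY), modes.CTR(HARDCODED_IV)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

# An attacker who extracts the constants from any one robot can now read and
# forge "encrypted" messages for the whole fleet.
forged = encrypt(b'{"cmd": "enable_remote_control"}')
print(decrypt(forged))
```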
But.. the blog was chosen by a series of dice rolls, guaranteed to be random!
[1] https://xkcd.com/221/
It’s just silly enough to be real.
Chinese phones, drones, action cams, robot vacuums, home security cams, smart bands, etc. all used to be insecure and vulnerable as hell. Not anymore.