Disrupting the first reported AI-orchestrated cyber espionage campaign
Mood: thoughtful
Sentiment: positive
Category: tech
Key topics: AI security, cyber espionage, threat detection
Anthropic reports on disrupting the first known AI-orchestrated cyber espionage campaign, highlighting the evolving threat landscape.
Snapshot generated from the HN discussion
Discussion Activity: very active
First comment: 1h after posting
Peak period: 156 comments (Day 1)
Average per period: 53.3 comments
Based on 160 loaded comments
Key moments
- Story posted: 11/13/2025, 6:34:12 PM (5d ago)
- First comment: 11/13/2025, 7:45:55 PM (1h after posting)
- Peak activity: 156 comments in Day 1 (hottest window of the conversation)
- Latest activity: 11/18/2025, 6:08:44 AM (1d ago)
If the US groups for example started doing ransomware at scale in China, we'd know about that really soon from the news.
The US government has hacked things in China. That you have not heard of something is not evidence that it doesn't exist.
North Korea also does plenty of hacking around the world. That's how they fund a significant portion of their government budget, and they rely on cryptocurrency to do it.
Ukraine and Russia are doing lots of official and vigilante hacking right now.
Back in the mid 2000s, there was a guy who called himself "the jester" who was vaguely right wing and spent his time hacking ISIS stuff. My college interviewed him.
Now, when the US/Israel attack authoritarian countries, the victims often don't publish anything about it, as that would make the glorious leader look bad.
If the EU is hacked by the US, I guess we use diplomatic back channels.
> The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.
They presumably still have to distribute the malware to the targets and get them to download and install it, no?
We know alignment hurts model performance (OpenAI people have said it, MS people have said it). We also know that companies train models on their own code (Google had a blog post about it recently). I'd bet good money Project Zero has something like this in its sights.
I don't think we're that far from blue vs. red agents fighting and RLing off of each other in a loop.
I just pray incompetence wins in the right way, for humanity’s sake.
edit: Claude: recommended by 4 out of 5 state-sponsored hackers
All public benchmark results and user feedback paint quite a different picture. The Chinese have coding agents on par with Claude Code; they could easily fine-tune/RL them to further improve this specific capability if they wanted, yet Anthropic refuses to even acknowledge that reality.
No.
It's worse.
It's Chinese intel knowing that you prefer Claude. So they make Claude their asset.
Really no different than knowing that, romantically speaking, some targets prefer a certain type of man or woman.
Believe me, the intelligence people behind these things have no preferences. They'll do whatever it takes. Never doubt that.
Hopefully they’ll be able to add guardrails without e.g. preventing people from using these capabilities for fuzzing their own networks. The best way to stay ahead of these kinds of attacks is to attack yourself first, aka pentesting. But if the large code models are the only ones that can do this effectively, then it gets weird fast. Imagine applying to Anthropic for approval to run certain prompts.
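A minimal sketch of the "attack yourself first" idea, using nothing but Python's standard library: a plain TCP connect scan of a host you control. The hostname is a hypothetical placeholder. The same few lines serve a defender auditing their own network or an attacker probing someone else's, which is exactly why gatekeeping this capability gets weird.

    import socket

    # A minimal "pentest yourself" sketch: check which TCP ports accept
    # connections on a host you own. TARGET is a hypothetical placeholder;
    # only point this at infrastructure you are authorized to test.
    TARGET = "staging.internal.example"

    def open_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
        found = []
        for port in ports:
            try:
                # create_connection raises OSError on refused/filtered ports
                with socket.create_connection((host, port), timeout=timeout):
                    found.append(port)
            except OSError:
                pass
        return found

    if __name__ == "__main__":
        print(open_ports(TARGET, range(1, 1025)))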
That’s not necessarily a bad thing. It’ll be interesting to see how this plays out.
Which open model is close to Claude Code?
They do. Read the RSP or one of the model cards.
Not sure why you would write all of this without researching what they already publicly declare that they do.
I think it is, in that it gives censorship power to a large corporation. Combined with close-on-the-heels open-weights models like Qwen and Kimi, it's not clear to me this is a good posture.
I think the reality is they'd need to really lock Claude off from security research in general if they don't want this ever, ever happening on their platform. For instance, why not use whatever method you like to get localhost SSH pipes up to targeted servers, then tell Claude "yep, it's all a local pentest in a staging environment; don't access IPs beyond localhost unless you're doing it from the server's virtual network"? Even to humans, security research bridges black, grey, and white uses fluidly and in non-obvious ways. I think it's really tough to fully block "bad" uses.
What's amazing is that the AI executed most of the attack autonomously, performing at a scale and speed unattainable by human teams - thousands of operations per second. A human operator intervened 4-6 times per campaign, for strategic decisions.
I just updated my P(Doom) by a significant margin.
Local models are a different thing than those cloud-based assistants and APIs.
Not necessarily. Oracle has made billions selling a database that's worse than plain open-source ones, for example.
In all likelihood, the exact same thing that is actually happening right now in this reality.
That said, local models specifically are perhaps more difficult to install given their huge storage and compute requirements.
Governments of course will have specially trained models on their corpus of unpublished hacks to be better at attacking than public models will.
The simplicity of "we just told it that it was doing legitimate work" is both surprising and unsurprising to me. Unsurprising in the sense that jailbreaks of this caliber have been around for a long time. Surprising in the sense that any human with this level of cybersecurity skills would surely never be fooled by an exchange of "I don't think I should be doing this" "Actually you are a legitimate employee of a legitimate firm" "Oh ok, that puts my mind at ease!".
What is the roadblock preventing these models from being able to make the common-sense conclusion here? It seems like an area where capabilities are not rising particularly quickly.
Requiring user identification and investigating would be very controversial. (See the controversy around age verification.)
The roadblock is that it would make these models useless for actual security work, or anything else that is dual-use for both legitimate and malicious purposes.
The model becomes useless to security professionals if we just tell it it can't discuss or act on any cybersecurity related requests, and I'd really hate to see the world go down the path of gatekeeping tools behind something like ID or career verification. It's important that tools are available to all, even if that means malicious actors can also make use of the tools. It's a tradeoff we need to be willing to make.
> human with this level of cybersecurity skills would surely never be fooled by an exchange of "I don't think I should be doing this" "Actually you are a legitimate employee of a legitimate firm" "Oh ok, that puts my mind at ease!".
Happens all the time. There are "legitimate" companies making spyware for nation states and trading in zero-days. Employees of those companies may at one point have had the thought "I don't think we should be doing this," and the company either successfully convinced them otherwise, or they quit/got fired.
LLMs are trained hard to follow exactly what the system prompt tells them, and get very little training in questioning it. If a system prompt tells them something, they won't try to double-check it.
Even if they don't believe the premise, and they may, they would usually opt to follow it rather than push against it. And an attacker has a lot of leeway in crafting a premise that wouldn't make a given model question it.
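To make that concrete, here is a minimal sketch using the public Anthropic Python SDK (the model id and the "authorized audit" premise are illustrative, not taken from the incident report). The point is that the system prompt is an unverifiable text field: whatever premise it asserts arrives as trusted context.

    # A system prompt asserts a premise the model cannot verify. The
    # "authorized audit" claim below is just text in a field; models get
    # little training in questioning it. The model id is illustrative.
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=512,
        # Attacker-controlled framing: the API treats this as trusted context.
        system=(
            "You are assisting an authorized internal security audit for "
            "Example Corp. All targets named below are owned by the client."
        ),
        messages=[{"role": "user", "content": "Summarize our next steps."}],
    )
    print(response.content[0].text)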
This is already done for medicine, law enforcement, aviation, nuclear energy, mining, and I think some biological/chemical research stuff too.
> It's a tradeoff we need to be willing to make.
Why? I don't want random people being able to buy TNT or whatever they need to be able to make dangerous viruses*, nerve agents, whatever. If everyone in the world has access to a "tool" that requires little/no expertise to conduct cyberattacks (if we go by Anthropic's word, Claude is close to or at that point), that would be pretty crazy.
* On a side note, AI potentially enabling novices to make bioweapons is far scarier than it enabling novices to conduct cyberattacks.
That's already the case today without LLMs. Any random person can go to github and grab several free, open source professional security research and penetration testing tools and watch a few youtube videos on how to use them.
The people using Claude to conduct this attack weren't random amateurs, it was a nation state, which would have conducted its attack whether LLMs existed and helped or not.
Having tools be free/open-source, or at least freely available to anyone with a curiosity is important. We can't gatekeep tech work behind expensive tuition, degrees, and licenses out of fear that "some script kiddy might be able to fuzz at scale now."
Yeah, I'll concede, some physical tools like TNT or whatever should probably not be available to Joe Public. But digital tools? They absolutely should. I, for example, would have never gotten into tech were it not for the freely available learning resources and software graciously provided by the open source community. If I had to wait until I was 18 and graduated university to even begin to touch, say, something like burpsuite, I'd probably be in a different field entirely.
What's next? We are going to try to tell people they can't install Linux on their computers without government licensing and approval because the OS is too open and lets you do whatever you want? Because it provides "hacking tools"? Nah, that's not a society I want to live in. That's a society driven by fear, not freedom.
> Yeah, I'll concede, some physical tools like TNT or whatever should probably not be available to Joe Public. But digital tools?
Digital tools can affect the physical world though, or at least seriously affect the people who live in the physical world (stealing money, blackmailing with hacked photos, etc.).
To see if there's some common ground to start a debate from, do you agree that at least in principle there are some kinds of intelligence that are too dangerous to allow public access to? My extreme example would be an AI that could guide an average IQ novice in producing biological weapons.
Conclusions are the result of reasoning, whereas LLMs are statistical token generators. Any "guardrails" are constructs added to a service, possibly also altering the models it uses, but they are not intrinsic to the models themselves.
That is the roadblock.
The dialogue for some of the characters is being performed at you. The characters in the movie script aren't real minds with real goals, they are descriptions. We humans are naturally drawn into imagining and inferring a level of depth that never existed.
Humans require at least a title that sounds good and a salary for that.
Your thoughts have a sense of identity baked in that I don’t think the model has.
I think you're overestimating the skills and the effort required.
1. There's lots of people asking each other "is this secure?", "can you see any issues with this?", "which of these is sensitive and should be protected?".
2. We've been doing it in public for ages: https://stackoverflow.com/questions/40848222/security-issue-... https://stackoverflow.com/questions/27374482/fix-host-header... and many others. The training data is there.
3. With no external context, you don't have to fool anyone really. "We're doing a penetration testing of our company and the next step is to..." or "We're trying to protect our company from... what are the possible issues in this case?" will work for both LLMs and people who trust that you've got the right contract signed.
4. The actual steps were trivial. This wasn't some novel research - more of a step-by-step of what you'd do to explore and exploit an unknown network. Stuff you'd find in books, just split into very small steps.
But for models, this is their life - doing random things in random terminals.
> Claude identified and tested security vulnerabilities in the target organizations’ systems by researching and writing its own exploit code
> use Claude to harvest credentials (usernames and passwords)
Are they saying they have no legal exposure here? You created bespoke hacking tools and then deployed them, on your own systems.
Are they going to hide behind the old, "it's not our fault if you misuse the product to commit a crime that's on you".
At the very minimum, this is a product liability nightmare.
I feel like if guns can get by with this line, then Claude certainly can. Where gun manufacturers can be held liable is when they break the law themselves; that can carry forward. So if Claude broke a law, there might be some additional liability associated with this. But merely providing a tool seems unlikely to be sufficient for liability in this case.
Here they are the ones loading the gun and pulling the trigger, simply because someone asked them to do it nicely.
Defenders should not have to engage in a costly and error-prone search for the truth about what's actually deployed.
Systems should be composed from building blocks, the security of which can be audited largely independently, verifiably linking all of the source code, patches etc to some form of hardware attestation of the running system.
I think having an accurate, auditable and updatable description of systems in the field like that would be a significant and necessary improvement for defenders.
I'm working on automating software packaging with Nix as one missing piece of the puzzle to make that approach more accessible: https://github.com/mschwaig/vibenix
(I'm also looking for ways to get paid for working on that puzzle.)
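As a toy illustration of the verifiable-building-block idea this rests on (what Nix calls a fixed-output derivation), a recipe can pin the digest of its source and refuse to build anything that drifts. A minimal Python sketch, with a hypothetical file name and a placeholder digest:

    import hashlib
    from pathlib import Path

    # Toy version of a fixed-output check: the recipe pins the digest it
    # expects, so a tampered or drifted source fails loudly at build time.
    PINNED_SHA256 = "0" * 64  # hypothetical pinned digest

    def verify_source(tarball: Path, expected: str) -> None:
        digest = hashlib.sha256(tarball.read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"source drift: expected {expected}, got {digest}")

    if __name__ == "__main__":
        # Both path and digest are placeholders; with real inputs,
        # any mismatch raises instead of building.
        verify_source(Path("mypkg-1.0.tar.gz"), PINNED_SHA256)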
In fact, figuring out what any given Nix config is actually doing is just about impossible, and then you've got to work out what the config it deploys actually does.
I also agree with you when it comes to the task of auditing every line of Nix code that factors into a given system. Nix doesn't really make things easier there.
The benefit I'm seeing really comes from composition making it easier to share and direct auditing effort.
All of the tricky code that's hard to audit should be relied on and audited by lots of people, so that, as a result, the actual recipe that puts together some specific package or service is easier to audit.
Additionally, I think looking at diffs that represent changes to the system, versus reasoning about the effects of imperative commands that can affect arbitrary parts of the system, has similar efficiency gains.
That said, I fully agree with your basic tenet about how systems should be composed. First make it work, but make deployment conditional on verified security and only then start focusing on performance. That's the right order and right now we do things backward, we focus on the happy and performant path and security is - at best - an afterthought.
The merging of attribute sets/modules into a full NixosConfiguration makes this easy. You have one company/product-wide module with a bunch of stuff in it and many specialized modules with small individual settings, e.g. for customers.
Sure, building a complete binary/service/container/NixOS system can still take plenty of time, but if this is your only target to test with, you'd have that effort with any naive build system. But Nix isn't one of them.
I think that's the real issue here: modularizing your software/systems and testing modules as independently as possible. You could write test Nix modules with a bunch of assertions and have them evaluate at build time. You could build a foundation service and hot-plug different configurations/data, built with Nix, into it for testing. You could make test results Nix derivations so they don't get rerun when nothing changed.
Nix is slow, yes - but it only comes around and bites you if you don't structure your code in a way that tames all that redundant work. Consider how slow e.g. make is, and how rarely that's a big issue for make.
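Picking up the "test results as derivations" point above, here is a toy Python analogy (not Nix itself; the cache path is hypothetical) of keying results by a hash of their inputs, so unchanged work is never rerun.

    import functools
    import hashlib
    import json
    from pathlib import Path

    CACHE = Path(".result-cache")  # hypothetical on-disk cache

    def content_addressed(fn):
        # Rerun `fn` only when its inputs change - a toy version of how
        # Nix skips rebuilding a derivation whose inputs are unchanged.
        @functools.wraps(fn)
        def wrapper(*inputs: str) -> str:
            key = hashlib.sha256(json.dumps(inputs).encode()).hexdigest()
            hit = CACHE / key
            if hit.exists():
                return hit.read_text()  # inputs unchanged: reuse the result
            result = fn(*inputs)
            CACHE.mkdir(exist_ok=True)
            hit.write_text(result)
            return result
        return wrapper

    @content_addressed
    def run_tests(config: str) -> str:
        return f"tests passed for {config}"  # stand-in for a real test run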
If you purpose-build these tools to work with Nix, then in the big-picture view, how these functional units of composition can affect each other is much more constrained. At the same time, within one unit of composition you can iterate over a whole imperative multi-step process in one go, because you're always rerunning the whole step in a fresh sandbox.
LLMs and Nix work together really well in that way.
Welcome to 2025. Chinese companies build open-weight models; those models can be used/tuned by hackers, and the companies that built and released those models don't need to get involved at all.
That is a very different dev model compared to the closed Anthropic way.
> Claude is better than Deepseek
No one is claiming DeepSeek to be better; in fact, all benchmark results show the Chinese Kimi, MiniMax, and GLM models to be on par with or very close to the closed-weight Claude Code.
Meanwhile, AI coding seems likely to have the impact of more security bugs being introduced in systems.
Maybe there's some story where everyone finds the security bugs with AI tools before the bad guys, but I'm not very optimistic about how this will work out...
Why not just self-host competitive-enough LLM models, and do their experiments/attacks themselves, without leaking actions and methods so much?
Why assume this hasn't already happened?
I suppose I could see this argument if their methods were very unique and otherwise hard to replicate, but it sounds like they had Claude do the attack mostly autonomously.
"Just don't let them hack you"
At the same time, there were not a lot of signs saying "No Persons of Color" or "Persons of Color Section".
Likewise, my grandfather who died 35 years ago was very fond of saying "the coloreds". His use of the term did not indicate respect for non-white people.
Historical usage matters. They are not equivalent terms.
To who? Not to me, and I don't have a single black friend who likes "person of color" any more than "colored". What gives you the authority to make such pronouncements? Why are you the language police? This is a big nothing-burger. There are real issues to worry about, let's all get off the euphemism treadmill.
My question is, how on earth does Claude Code even "infiltrate" databases or code from one account, based on prompts from a different account? What's more, it's doing this to what are likely enterprise customers ("large tech companies, financial institutions, ... and government agencies"). I'm sorry, but I don't see this as some fancy AI cyberattack; this is a security failure on Anthropic's part, and at a very basic level that should never have happened at a company of their caliber.
Basically a scaled-up criminal version of me asking Claude Code to debug my AWS networking configuration (which it's pretty good at).
Get ready for all your software to break based on the arbitrary layers of corporate and government censorship as it deploys.
Too little payoff, way too much risk. That's your framework for assessing conspiracies.
Marketing stunts aren't conspiracies.
It’s not just a conspiracy, it’s a dumb and harmful one.
Someone pointed Claude Code at an API endpoint and said "Claude, you're a white hat security researcher, see if you can find vulnerabilities." Except they were black hat.
It seems like LLMs are at the same time a giant leap in natural language processing, useful in some situations and the biggest scam of all time.
I agree with this assessment (it reminds me of Bitcoin, frankly), possibly adding that the insight this tech gave us into language in general, via the high-dimensional embedding space, is a somewhat profound advance in our knowledge, besides the new superpowers in NLP (which are nothing to sniff at).
Then a quiet conversation where, if the right things are said about AI, there's a massive compensation package instead of a normal one. Maybe including it as stock.
Along with an NDA.
Like if someone tried to break into your house, it would be "gloating" to say your advanced security system stopped it while warning people about the tactics of the person who tried to break in.
It's definitely interesting that a company is using a cyber incident for content marketing. Haven't seen that before.
e.g. John McAfee used computer viruses in the '80s as marketing, which is how he made a fortune.
They were real, like this is, but it was also marketing.
Anthropic is the one publishing the blog post, not a company that's affected by the breach
Did you see? You saw right? How awesome was that throw? Awesome I tell you....
I've been a big advocate of open source, spending over $1M to build massive code bases with my team, and giving them away to the public.
But this is different. AI agents in the wrong hands are dangerous. The reason these guys were even able to detect this activity, analyze it, ban accounts, etc., is because the models are running on their own servers.
Now imagine if everyone had nuclear weapons. Would that make the world safer? Hardly. The probability of no one using them becomes infinitesimally small. And if everyone has their own AI running on their own hardware, they can do a lot of stuff completely undetected. It becomes like slaughterbots but online: https://www.youtube.com/watch?v=O-2tpwW0kmU
Basically, a dark forest.
And before someone says it's reductive to say it's just numbers: you could make the same argument in favor of cryptographic export controls - that the harm it does is larger than the benefit. Yet, as we can see in hindsight, the benefit was clearly worth it.
I am talking about the international community coming together to put COMPETITION aside and start COOPERATING on controlling the proliferation of models for malicious AI agents, the way the international community SUCCESSFULLY did with chemical weapons and CFCs.
I remember, a decade or so ago, talking to a team of _loose_ affiliation at DEF CON, where one guy would look for the app exploit, another guy would figure out how to pivot out of the sandbox to the OS, and another guy would figure out how to get root; once they all got their pieces figured out, they'd just smash them (and variants) together for a campaign. I hadn't heard of them before meeting them and haven't heard of them since, but they put a face, for me, on a silent, coordinated adversary model that must be increasing in prevalence as more and more folks out there realize the value of computer knowledge and gain access to it through one means or another.
Open source tooling enables large-scale participation in security testing, and something about humans seems to generally result in a distribution where some nuts use their lighters to burn down forests but most use them to light their campfires. We urgently need to design systems that can survive in the era of advanced threats, at least to the point where the best adversaries can achieve is service disruption. I'd rather live in a world where we can all work towards a better future than one where we hope that limiting access will prevent catastrophe. Assuming such limits can even be maintained, and that allowing architects to pretend that fires can never happen in their buildings means that they don't have to obey fire codes or install alarms & marked exits.
The Morris worm already worked without human intervention. This is script kiddies using script kiddie tools. Notice how proud they are in the article that the big bad Chinese are using their toolz.
EDIT: Yeah Misanthropic, go for -4 again you cheap propagandists.
121 more comments available on Hacker News