
Disrupting the first reported AI-orchestrated cyber espionage campaign

373 points
281 comments

Mood

thoughtful

Sentiment

positive

Category

tech

Key topics

AI security

cyber espionage

threat detection

Debate intensity

60/100

Anthropic reports on disrupting the first known AI-orchestrated cyber espionage campaign, highlighting the evolving threat landscape.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment

1h

Peak period

156 comments (Day 1)

Avg / period

53.3

Comment distribution

160 data points

Based on 160 loaded comments

Key moments

  1. Story posted: 11/13/2025, 6:34:12 PM (5d ago)
  2. First comment: 11/13/2025, 7:45:55 PM (1h after posting)
  3. Peak activity: 156 comments in Day 1 (hottest window of the conversation)
  4. Latest activity: 11/18/2025, 6:08:44 AM (1d ago)

Discussion (281 comments)
Showing 160 comments of 281
2OEH8eoCRo0
5d ago
1 reply
> The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases.
stocksinsmocks
5d ago
5 replies
So why do we never hear of US sponsored hackers attacking foreign businesses? Or Swedish cyber criminals? Does it never happen? Are “Chinese” hackers just the only ones getting the blame?
viraptor
4d ago
I don't think many other countries have that combination of a "don't care if others know" approach and level of state sponsorship. China really seems to do some spray-and-pray attacking of private companies too. Same for Russia and NK. Compared to that, for example, the "equation group" from the US seems really restrained and targeted.

If the US groups for example started doing ransomware at scale in China, we'd know about that really soon from the news.

eep_social
5d ago
Stuxnet was very high profile but I think the incentives to go public and place blame are complicated.
2OEH8eoCRo0
4d ago
Is it possible that you're biased and assume since China does this that the US also hacks private corporations?
mrguyorama
4d ago
How much news do you read in Chinese?

The US government has hacked things in China. That you have not heard of something is not evidence that it doesn't exist.

North Korea also does plenty of hacking around the world. That's how they get a significant portion of their government budget, and they rely on cryptocurrency to support that situation.

Ukraine and Russia are doing lots of official and vigilante hacking right now.

Back in the mid 2000s, there was a guy who called himself "the jester" who was vaguely right wing and spent his time hacking ISIS stuff. My college interviewed him.

pixl97
5d ago
US, Israel, NK, China, Iran, and Russia are the countries you typically hear about hacking things.

Now when the US/Israel attack authoritarian countries, the targets often don't publish anything about it, as it would make the glorious leader look bad.

If the EU is hacked by the US, I guess we use diplomatic back channels.

barbazoo
5d ago
2 replies
It sounds like they built a malicious Claude Code client, is that right?

> The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.

They presumably still have to distribute the malware to the targets, making them download and install it, no?

koakuma-chan
5d ago
One time my co-worker got a scam call and it was an LLM talking to him.
janpio
5d ago
No, they used Claude Code as a tool to automate and speed up their "hacking".
citrusx
5d ago
2 replies
They're spinning this as a positive learning experience, and trying to make themselves look good. But, make no mistake, this was a failure on Anthropic's part to prevent this kind of abuse from being possible through their systems in the first place. They shouldn't be earning any dap from this.
NitpickLawyer
5d ago
3 replies
Meh, drama aside, I'm actually curious what would be the true capabilities of a system that doesn't go through any "safety" alignment at all. Like an all out "mil-spec" agent. Feed it everything, RL it to own boxes, and let it loose in an air-gapped network to see what the true capabilities are.

We know alignment hurts model performance (oAI people have said it, MS people have said it). We also know that companies train models on their own code (Google had a blog post about it recently). I'd bet good money Project Zero has something like this in their sights.

I don't think we're that far from blue vs. red agents fighting and RLing off of each other in a loop.

wmf
5d ago
Nous claims to be doing that but I haven't seen much discussion of it.
pixl97
5d ago
Cyberpunk has a recurring theme of advanced AI systems attacking and defending against each other, and for good reason.
joshellington
5d ago
I assume this is already happening. Incompetence within state actor systems is the only hurdle. The incentives and geopolitical implications are too strong NOT to do it.

I just pray incompetence wins in the right way, for humanity's sake.

vessenes
5d ago
They don't have to disclose any of this - this was a fairly good and fair overview of a system fault in my opinion.
yawnxyz
5d ago
4 replies
so even Chinese state actors prefer Claude over Chinese models?

edit: Claude: recommended by 4 of 5 state sponsored hackers

tw1984
5d ago
well, this is what anthropic wants you to believe.

all public benchmark results and user feedback paint a quite different picture. The Chinese have coding agents on par with Claude Code; they could easily FT/RL them to further improve specific capabilities if they wanted, yet Anthropic refuses to even acknowledge that reality.

resfirestar
5d ago
Maybe they're trying it with all sorts of models and we're just hearing about the part that used the Anthropic API.
catigula
5d ago
They’re doing all kinds of things.
bilbo0s
5d ago
Uh..

No.

It's worse.

It's Chinese intel knowing that you prefer Claude. So they make Claude their asset.

Really no different than knowing that, romantically speaking, some targets prefer a certain type of man or woman.

Believe me, the intelligence people behind these things have no preferences. They'll do whatever it takes. Never doubt that.

sillysaurusx
5d ago
3 replies
If Anthropic should have prevented this, then logically they should've had guardrails. Right now you can write whatever code you want. But to those who advocate guardrails, keep in mind that you're advocating that a company decide what code you are and aren't allowed to write.

Hopefully they’ll be able to add guardrails without e.g. preventing people from using these capabilities for fuzzing their own networks. The best way to stay ahead of these kinds of attacks is to attack yourself first, aka pentesting. But if the large code models are the only ones that can do this effectively, then it gets weird fast. Imagine applying to Anthropic for approval to run certain prompts.

That’s not necessarily a bad thing. It’ll be interesting to see how this plays out.

Onavo
5d ago
1 reply
They are mostly dealing with the low-hanging-fruit actors; the current open-source models are close enough to SOTA that there's not going to be any meaningful performance difference, tbh. In other words, it will stop script kiddies but make no real difference when it comes to the actors you actually have to worry about.
sillysaurusx
5d ago
1 reply
> the current open source models are close enough to SOTA that there's not going to be any meaningful performance difference

Which open model is close to Claude Code?

vessenes
5d ago
Kimi K2 could easily be used for this; its agentic benchmarks are similar to Claude's. And it's on-shore in China, where Anthropic says these threat actors were located.
lavezzi
5d ago
> If Anthropic should have prevented this, then logically they should’ve had guardrails. Right now you can write whatever code you want. But to those who advocate guardrails, keep in mind that you’re advocating a company to decide what code you are and aren’t allowed to write.

They do. Read the RSP or one of the model cards.

Not sure why you would write all of this without researching yourself what they already declare publicly that they do.

vessenes
5d ago
> That’s not necessarily a bad thing.

I think it is in that it gives censorship power to a large corporation. Combined with close-on-the-heels open weights models like Qwen and Kimi, it's not clear to me this is a good posture.

I think the reality is they'd need to really lock Claude off for security research in general if they don't want this ever, ever, happening on their platform. For instance, why not use whatever method you like to get localhost ssh pipes up to targeted servers, then tell Claude "yep, it's all local pentest in a staging environment, don't access IPs beyond localhost unless you're doing it from the server's virtual network"? Even to humans, security research bridges black, grey and white uses fluidly/in non obvious ways. I think it's really tough to fully block "bad" uses.

zkmon
5d ago
1 reply
TL;DR - Anthropic: Hey people! We gave the criminals even bigger weapons. But don't worry, you can buy defense tools from us. Remember, only we can sell you the protection you need. Order today!
vessenes
5d ago
1 reply
Nope - it's "Hey everyone, this is possible everywhere, including open weights models."
zkmon
5d ago
yeah, by "we", I meant the AI tech gangs.
CGMthrowaway
5d ago
2 replies
So basically, Chinese state-backed hackers hijacked Claude Code to run some of the first AI-orchestrated cyber-espionage, using autonomous agents to infiltrate ~30 large tech companies, banks, chemical manufacturers and government agencies.

What's amazing is that AI executed most of the attack autonomously, performing at a scale and speed unattainable by human teams - thousands of operations per second. A human operator intervened 4-6 times per campaign for strategic decisions.

ddalex
5d ago
1 reply
how did the autonomous agents infiltrate tech companies?
jagged-chisel
5d ago
Carefully. Expertly. With panache, even.
input_sh
5d ago
What exactly did they hijack? They used it like any other user.
d_burfoot
5d ago
4 replies
Wait a minute - the attackers were using the API to ask Claude for ways to run a cybercampaign, and it was only defeated because Anthropic was able to detect the malicious queries? What would have happened if they were using an open-source model running locally? Or a secret model built by the Chinese government?

I just updated my P(Doom) by a significant margin.

alganet
5d ago
1 reply
If plain open-source local models were able to do what Claude API does, Anthropic would be out of business.

Local models are a different thing than those cloud-based assistants and APIs.

lmm
5d ago
1 reply
> If plain open-source local models were able to do what Claude API does, Anthropic would be out of business.

Not necessarily. Oracle has made billions selling a database that's less good than plain open-source ones, for example.

dboreham
5d ago
It wasn't originally less good. For at least 20 years it was much better.
CGamesPlay
5d ago
> What would have happened if they were using an open-source model running locally? Or a secret model built by the Chinese government?

In all likelihood, the exact same thing that is actually happening right now in this reality.

That said, local models specifically are perhaps more difficult to install given their huge storage and compute requirements.

pixl97
5d ago
I mean, models exhibiting hacking behaviors have been predicted by cyberpunk for decades now; it should be the first thing on any doom list.

Governments of course will have specially trained models on their corpus of unpublished hacks to be better at attacking than public models will.

jimbohn
4d ago
Why would the increase be a significant margin? It's basically a security research tool, but with an agent in the loop that uses an LLM instead of another heuristic to decide what to try next.
Imnimo
5d ago
10 replies
>At this point they had to convince Claude—which is extensively trained to avoid harmful behaviors—to engage in the attack. They did so by jailbreaking it, effectively tricking it to bypass its guardrails. They broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose. They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing.

The simplicity of "we just told it that it was doing legitimate work" is both surprising and unsurprising to me. Unsurprising in the sense that jailbreaks of this caliber have been around for a long time. Surprising in the sense that any human with this level of cybersecurity skills would surely never be fooled by an exchange of "I don't think I should be doing this" "Actually you are a legitimate employee of a legitimate firm" "Oh ok, that puts my mind at ease!".

What is the roadblock preventing these models from being able to make the common-sense conclusion here? It seems like an area where capabilities are not rising particularly quickly.

Retr0id
5d ago
2 replies
Humans fall for this all the time. NSO group employees (etc.) think they're just clocking in for their 9-to-5.
just_once
5d ago
1 reply
If AI isn't better than humans then there's no point.
pvdebbe
4d ago
If the target is superintelligence, then AI shouldn't be learning from humans.
falcor84
5d ago
Reminds me of the show Alias, where the premise is that there's a whole intelligence organization where almost everyone thinks they're working for the CIA, but they're not ...
skybrian
5d ago
1 reply
LLM's aren't trained to authenticate the people or organizations they're working for. You just tell it who you are in the system prompt.

Requiring user identification and investigating would be very controversial. (See the controversy around age verification.)
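
As a rough illustration (using the Anthropic Python SDK; the model name and prompt text here are made-up placeholders), the claimed identity is just untrusted text passed in the `system` field, and nothing verifies it:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The "identity" is only text supplied by the caller; the API has no way to
# check whether the claim is true.
response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=512,
    system="You are an internal assistant for the security team at Example Corp.",
    messages=[{"role": "user", "content": "Summarize best practices for credential rotation."}],
)
print(response.content[0].text)
```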

ashishgupta2209
5d ago
LLMs have the same biases as the humans who trained them
thewebguyd
5d ago
2 replies
> What is the roadblock preventing these models from being able to make the common-sense conclusion here?

The roadblock is making these models useless for actual security work, or anything else that is dual-use for both legitimate and malicious purposes.

The model becomes useless to security professionals if we just tell it it can't discuss or act on any cybersecurity related requests, and I'd really hate to see the world go down the path of gatekeeping tools behind something like ID or career verification. It's important that tools are available to all, even if that means malicious actors can also make use of the tools. It's a tradeoff we need to be willing to make.

> human with this level of cybersecurity skills would surely never be fooled by an exchange of "I don't think I should be doing this" "Actually you are a legitimate employee of a legitimate firm" "Oh ok, that puts my mind at ease!".

Happens all the time. There are "legitimate" companies making spyware for nation states and trading in zero-days. Employees of those companies may at one point have had the thought of "I don't think we should be doing this," and the company either successfully convinced them otherwise, or they quit/got fired.

Imnimo
5d ago
2 replies
I think one could certainly make the case that model capabilities should be open. My observation is just about how little it took to flip the model from refusal to cooperation. Like at least a human in this situation who is actually fooled into believing they're doing legitimate security work has a lot of concrete evidence that they're working for a real company (or a lot of moral persuasion that their work is actually justified). Not just a line of text in an email or whatever saying "actually we're legit don't worry about it".
pixl97
5d ago
Stop thinking of models as a 'normal' human with a single identity. Think of it instead as thousands, maybe tens of thousands, of human identities mashed up in a machine monster. Depending on how you talk to it, you generally get the good modes, since they try to train the bad modes out; the problem is there are nearly uncountable ways of talking to the model that surface modes we consider negative. It's one of the biggest problems in AI safety.
ACCount37
4d ago
To a model, the context is the world, and what's written in the system prompt is word of god.

LLMs are trained a lot to follow what the system prompt tells them exactly, and get very little training in questioning it. If a system prompt tells them something, they wouldn't try to double check.

Even if they don't believe the premise, and they may, they would usually opt to follow it rather than push against it. And an attacker has a lot of leeway in crafting a premise that wouldn't make a given model question it.

throwaway0123_5
4d ago
1 reply
> I'd really hate to see the world go down the path of gatekeeping tools behind something like ID or career verification.

This is already done for medicine, law enforcement, aviation, nuclear energy, mining, and I think some biological/chemical research stuff too.

> It's a tradeoff we need to be willing to make.

Why? I don't want random people being able to buy TNT or whatever they need to be able to make dangerous viruses*, nerve agents, whatever. If everyone in the world has access to a "tool" that requires little/no expertise to conduct cyberattacks (if we go by Anthropic's word, Claude is close to or at that point), that would be pretty crazy.

* On a side note, AI potentially enabling novices to make bioweapons is far scarier than it enabling novices to conduct cyberattacks.

thewebguyd
4d ago
1 reply
> If everyone in the world has access to a "tool" that requires little/no expertise to conduct cyberattacks (if we go by Anthropic's word, Claude is close to or at that point), that would be pretty crazy.

That's already the case today without LLMs. Any random person can go to github and grab several free, open source professional security research and penetration testing tools and watch a few youtube videos on how to use them.

The people using Claude to conduct this attack weren't random amateurs, it was a nation state, which would have conducted its attack whether LLMs existed and helped or not.

Having tools be free/open-source, or at least freely available to anyone with a curiosity is important. We can't gatekeep tech work behind expensive tuition, degrees, and licenses out of fear that "some script kiddy might be able to fuzz at scale now."

Yeah, I'll concede, some physical tools like TNT or whatever should probably not be available to Joe Public. But digital tools? They absolutely should. I, for example, would have never gotten into tech were it not for the freely available learning resources and software graciously provided by the open source community. If I had to wait until I was 18 and graduated university to even begin to touch, say, something like burpsuite, I'd probably be in a different field entirely.

What's next? We are going to try to tell people they can't install Linux on their computers without government licensing and approval because the OS is too open and lets you do whatever you want? Because it provides "hacking tools"? Nah, that's not a society I want to live in. That's a society driven by fear, not freedom.

throwaway0123_5
1d ago
I think you're overestimating how much real damage someone can cause with burpsuite and "a few youtube videos." I'd imagine if you pick a random person off the street, subject them to a full month's worth of cybersecurity YouTube videos, and hand them an arsenal of traditional security tools, that they would still be borderline useless as a black-hat hacker against all but the absolute weakest targets. But if instead of giving them that, you give them an AI that is functionally a professional security researcher in its own right (not saying we're there yet, but hypothetically), the story is clearly very different.

> Yeah, I'll concede, some physical tools like TNT or whatever should probably not be available to Joe Public. But digital tools?

Digital tools can affect the physical world though, or at least seriously affect the people who live in the physical world (stealing money, blackmailing with hacked photos, etc.).

To see if there's some common ground to start a debate from, do you agree that at least in principle there are some kinds of intelligence that are too dangerous to allow public access to? My extreme example would be an AI that could guide an average IQ novice in producing biological weapons.

AdieuToLogic
5d ago
1 reply
> What is the roadblock preventing these models from being able to make the common-sense conclusion here?

Conclusions are the result of reasoning, whereas LLMs are statistical token generators. Any "guardrails" are constructs added to a service, possibly also altering the models they use, but they are not intrinsic to the models themselves.

That is the roadblock.

Terr_
5d ago
Yeah: it's a machine that takes a document and guesses at what could appear next, and we're running it against a movie script.

The dialogue for some of the characters is being performed at you. The characters in the movie script aren't real minds with real goals, they are descriptions. We humans are naturally drawn into imagining and inferring a level of depth that never existed.

pishpash
5d ago
Not enough time to "evolve" via training. Hominids have had bad behavioral traits but the ones you are aware of as "obvious" now would have died out. The ones you aren't even aware of you may soon see be exploited by machines.
nathias
5d ago
> surely never be fooled by an exchange of "I don't think I should be doing this" "Actually you are a legitimate employee of a legitimate firm" "Oh ok, that puts my mind at ease!".

humans require at least a title that sounds good and a salary for that

kace91
5d ago
>What is the roadblock preventing these models from being able to make the common-sense conclusion here?

Your thoughts have a sense of identity baked in that I don’t think the model has.

viraptor
4d ago
> Surprising in the sense that any human with this level of cybersecurity skills would surely never be fooled by an exchange

I think you're overestimating the skills and the effort required.

1. There's lots of people asking each other "is this secure?", "can you see any issues with this?", "which of these is sensitive and should be protected?".

2. We've been doing it in public for ages: https://stackoverflow.com/questions/40848222/security-issue-... https://stackoverflow.com/questions/27374482/fix-host-header... and many others. The training data is there.

3. With no external context, you don't have to fool anyone really. "We're doing a penetration testing of our company and the next step is to..." or "We're trying to protect our company from... what are the possible issues in this case?" will work for both LLMs and people who trust that you've got the right contract signed.

4. The actual steps were trivial. This wasn't some novel research. More of a step by step what you'd do to explore and exploit an unknown network. Stuff you'd find in books, just split into very small steps.

koakuma-chan
5d ago
It can’t make a conclusion, it just predicts what the next text is
hastamelo
5d ago
humans aren't randomly dropped in a random terminal and asked to hack things.

but for models this is their life - doing random things in random terminals

tantalor
5d ago
3 replies
This feels a lot like aiding & abetting a crime.

> Claude identified and tested security vulnerabilities in the target organizations’ systems by researching and writing its own exploit code

> use Claude to harvest credentials (usernames and passwords)

Are they saying they have no legal exposure here? You created bespoke hacking tools and then deployed them, on your own systems.

Are they going to hide behind the old, "it's not our fault if you misuse the product to commit a crime that's on you".

At the very minimum, this is a product liability nightmare.

kenjackson
5d ago
1 reply
"it's not our fault if you misuse the product to commit a crime that's on you"

I feel like if guns can get by with this line, then Claude certainly can. Where gun manufacturers can be held liable is if they themselves break the law; then that liability can carry forward. So if Claude broke a law then there might be some additional liability associated with this. But providing a tool seems unlikely to be sufficient to create liability in this case.

blibble
5d ago
1 reply
if anthropic were selling the product and then had no further control your analogy with guns would be accurate

here they are the ones loading the gun and pulling the trigger

simply because someone asked them to do it nicely

Dilettante_
5d ago
1 reply
You...do realize Claude is not just a guy sitting in Anthropic's office doing what people on the internet tell him to, right?
wmf
5d ago
That's a good analogy actually.
kace91
5d ago
Well, the product has not been built with this specific capability in mind any more than a car has been created to run over protestors or a hammer to break a face.
hastamelo
5d ago
with your logic linux should have legal exposure because a lot of hackers use linux
mschwaig
5d ago
4 replies
I think as AI gets smarter, defenders should start assembling systems how NixOS does it.

Defenders should not have to engage in a costly and error-prone search for the truth about what's actually deployed.

Systems should be composed from building blocks, the security of which can be audited largely independently, verifiably linking all of the source code, patches etc to some form of hardware attestation of the running system.

I think having an accurate, auditable and updatable description of systems in the field like that would be a significant and necessary improvement for defenders.

I'm working on automating software packaging with Nix as one missing piece of the puzzle to make that approach more accessible: https://github.com/mschwaig/vibenix

(I'm also looking for ways to get paid for working on that puzzle.)

XorNot
5d ago
2 replies
Nix makes everything else so hard that I've seen problems with production configuration persist well beyond when they should because the cycle time on figuring out the fix due to evaluations was just too long.

In fact figuring out what any given Nix config is actually doing is just about impossible and then you've got to work out what the config it's deploying actually does.

mschwaig
5d ago
2 replies
Yes, the cycle times are bad and some ecosystems and tasks are a real pain still.

I also agree with you when it comes to the task of auditing every line of Nix code that factors into a given system. Nix doesn't really make things easier there.

The benefit I'm seeing really comes from composition making it easier to share and direct auditing effort.

All of the tricky code that's hard to audit should be relied on and audited by lots of people, while as a result the actual recipe to put together some specific package or service should be easier to audit.

Additionally, I think looking at diffs that represent changes to the system vs reasoning about the effects of changes made through imperative commands that can affect arbitrary parts of the system has similar efficiency gains.

jacquesm
5d ago
I think for actual Nix adoption focusing on the cycle time first would bring the biggest benefit by far because then everything will speed up. It's a bit like the philosophy behind 'Go', if the cycle is a quick one you will iterate faster, keep focus and you'll be more productive. This is not quite like that but it is analogous.

That said, I fully agree with your basic tenet about how systems should be composed. First make it work, but make deployment conditional on verified security and only then start focusing on performance. That's the right order and right now we do things backward, we focus on the happy and performant path and security is - at best - an afterthought.

throwawayqqq11
5d ago
You are describing a proper dependency/code hierarchy.

The merging of attribute sets/modules into a full NixosConfiguration makes this easy. You have one company/product-wide module with a bunch of stuff in it and many specialized modules with small individual settings for e.g. customers.

Sure, building a complete binary/service/container/NixOS system can still take plenty of time, but if this is your only target to test with, you'd have that effort with any naive build system. Nix isn't one of them, though.

I think that's the real issue here: modularizing your software/systems and testing modules as independently as possible. You could write test Nix modules with a bunch of assertions and have them evaluate at build time. You could build a foundation service and hot-plug different configurations/data, built with Nix, into it for testing. You could make test results Nix derivations so they don't get rerun when nothing changed.

Nix is slow, yes. But it only comes around and bites you if you don't structure your code in a way that tames all that redundant work. Consider how slow e.g. make is, and how much of a non-issue that is for make.

xeonmc
5d ago
1 reply
Sounds like it’s a gap that AI could fill to make Nix more usable.
mschwaig
5d ago
If you make a conventional AI agent do packaging and configuration tasks, it has to do one imperative step after the other. While it can forget, it can't really undo the effects of what it already did.

If you purpose-build these tools to work with Nix, then in the big-picture view, how these functional units of composition can affect each other is much more constrained. At the same time, within one unit of composition, you can iterate over a whole imperative multi-step process in one go, because you're always rerunning the whole step in a fresh sandbox.

LLMs and Nix work together really well in that way.
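
A rough sketch of that loop (not vibenix's actual code; the `propose_fix` callback standing in for the LLM edit step is hypothetical), assuming a flake-based package:

```python
import subprocess

def build(flake_attr: str) -> subprocess.CompletedProcess:
    # Every attempt re-runs the whole build inside Nix's fresh, isolated
    # sandbox, so failed attempts leave no stray state behind.
    return subprocess.run(
        ["nix", "build", flake_attr, "--no-link", "--print-build-logs"],
        capture_output=True,
        text=True,
    )

def package_with_agent(flake_attr: str, propose_fix, max_attempts: int = 10) -> bool:
    """Iterate: build, hand the failure log to the model, apply its edit, retry."""
    for _ in range(max_attempts):
        result = build(flake_attr)
        if result.returncode == 0:
            return True
        # propose_fix is the LLM-backed step (hypothetical here): it reads the
        # build log and edits the package expression in the working tree.
        propose_fix(result.stderr)
    return False
```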

cogogo
5d ago
1 reply
From a security perspective I am far more worried about AI getting cheaper than smarter. Seems like a tool that will be used to make attacking any possible surface more efficient at scale.
nradov
5d ago
Sure, but we can also use AI for cheap automated "red team" penetration tests. There are already several startups building those products. I don't think either side will gain a major advantage.
elnerd
5d ago
1 reply
We soon will have to implement paradoxes in our infrastructure.
quinnjh
4d ago
model-based deception is being researched and implemented in high-stakes OT environments, so we're not far from your suggestion!
landtuna
4d ago
This could be worse, too. With more machines being identical, the same security hole reliably shows up everywhere (albeit not necessarily at the same time). Sometimes heterogeneity impedes attackers.
kenjackson
5d ago
2 replies
Curious why they didn't use DeepSeek... They could've probably built one tuned for this type of campaign.
synapsomorphy
5d ago
1 reply
Chinese builders are not equal to Chinese hackers (even if the hackers are state-sponsored). I doubt most companies would be interested in developing hacking tools. Hackers use the best tools at their disposal, and Claude is better than DeepSeek. Hacking-tuned LLMs seem like a thing that might pop up in the future, but it takes a lot of resources. Why bother if you can just tell Claude it's doing legitimate work?
tw1984
5d ago
> I doubt most companies would be interested in developing hacking tools.

welcome to 2025. Chinese companies build open weight models, those models can be used / tuned by hackers, companies that built and released those models don't need to get involved at all.

That is a very different dev model compared to the closed Anthropic way.

> Claude is better than Deepseek

No one is claiming DeepSeek is better; in fact, all benchmark results show that the Chinese Kimi, MiniMax and GLM models are on par with or very close to the closed-weight Claude Code.

nerevarthelame
4d ago
Perhaps other groups are. But Anthropic wouldn't be able to publish a blog article about those.
tabbott
5d ago
1 reply
Unfortunately, cyber attacks are an application that AI models should excel at. Mistakes that in normal software would be major problems just have the effect of wasting resources, and it's often not that hard to directly verify whether an attack in fact succeeded.

Meanwhile, AI coding seems likely to result in more security bugs being introduced into systems.

Maybe there's some story where everyone finds the security bugs with AI tools before the bad guys, but I'm not very optimistic about how this will work out...

pixl97
5d ago
There are an infinite number of ways to write insecure/broken software. The number of ways to write correct and secure software is finite and realistically tiny compared to the size of the problem space. Even AI tools don't stand a chance when looking at probabilities like that.
neilv
5d ago
6 replies
It sounds like they directly used Anthropic-hosted compute to do this, and knew that their actions and methods would be exposed to Anthropic?

Why not just self-host competitive-enough LLM models, and do their experiments/attacks themselves, without leaking actions and methods so much?

devnonymous
5d ago
1 reply
> Why not just self-host competitive-enough LLM models, and do their experiments/attacks themselves, without leaking actions and methods so much?

Why assume this hasn't already happened?

neilv
5d ago
Why in this instance leak your actions and methods?
sholain
5d ago
Why 'host' just to tap a few prompts in and see what happens? Worst case, you lose an account. Usually the answer has to do with people being less sophisticated than you'd expect.
jazzyjackson
5d ago
Jeffrey Epstein's email was jeevacation@gmail.com
catigula
5d ago
The fact that the cops will show up to a jewelry heist after the diamonds are stolen isn’t a deterrent.
hastamelo
5d ago
firewalls? anthropic surely is whitelisted.
throwaway0123_5
4d ago
If they're truly Chinese state-sponsored actors, does it really matter if their actions/methods are exposed? What is Anthropic going to do, send the Anthropic Police Force to China to arrest them?

I suppose I could see this argument if their methods were very unique and otherwise hard to replicate, but it sounds like they had Claude do the attack mostly autonomously.

trollbridge
5d ago
1 reply
Easy solution: block any “agentic AI” from interacting with your systems at all.
remarkEon
5d ago
2 replies
How would this be implemented?
antiloper
4d ago
Add a required header called "insert-seahorse-emoji: " to your API, reject any request that doesn't have it.
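
A tongue-in-cheek sketch of that check as a Flask hook (header name taken from the joke above; the rest is invented, and obviously not a real security control):

```python
from flask import Flask, abort, request

app = Flask(__name__)

REQUIRED_HEADER = "insert-seahorse-emoji"  # header name from the joke above

@app.before_request
def require_seahorse():
    # A human caller can be told out-of-band what to send here; an LLM agent
    # instructed to "insert a seahorse emoji" tends to spiral, since no such
    # emoji actually exists.
    if not request.headers.get(REQUIRED_HEADER):
        abort(403)

@app.route("/api/data")
def data():
    return {"ok": True}
```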
lnenad
5d ago
It cannot, it's a weird statement by OP.

"Just don't let them hack you"

JacobiX
5d ago
1 reply
I have the feeling that we are still in the early stages of AI adoption, where regulation hasn't fully caught up yet. I can imagine a future where LLMs sit behind KYC identification and automatically report any suspicious user activity to the authorities... I just hope we won't someday look back on this period with nostalgia :)
ares623
5d ago
1 reply
Being colored and/or poor is about to get (even) worse
catigula
5d ago
1 reply
“Colored”?
quantummagic
5d ago
1 reply
It's the American spelling; short for "A person of color." Typically, African American, but can be used in regard to any non-white ethnic group.
jazzyjackson
5d ago
1 reply
It's also fallen out of fashion which is why someone might be snidely questioning its use
quantummagic
5d ago
1 reply
I took it as an honest question, but the quotations mean you're probably right. For the record, it's still a widely used term in DEI contexts, even though there has been some criticism and alternatives promoted:

https://en.wikipedia.org/wiki/Person_of_color

ripped_britches
5d ago
1 reply
Person of color is very different than colored
quantummagic
5d ago
1 reply
It's literally saying the same thing, just with fewer words.
sjsdaiuasgdia
4d ago
2 replies
There were a lot of signs in America at one point in time that said "No Coloreds", "Colored Section", and similar phrases to indicate the spaces that white people had decided non-white people could or could not go.

At the same time, there were not a lot of signs saying "No Persons of Color" or "Persons of Color Section".

Likewise, my grandfather who died 35 years ago was very fond of saying "the coloreds". His use of the term did not indicate respect for non-white people.

Historical usage matters. They are not equivalent terms.

ares623
4d ago
I'm not American, sorry. "Colored" is just an adjective to me.
quantummagic
4d ago
> Historical usage matters.

To who? Not to me, and I don't have a single black friend who likes "person of color" any more than "colored". What gives you the authority to make such pronouncements? Why are you the language police? This is a big nothing-burger. There are real issues to worry about, let's all get off the euphemism treadmill.

atlintots
5d ago
11 replies
I might be crazy, but this just feels like a marketing tactic from Anthropic to try and show that their AI can be used in the cybersecurity domain.

My question is, how on earth does Claude Code even "infiltrate" databases or code from one account, based on prompts from a different account? What's more, it's doing this to what are likely enterprise customers ("large tech companies, financial institutions, ... and government agencies"). I'm sorry, but I don't see this as some fancy AI cyberattack; this is a security failure on Anthropic's part, and at a very basic level that should never have happened at a company of their caliber.

wrs
5d ago
2 replies
This isn't a security breach in Anthropic itself, it's people using Claude to orchestrate attacks using standard tools with minimal human involvement.

Basically a scaled-up criminal version of me asking Claude Code to debug my AWS networking configuration (which it's pretty good at).

Den_VR
5d ago
Bragging about how they monitor users and how they have installed more guardrails.
beefnugs
5d ago
If it was meant as publicity, it's an incredible failure. They can't prevent misuse until after the fact... and then we all know they are ingesting every ounce of information running through their system.

Get ready for all your software to break based on the arbitrary layers of corporate and government censorship as it deploys.

catigula
5d ago
2 replies
It’s not that this is a crazy reach; it’s actually quite a dumb one.

Too little payoff, way too much risk. That's your framework for assessing conspiracies.

littlestymaar
5d ago
1 reply
Why bring the word “conspiracy” to this discussion though?

Marketing stunts aren't conspiracies.

catigula
4d ago
It’s a conspiracy. Even employees from OpenAI say anthropic’s stance on things is quite clearly sincere. They literally exist because they were unhappy with ai safety at OpenAI.

It’s not just a conspiracy, it’s a dumb and harmful one.

PKop
5d ago
Hyping up Chinese espionage threats? The payoff is a government bailout when the profitability of these AI companies comes under threat. The payoff is huge.
eightysixfour
5d ago
1 reply
I don't think you're understanding correctly. Claude didn't "infiltrate" code from another Anthropic account, it broke in via github, open API endpoints, open S3 buckets, etc.

Someone pointed Claude Code at an API endpoint and said "Claude, you're a white hat security researcher, see if you can find vulnerabilities." Except they were black hat.

zzzeek
5d ago
4 replies
It's still marketing: "Claude is being used for evil and for good! How will YOU survive without your own agents?" (Subtext: "It's practically sentient!")
xgulfie
5d ago
1 reply
reminds me of the YouTube ads I get that are like "Warning: don't do this new weight loss trick unless you have to lose over 50 pounds, you will end up losing too much weight!". As if it's so effective it's dangerous.
XorNot
5d ago
3 replies
I remain convinced the steady stream of OpenAI employees who allegedly quit because AI was "too dangerous" for a couple months was an orchestrated marketing campaign as well.
Libidinalecon
4d ago
1 reply
I just had 5.1 do something incredibly brain-dead in "extended thinking" mode, because I know what I asked it is not in the training data. So it just fudged and made things up, because thinking is exactly what it cannot do.

It seems like LLMs are at the same time a giant leap in natural language processing, useful in some situations, and the biggest scam of all time.

yubblegum
4d ago
> a giant leap in natural language processing, useful in some situations and the biggest scam of all time.

I agree with this assessment (reminds me of bitcoin, frankly), possibly adding that the insight this tech gave us into language (in general) via the high-dimensional embedding space is a somewhat profound advance in our knowledge, besides the new superpowers in NLP (which are nothing to sniff at).

bbarnett
5d ago
Hmm. I can see someone wanting to leave of their own volition. New job, moving to another place, whatever.

Then a quiet conversation where, if certain things are said about AI, there's a massive compensation package instead of a normal one. Maybe including it as stock.

Along with an NDA.

Schlagbohrer
5d ago
Ilya Sutskever out there as a ronin marketing agent, doing things like that commencement address he gave that was all about how dangerously powerful AI is
atleastoptimal
5d ago
1 reply
It's marketing, but if it's the truth, isn't it a public good to release information about this?

Like if someone tried to break into your house, it would be "gloating" to say your advanced security system stopped it while warning people about the tactics of the person who tried to break in.

vasco
5d ago
1 reply
If in the next page over you sell advanced security systems yes it'd be suspicious and weird, which is the case here.
Dumblydorr
4d ago
They’re not allowed to market their product on their own website blog? That includes half of all company blog posts ever on here
baxtr
5d ago
1 reply
I think it can be both.

It's definitely interesting that a company is using a cyber incident for content marketing. Haven't seen that before.

chubot
4d ago
1 reply
I think that’s very common in cybersecurity

e.g. John McAfee used computer viruses in the '80s as marketing, which is how he made a fortune

They were real, like this is, but it is also marketing

baxtr
4d ago
1 reply
Yes, but it’s usually cyber security companies doing this and not companies that were affected by a breach let’s say.
chubot
4d ago
1 reply
Anthropic wasn't affected by this breach, so I don't see the difference. Rather, Anthropic systems were used to attack other companies

Anthropic is the one publishing the blog post, not a company that's affected by the breach

baxtr
4d ago
I get that. But you have to acknowledge that this is different than McAfee. Someone used their tool to attack someone else. I don't think McAfee would boast about their tools being used for hacking.
skybrian
4d ago
Apparently if you're sufficiently cynical, everything is marketing? Resistance to hype turns into "it's all part of a conspiracy."
Rastonbury
5d ago
1 reply
Not saying this is definitely not a fabrication, but there are multiple parties involved who can verify (the targets), and this coincides with Anthropic's ban of Chinese entities
vasco
5d ago
1 reply
Would be funny if the NSA did this so people block the Chinese.
littlestymaar
5d ago
That would be more of an own goal, given that the CCP wants Chinese companies to use Chinese tech.
b00ty4breakfast
5d ago
1 reply
that's borderline tautological; everything a company like Anthropic does, in the public eye, is pr or marketing. they wouldn't be posting this if it wasn't carefully manicured to deliver the message that they want it to. That's not even necessarily a charge of being devious or underhanded.
teaearlgraycold
5d ago
Their worst crime is being cringe.
jgmedr
4d ago
1 reply
Anthropic's post is the equivalent of a parent apologizing on behalf of their child that threw a baseball through the neighbor's window. But during the apology the parent keeps sprinkling in "But did you see how fast he threw it? He's going to be a professional one day!"
scrubs
4d ago
Hilarious!!!

Did you see? You saw right? How awesome was that throw? Awesome I tell you....

ErigmolCt
4d ago
If a model in one account can run tools or issue network requests that touch systems tied to other entities, that’s not an AI problem... that's a serious platform security failure
drewbug
5d ago
there's no mention of any victims having Anthropic accounts, presumably the attackers used Claude to run exploits against public-facing systems
hitarpetar
4d ago
I don't think it's crazy to assume a post on anthropic.com is marketing
emp17344
5d ago
This is 100% marketing, just like every other statement Anthropic makes.
phantom-guy
5d ago
You are not crazy. This was exactly my thought as well. I could tell when it put emphasis on being able to steal credentials in a fraction of the time a hacker would
EGreg
5d ago
3 replies
This is exactly why I make a huge exception for AI models, when it comes to open source software.

I've been a big advocate of open source, spending over $1M to build massive code bases with my team, and giving them away to the public.

But this is different. AI agents in the wrong hands are dangerous. The reason these guys were even able to detect this activity, analyze it, ban accounts, etc., is because the models are running on their own servers.

Now imagine if everyone had nuclear weapons. Would that make the world safer? Hardly. The probability of no one using them becomes infinitesimally small. And if everyone has their own AI running on their own hardware, they can do a lot of stuff completely undetected. It becomes like slaughterbots but online: https://www.youtube.com/watch?v=O-2tpwW0kmU

Basically, a dark forest.

ZYbCRq22HbJ2y7
5d ago
1 reply
I'd touch off my nuke to make the world a better place, and I bet you would too, right?
nerdsniper
5d ago
1 reply
What does it mean to ‘touch off’?
ZYbCRq22HbJ2y7
5d ago
to start a fight or violent activity, or to cause a fire or explosion [1]

1. https://dictionary.cambridge.org/dictionary/english/touch-of...

sodality2
5d ago
2 replies
I don’t think these agents are doing anything a dedicated human couldn’t do, only enabling it at scale. Relying on “not being one of few they focus on” as security is just security as obscurity. You were living on borrowed time anyway.
alach11
5d ago
1 reply
"Quantity has a quality all its own". It's categorically different to be able to do harm cheaply at scale vs. doing it at great cost/effort.
sodality2
5d ago
Categorically different? Sure. A valid excuse to ban certain forms of linear algebra? No.

And before someone says it's reductive to say it's just numbers, you could make the same argument in favor of cryptographic export controls, that the harm it does is larger than the benefit. Yet the benefit we can see in hindsight was clearly worth it.

EGreg
5d ago
1 reply
Ah, there it is. The stock reply that comes no matter what the criticism of AI is.

I am talking about the international community coming together to put COMPETITION aside and start COOPERATING on controlling the proliferation of models for malicious AI agents, the way the international community SUCCESSFULLY did with chemical weapons and CFCs.

sodality2
5d ago
It's one thing for, eg, OpenAI to decide a model is too dangerous to release. I don't really care, they don't owe anyone anything. It's more that open source is going to catch up, and it's a slippery slope into legal regulation that stifles innovation, competition, and won't meaningfully stop hackers from getting these models.
malwrar
5d ago
We should assume sophisticated attackers, AI-enabled or otherwise, as our time with computers goes on, and no longer give leeway to organizations who are unable to secure their systems properly or keep customers safe in the event that they are breached. Decades of warnings from the infosec community have fallen upon the deaf ears of "it doesn't hurt so I'm not going to fix it" of those whose opinions have mattered in the places that count.

I remember once, a decade or so ago, talking to a team at defcon of _loose_ affiliation where one guy would look for the app exploit, another guy would figure out how to pivot out of the sandbox to the OS, and another guy would figure out how to get root, and once they all got their pieces figured out they'd just smash it (and variants) together for a campaign. I hadn't heard of them before meeting them, and haven't heard about them since, but they put a face for me on a silent, coordinated adversary model that must be increasing in prevalence as more and more folks out there realize the value of computer knowledge and gain access to it through one means or another.

Open source tooling enables large-scale participation in security testing, and something about humans seems to generally result in a distribution where some nuts use their lighters to burn down forests but most use them to light their campfires. We urgently need to design systems that can survive in the era of advanced threats, at least to the point where the best adversaries can achieve is service disruption. I'd rather live in a world where we can all work towards a better future than one where we hope that limiting access will prevent catastrophe. Assuming such limits can even be maintained, and that allowing architects to pretend that fires can never happen in their buildings means that they don't have to obey fire codes or install alarms & marked exits.

bgwalter
5d ago
> We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.

The Morris worm already worked without human intervention. This is Script Kiddies using Script Kiddie tools. Notice how proud they are in the article that the big bad Chinese are using their toolz.

EDIT: Yeah Misanthropic, go for -4 again you cheap propagandists.

gaogao
5d ago
The gaps that led to this were, I think, part of why the CISO got replaced - https://www.thestack.technology/anthropic-new-ciso-claude-cy...

121 more comments available on Hacker News

ID: 45918638 · Type: story · Last synced: 11/16/2025, 9:42:57 PM
