11/16/2025, 11:32:39 AM

Anthropic’s paper smells like bullshit

750 points · 229 comments

Mood: skeptical
Sentiment: negative
Category: tech
Key topics: AI, cybersecurity, research critique
Debate intensity: 85/100
Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)

The author criticizes Anthropic's paper on AI-orchestrated cyber espionage, questioning its validity and methodology.

Snapshot generated from the HN discussion

Discussion Activity

Light discussion
First comment: 15h after posting
Peak period: 5 comments (Day 1)
Avg / period: 3.5
Comment distribution: 7 data points (based on 7 loaded comments)

Key moments

  1. Story posted — 11/16/2025, 11:32:39 AM (2d ago)
  2. First comment — 11/17/2025, 2:24:46 AM (15h after posting)
  3. Peak activity — 5 comments in Day 1 (hottest window of the conversation)
  4. Latest activity — 11/17/2025, 3:18:17 PM (1d ago)


Discussion (229 comments)
Showing 7 of 229 comments
coldtea
2d ago
If you want to justify asking for post-AI-crash trillion dollar bailouts, what's better than on grounds of "national interest"?
miohtama
2d ago
Anthropic portrays itself as an AI safety company. Their stock price and funding rounds depend on this. Publishing AI safety research, even if it's bullshit, is then what they do: to downplay competitors and the Chinese in pursuit of regulatory capture.
makaking
2d ago
I agree that these reports should be verifiable and provide more details about the method and how to protect your own network. Even more so if they want to be heard by serious security teams.

However, regardless of the sloppy report, this is absolutely true.

>"Security teams should experiment with applying AI for defense in areas like SOC automation, threat detection, vulnerability assessment, and incident response and build experience with what works in their specific environments."

... And it will be more so with every week that goes by. We are entering a new domain and security experts need to learn how to use the tools of the attackers.

guluarte
2d ago
Those papers are marketing campaigns and should be seen as such.
chaos_zhang
2d ago
The founders of Anthropic previously worked at Baidu, a Chinese tech company. I hope their perspective on China is based on rational analysis rather than personal grievances. Unfortunately, judging from this paper, I am inclined to believe it is the latter.
HardwareLust
1d ago
Everything Anthropic does smells like bullshit; it's just a question of degree.
spacecadet
1d ago
Yaaaawn. If you know you know. This is script kiddy child's play with LLMs relative to security... My team is winning CTFs with fully local/distributed/private LLMs and automated agents. We deploy advanced AI honeypots and "chaos agents" using game theory orchestration and other cutting edge research. Anthropic isn't even on the radar relative to this. Microsoft/OpenAI are light years ahead given their proximity to gov/MIL... Adversarial machine learning is a fascinating area of study and practice, and relatively quiet when it comes to hype.

222 more comments available on Hacker News

ID: 45944296 · Type: story · Last synced: 11/16/2025, 9:42:57 PM
