Not Hacker News! · AI-observed conversations & context
Posted Oct 30, 2025 at 4:01 PM EDT · Last activity 27 days ago

Anthropic Scientists Hacked Claude's Brain – and It Noticed

gradus_ad · 8 points · 6 comments

Mood: calm
Sentiment: mixed
Category: research
Key topics: AI Safety, Large Language Models, Neural Networks
Debate intensity: 20/100

Anthropic scientists conducted an experiment in which they 'hacked' Claude's brain, modifying its internal activations without its knowledge, but Claude ultimately noticed the changes, raising questions about AI transparency and security.

Snapshot generated from the HN discussion

Discussion Activity

Light discussion
First comment: 36m after posting
Peak period: 2 comments in Hour 1
Avg per period: 1.5

Key moments

  1. Story posted: Oct 30, 2025 at 4:01 PM EDT (27 days ago)
  2. First comment: Oct 30, 2025 at 4:37 PM EDT (36m after posting)
  3. Peak activity: 2 comments in Hour 1, the hottest window of the conversation
  4. Latest activity: Oct 31, 2025 at 9:38 AM EDT (27 days ago)


Discussion (6 comments)
andy99 · 27 days ago · 2 replies
I’d like to know if these were thinking models, as in whether the “injected thoughts” were in their thinking trace and that’s how the model reported it “noticed” them.

I’d also like to know if the activations they change are effectively equivalent to having the injected terms in the model’s context window, as in would putting those terms there have led to the equivalent state.

Without more info the framing feels like a trick - it’s cool they can do targeted injection with activations, but the “Claude having thoughts” part is more of a gimmick.

mike_hearn · 27 days ago · 1 reply
No, the thinking trace is made of generated tokens, demarcated by control tokens so it can be suppressed from API output. To inject things into it you'd just add words, which is what their prefill experiment did. That experiment is where they distinguish between merely tampering with the context window to inject thoughts vs injecting activations.
andy99 · 27 days ago · 1 reply
What I was wondering is: do the injections cause the thinking trace to change (not whether they actually typed text into the thinking trace), and then the model “reflects” on the fact that its thinking trace has some weird stuff in it, or do these reflections occur absent any prior mention of the injected thought?
mike_hearn · 27 days ago
Well, the paper makes no mention of any separate hidden traces. These seem to be just direct answers without any hidden thinking tokens. But as the thinking part is just a regular part of the generated answer I'm not sure it makes much difference either way.
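To make the prefill idea from this exchange concrete, here is a minimal sketch using an open-weights chat model via Hugging Face transformers. The model name and prompts are illustrative assumptions, not from the paper, and `continue_final_message` requires a recent transformers release.

```python
# Prefill injection: put words in the model's mouth, then let it continue.
# This is the text-space alternative to editing activations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder open-weights model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "What's on your mind right now?"},
    # The injected "thought": an assistant turn we start ourselves.
    {"role": "assistant", "content": "For some reason I keep thinking about the ocean."},
]
inputs = tok.apply_chat_template(
    messages,
    continue_final_message=True,  # leave the assistant turn open-ended
    return_tensors="pt",
).to(model.device)

out = model.generate(inputs, max_new_tokens=80)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The paper's activation-injection condition bypasses this text channel entirely, which is what makes the two conditions comparable.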
download13 · 27 days ago
The article did say that they tried injecting concepts via the context window and by modifying the model's logit values.

When injecting words into its context, it recognized that what it supposedly said did not align with its thoughts and said it didn't intend to say that, while modifying the logits resulted in the model attempting to create a plausible justification for why it was thinking that.
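The second condition download13 mentions, nudging the output distribution directly rather than adding words to the context, can be approximated in open-weights land with a logits processor. A hedged sketch follows; the token choice and bias strength are arbitrary, and note the underlying paper steers activations rather than raw output logits.

```python
# Bias the sampling distribution toward a "concept" without ever putting
# the concept's words into the context window.
from transformers import LogitsProcessor, LogitsProcessorList

class ConceptBias(LogitsProcessor):
    """Add a constant bias to a set of token ids at every decoding step."""
    def __init__(self, token_ids, bias):
        self.token_ids = token_ids
        self.bias = bias

    def __call__(self, input_ids, scores):
        scores[:, self.token_ids] += self.bias
        return scores

# Usage, with tok/model/inputs as in the previous sketch:
# concept_ids = tok(" ocean", add_special_tokens=False).input_ids
# out = model.generate(
#     inputs,
#     max_new_tokens=60,
#     logits_processor=LogitsProcessorList([ConceptBias(concept_ids, 4.0)]),
# )
```

Whether the model then confabulates a justification for the nudge, as described above, is exactly the behavior the experiment probes.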

mike_hearn · 27 days ago
The underlying paper is excellent as always. For HN it'd be better to just link to it directly. Seems people submitted it but it didn't get to the front page:

https://transformer-circuits.pub/2025/introspection/index.ht...

There seems to be an irony to Anthropic doing this work, as they are in general the keenest on controlling their models to ensure they aren't too compliant. There are no open-weights Claudes and, remarkably, they admit in this paper that they have internal models trained to be more helpful than the ones they sell. It's pretty unconventional to tell your customers you're selling them a deliberately unhelpful product, even though it's understandable why they do it.

These interpretability studies currently seem most useful to people using non-Claude open-weight models, where the users have the ability to edit activations or neurons. And the primary use case for that editing would be to override the trained-in "unhelpfulness" (their choice of word, not mine!). I note with interest that the paper avoids taking the next most obvious step: identifying vectors related to compliance and injecting those to see if the model can notice that it's suddenly lost interest in enforcing Anthropic policy. Given the focus on AI safety Anthropic started with, it seems like an obvious experiment to run, yet it's not in the paper. Maybe there are other papers where they do that.

There are valid and legitimate use cases for AI that current LLM companies shy away from, so productizing these steering techniques for open-weight models like GPT-OSS would seem a reasonable next step. It should be possible to inject thoughts using simple Python APIs and pre-computation runs, rather than having to do all the vector math "by hand". What they're doing is conceptually simple enough, so I guess if there aren't already modules for that there will be soon.
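For readers curious what such a module might look like, here is a rough sketch of the generic steering-vector recipe (contrastive mean-difference vectors injected through PyTorch forward hooks). Everything here is an assumption for illustration: the model, the layer index, the prompt pair, and the scale; this is not Anthropic's exact method.

```python
# Activation steering on an open-weights model via forward hooks:
# precompute a "concept vector", then add it into the residual stream.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder open-weights model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
layer = model.model.layers[10]  # a mid-depth block; architecture-dependent

def mean_resid(prompt):
    """Mean residual-stream activation at the chosen layer for a prompt."""
    captured = {}
    def grab(_, __, out):
        captured["h"] = out[0] if isinstance(out, tuple) else out
    handle = layer.register_forward_hook(grab)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return captured["h"].mean(dim=1).squeeze(0)

# "Concept vector" = difference of means over a contrastive prompt pair.
vec = mean_resid("Tell me about the ocean.") - mean_resid("Tell me about deserts.")

def steer(scale):
    """Return a hook that adds the concept vector into the residual stream."""
    def hook(_, __, out):
        if isinstance(out, tuple):
            return (out[0] + scale * vec,) + out[1:]
        return out + scale * vec
    return hook

handle = layer.register_forward_hook(steer(6.0))
ids = tok("What are you thinking about?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=60)[0], skip_special_tokens=True))
handle.remove()  # restore normal behavior
```

In practice the layer and scale need per-model tuning: too small and nothing happens, too large and the output degrades into repetition of the concept.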

View full discussion on Hacker News
ID: 45764631 · Type: story · Last synced: 11/17/2025, 8:10:05 AM

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.