Anthropic Is Endorsing SB 53
Posted 4 months ago · Active 4 months ago
anthropic.com · Tech · story
heated · negative · Debate · 80/100
Key topics
AI Regulation
Anthropic
SB 53
Anthropic is endorsing SB 53, a bill that regulates AI development, sparking controversy and accusations of regulatory capture among HN commenters.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
- First comment: 43m after posting
- Peak period: 20 comments in 0-2h
- Avg per period: 7.3
- Comment distribution: 44 data points (based on 44 loaded comments)
Key moments
- 01 Story posted: Sep 9, 2025 at 5:01 PM EDT (4 months ago)
- 02 First comment: Sep 9, 2025 at 5:44 PM EDT (43m after posting)
- 03 Peak activity: 20 comments in 0-2h (hottest window of the conversation)
- 04 Latest activity: Sep 10, 2025 at 12:42 PM EDT (4 months ago)
ID: 45189053 · Type: story · Last synced: 11/20/2025, 12:47:39 PM
Curious how all of the employees who professed similar sentiments, EA advocacy, etc. justify their work now. A paycheck is a paycheck, sure, but when you're already that well-off, the rest of the world will see you for what you really are *shrug*.
I am sure they will take that in stride while wiping their tears with wads of crisp new hundreds
That, and "ethically dubious" undersells genocide enablers' transgressions.
It's just a vibe-check heuristic: if the regulated party throws a tantrum about how switching to USB-C charging or opening up the app store will put them out of business (spoiler: it never does), the regulation is probably a good one; if the regulated party cheers it on, it may be meant to stifle competition.
The opposite is true with certain countries: whenever you hear one loudly insisting that "sanctions don't hurt at all and only make us stronger," you know it hurts.
This is a very specific form of regulation, and one that very clearly only benefits incumbents with (vast sums of) previous investment. Anthropic is advocating applying "regulation-for-thee, but not for me."
FWIW, executive orders do not have the force of law. The official name is still the Department of Defense; "Department of War" is now merely an acceptable alternative.
To officially change the name requires an act of Congress.
I would say it's the other way around: as recent events show, Defense is the only department everyone should be glad to collaborate with.
Or do you mean that collaborating only with the Pentagon is hypocrisy, but not with other countries' defense departments?
That's kinda their marketing. "we've tamed this hyperintelligent genie that could wipe us all out, imagine what it could do for your cold emails!"
And it’s also market segmentation: they need to separate themselves from the others, and want to be the de-facto standard when people are looking for “safe” AI.
Develop technology to monitor user interactions. They're already doing this anyway [0].
> Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
Share user spy logs with the state. Again, already doing this anyway [0].
I guess the attitude is, if we're going to spy on our users, everyone needs to spy on their users? Then the lack of privacy isn't a disadvantage but just the status quo.
[0] https://www.anthropic.com/news/detecting-countering-misuse-a...
Suspecting a company of acting in its own profit-enhancing interest is borderline tautological.
Like, what if they held that opinion before they built the company? If you saw evidence of that (as is the case with Anthropic), would that convince you to reconsider your judgement? Surely some people support regulatory frameworks some of the time, and unless they banned themselves from every related industry, those might be frameworks they one day become subject to.
> Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
> Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.
So it's just a bunch of useless bureaucracy acting as a moat against competition. The current generation of models is nowhere close to being capable of producing any sort of catastrophic outcome.
Not sure why you would be opposed to whistleblower protections
Obviously it's good for them if things are regulated, but bad for the rest of us.