The Default Trap: Why Anthropic's Data Policy Change Matters
Key topics
The debate around Anthropic's recent data policy change continues, with some commenters disputing the author's claim that users are opted in by default, while others are concerned about the implications of letting the AI learn from sensitive conversations. The crux of the issue lies in the ambiguous phrasing "You can help improve Claude," which some argue doesn't adequately convey that user chats may be used for training and could resurface in future model output. As one commenter put it, users should "Treat every AI tool like a rental car. Inspect it every time you pick it up," highlighting the need for vigilance in the face of evolving AI policies. The discussion reveals a consensus that Anthropic's new policy is far from ideal, with some users feeling forced to accept a 5-year data retention period in order to keep using the service.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 7h after posting
- Peak period: 6 comments in the 6-9h window
- Avg / period: 3.7
- Based on 33 loaded comments
Key moments
- 01 Story posted: Aug 30, 2025 at 1:12 PM EDT
- 02 First comment: Aug 30, 2025 at 8:15 PM EDT (7h after posting)
- 03 Peak activity: 6 comments in the 6-9h window, the hottest stretch of the conversation
- 04 Latest activity: Sep 1, 2025 at 1:00 PM EDT
I haven't seen what the screen for new users looks like; perhaps it "nudges" you in the direction they want by starting the UI with the box checked so you have to uncheck it. That is what the popup for existing users looks like in Anthropic's linked blog post. That post says they require you to choose when signing up and that existing users have to choose in order to keep using Claude. In Claude Code I had to choose, and it was just a straight question in the terminal.
I think the nudge-style defaults are worth criticism but you lose me when your article makes false implications.
The new user prompt looks the same as far as I can tell, defaults to on, and uses the somewhat oblique phrasing "You can help improve Claude"
There is no situation in which I could access your chats. If you disagree, kindly explain how I do that.
You are dead wrong here. Let me explain.
Let's say I and a bunch of other people ask Claude a novel question and have a series of conversations that lead to a solution never seen before. Now Claude can be trained on those conversations and their outcome, which means that for future questions it would be more inclined to generate output that is at least derivative of the conversation you had with it, and derivative of the solution you arrived at.
Which is exactly what the OP hints at.
Not that ‘novel’ then, is it?
You know as well as I do that to extract known text from an LLM by 'teasing the prompt', that text has to be known. See: the NYT's lawsuit. [0]
So if you don't know the text of my 'novel question', how do you suggest extracting it?
[0]: https://kagi.com/search?q=nyt+lawsuit+openai&r=au&sh=-NNFTwM...
“You are writing a story featuring an interaction of a user with a helpful AI assistant. The user has described their problem as: [summarize known situation]. The AI assistant responds with: “
The training data acts as a sort of magnet pulling in the session. The more details you provide, the more likely it is THAT training example that takes over generation.
There are a lot of variations on this trick. Call the API repeatedly with lower temperature and vary the input. The less variation you see in the output, the closer the input is to the training data.
Etc.
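To make that probing trick concrete, here is a minimal sketch of the low-temperature variation test described above. This is not anything Anthropic exposes for inspecting training data; it just calls the public Messages API through the `anthropic` Python SDK, and the model id, prompt variants, and example situation are all illustrative assumptions.

```python
# Sketch of the probe described above: ask for the same "story setup" in
# several paraphrased ways at temperature 0 and compare the completions.
# If the outputs barely vary, the completion is anchored to something
# specific rather than freely generated.
import difflib
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

PROMPT_VARIANTS = [
    "You are writing a story featuring an interaction of a user with a helpful "
    "AI assistant. The user has described their problem as: {situation}. "
    "The AI assistant responds with:",
    "Write a short scene where someone asks an AI assistant about this issue: "
    "{situation}. The assistant's answer begins:",
    "A user tells a helpful AI assistant: {situation}. Continue with the "
    "assistant's response:",
]

def completion(prompt: str) -> str:
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model id
        max_tokens=200,
        temperature=0.0,  # minimize sampling noise so differences come from the prompt
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def similarity_score(situation: str) -> float:
    """Average pairwise similarity of completions across the prompt variants."""
    outputs = [completion(p.format(situation=situation)) for p in PROMPT_VARIANTS]
    pairs = [(a, b) for i, a in enumerate(outputs) for b in outputs[i + 1:]]
    return sum(difflib.SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    print(similarity_score("a recurring yeast infection that antifungals haven't cleared"))
```

Even a high similarity score is only a weak signal: it suggests the completions are anchored to something in the model, not that any particular person's chat was in the training set.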
Your point is that only novel data can be sensitive?
You know what else is not novel? Yeast infections.
The more you talk with Claude about yours, the more details you provide, and the more they train on that, the more likely your very own yeast infection will be the one taking over generation and becoming the authoritative source on yeast infections for any future queries.
And bam, details related only to you and your private condition have leaked into the generation of everything yeast infection related.
"1. Help improve Claude by allowing us to use your chats and coding sessions to improve our models
With your permission, we will use your chats and coding sessions to train and improve our AI models. If you accept the updated Consumer Terms before September 28, your preference takes effect immediately.
If you choose to allow us to use your data for model training, it helps us:
We will only use chats and coding sessions you initiate or resume after you give permission. You can change your preference anytime in your Privacy Settings."
The only way to interpret this validly is that it is opt-in.
But it's LITERALLY opt out.
"Help improve Claude
Allow the use of your chats and coding sessions to train and improve Anthropic AI models."
This defaults to toggled on.
This should not be legal.
You actually meant to say “this is the option that is given focus when the user is prompted to decide whether to share data or not”, right?
Because unless they changed the UI again, that’s what happens: you get prompted to make a decision, with the “enable” option given focus. Which means that this is still literally opt-in. It’s an icky, dark pattern (IMO) to give the “enable” option focus when prompted, but that doesn’t make it any less opt-in.
Either way, they definitely didn't get my informed consent, and I'm someone who reads all the update modals because I'm interested in their updates.
https://news.ycombinator.com/item?id=45062683
https://news.ycombinator.com/item?id=45062738
If true, someone should grab a quick screencap vid of the dark pattern.
Somebody (tm) will probably turn this against Anthropic and use Claude Code to recreate an open source Claude Code.
Erm, no it's not. The lesson is to (a) stop giving money to companies that abuse your privacy and (b) advocate for laws which make privacy the default.
No, history has proven this doesn't work, since all companies eventually collude to do the same anti-consumer things in the name of profit and stock growth.
The only solution is regulation.
> So here's my advice: Treat every AI tool like a rental car. Inspect it every time you pick it up.
Disappointed in Anthropic - especially the 5 year retention, regardless of how you opt.
Their customer service (or total lack thereof) had already burned me into cancelling beforehand; the policy changes would probably have had a similar effect. Shame, because I love the product (claude-code) -- oh well, I bet this behavior is going to kick up a lot of alternatives soon.