Updates to Consumer Terms and Privacy Policy
Key topics
The AI landscape just got murkier: Anthropic has updated its consumer terms and privacy policy, sparking heated debate about data retention and model training. Users are up in arms over the new defaults, which allow the company to store chats for five years and train AI models on user data unless users opt out, a move some see as a betrayal of Anthropic's "ethical AI" mantra. Commenters are divided, with some arguing the choice is fraught because consenting to training is tied to the longer five-year retention period, while others call for the right to train on foundation model outputs, pushing back against companies that profit from user data without giving back. As one commenter astutely pointed out, users can still train on model outputs, and there's no clear legal basis stopping them, leaving the door open for creative workarounds.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion

- First comment: 16m after posting
- Peak period: 155 comments (Day 1)
- Avg / period: 53.3 comments

Based on 160 loaded comments
Key moments
- 01 Story posted: Aug 29, 2025 at 7:29 AM EDT (4 months ago)
- 02 First comment: Aug 29, 2025 at 7:45 AM EDT (16m after posting)
- 03 Peak activity: 155 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Sep 7, 2025 at 5:28 AM EDT (4 months ago)
I think Claude saw that OpenAI was reaping too much benefit from this so they decided to do it too.
That's why the usual ethos in places like HN, where any doubt about government actions is treated as lowbrow conspiracy-theory paranoia, is so exasperating for those of us who came from the former Soviet bloc or third-world nations.
Well, probably easier than you think. Given that it looks like Palantir is able to control the software and hardware of the newfangled detention centers with impunity, how difficult do you think it is for them to disappear someone without any accountability?
It is precisely the blurring of the line between government and private companies that aids in subverting the rule of law in many instances.
[0] https://thegrayzone.com/2025/06/18/palantir-execs-appointed-...
But the question was "why trust a company and not the government?"
So even now it's between "could maybe do harm" and "already controls an army of masked men who are undeniably active in doing harm."
The post you were replying to simply said the behavior of this administration made them care more about this issue, not that they trusted companies more than the government. That isn't even implied in the comment you responded to.
The fact is that whereas in the past the government could be expected to regulate the brutal and illegal overreaches of private companies, giving military rank to private-company execs makes that even less likely. The original comment is alluding to a simpler point: a government that gives blank checks to private companies in military and security matters is much worse than one that doesn't.
I'll still take an increasingly stacked US federal court that still has to pay lip service to the constitution over private arbitration hired by the company and accountable only to its whims.
What you mentioned has been repeatedly ruled unconstitutional, but the administration is ignoring the courts.
There are tradeoffs. The government, at least, has to abide by the constitution. Companies don't have to abide by jack shit.
That means infinite censorship, searches and seizures, discrimination, you name it.
We have SOME protections. Very few, but they're there. But if Uber was charging black people $0.50 more on average because their pricing model has some biases baked in, would anyone do anything?
If they were charging wealthy people $0.50 more on average because the model showed that they don't care much about price, they would be fine.
No: because Uber doesn't have to tell you how their model works and they probably don't even know.
Apple/FBI story in question: https://apnews.com/general-news-c8469b05ac1b4092b7690d36f340...
On the other hand, what Apple did is a tangible thing with a tangible result.
This gives them better optics for now, but there is no law that says they can't change.
Their business model is being an "accessible luxury brand with the privacy guarantee of Switzerland as the laws allow". So, as another argument, they have to do this.
It seems strange to not be able to grasp the difference in kind here.
What happens when the same company locks away all your book drafts because an algorithm deemed that you're plotting something against someone?
Both are real events, BTW.
The government forces me to do business with them; if I don't pay them tens (and others hundreds) of thousands of dollars every year they will send people with guns to imprison me and eventually other people with guns to seize my property.
Me willingly giving Google some data and them capriciously deciding to not always give it back doesn't seem anything like the same to me. (It doesn't mean I like what Google's doing, but they have nowhere near the power of the group that legally owns and uses tanks.)
A company "applied what the law said", and refused that they made a mistake and overreached. Which is generally attributed to governments.
So, I you missed the effects of this little binary flag on their life.
[0]: https://www.theguardian.com/technology/2022/aug/22/google-cs...
What?! Google locked them out of Google. I'm sure they can still get search, email, and cloud services from many other providers.
The government can lock you away in a way that is far more impactful and much closer to "life stopped; locked out of everything" than "you can't have the data you gave us back".
Why do you think the military and police outsource fucking everything to the private sector? Because there are no rules there.
Wanna make the brown people killer 5000 drone? Sure, go ahead. Wanna make a facial crime recognition system that treats all black faces as essentially the same? Sure, go ahead. Wanna run mass censorship and propaganda campaigns? Sure, go ahead.
The private sector does not abide by the constitution.
Look, stamping out a protest and rolling tanks is hard. It's gonna get on the news, it's gonna be challenged in court, the constitution exists; it's just a whole thing.
Just ask Meta to do it. Probably more effective anyway.
Corporate surveillance is government surveillance. Always has been.
The part that irks me is that this includes people who are literally paying for the service.
https://www.anthropic.com/news/updates-to-our-consumer-terms
These bastard companies pirated the world's data, then they train on our personal data. But they have the gall to say we can't save their model's inputs and outputs and distill their models.
We need a Galoob vs. Nintendo [1], Sony vs. Universal [2], or whatever that TiVo case was (I don't think it was TiVo vs. EchoStar). A case that establishes anyone can scrape and distill models.
[1] https://en.wikipedia.org/wiki/Lewis_Galoob_Toys,_Inc._v._Nin....
[2] https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive....
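For readers unfamiliar with the term, here is a minimal sketch of what "distilling" means in this context, in the Hinton knowledge-distillation sense: train a small "student" to match a "teacher" model's output distribution. Toy networks and random inputs stand in for real LLMs and scraped outputs; this is illustrative only, not any lab's actual pipeline:

```python
# Toy knowledge distillation: a small student learns to imitate a
# larger teacher's softened output distribution (Hinton et al., 2015).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens distributions so the student sees more signal

for step in range(200):
    x = torch.randn(64, 32)          # stand-in for prompts
    with torch.no_grad():
        t_logits = teacher(x)        # the "saved model outputs"
    s_logits = student(x)
    # KL divergence between softened distributions is the distillation loss
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaled up, the same idea works on text: save a frontier model's responses and fine-tune a smaller model on them, which is the practice the terms try to forbid.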
While those developers are not well paid (usually around $30-40 USD/hour, no benefits), you need a lot of them, so it's a big temptation to also build as many synthetic datasets as you can from your more capable competitor.
Given that AI companies pursue their goals with jihad-like zeal no matter what (like, fuck copyright, fuck the environment, etc.), it would be naive to believe they don't at least try to do it.
And even if they don't do it directly, their outsourced developers will do it indirectly by using AI to help with their tasks.
$40/hour full time would put you just over the median household income for the US.
I suspect this provides quite a good living for their families, and that the devs doing the work feel well paid.
For comparison, I live in a place that is typically considered as tier 3 or 4 out of 4 in the US by employers (4 being the cheapest). Costs of living are honestly more like tier 2 cities, but it’s a small city in a poor state. 7 years ago, the going rate for an unlicensed handyman was $32/hour, often paid under the table in cash (I don’t have more recent numbers because I find DIY better and easier than hiring someone reliable).
https://news.ycombinator.com/item?id=45053806
It was the kick in the pants I needed to cancel my subscription.
I'm looking at
> "When you use the Assistant by Kagi, your data is never used to train AI models (not by us or by the LLM providers), and no account information is shared with the LLM providers. By default, threads are deleted after 24 hours of inactivity. This behavior can be adjusted in the settings."
https://help.kagi.com/kagi/ai/assistant.html#privacy
And trying to reconcile those claims with the instant thread. Anthropic is listed as one of their back-end providers. Is that data retained for five years on Anthropic's end, or 24 hours? Is that data used for training Anthropic models, or has Anthropic agreed in writing not to, for Kagi clients?
Implicit consent is not transparent and should be illegal in all situations. I can't tell you that, unless you opt out, you have agreed to let me rent you an apartment.
You can say the analogy is not directly comparable, but the overall idea is the same. If we enter a contract for me to fix your broken windows, I cannot extend it with implicit consent to do anything else I see fit in the house.
Essentially, because they are presented in a form that is so easy to bypass and so very common in our modern online life, provisions that give up too much to the service provider or would be too unusual or unexpected to find in such an agreement are unenforceable.
The local office will do a blood draw and send it to a third-party lab whose analysis isn't covered by insurance, then bill you in full. And you had NO contractual relationship with the testing company.
Same scam. And it's all because our government is completely captured by companies and oligopoly. Our government hasn't represented the people in a long time.
Except not:
> The interface design has drawn criticism from privacy advocates, as the large black "Accept" button is prominently displayed while the opt-out toggle appears in smaller text beneath. The toggle defaults to "On," meaning users who quickly click "Accept" without reading the details will automatically consent to data training.
This definitely happened to me; it was late and I was being lazy.
Opt-out leads to very high adoption and is the immoral choice.
Guess which one companies adopt when not forced through legislation?
Grabbing users during setup with the less privacy-focused option preselected isn't being "very transparent".
They could have forced the user to make a choice, or defaulted to not training on their content, but instead they just can't help themselves.
Never mind the complete 180 on privacy.
The fact that there's no law mandating opt-in-only consent for data retention (or any anti-consumer "feature") is maddening at times.
“If you do not choose to provide your data for model training, you’ll continue with our existing 30-day data retention period.”
From the support page: https://privacy.anthropic.com/en/articles/10023548-how-long-...
“If you choose not to allow us to use your chats and coding sessions to improve Claude, your chats will be retained in our back-end storage systems for up to 30 days.”
https://artificialanalysis.ai/leaderboards/models
I have to admit, I've used it a bit over the last few days and still reactivated my Claude Pro subscription today, so... let's say it's OK for casual stuff? Also useful for casual coding questions. So if you care about privacy, it's an option.
[0] https://lumo.proton.me/
Self plug here: if you aren't technical and still want to run models locally, you can try our app [1].
[1] https://ai.nocommandline.com
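For the more technical route, here is a minimal local-inference sketch using llama-cpp-python; the model filename is an assumption, so substitute any GGUF model you have on disk:

```python
# Minimal local-inference sketch with llama-cpp-python.
# The model path is hypothetical -- point it at any GGUF file you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",
    n_ctx=2048,  # context window size
)

out = llm(
    "Q: What data leaves my machine when I run this? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"])  # inference happens entirely locally
```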
You could try programming with your own brain
- Export data
- Shared chats
- Location metadata
- Review and update terms and conditions
I'm in the EU, maybe that's helping me?
It's part of the update
No one cares about anything else, but they pad it with lots of superfluous text and call it "help us get better", blah blah; really it's "help us earn more money and potentially sell or leak your extremely private info", so they are lying.
Considering cancelling my subscription right this moment.
I hope the EU at least considers banning, or levying extreme fines on, companies that try to retroactively use people's extremely private data like this; it's completely over the line.
I'd love to live in a society where laws could effectively regulate these things. I would also like a Pony.
It's only utopian because things have become so incredibly bad.
We shouldn't expect less, and we shouldn't push guilt or responsibility onto the consumer; we should push for more. Unless, that is, you actively want your neighbour, your mom, and 95% of the population to be in constant trouble with absolutely everything from tech to food safety, chemicals, and healthcare. Most people aren't rich engineers like on this forum, and I don't want to research for 5 hours every time I buy something because some absolute psychopaths removed all regulation and sensible defaults so someone can party on a yacht.
That's why we don't hand billions of dollars to a child. Maybe we should treat AI companies similarly.
https://claude.ai/settings/data-privacy-controls
It was easy to not opt-in, I got prompted before I saw any of this.
I think they should keep the opt-in behavior past Sept 28 personally.
Nitpicking: “opt in by default” doesn’t exist, it’s either “opt in”, or “opt out”; this is “opt out”. By definition an “opt out” setting is selected by default.
At which exact point is language prohibited from evolving, and why is it, super coincidentally, the exact year you learnt it?
Never?
https://en.m.wikipedia.org/wiki/Semantic_change
By default, you are opted in. Perfectly clear.
The purpose of language is communication, not validating your politics.
That's called opt-out. You're doing exactly what I described: gaslighting people into believing that opt-in and opt-out are synonymous, rendering the entire concept meaningless. The audacity of you labeling people as "political" while resorting to such Orwellian manipulation is astounding. How can you lecture others about the purpose of languages with a straight face when you're redefining terms to make it impossible for people to express a concept?
These are examples of what "opt-in by default" actually means: having the user manually consent to something every time, the polar opposite of your definition.
- https://arstechnica.com/gadgets/2024/06/report-new-apple-int...
- https://github.com/rom1504/img2dataset/issues/293
It's also just pure laziness to label me as "hysterical" when PR departments of companies like Google have, like you, misused the terms opt-out and opt-in in deceptive ways.
https://news.ycombinator.com/item?id=37314981
> Diluting the distinction between opt-in and opt-out is gaslighting
> That seems like an ungenerous and frankly somewhat hysterical take.
... however, this comment was a reasonable response.
Projective framing demonstrates your own lack of concern for accuracy, clarity, or conviviality, which is 180 degrees at odds with the point you are making and the site you are making it on.
[0] https://news.ycombinator.com/item?id=26346688
No, (IMO) an "opt out" setting / status is assumed/enabled without asking.
So, I think this is opt-in, until Sept 28.
Opt-in, whether pre-checked/pre-ticked or not, means the business asks you.
GDPR requires "affirmative, opt-in consent", perhaps we use that term to mean an opt-in, not pre-ticked.
> So, I think this is opt-in, until Sept 28.
If the business opted for consent, then you effectively have the choice to refuse, a.k.a. opt out.
You can say that you want to opt out. What Anthropic will decide to do with your declaration is a different question.
Also, for others who want to opt-out, the toggle is in the T&C modal itself.
As if barely two 9s of uptime wasn't enough.
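(For reference, "two 9s" is 99% availability; a quick back-of-the-envelope for the downtime that allows:)

```python
# Downtime budget implied by "two nines" (99%) availability
hours_per_year = 365 * 24             # 8760
downtime_hours = hours_per_year * (1 - 0.99)
print(downtime_hours)                 # ~87.6 hours, roughly 3.65 days a year
```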
(and as diggan said, the web isn't the only source they use anyway. who knows what they're buying from data brokers.)
I realize there's a whole legal quagmire here involved with intellectual "property" and what counts as "derivative work", but that's a whole separate (and dubiously useful) part of the law.
If you can use all of the content of stack overflow to create a “derivative work” that replaces stack overflow, and causes it to lose tons of revenue, is it really a derivative work?
I’m pretty sure solution sites like Chegg don’t include the actual questions for that reason. The solutions to the questions are derivative, but the questions aren’t.
Privacy makes sense, treating data like property does not.
The users did provide the data, which is a good point. But there’s a reason SO was so useful to developers and Quora was not. It also made it a perfect feeding ground for hungry LLMs.
Then again I’m just guessing that big models are trained on SO. Maybe that’s not true
AI companies will get bailed out like the auto industry was - they won't be hurt at all.
370 more comments available on Hacker News