Privacy Doesn't Mean Anything Anymore, Anonymity Does
Key topics
The debate rages on: can businesses still claim to prioritize privacy, or is anonymity the new gold standard? Commenters weigh in, with some arguing that any company unwilling to let its customers remain anonymous, as Mullvad does, must have a shady business model. Others point to Japan's strict data protection laws as a model to follow, although one commenter disputes the effectiveness of these regulations, citing a high-profile data breach with no apparent consequences. The discussion highlights the tension between data collection for legitimate purposes, such as debugging, and the risks of holding sensitive user information, with some noting that hefty fines, like those imposed under GDPR, are meant to deter misconduct, not merely data breaches.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 25m after posting
- Peak period: 112 comments in 0-12h
- Average per period: 22.9 comments
Based on 160 loaded comments
Key moments
- Story posted: Dec 20, 2025 at 1:21 AM EST (21 days ago)
- First comment: Dec 20, 2025 at 1:46 AM EST, 25m after posting
- Peak activity: 112 comments in 0-12h, the hottest window of the conversation
- Latest activity: Dec 27, 2025 at 12:44 PM EST (13 days ago)
I don’t understand why any company would want the liability of holding on to any personal data if it wasn’t vital to the operations of the business, considering all the data breaches we’ve seen over the past decade or so. It also means they can avoid all the lawyers writing complicated and confusing privacy policies, or cookie approval pop-ups.
They're OK with the liability exactly because of this very sentence. As you said, there are so many data breaches... so where are the company-ending fines and the managers/execs going to prison?
Up to EUR 10,000,000 or up to 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher; applies to infringements such as controller and processor obligations, security of processing, record-keeping, and breach notification duties.
Up to EUR 20,000,000 or up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher; applies to infringements of basic principles for processing, data subjects’ rights, and unlawful transfers of personal data to third countries or international organisations.
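For concreteness, here is a minimal sketch (in Python, with a made-up turnover figure) of how the "whichever is higher" cap works across those two tiers:

```python
# Hypothetical illustration of the two GDPR fine tiers quoted above;
# the EUR 2B turnover figure is made up.
def max_gdpr_fine(annual_turnover_eur: float, severe: bool) -> float:
    flat_cap = 20_000_000 if severe else 10_000_000
    pct_cap = annual_turnover_eur * (0.04 if severe else 0.02)
    return max(flat_cap, pct_cap)  # "whichever is higher"

turnover = 2_000_000_000  # EUR 2B, for illustration
print(max_gdpr_fine(turnover, severe=False))  # 40,000,000.0 (2% tier)
print(max_gdpr_fine(turnover, severe=True))   # 80,000,000.0 (4% tier)
```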
https://ico.org.uk/action-weve-taken/enforcement/
Some went to prison, some were fined £14M and it's a mixture of small fry and big fry.
It’s not very hard to handle customer data in a legally compliant way; that’s why you don’t see companies deciding against retaining data.
This data is the tool we have to identify and fix bugs. It is considered a failing on our end if a user has to report an issue to us. Mullvad is in an ideal situation to not need this data because their customers are technical, identical, and stateless.
A lot of companies could be in similar situations, but choose not to be.
All of retail, for example. Target does significant amounts of data collection to track their customers. This is a choice. They could let users simply buy things, pay for them, and store nothing. This used to be the business model. For online orders, they could purge everything after the return window passed. The order data shouldn’t be needed after that. For brick and mortar, it should be a very straightforward business. However, I’m routinely asked for my zip code or phone number when I check out at stores. Loyalty cards are also a way to incentivize customers to give up this data (https://xkcd.com/2006/).
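For the online-order case, a minimal sketch of the "purge everything after the return window" idea; the table and column names and the 90-day window are assumptions for illustration:

```python
# Sketch of a retention purge job run on a schedule; nothing about the
# order survives past the assumed return window.
import sqlite3
from datetime import datetime, timedelta, timezone

RETURN_WINDOW_DAYS = 90  # assumed policy

def purge_expired_orders(conn: sqlite3.Connection) -> int:
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETURN_WINDOW_DAYS)
    cur = conn.execute(
        "DELETE FROM orders WHERE completed_at < ?", (cutoff.isoformat(),)
    )
    conn.commit()
    return cur.rowcount  # rows removed this run
```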
TVs are another big one. They are all “smart” now, and collect significant amounts of data. I don’t know anyone who would be upset with a simple screen that just lets you change inputs and brightness settings and lets people plug stuff into it. Nothing needs to be collected, and nothing needs to phone home.
A lot of the logs collected in the name of troubleshooting and bug fixing exist because the products are over-complicated or not thoroughly tested before release. The ability to update things later lowers the bar for release and gives a pass for adding all this complexity that users don’t really want. There is a lot of complexity in the smart TV that they might want logs for, but none of it improves the user experience; it’s all in support of the real business model that’s hidden from the user.
Well, that's like 99% of the businesses out there. Mind listing some of the businesses you like, aside from the obvious Mullvad?
A HN user posted about a site they made for faxing documents the other day. It’s a good example of how I think most things should be set up in many cases. You pay a fee and it sends a fax; that is very simple to understand. There are no accounts, and the documents are only stored long enough to fulfill the service.
https://news.ycombinator.com/item?id=46310161
You can imagine how most “modern” sites would handle faxing. Make an account, link a credit card, provide your address to validate the credit card. Then store all the faxes that were sent, claiming it’s for easy reference. Meanwhile it’s running OCR on them in the background to build a profile with a wealth of personal data. After all, people don’t tend to fax trivial things. In addition to the profits from the user, they are making a killing on selling data to advertisers… but those details are hidden away in legalese of the fine print in a policy no one actually reads.
Browser fingerprinting: "Your unique combination of extensions/settings makes you identifiable among other users."
Service anonymity: "There are no other users to compare you against because we don't collect identifying data."
When you sign up with just a random 32-char string, there's nothing to fingerprint. No email to correlate. No IP logs to analyze. No usage patterns to build a profile from.
Fingerprinting matters when services collect behavioral data. We architected our way out of having that data to begin with.
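A minimal sketch of that "the token is the account" signup model described above; the 32-character length comes from the comment, while the alphabet choice is an assumption:

```python
# No email, no password, no profile: the random token itself is the account.
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits  # assumed alphabet

def new_account_id(length: int = 32) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(new_account_id())  # e.g. 'q7c2...'; nothing here identifies a person
```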
There's STILL a browser fingerprint, IP logs to analyze, usage patterns to build a profile from. You may claim you don't collect it, but users need to take your word for it. This is just pseudonymity, which (as many BTC users found out) only gets you halfway there. Real anonymity is way harder, often impossible.
Don't get me wrong, it's good to see organisations that care about privacy and in fact this blog post encouraged me to consider your services in the future. We have some use cases for that at work.
Though by using cloudflare you're NOT putting your money where your mouth is.
But you are 100% right, I will look into alternatives for Cloudflare, which we are using because it seems like the cloud hosting industry LOVES to DDoS new players.
It might not be possible to verify 100%, but the more transparency the better, I guess. The 3-way handshake and connection information, the timings, the location of the server: that would need to be quite elaborate to fake. Just thought it was a fun idea: let the customer into production. That's a lot more difficult than publishing a privacy page, source code, or fake audit reports.
Without (1), people who really care about anonymity won't even care about you (tor is table stakes). (3) is a really strong vote for anonymity, but don't expect many customers that way.
I guess the lesson there is that if you don't want to be convicted of a crime, don't confess to a crime? They won't give you a lighter sentence for confessing.
Ever hear of moral integrity?
Unless the penalty is unjust (say, execution for a minor crime), a just man will confess and accept his punishment as right and just. He himself will want justice to be done and will want to pay for his crime.
A remorseful murderer knows he deserves death. He might ask for mercy, but failing that, he will accept the penalty with dignity and grace.
Morality is not a social convention. Morality concerns what you or I or any individual person should do as that individual. Because we are all human beings with a shared nature, the same general moral principles hold for all of us. Morality is about being a good person. Not a nice person. Not "good" in the opinion of others. Not a "goody two-shoes" or a suck-up. Good in the sense that you choose and do what you ought. The good life is the moral life, and it is absurd to say otherwise.
It is not good for you or me or anyone to lack integrity with the objective good. This is what too many people fail to understand. They think morality is just some set of external rules someone made up that have nothing to do with one's own flourishing as a human being. No, immoral acts corrupt the person choosing to perform them. They corrupt him from the inside. They cripple a person and rot him out. They stunt development and derail him, pushing him onto self-destructive trajectories. They produce misery. You will not find an immoral person who is joyful. Maniacal, maybe, but not joyful.
Of course, the concrete and particular choices we ought to make and acts we ought to choose in a given situation require prudence, a quality we can only develop with experience. But prudence does not override moral principles. Lying, stealing, and murdering do not become licit by circumstance.
Talk about doubly stupid, first sending the threat, second using Tor on campus. I often wonder what goes (or doesn't go) through the mind of such people.
It's basically rule number one. Tor is all about making all users look like the same user: the so-called anonymity set. They all look the same, so you can't tell them apart from each other.
It's also part of the rules of proper OPSEC.
https://en.wikipedia.org/wiki/The_Moscow_rules
> Do not look back; you are never completely alone.
> Go with the flow, blend in.
> Vary your pattern and stay within your cover.
https://buttondown.com/grugq/archive/bad-opsec-considered-ha...
As noted in the article, it wasn't the failure of Tor that led to arrest, it was poor OPSEC. Failure to cover, failure to conceal and failure to compartment.
https://news.ycombinator.com/item?id=46334951
Many people online seem to think that they are anonymous and so are emboldened to do things they might not have done if they had realized otherwise. They continue to feel extremely good about this right up until the knock on the door.
There exists a grey area between not getting away with nefarious activities, and not having your life ruined by a lynch mob because you didn't approve their preferred CoC on a hobby project or some other perceived injustice.
Most UK and Australian writers would spell it "realised" so there's a bit right there.
Even if you include no personal information, there is information in writing style.
Stylometry is the study of this. Yes, there's also adversarial stylometry: distorting your writing style to fool an analysis. It's probably effective now, but that could change overnight, and then every archived post that every OSINT organisation has collected is deanonymised.
Yeah, you can say "I change my style". But there are some bits that don't have false positives. If I EVER say "praise the Omnissiah" I'm definitely au fait with 40k memes. If I ever say "au fait" I'm a person who has at least a rough idea of what it means. There's no false positive there, so if you can just find about 29 undeniable, uncorrelated bits that are known to not have false positives ...
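Rough arithmetic behind that bit-counting argument (population and rarity figures below are assumptions): each uncorrelated bit halves the candidate pool, so log2 of the pool size is all you need.

```python
# Back-of-the-envelope: how many uncorrelated bits single out one person.
import math

print(math.log2(8_000_000_000))  # ~33 bits for the whole world (rough figure)
print(math.log2(500_000_000))    # ~29 bits for a pool of ~500M people

# A rare phrase contributes -log2(rarity) bits; the 1-in-10,000 figure
# is made up for illustration.
print(-math.log2(1 / 10_000))    # ~13.3 bits from one telling phrase
```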
It's as old as history. In the days of super-abbreviated telegrams (words were costly) you could even get two for the price of one: the author and the Morse code operator who actually sent the telegram. The operator could be recognized by his Morse fist; other Morse operators on the network would recognize him by the style of his sending even though they were only listening to dots and dashes.
I could try to prove it to you, but the only proof you need is that cybercrime exists and millions (or tens of millions) of dollars are stolen every day. If anonymity didn't exist it would be easy to stop this, wouldn't it?
I once spent an entire year issuing chargebacks on AWS charges coming from god knows what AWS account. Most likely some client project I forgot about and didn't have the login to anymore, who knows. Makes me think about that - for a service where you can't login if you lose the credentials, how do you cancel a subscription? In my case I had to eventually just cancel the credit card and get a new number.
> Server Logs
> Like all web services, our servers may log:
> - IP addresses of visitors
> - Request timestamps
> - User agent strings
> These logs are used for security and debugging purposes and are not linked to your account.
That's already a huge breach in comparison to Mullvad's privacy page (https://mullvad.net/en/help/no-logging-data-policy).
And the "3 data points, that's it" of the blog post
Web server logs were not tied to user credentials in any way.
Also:
> // What we DON'T collect:
> - IP addresses (not logged, not stored, not tracked)
> - Usage patterns (no analytics, no telemetry, nothing)
> - Device fingerprints (your browser, your business)
So, I've read one blog post from this company, and already they're either lying or incompetent.
Web server logs were not tied to user credentials in any way; they were used for debugging purposes and could not have been used to identify users.
I'm not here to debate, the reason I posted here is to hear what people thought and see how I could improve my platform based on the criticism.
https://nginx.org/en/docs/quic.html
https://apisix.apache.org/docs/apisix/http3/
Front page says "zero logs"
Logging some data, specifically including data points you have promised not to log, while (presumably?) meaning well, is pretty different from zero logs.
Sounds like a clear "lack of a depth of understanding" to me.
Your requests are going to be logged by all the intermediaries, ISPs and so on. If you're leaving breadcrumbs of your residential IP address all over the place, that's on you.
OP made the change because it's an easy switch to flip, but logging how many requests you served per second is a much smaller crime than setting session cookies
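A minimal sketch of that distinction: keeping an aggregate request-rate counter without writing per-request logs that contain IPs, user agents, or identifiers. Names here are hypothetical.

```python
# Counts requests; stores nothing about any individual request.
import threading
import time

class RequestCounter:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._count = 0
        self._window_start = time.monotonic()

    def record(self) -> None:
        # Called once per request; no IP, path, or user agent is kept.
        with self._lock:
            self._count += 1

    def requests_per_second(self) -> float:
        with self._lock:
            elapsed = time.monotonic() - self._window_start
            return self._count / elapsed if elapsed > 0 else 0.0

counter = RequestCounter()
counter.record()
print(counter.requests_per_second())
```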
Even if this sounds innocent, these must be turned over if you are served a warrant or subpoena (whichever would be appropriate; IANAL).
Shitting on well-intentioned people who merely failed to be perfect is not a great way to get the most of what you ultimately want.
If you think intent doesn't matter, then what happens when well-intentioned people decide it's not worth trying, because no matter what they will be crucified as murderers even if all they did wrong was fail to clean the break room coffee pot? The actual baddies are still there and have no inhibitions, and now not even any competition.
But jumping to pitchforks just teaches companies to ignore the privacy crowd. Why cater to them when every action is interpreted as malicious? If you can do no right then realistically you can do no wrong either. If every action is "wrong" then none are. In this way I think the privacy community just shoots themselves in the foot, impeding us from getting what we want.
Even if they don't, it opens up more attack vectors for malicious 3rd parties who want that data. That's why you can't be careless.
At any time any company could turn evil, and any free(ish) government could become totalitarian overnight. This is a fact, but also a pretty useless one.
The real questions to ask are: how likely is it to happen, and if it does happen, how much would all these privacy measures accomplish?
The answers to those are "not very" and "not much".
Down here on Earth, there are more real and immediate issues to consider, and a balance to be found between preventing current and future misuse of data by public and private parties on all sides, and sharing enough data to be able to have a functioning technological civilization.
Useful conversations and realistic solutions are all about those grey areas.
If data exists, it can be subpoenaed by the government.
On the other hand, if I'm making death threats on Facebook, there's a much more realistic path: view the threats from a public source --> subpoena Facebook for private data.
Treating the two risks as similar is madness.
- https://sls.eff.org/technologies/real-time-location-tracking
- https://www.wsj.com/politics/national-security/u-s-spy-agenc...
- https://www.brennancenter.org/our-work/research-reports/clos...
They don't need to subpoena anyone if they can just get it without the hassle.
And that's even before malvertising comes into picture.
If a government has the data, there's at least a chance it will stay within the government.
You either
1) don’t want it stored
2) are happy for government to have it but not companies
3) are happy for everyone to have it
Whether the one serving the content is exploiting data at the present moment has very little relevance, because the end user has no means to ascertain whether it is happening or not.
My takeaway from this thread is an increased amount of trust in OP. Not because they made a mistake, but because of how they handled it. Well done OP!
I've been DDoS'ed exactly once. In 2003 I got into a pointless internet argument on IRC, and my home connection got hammered, which of course made me lose the argument by default. I activated my backup ISDN, so my Diablo 2 game was barely interrupted.
But have those webservers supported a small or medium-sized business?
I've periodically removed Cloudflare because of issues with reissuing SSL certs, Cloudflare being down, and other reasons, and haven't noticed any problems.
The biggest benefit I get from Cloudflare is blocking scraper robots, which I've just been too lazy to figure out how to do myself.
Also you can sue whoever DDoSes you and put them in jail. It's easier than it used to be, since the internet is heavily surveilled now.
(Asking because I really don't know)
But banks and financial services must now obey "know your customer" laws, so it's not beyond imagination that similar laws could be applied to websites and ISPs operating in a particular country.
The answer to both this and parent is yes: partial privacy improvements are still improvements. There are two big reasons for this and many smaller reasons as well:
First, legal actors prioritize who to take action against; some cases are “worth seeing if $law-enforcement-agency can get logs from self-hosted or colo’d servers with minimal legal trouble” but not “worth subpoenaing cloudflare/a vpn provider/ISP for logs that turned out not to be stored on the servers that received the traffic”.
Second, illegal actors are a lot more likely to break into your servers and be able to see traffic information than they are to be able to break into cloudflare/vpn/ISP infrastructure. Sure, most attackers aren’t interested in logs. But many of the kind of websites whose logs law enforcement is interested in are also interesting to blackmailers.
Not that I use it, but one of the best privacy features of Mullvad is that you can post them cash with your account number and they will credit it. That makes the transaction virtually, and for all practical purposes, untraceable.
It seems like you have the means to do exactly that too.
The post also misunderstands privacy:
> Privacy is when they promise to protect your data.
Privacy is about you controlling your data. Promises are simply social contracts.
Running three flavors of the same off-brand browser, each optimised for different segments of online content, seems to be the minimum.
They are so desperate to sell me something (a truck) that it's wild, as it is one of the few monetisable things I consistently look for (parts, service procedures). The, pause, when I do certain searches gives me time to predict that yes, the machinery is grinding hard, and will, shortly, triumphantly, produce, a, truck.
“Privacy” = the data is private i.e. only on your devices. Or if the raw data is public but encrypted and the key is private, I think that qualifies.
“Anonymity” = the data is public but not tied to its owner.
If you’re sharing your data with a website (e.g. storing it unencrypted), but they promise not to leak it, the data is only “private” between you and them…which doesn’t mean much, because they may not (and sometimes cannot) keep that promise. But if the website doesn’t attribute the data except to a randomly-generated identifier (e.g. RSA public key), the data is anonymous. That’s the article.
Although a server does provide real privacy if it stores user data encrypted and doesn’t store the key, and you can verify this if you have the client’s unobfuscated source.
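A minimal sketch of that model, assuming Python's `cryptography` package: the key stays on the client, so the server only ever stores ciphertext.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # generated and kept on the client only
f = Fernet(key)

ciphertext = f.encrypt(b"notes the server should not be able to read")
# upload(ciphertext)             # hypothetical call; the server never sees `key`

print(f.decrypt(ciphertext))     # only the key holder can recover the plaintext
```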
Also note that anonymity is less secure than privacy because the information provides clues to the owner. e.g. if it’s a detailed report on a niche topic with a specific bias and one person is known to be super interested in that topic with that bias, or if it contains parts of the owner’s PII.