Mullvad VPN present And Then? (Chat Control is back on the menu)
Mood: heated
Sentiment: negative
Category: tech
Key topics: VPN, Chat Control, Surveillance, Privacy
Mullvad VPN criticizes the revival of Chat Control, a proposed EU regulation that could lead to increased surveillance and erosion of online privacy.
Snapshot generated from the HN discussion
Discussion Activity
Status: active discussion
First comment: 8h after posting
Peak period: 11 comments (Day 1)
Avg per period: 6 comments
Based on 12 loaded comments
Key moments
- Story posted: 11/14/2025, 11:55:48 AM (4d ago)
- First comment: 11/14/2025, 7:59:07 PM (8h after posting)
- Peak activity: 11 comments in Day 1 (hottest window of the conversation)
- Latest activity: 11/17/2025, 8:55:41 AM (2d ago)
The bad people can use encrypted services just like they use guns, even if they are illegal. But then there's a spike in arrests for posting on social media if people express opinions or content contrary to preferred narratives.
Fortunately, there are people exposing NGO money flows and who they favor. Fortunately, the US keeps free speech sacred.
I'm immensely grateful to the founding fathers and their ability to come up with something so helpful so many years down the line.
For most of my life, news orgs have been treating national IC/LEO as if they have a history of truth-telling. Whenever a press conference comes up, journalists/editors reliably forget that they've never been told a meaningful truth in one of these.
If the people whose job it is to highlight the lies of the powerful usually don't, what hope is there for the proletariat?
It's a noble fight trying to make E2EE compatible with the law. But I think some perspective for privacy advocates is due. People don't want freedom and privacy at the cost of their own security. We shouldn't have to choose, but if nothing else, the government has one role above all others, and that is not safeguarding freedoms but ensuring the safety of its people.
No government, no matter how free or wealthy, can abdicate its role in securing its people. There must be a solution for fighting harmful (not necessarily illegal) content incorporated into secure messaging solutions. I'm not arguing for backdoors in this post, but even things like Apple's CSAM scanning approach are met with fierce resistance from the privacy advocate community.
This stance that "No, we can't have any solutions, leave E2EE alone" is not a practical stance.
Speaking purely as a citizen, if you're telling me "you will lose civil liberties and democracy if you let governments reduce CP content", my response would be "what's the hold-up?", even if governments are just using that as an excuse. As someone slightly familiar with the topic, of course I wouldn't want to trade away my liberties and freedoms, but is anyone working on a solution? Are there working groups? Why did Apple get so much resistance, yet there are no open-source solutions?
There are solutions for anonymous payments using homomorphic encryption. Things like Zcash and Monero exist. But you're telling me privacy preserving solutions to combat illicit content are impossible? My problem is with the impossible part. Are there researchers working to make this happen using differential privacy or some other solution? How can I help? Let's talk about solutions.
If your position is that governments (who represent us, voters) should accept the status quo and just let their people suffer injustice, I don't think I can support that.
Mullvad is also in for a rude awakening. If criminals use Tor or VPNs, those will also face a ban. We need to give governments solutions that lets them do what they claim they want to do (protect the public from victimization) while preserving privacy to avoid a very real dystopia.
Freedoms and liberties must not come at the cost of injustice. And as I argued elsewhere on HN, in the end, ignoring ongoing injustice will result in even fewer freedoms and liberties. If there were a pluralistic referendum in the EU over Chat Control, I would be surprised if the result weren't a law far worse than Chat Control.
EDIT: Here is one idea I had: Sign images/video with hardware-secured chips (camera sensor or GPU?) that is traceable to the device. When images are further processed/edited, then they will be subject to differential-privacy scanning. This can also combat deepfakes, if image authenticity can be proven by the device that took the image.
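The hardware-signing idea in that edit could be sketched roughly as follows. This is purely illustrative: all names are hypothetical, and an HMAC with a shared key stands in for the asymmetric attestation key a real secure element would hold (real provenance schemes such as C2PA use public-key signatures so verifiers never see the signing secret). HMAC is used here only to keep the sketch runnable with the standard library.

```python
import hashlib
import hmac

# Hypothetical stand-in for a per-device key held in tamper-resistant hardware.
# A real design would use an asymmetric attestation key inside a secure element.
DEVICE_KEY = b"secret-held-in-hardware"


def sign_capture(image_bytes: bytes) -> str:
    """Produce a provenance tag at capture time, tied to the device key."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()


def verify_capture(image_bytes: bytes, tag: str) -> bool:
    """True only if the image is bit-identical to what the device signed.

    Any post-capture edit breaks the tag, which is why the commenter's
    scheme routes edited images into a separate scanning path.
    """
    return hmac.compare_digest(sign_capture(image_bytes), tag)
```

Note that this only proves a tagged image came unmodified from a keyed device; it does nothing by itself for edited images, and, as a reply below points out, mandatory signing is in tension with anonymous photography.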
Yes. You cannot have a system that positively associates illicit content with an owner while preserving privacy.
Apple tried and made good progress. They had bugs which could be resolved but your insistence that it couldn't be done caused too much of an uproar.
You can have a system that flags illicit content with some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when it is deemed actually illicit by a human, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
These are just some of the things that are possible that I came up with in the last minute of typing this post. Better and more well thought out solutions can be developed if taken seriously and funded well.
However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit anything the privacy community has been advocating. Even simple heuristics that aren't using ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
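The flag-then-review flow this comment proposes could be sketched roughly as follows. All names are hypothetical, and SHA-256 stands in for the perceptual hashes (PhotoDNA-style) a real system would need, since an exact cryptographic hash breaks on any re-encoding of the image.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    """Illustrative sketch of the commenter's flag-then-review pipeline:
    content is flagged above a confidence threshold, a human confirms it,
    and only then is its hash published for matching on devices."""
    confirmed_hashes: set = field(default_factory=set)
    pending: list = field(default_factory=list)

    def flag(self, content: bytes, confidence: float, threshold: float = 0.9):
        # Only content above the model's confidence threshold reaches a reviewer.
        if confidence >= threshold:
            self.pending.append(content)

    def human_confirm(self, content: bytes, is_illicit: bool):
        # A human decision gates publication of the hash.
        self.pending.remove(content)
        if is_illicit:
            self.confirmed_hashes.add(hashlib.sha256(content).hexdigest())

    def matches(self, content: bytes) -> bool:
        # Devices check local content only against human-confirmed hashes.
        return hashlib.sha256(content).hexdigest() in self.confirmed_hashes
```

Whether this preserves privacy is exactly what the replies below dispute: the reviewer sees the flagged content before any confirmation, so the privacy question hinges on the flagging step, not the matching step.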
I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.
> You can have a system that flags illicit content with some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when it is deemed actually illicit by a human, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
What about this is privacy preserving?
> However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit anything the privacy community has been advocating. Even simple heuristics that aren't using ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
It's not "materially false." Bringing a human into the picture doesn't do anything to preserve privacy. If, as in your example, a parent's family photos of their children are flagged by the system, you have already violated that person's privacy without just cause, regardless of whether the reviewers can identify them or not.
You cannot have a system that is scanning everyone's stuff indiscriminately and have it not be a violation of privacy. There is a reason why there is a process where law enforcement must get permission from the courts to search and/or surveil suspects - it is supposed to be a protection against abuse.
The Pleyel’s corollary to Murphy’s law is that all compromises to individuals’ rights made for the sake of security will eventually be used to further deprive them of those rights.
(I especially liked the line “You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.”)
Those things solve entirely different problems from the ones you suggest. The main problem is that there are no products that solve the problem Chat Control aims to solve without infringing massively on everyone's privacy (including children's). Any suggestions that do exist come with serious risks. The reason is that it's far easier to encrypt data than to build a system with a magical key that only authorized people can use under certain circumstances.
What Mullvad highlights is that the whole Chat Control proposal is mired in corruption: a particular individual with an agenda is selling something he has adjacent interests in as the solution. No doubt they will want funding for "research", because they don't have a global, universal solution. Instead they try to make it appear as if they do, and then get others to do the heavy lifting, e.g. making something a law and then expecting everyone else (companies, and any software developer) to comply, even those who may not be required to do so because they are not in the EU.
Make no mistake about it: this proposal has nothing to do with children. It is about demonizing encryption, so that when law enforcement arrests someone, they can say "has encryption = offence = something they can charge that person with" rather than actually having to have evidence of a crime against a child.
> EDIT: Here is one idea I had: Sign images/video with hardware-secured chips (camera sensor or GPU?) that is traceable to the device. When images are further processed/edited, then they will be subject to differential-privacy scanning. This can also combat deepfakes, if image authenticity can be proven by the device that took the image.
And there obviously will be totally like no way to like not do that and then have an anonymous photo.
> If your position is that governments (who represent us, voters) should accept the status quo and just let their people suffer injustice, I don't think I can support that.
Things can always be worse, and you shouldn't assume that the powers that be will use these things to prosecute the things you find morally offensive. Which is another problem as well.
> Mullvad is also in for a rude awakening. If criminals use Tor or VPNs, those will also face a ban. We need to give governments solutions that lets them do what they claim they want to do (protect the public from victimization) while preserving privacy to avoid a very real dystopia.
The space will innovate regardless of what governments want, so that's the rude awakening. Criminals always will be criminals and they'll just get better at doing what they want to do regardless.
> Freedoms and liberties must not come at the cost of injustice. And as i argued elsewhere on HN, in the end, ignoring ongoing injustice will result in even less freedoms and liberties. If there was a pluralistic referendum in the EU over chat control, I would be surprised if the result isn't a law that is even far worse than chat control.
Okay then guess we can all "think of the children" whenever anyone is worrying about the injustice caused by abuse of these new powers.
> I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.
Placation of government and law enforcement is never complete. For them, every goal post moved is perceived as making their job easier. They only have one job, and that's to convict people of things. That is the only metric they care about. That includes making up new offences to charge people with, including "the defendant used non-compliant products to hide their offending, which may or may not exist" - not a crime in the EU right now, but you can bet that will be the next step.
> Let me post a longer reply later. But for your last point, we do have automated machine generated alarms in form of smoke detectors. We're legally required to have them in our homes.
A smoke alarm has very little room for abuse as it only does one thing which largely aligns with the occupant's interests. A more comparable argument would be that you must have cameras in every room in your house to record burglars, home invaders and potential child abductors.
Funny how nobody has ever made that argument.
Every VPN provider's IPs are blocked now. IP data providers finally got serious about identifying them, 5 or 10 years back.
If you want to look for alternatives: https://kumu.io/embed/9ced55e897e74fd807be51990b26b415#vpn-c...
It's true that servers get flagged, but every VPN has that issue. Usually switching to a new server resolves it, and I've noticed some servers aren't used much and are very fast and not flagged by many websites.
What I like about Mullvad is not only the commitment to privacy, but also the VPN speeds. I get 300-500MB/s pretty regularly. Some servers get congested during peak times, but switching to another I'll usually find a fast one in a desirable country very quickly.
I admire that they're saying this, and wish other VPN companies would do similar public relations to highlight the risks of ad targeting.
Tbh, I'm a customer. Before Mullvad, I used PIA.
https://mullvad.net/en/blog/advertising-that-targets-everyon...
6 more comments available on Hacker News