Stop Breaking TLS
Key topics
The debate rages on: should organizations be allowed to intercept and inspect TLS traffic, or is it time to "stop breaking TLS"? Commenters skewered the idea of compromising certificate authorities, with one pointing out that an attacker would need to breach multiple Certificate Transparency logs to go undetected. The author, Mark Round, chimed in to acknowledge the correction and clarify their original rant, revealing a nuanced discussion around the complexities of TLS inspection. As the conversation unfolded, it became clear that the real challenge lies in balancing security concerns with the need for secure, private connections.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 135 comments (Day 1)
- Average per period: 22.9 comments
- Based on 160 loaded comments
Key moments
- Story posted: Dec 10, 2025 at 2:06 AM EST (24 days ago)
- First comment: Dec 10, 2025 at 3:43 AM EST (2h after posting)
- Peak activity: 135 comments in Day 1 (hottest window of the conversation)
- Latest activity: Dec 22, 2025 at 2:48 PM EST (11 days ago)
At another job I was handling a support ticket where a customer was asking, in so many words, "can I get HTTP headers of requests flowing through my Envoy TLS reverse proxy?" I said that they could terminate TLS at the proxy and redo things that way, but then it wouldn't be a TLS proxy; it'd be a MITM or a gateway. They could log the downstream/upstream and duration of connections, but that wouldn't help.
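A minimal sketch of why that's the honest answer: a pure TCP passthrough only ever sees ciphertext, so connection metadata is all it can log. The upstream host and ports below are placeholders, not anything from the original comment.

```python
import asyncio
import time

# Hypothetical upstream; stands in for whatever the proxy fronts.
UPSTREAM = ("backend.internal.example", 443)

async def pipe(reader, writer, counter):
    try:
        while data := await reader.read(65536):
            counter["bytes"] += len(data)   # we can count bytes...
            writer.write(data)              # ...but they stay ciphertext
            await writer.drain()
    finally:
        writer.close()

async def handle(client_r, client_w):
    peer = client_w.get_extra_info("peername")
    start = time.monotonic()
    counter = {"bytes": 0}
    up_r, up_w = await asyncio.open_connection(*UPSTREAM)
    await asyncio.gather(
        pipe(client_r, up_w, counter),  # client -> upstream
        pipe(up_r, client_w, counter),  # upstream -> client
        return_exceptions=True,
    )
    # Connection metadata is the only thing a passthrough can honestly log:
    print(f"{peer} moved {counter['bytes']} bytes "
          f"in {time.monotonic() - start:.1f}s")

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8443)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```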
It doesn't matter if every certificate authority is compromised or just one. One is all that is needed to sign certificates for all websites.
It is striking that we don't see that. We reliably see people saying "obviously" the Mossad or the NSA are snooping, but they haven't shown any evidence that there's tampering.
It is NSA practice to avoid targets knowing for sure what happened. However, their colleagues at outfits like Russia's GRU have no compunctions about being seen, and yet likewise there's no indication they're tampering either.
Although Cloudflare are huge, a lot of transactions you might be interested in don't go through Cloudflare.
> the hardware that generates those keys in the first place
That's literally any general purpose computer. So this ends up as the usual godhood claim, oh, they're omniscient. Woo, ineffable. No action is appropriate.
Of course spooks expend resources to spy on people, but that's an expenditure from their finite budget. If it costs $1 to snoop every HTTP request a US citizen makes in a year, that's inconsequential so an NSA project to trawl every such request gets green lit because why not. If it costs $1000 now there's pressure to cut that, because it'll be hundreds of billions of dollars to snoop every US citizen.
That's why it matters that these logs are tamper-evident. One of the easiest ways to cheaply snoop would be to be able to impersonate any server at your whim, and we see that actually nope, that would be very expensive, so that's not a thing they seem to do.
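For anyone curious what "tamper-evident" means mechanically, here is a sketch of the RFC 6962-style inclusion check that CT monitors run. The domain-separated hash prefixes are what stop a forged leaf from masquerading as an interior node.

```python
import hashlib

def _leaf(entry: bytes) -> bytes:
    # RFC 6962: leaves are hashed with a 0x00 prefix...
    return hashlib.sha256(b"\x00" + entry).digest()

def _node(left: bytes, right: bytes) -> bytes:
    # ...and interior nodes with a 0x01 prefix, so they can't be confused.
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, leaf_index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Check that `entry` is in the tree with the given signed root."""
    if leaf_index >= tree_size:
        return False
    fn, sn = leaf_index, tree_size - 1
    r = _leaf(entry)
    for p in proof:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = _node(p, r)
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = _node(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

If a log operator rewrites history, the recomputed root stops matching the signed tree heads that monitors already hold; that mismatch is the tamper evidence.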
> That's never been my stance
It took you about a day to go from being absolutely sure of a thing, to absolutely sure you've never believed that thing.
It probably just means they are asking the providers to hand over the data, no need to perform active attacks.
If I wanted to intercept all your traffic to any external endpoint without detection I would have to compromise the exact CA that signed your certificates each time, because it would be a clear sign of concern if e.g. Comodo started issuing certificates for Google. Although of course as long as a CA is in my trust bundle then the traffic could be intercepted, it's just that the CT logs would make it very clear that something bad had happened.
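That visibility is easy to check yourself. A sketch using crt.sh's public JSON endpoint, with example.com standing in for a domain you actually operate:

```python
import json
import urllib.request

domain = "example.com"  # placeholder: a domain you operate
url = f"https://crt.sh/?q={domain}&output=json"

with urllib.request.urlopen(url) as resp:
    entries = json.load(resp)

# Any issuer you don't recognize here is exactly the "clear sign
# of concern" described above.
for e in entries[:10]:
    print(e.get("not_before"), e.get("issuer_name"))
```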
A few apps do use certificate pinning nowadays, which creates similar problems, but saying "you can never add your own MitM TLS cert" is not far from certificate pinning everything everywhere all the time. Good luck creating a new Home Assistant integration for your smart airfryer when you can't read any of the traffic from its app.
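For reference, leaf pinning is only a few lines on top of normal verification. A sketch: the fingerprint below is a placeholder, and real apps usually pin the SPKI or an intermediate rather than the whole leaf certificate.

```python
import hashlib
import socket
import ssl

# Placeholder value; a real deployment ships the actual SHA-256 of the
# expected certificate with the app.
PINNED_SHA256 = bytes.fromhex("00" * 32)

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()  # normal WebPKI validation first
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der).digest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError(f"pin mismatch for {host}: possible MITM")
    return sock
```

This is also why a corporate MITM root does nothing against a pinned app: the middlebox's re-signed certificate hashes to a different value, and the connection is refused.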
Imo: let's make it easier! Standardize TLS configuration for all tools, make easy cert configuration of devices a legal requirement (any smart device sold with hardcoded CA certificates is a device with a fixed end date, where the CA certs expire and it becomes a brick), guarantee user control over their own TLS trust, and provide good tools to check exactly who you're trusting (and expose that clearly to users). Not really practical obviously, but there are upsides here as well.
I think this is the right idea (it’s configuring dozens of things which causes problems) but the other idea I’d consider is standardizing a key escrow mechanism where the session keys could be exported to a monitoring server. That avoids needing active interception with all of the problems that causes, and would pair well with a standardized OS-level warning that all communications are monitored by «name from the monitor cert» which the corporate types are required to display anyway.
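The closest standardized building block that exists today is the NSS key-log format. A sketch of a client exporting its session secrets to a collector path (the path is an assumption); a monitor holding these secrets can decrypt a passive capture in Wireshark without any active interception:

```python
import socket
import ssl

ctx = ssl.create_default_context()
# Requires Python 3.8+ / OpenSSL 1.1.1+; writes NSS key-log lines
# (CLIENT_RANDOM / TLS 1.3 secrets) for every connection on this context.
ctx.keylog_filename = "/var/log/tls-escrow/keys.log"  # assumed collector path

with socket.create_connection(("example.com", 443)) as tcp:
    with ctx.wrap_socket(tcp, server_hostname="example.com") as tls:
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        tls.recv(1024)
# keys.log now holds the secrets needed to decrypt a pcap of this session.
```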
And some of the arguments are just very easily dismissed. You don't want your employer to see your medical records? Why were you browsing them during work hours and using your employer's device in the first place?
Using a device owned by your company to access your personal GMail account does NOT void your legal right to privacy.
https://english.ncsc.nl/binaries/ncsc-en/documenten/factshee...
Even the most basic law like "do not murder" is not "do not pull gun triggers" and a gun's technical reference manual would only be able to give you a vague statement like "Be aware of local laws before activating the device."
Legal privacy is not about whether you intercept TLS or not; it's about whether someone is spying on you, which is an end-to-end operation. Should someone be found to be spying on you, then you can go to court and they will decide who has to pay the price for that. And that decision can be based on things like whether some intermediary network has made poor security decisions.
This is why corporations do bullshit security by the way. When we on HN say "it's for liability reasons" this is what it means - it means when a court is looking at who caused a data breach, your company will have plausible deniability. "Your Honour, we use the latest security system from CrowdStrike" sounds better than "Your Honour, we run an unpatched Unix system from 1995 and don't connect it to the Internet" even though us engineers know the latter is probably more secure.
I don’t really need to know, but a bunch of people seemed really confident they knew the answer and then provided no actual information except vague gesticulation about PII.
Given that a regulator publishes a document with guidelines about DPI, I think it rules out the impossibility of implementing it. If that were the case it would simply say "it's not legal". It's true that it doesn't explicitly say all the conditions you should meet, but that wasn't your question.
It's not as simple as in the US where companies consider everything on company device their property even if employees use it privately.
GDPR does not care how the data got "in the hands of" the company; the same rules apply. Another important thing is the principles of GDPR. They sort of underlie everything. One principle to consider here is that of data minimization. This basically means that IF you have a valid reason to handle an individual's PII, you must limit the data points you handle to exactly what you need and not more.
So - company proxy breaking TLS and logging everything? Well, the company has a valid reason to handle some employee data, obviously. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without valid reason is not GDPR compliant.
Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.
I’m trying to understand the GDPR equivalent of this, which seems to exist, since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).
Privacy laws are about the end-to-end process, not technical implementation. It's not "You can't MITM TLS" - it's more like "You can't spy on your employees". Blocking viruses is not spying on your employees. If you take the logs from the virus blocker and use them to spy on your employees, then you are spying on your employees. (Virus blockers aiming to be sold in the EU would do well not to keep unnecessary logs that could be used to spy on employees.)
Provided the company:
- has established a detailed policy about personal use of corporate devices
- makes a fair attempt to block work-unrelated services (hotmail, gmail, netflix)
- ensures the security of the monitored data and deletes it after a reasonable period (such as 6–12 months)
- and uses it only to apply cybersecurity-related measures like virus detection, UNLESS there is a legitimate reason to target a particular employee (legal inquiry, misconduct, etc.)
I would say that it's very much doable.
A solution is required to limit the network to work-related activities and also to inspect server communications for unusual patterns.
In one example someone’s phone was using the work WiFi to “accidentally” stream 20 GB of Netflix a day.
There are better ways to ensure people are getting their work done that don't involve spying on them in the name of "security".
Having branch offices with 100 Mbps (or less!) Internet connections is still common. I’ve worked tickets where the root cause of network problems such as dropped calls ended up being due to bandwidth constraints. Get enough users streaming Spotify and Netflix and it can get in the way of legitimate business needs.
Sure, there’s shaping/qos rules and dns blocking. But the point is that some networks are no place for personal consumption. If an employer wants to use a MITM box to enforce that, so be it.
This looks a lot like using the MITM hammer to crack every nut.
If this is an actual concern, why not deny personal devices access to the network? Why not restrict the applications that can run on company devices? Or provide a separate connection for personal devices/browsing/streaming?
Why not treat them like people and actually talk to them about the potential impacts. Give people personal responsibility for what they do at work.
There’s a famous fable where everyone is questioning the theft victim about what they should’ve done and the victim says “doesn’t the thief deserve some words about not stealing?”
Similarly, it’s a corporate network designed and controlled for work purposes. Connecting your personal devices or doing personal work on work devices is already not allowed per policy, but people still do it, so I don’t blame network admins for blocking such connections.
Normally no personal device has the firewall root certs installed, so they just experience network issues from time to time, and DNS queries and ClientHello packets are used for understanding network traffic.
However, with recent privacy-focused enhancements, which I love by the way because they protect us from ISPs and others, we (as in everybody) need a way to monitor and allow only certain connections in the work network. How? I don't know; it's an open question.
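To make that trade-off concrete, this is roughly what "using the ClientHello to understand traffic" looks like: passive, no certificate tricks, and exactly the signal that Encrypted Client Hello removes. A best-effort sketch that assumes the whole ClientHello arrives in a single TLS record:

```python
def sni_from_client_hello(record: bytes) -> str | None:
    """Best-effort SNI extraction from one raw TLS record."""
    if len(record) < 5 or record[0] != 0x16:   # 0x16 = handshake record
        return None
    pos = 5
    if record[pos] != 0x01:                    # 0x01 = ClientHello
        return None
    pos += 4                                   # handshake type + 3-byte length
    pos += 2 + 32                              # client version + random
    pos += 1 + record[pos]                     # session ID
    pos += 2 + int.from_bytes(record[pos:pos + 2], "big")  # cipher suites
    pos += 1 + record[pos]                     # compression methods
    end = pos + 2 + int.from_bytes(record[pos:pos + 2], "big")
    pos += 2                                   # start of extensions
    while pos + 4 <= end:
        ext_type = int.from_bytes(record[pos:pos + 2], "big")
        ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:                      # server_name extension
            # list length(2), name type(1), name length(2), name
            name_len = int.from_bytes(record[pos + 3:pos + 5], "big")
            return record[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
        pos += ext_len
    return None                                # no SNI: e.g. ECH or IP-only
```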
Availability: Ensures that information and systems are accessible and operational when needed by authorized users
And on balance I'd say losing Integrity is a bad trade-off to make here.
This means devs/users will skip TLS verification ("just make it work"), setting a dangerous precedent. Companies want to protect their data? Well, just protect it! Least privilege, data minimization, etc. are all good strategies for avoiding data leaks.
Some software reads "expected" env variables for it, some has its own config or CLI flags, and most just doesn't bother/care about supporting it.
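A sketch of the usual suspects a tool might check, which is about as close to a convention as exists today (the fallback order here is my own assumption, not any standard):

```python
import os

def find_ca_bundle() -> str | None:
    """Return the first custom CA bundle a well-behaved tool might honor."""
    for var in ("SSL_CERT_FILE",         # OpenSSL and much of its ecosystem
                "REQUESTS_CA_BUNDLE",    # Python requests
                "CURL_CA_BUNDLE",        # curl
                "NODE_EXTRA_CA_CERTS"):  # Node.js (additive, not an override)
        path = os.environ.get(var)
        if path and os.path.exists(path):
            return path
    return None  # fall back to whatever the TLS library compiled in
```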
Even putting aside the MITM and how horrendous that is, the fallout people have to deal with has got to have cost so much time (and money). I can't fathom why anyone competent would want to implement this, let alone not see how much friction and how many safety issues it causes everywhere.
Compliance with anti-security policies that: break TLS, thwart certificate pinning, encourage users to ignore certificate errors, increase the attack surface, etc., while lowering system performance and draining your wallet.
Zscaler and its ilk have conned the IT world. Much like CrowdStrike did before it broke the airlines.
Breaking TLS does not decrease the risk for data exfiltration.
And it doesn't help with data recovery.
Rust's solution is "it depends". You can use OpenSSL (system or statically compiled) or rustls (statically compiled with your own CA roots, system CA roots, or WebPKI CA roots).
I'm afraid that until the *ix operating systems come out with a new POSIX-like definition that stabilises a TLS API, regardless of whether that's the OpenSSL API, the WolfSSL API, or GnuTLS, we'll have to keep hacking around in APIs that need to be compatible with arbitrary TLS configurations. Alternatively, running applications through Waydroid/Wine will work just fine if Linux runtimes can't get their shit together.
Are you sure? It's been a few years, but last I tried Firefox used its own CA store on Windows. I'm pretty sure openjdk uses "<JAVA_HOME>/jre/lib/security/cacerts" instead of the system store too.
The only reason it works in macOS curl is that they're a few versions behind.
Is it, though? It is absolutely trivial for an Android app (like the one you use for banking) to pin a specific CA or even a specific server certificate, and as far as I'm aware it is pretty much impossible to universally override this.
In fact, by default Android apps don't accept any user-installed certs. It uses separate stores for system-installed CA roots and user-installed CA roots, and since Android 7.0 the default is to only include the system-installed store. Apps have to explicitly opt-in to trusting the user-installed store.
Fun!
And so many of those products deliver broken chains, and your client needs to download more certificates transparently ( https://systemweakness.com/the-hidden-jvm-flag-that-instantl... )
Double the fun!
He's absolutely right about the architectural problems too: single points of failure, performance bottlenecks, and the complexity in cloud-native environments.
That said, it can be a genuinely valuable layer in your security arsenal when done properly. I've seen it catch real threats, such as malware C2 comms, credential phishing, data exfiltration attempts. These aren't theoretical; they happen daily. Combined with decent threat intelligence feeds and behavioural analytics, it does provide visibility that's hard to replicate elsewhere.
But, and this is a massive but, you can't half-arse it. If you're going to do TLS inspection, you need to actually commit:
Treat that MITM private key like it's the crown jewels. HSMs, strict access controls, proper rotation schedules. The point about concentrated risk is bang on, you've turned thousands of distributed CA keys into one single target. So act like it. Run it like a proper CA with proper key signing ceremonies and all the safeguards etc.
Actually invest in proper cert distribution. Configuration management (Ansible/Salt/whatever), golden container base images with the CA bundle baked in, MDM for endpoints, cloud-init for VMs. If you can't reliably push a cert bundle to your entire estate, you've got bigger problems than TLS inspection.
Train people properly on what errors are expected vs "drop everything and call security". Document the exceptions. Make reporting easy. Actually investigate when someone raises a TLS error they don't recognise. For devs, it needs to just work without them even thinking about it. Then they never need to work around it. If they need to, the system is busted.
Scope it ruthlessly. Not everything needs to go through the proxy. Developer workstations with proper EDR? Maybe exclude them. Production services with cert pinning? Route direct. Every blanket "intercept everything" policy I've seen has been a disaster. Particularly for end-users doing personal banking, medical stuff, therapy sessions: do you really want IT/Sec seeing that?
Use it alongside modern defences, e.g. EDR, Zero Trust, behavioural analytics, CASB. It should be one layer in defence-in-depth, not your entire security strategy.
Build observability: you need metrics on what's being inspected, what's bypassing, failure rates, performance impact. If you can't measure it, you can't manage it.
But yeah, the core criticism stands: even done well, it's a massive operational burden and it actively undermines trust in TLS. The failure modes are particularly insidious because you're training people to ignore the very warnings that are meant to protect them.
The real question isn't "TLS inspection: yes or no?" It's: "Do we have the organisational maturity, resources, and commitment to do this properly?" If you're not in a regulated industry or don't have dedicated security teams and mature infrastructure practices, just don't bother. But if you must do it, and plenty of organisations genuinely must, then do it properly or don't do it at all.
Now they don't have to worry about it anymore, they bought a product that sits in the corner and delivers Cybersecurity™
There's no actual market pressure to be secure, so nobody cares about threat modeling, cost/benefit of security solutions, etc. The only pressure in case of breach is political blame that you need to deflect. The point of a cybersecurity solution is to be there, remind you it is there, and allow you to deflect blame in case of disaster. Whether it actually increases security is merely a bonus side-effect.
For those that don't know, it's a MITM proxy with certificates so that it can inspect and unroll TLS traffic.
Ostensibly it's there to stop data exfiltration, as we've had a number of incidents where people have stolen data and sent it to competitors. (Our C-suite don't have as much cyber shit installed, despite being the ones who are both bigger targets and bigger rule-breakers....)
Now, I don't like Zscaler, and I can sorta see the point of it. But.
Our cyber team is not a centre of technical excellence. They somehow managed to configure zscaler to send out the certs for a random property company, when people were trying to sign into our VPN.
This broke loads of shit and made my team (infra) look bad. The worrying part is they still haven't accepted that serving a random property company's website cert instead of our own/AWS's cert is a monster fuckup, and that we need to understand _why_ that happened before trying anything again.
[1] This makes automatic pen testing interesting, because everything we scan has vulnerabilities for NFS/CIFS, FTP and TCP DNS.
By day two I started validating their setup. The CA literally had a typo in the company name, not a great sign.
A quick check with badssl.com showed that any self-signed(!) cert was being transparently MITM'ed and re-signed by their trusted corporate cert. Took them 40 days to fix it.
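That badssl probe is easy to reproduce. A sketch, assuming the `cryptography` package is installed:

```python
import ssl
from cryptography import x509  # assumption: `pip install cryptography`

# Fetch whatever certificate self-signed.badssl.com presents to us.
# Untouched, its issuer equals its subject. If the issuer is instead a
# corporate CA, a middlebox is transparently re-signing even
# self-signed certs on this path.
pem = ssl.get_server_certificate(("self-signed.badssl.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

if cert.issuer == cert.subject:
    print("still self-signed: no transparent MITM on this path")
else:
    print(f"re-signed in transit by: {cert.issuer.rfc4514_string()}")
```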
Another fun side-effect of this is that devs will just turn off TLS verification, so their codebase is full of `curl -k`, `verify_mode = VERIFY_NONE`, `ServerCertificateValidationCallback = () => true`, ... Exactly the thing you want to see at a big fintech company /s
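In Python terms, the difference between that workaround and the correct fix is a single argument. The host and bundle path below are assumptions for illustration:

```python
import requests  # assumption: requests installed

# The tempting workaround, which silently removes all TLS security:
requests.get("https://internal.example", verify=False)  # don't do this

# The fix that keeps verification working behind an inspecting proxy:
# trust the corporate root explicitly.
requests.get("https://internal.example",
             verify="/etc/ssl/certs/corp-root-ca.pem")
```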
Why do we all disdain local TLS inspection software yet half the Internet terminates their TLS connection at Cloudflare who are most likely giving direct access to US Intelligence?
It's so much worse as it's infringing on the privacy and security of billions of innocent people whilst inspection software only hurts some annoying enterprise folks.
I wish we all hopped off the Cloudflare bandwagon.
They could inject malicious keys into your config but would be hard to mask the evidence of that.
TLS inspection is for EVERYTHING in your network, not just your publicly reachable URLs.
Putting Cloudflare anti-DDoS in front of your website is not the same as breaking all encryption on your internal networks.
Google can already see the content of this site since it's hosted... on the internet.
So for all intents and purposes it's equivalent.
My point is: it's very hypocritical that we as industry professionals are complaining about poor corporates being MITM'd whilst we're perfectly fine enabling the infringement of the fundamental human right to privacy of billions of people by fronting all the shit that we build with Cloudflare in the name of "security".
I find the lack of ethical compass in this regard very disturbing personally
That your healthcare, government, bank, etc. are using Cloudflare is a third. In an ideal world I guess I'd agree with you, but asking any of these institutions to deploy proper DDoS protection may just be too much of an ask.
Who needs to let CF directly onto their network when they already sit between client and provider for critically-private, privileged communications and records access?
So it might be that they're using a custom one, which I believe is passed through end-to-end.
That said, we are not a business dealing with highly sensitive data or legal responsibilities surrounding data loss prevention.
If you are a business like that, say a bank or a hospital, you want to be able to block patient / customer data leaving your systems. You can do this by setting up a regex for a known format like patient numbers or bank account numbers.
This requires TLS inspection obviously.
Though this makes it harder to steal this data, not impossible.
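A sketch of the matching side, with invented placeholder formats standing in for real patient-number and account-number schemes:

```python
import re

# Placeholder formats; real deployments match their own ID schemes.
PATTERNS = {
    "patient_id": re.compile(r"\bPAT-\d{8}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_outbound(body: str) -> list[str]:
    """Return the rule names that fired; the proxy blocks on any hit."""
    return [name for name, rx in PATTERNS.items() if rx.search(body)]

# Only possible on plaintext, hence the TLS inspection requirement.
assert scan_outbound("transfer for PAT-12345678") == ["patient_id"]
```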
Lmao not in a million fucking years will I upload our data to an American company in fucking plaintext.
And I find it hard to argue with that.
I’ve tried this in the past and had to revert as I found it made a noticeable difference in my day-to-day.
Curious to hear the experience of others.
It’s WireGuard underneath, which is designed to not be very chatty when idle, so I’d put this down to regular back and forth with Tailscale’s control plane, relays, etc.
It’s a shame really, because a huge value prop of TS is that it’s a VPN you just leave on and forget about. I hate having to toggle it when I inevitably forget to and wonder why I’m getting connection errors to private resources.
I don't know how configurable ChromeOS is, and whether you can e.g. force it to only use a specific network and network interface, or whether a student can connect it to a different network somehow, because it would be kinda pointless otherwise.
A VPN is involved, which is what made me assume they are doing TLS shenanigans—I could theoretically be wrong. The computers connect to this VPN automatically on startup. In the moments before the VPN connects, the internet won't work.
Considering that Cloudflare has managed to MITM a huge part of the internet, I'd say that probability is not just non-zero, but greater by a worrying margin.
How do you propose compliance with their exfiltration protection requirements? (And “turn down $ from those customers” is not an answer)
TLS inspection products can intercept the paste transaction before the data leaves the company network, hitting the user with a "No you didn't! Shame on you!" banner and notifying the admins that a user just tried to paste hundreds of customers' personal information and credit card details into some snooping website, or into an otherwise-allowed LLM chat which still is not allowed to be used with confidential information.
There can even be automations to lock the user/device out immediately if something like this is going on, be it the user or some undetected malware in the user's device attempting the intercepted action. Being able to do these kinds of very specifically targeted interceptions can prevent potentially huge disasters from happening while still allowing users more freedom in taking advantage of the huge variety of productivity tools available these days. No need to choose between completely blocking all previously unseen tools or living in fear of disastrous leaks when there are fine-grained possibilities to control what kind of information can be fed to the tools and from where.
There are plenty of organizations out there where it is completely justified to enforce such limitations and monitoring on company devices. Policies can forbid personal use entirely where it is deemed necessary and legal to do so. Of course the policies and the associated enforced monitoring need to be clearly communicated, and there need to be carefully curated configurations to control where and how TLS is or isn't intercepted, so employee privacy laws and regulations aren't breached either.
If you don't, then you're simply open to encrypted comms over your deep-inspection TLS-breaking box anyway.
Also, a lot of nominally serious companies care a lot more about preventing nontechnical employees from watching porn or netflix on company devices/connections than they do about data exfiltration, or any risks posed by employees technical enough to know what phrases like "double encryption" or "TLS MITM evasion" mean.
Like, I don't love TLS MITM-ing. It's not a good thing. But it's the least bad of the options available for solving a problem that many people have decided must be solved (regulating behavior on a LAN).
To some extent I agree with you. Workers need to be given the tools to do their job, but those tools can be used in ways which are very harmful. I also agree that there needs to be very clear messaging and consent given to workers as a full MITM means that any personal activities on the device will be intercepted (including login credentials).
On a practical level, I have yet to see MITM tools work satisfactorily. I am still recovering from Zscaler PTSD.
Are there tools that do this reliably today without a whole bunch of false positives?
For example, I've encountered zscaler setups in the wild which close TLS connections if non-HTTP traffic is encountered. Presumably the traffic inspection fails since there is no HTTP request, and this failure path closes the socket.
It's hard to say whether it's due to the customer's IT dept's config, or zscaler itself -- but as far as the customer is concerned, it's my problem.
At this point in time, Microsoft is the bigger enemy here - some of their policies are just insane and none of this MITM will help [0][1]
[0] https://www.microsoft.com/en-us/microsoft-365/roadmap?id=490...
[1] https://techcommunity.microsoft.com/blog/microsoft365copilot...
You know, the ones that really know about security. X-PAN-AUTHCHECK type of security.
The number of CVEs some of the big firewall companies collect makes it seem like it is a competition for the poorest security hygiene.
The real problem we have is compliance theatre where someone in management forces these solutions into their IT department just so they can check a box on their sheets and shift all responsibilities away.
So is the benefit worth it? Is there data to prove it? Or is it just authoritarian IT departments drunk on power implementing this stuff?
I'd love to know.
> what is the likelihood of every certificate authority on the Internet having their private keys compromised simultaneously
Who cares? It's not like all CAs would have to be breached, just one. CA certs are not scoped, so the moment one CA gets breached, we're all fucked. CT helps, but AFAIK it's still not enforced everywhere yet.
https://www.iana.org/dnssec/ceremonies
Because the Framework laptop site at frame.work is malicious, of course.
God, I love CURLing crap from my workstation and not getting the files I needed but instead a bunch of mangled HTML telling me zScaler was going to scan what I was going to download.