Let's Not Encrypt (2019)
Posted 3 months ago · Active 3 months ago
michael.orlitzky.com · Tech story · High profile
Debate: heated, mixed (80/100)
Key topics
Let's Encrypt
HTTPS
Certificate Authorities
Web Security
The article 'Let's Not Encrypt' criticizes Let's Encrypt and the widespread adoption of HTTPS, sparking a debate about the benefits and drawbacks of certificate authorities and encryption.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 8m
Peak period: 119 comments in 0-3h
Avg / period: 17.8
Comment distribution: 160 data points
Key moments
1. Story posted: Oct 14, 2025 at 9:44 AM EDT (3 months ago)
2. First comment: Oct 14, 2025 at 9:52 AM EDT (8m after posting)
3. Peak activity: 119 comments in 0-3h (hottest window of the conversation)
4. Latest activity: Oct 15, 2025 at 11:13 PM EDT (3 months ago)
ID: 45579968 · Type: story · Last synced: 11/20/2025, 8:32:40 PM
It speaks to the problem of digital decay. We can still pull up a plain HTTP site from 1995, but a TLS site from five years ago is now often broken or flagged as "insecure" due to aggressive deprecation cycles. The internet is becoming less resilient.
And this has real, painful operational consequences. For sysadmins, this is making iDRAC/iLO annoying again.
(For those who don't know what iDRAC/iLO are: they're the out-of-band management controllers that let you access a server's console (KVM) even when the OS is toast. The shift from requiring crappy, insecure Java Web Start (JWS) to using HTML5 was a massive win for security and usability. Old-school sysadmins might remember keeping some crappy, insecure browser around (maybe on a bastion host) to interact with these things because they wouldn't load on modern browsers after six months.)
Now, the SSL/TLS push is undoing that. Since the firmware on these embedded controllers can't keep pace with Chrome's release schedule, the controllers' older, functional certificates are rejected. The practical outcome is that we are forced to maintain an old, insecure browser installation just to access critical server hardware again.
We traded one form of operational insecurity (Java's runtime) for another (maintaining a stale browser) all because a universal security policy fails to account for specialised, slow-to-update infrastructure... I can already hear the thundering herd approaching me: "BUT YOU NEED FIRMWARE UPDATES" or "YOU NEED TO DEPRECATE YOUR FIRMWARES IF NOT SUPPORTED".. completely tone-deaf to the environments, objectives and realities where these things operate.
this is just a flat-out lie. yes, modern browsers will still load websites over http. come on.
Direct sites will load with a "Not Secure" warning, and subresources on the page might not load without flipping chrome://settings/content/insecureContent
And of course: you won't manage to be visible to Google itself, as you'll be down-ranked for not having TLS.
If you happen to have a .dev domain: you're on the HSTS Preload list, so your site literally won't load.
You’ll be visible to Google (otherwise there would be nothing to downrank), you will just be less visible on Google.
And you, the owner, will likely be blamed by the user.
If you can call them a first world ISP ;)
https://arstechnica.com/tech-policy/2014/09/why-comcasts-jav...
It's been six years, this author is still right, and now the idiots at the CA/B have decided to move the bomb to a 47 day timer for the whole Internet.
Anybody could look up a guide online on how to monitor who at their starbucks was logging into Facebook or whatever. We were having to train a generation of humans to be afraid of public wifi.
I'm not sure if I would object to that if it would be used sparsely and you could opt out.
Things have improved significantly with HTTPS adoption.
MITM is a user->service concern. If someone is between a service and LE, there are much bigger problems.
There are a lot of random internet routers between CAs and websites which effectively have the ability to get certificates for any domain they want. It just seems like such an obvious vulnerability I'm kinda shocked it hasn't been exploited yet. Perhaps the fact that it hasn't is a sign such an attack is more difficult than my intuition suggests.
Still, I'd be a lot more comfortable if DNSSEC or an equivalent were enforced for domain validation. Or perhaps if we just cut out the middleman and built a PKI directly into the DNS protocol, similar to how DANE or Namecoin work.
Also, Let's Encrypt validates DNSSEC for DNS-01 challenges, so you can use that if you like, although CAs in general are not required to do this, there are various reasons why a site operator might not want to, and most don't.
There are two fundamental problems with DANE that make it unworkable, and that would presumably also apply to any similar protocol. The first is compatibility: lots of badly behaved middleboxes don't let DNSSEC queries through, so a fail-closed system that required end-user devices to do that would kick a lot of existing users off the internet (and a fail-open one would serve no security purpose). The other is game-theoretic: while the high number of CAs in root stores is in some ways a security liability, it also has the significant upside that browsers can and do evict misbehaving CAs, secure in their knowledge that those CAs' customers have other options to stay online. And since governments know that'll happen, they very rarely try to coerce CAs into misissuing certificates. By contrast, if the keepers of the DNSSEC keys decided to start abusing their power, or were coerced into doing so, there basically wouldn't be anything that anyone could do about it.
I think you're wrong about DANE's flaws applying to "any similar protocol". The ossification problem could be solved by DNS over HTTPS cutting out the middleboxes, though I agree adoption of that will take time, much as adoption of HTTPS itself has. The game-theory problem has been solved by CT, as you noted. You just need to subject certificates issued through the new system to the same process.
Remember that any actor capable of seizing control of DNS can already compromise the existing PKI by fulfilling DNS-01 challenges. You're not going to be able to solve that problem without completely replacing DNS with a self-sovereign system similar to Namecoin, though I can't imagine that happening anytime soon.
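For the curious, the DNS-01 challenge discussed above boils down to publishing a digest that binds a domain to an ACME account key (RFC 8555); a minimal sketch of the computation, with a made-up token and thumbprint purely for illustration:

```python
import hashlib
import base64

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding throughout (RFC 8555)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    # key authorization = challenge token + "." + account key thumbprint;
    # the TXT record at _acme-challenge.<domain> holds its SHA-256 digest
    key_authorization = f"{token}.{account_key_thumbprint}"
    return b64url(hashlib.sha256(key_authorization.encode()).digest())

# hypothetical values, for illustration only
txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "LPJNul-wow4m6Dsqxbning")
print(txt)  # 43-char unpadded base64url digest
```

This is why seizing DNS is sufficient: whoever can write that TXT record can complete validation, regardless of who runs the web server.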
> If someone is between a service and LE
There is always someone there: my ISP, my government that monitors my ISP, the LE's ISP, and the US government that monitors the LE's ISP.
In reality, successful society lives halfway down tons of slippery slopes at any given point in time, and engineers in particular hate this. Yet this has been true since basically forever.
I'm sure cavemen engineers complained about how it's not secure to trust that your cave is the one with the symbol you made on the wall, etc.
But also, there is no choice now. The best we can do is encourage people to use web browsers that let people visit http sites, and afaik, that doesn't exist anymore.
I'm not using other browsers often, so my perception may be skewed, but I wouldn't have expected them to block http, and if I saw that, I would perceive it as a bug.
(Hugs)
I think they are implying that if someone can man-in-the-middle your website, then they can also man-in-the-middle this request and issue a certificate for your domain. However, the threat model of a man in the middle between a user and your web server is very different from a man in the middle between Let's Encrypt and your web server.
Before the widespread use of HTTPS it was trivial to connect to a coffeeshop's wifi network and sniff everyone else's traffic, and ISPs would man-in-the-middle you to inject their own ads into websites you were looking at.
On the other hand, to man-in-the-middle Let's Encrypt -> your web server, you likely need to be a state-level actor and/or be (or have hacked) a major telecom (assuming your web server is running in a reputable data center). Folks like that can almost certainly already issue a certificate for your domain without running a man in the middle on Let's Encrypt.
His critiques of why LE is flawed security wise are spot on and I suspect something like SSH keys as he suggests would be pretty much as good.
But there's a reason we're encrypting everything, and the time when we started encrypting offers a clue as to why. Mass surveillance threat actors are not going to go to the trouble and visibility of MITMing every cert connection, but they will (and in the case of NSA did) happily go to the trouble of hoovering up network traffic en masse and watching how people surf. HTTPS provides some protection there because it at least hides the paths to the specific pages you are reading as you surf online, including things like search engine query terms.
The idea that $3.6m is a lot of money to encrypt a huge chunk of web traffic, or that Google is eagerly guarding the money it makes (?) off web certs, which must be a tiny fraction of its actual income, is a clue that this is maybe not a greedy conspiracy.
Because Google forced us to, by throwing up scary warnings if we didn't do it.
Google doesn't care about $3.6mm. They do care about the additional control they have by this scheme.
> [HTTPS] at least hides the paths to the specific pages you are reading as you surf online, including things like search engine query terms.
This assumes there isn't a secret firehose feed from Google to the NSA, which I don't think is a safe assumption.
I'm far more amenable to the idea that Google didn't want ISPs to start injecting ads on websites. If that is control for Google in your view, then my interests aligned with Google for once in a blue moon.
I'm not as convinced as the author is that nation states can easily tamper with certificates these days. I am not sure how much CT checking we do before each page load, but either nation states are compelling the issuance of certs that aren't in the CT logs, or the certs are logged and you can just get a list of who the nation states are spying on. Seems like less of a problem than it was a decade ago.
The author seems to miss the one guarantee that certificates do provide: "the same people that controlled this site on $ISSUANCE_DATE control the site right now". That can be a useful guarantee.
We were working on some feature for a client's website, and suddenly things started breaking. We eventually tracked it down to some shoddy HTML + Javascript being on our page that we certainly didn't put there, and further investigation revealed that our ISP - whom we were paying for a business connection - was just slapping a fucking banner ad on most of the pages that were being served.
This was around ... 2008? I wonder if they were injecting it into AJAX responses, too.
My boss called them up and chewed them several new assholes, and the banner was gone by afternoon.
How?
One thing that helps drive it away at work is that we're a University, and essentially all the world's universities have a common authenticated WiFi (because students and perhaps more importantly, academics, just travel from one to another and expect stuff to work, if you got a degree in the last 20 or so years you likely used this, eduroam) but obviously they don't trust each other on this stuff so their sites all use the Web PKI, the same public trust as everybody else, internal stuff might not, but the moment you're asking some History professor to manually install a certificate you might as well assign them a dedicated IT person, so, everything facing ordinary users has public certs from, of course, Let's Encrypt.
Edited to name eduroam specifically.
AHHHH - I just called a friend of mine at one of the French schools. He told me that this is for researchers only and this is why I was given another (permanent) access.
I stand corrected and I apologize. This is actually awesome. Working in the field, this is probably one of the most interesting deployments I have seen over many years and I will have a close look at it now.
Tbh it kinda makes sense for those systems, when used only with internal tools and on company devices... but yeah, I'd just (of course) use Let's Encrypt if I was setting it up for a client.
1. You're somehow connecting to Facebook and Amazon over HTTP, not HTTPS
2. Your browser has an extension from your ISP installed that's interfering with content
3. You've trusted a root CA from your ISP in your browser's trust store
I feel like there needs to be a name for this. For now, "Those who do not learn from history are doomed to repeat it." is the most apt I think.
Happens constantly when you're essentially born on 3rd base. Maybe that's the proper name. Born on 3rd Base Syndrome.
I've often said that my grandmother was so grateful for all the childhood vaccines that came out during my mom's and my aunt's childhoods, or around that time (the Baby Boom era), because my grandmother really concretely saw how terrible some of those diseases were, with people in her generation actually contracting them in childhood, maybe even dying of them. But if you've really never seen them, it's pretty natural that they start to seem like something that barely even exists at all.
Like, I don't even know the difference between typhus and typhoid, or what their symptoms are, or what you actually do to prevent them, or exactly how they're spread, or whether they've been eradicated in certain regions or whatever (or even whether there are any vaccines against them or whether or not I've personally received those vaccines in infancy!). I just barely have a vague sense that these are truly awful things that apparently exist in the world, probably relate to water contamination somehow, and may potentially come back in war zones or disaster zones. (Way to go, people who do ... something? ... to prevent those two!)
https://en.wikipedia.org/wiki/Preparedness_paradox
[1] https://en.wikipedia.org/wiki/Glass%E2%80%93Steagall_legisla...
This inspired me to add a list of all script tags to error reports.
Amateur level ... Around 2006, we "enjoyed" some clients complaining about why information on our CMS was being duplicated.
No matter what we did, there was no duplication on our end. So we started to trace the actions from the client (incl. browser, IP, etc). And lo and behold, we got one action coming from the client, and another from a different IP source.
After tracing back the IP, it was an anti-virus company. We installed the software on a test system, and ... yep, the assh** duplicated every action, incl. browser settings, session, you name it.
A total and complete mimic beyond the IP. So every action the user did, plus the information on the page, was sent to their servers for "analyzing".
Little issue ... This was not from the public part of our CMS but the HTTPS-protected admin pages!
Sure, our fault for not validating the session with extra IP checks, but we did not expect the (admin-only) session to leak out from an HTTPS connection.
So we tried to see if they reacted to login attempts at several bank pages. Oh yes, they sent the freaking passwords etc. We tried an unused bank account and, oh look, it was duplicating bank actions (again, bank at fault for not properly checking the session / IP).
It only failed on a bank transfer because the token for authorization was different on their side vs our request.
You can imagine that we had a rather, how to say, less than polite conversation with the software team behind that anti-virus. They "fixed it" in a new release. Did they remove the whole tracking? Nope, they just removed the code for the session stealing if the connection was secure.
Oh, and the answer to why they did it: "it's a bug" (yeah, right, you mimic total user behavior, and it's a "bug"). Translation: Legal got up their behinds for that crap and they wanted to avoid legal issues with what they did.
Remember folks: if it's free, you're the product. And when it's paid, you are often STILL the product. And yes, that was a paid anti-virus "online protection". And people question why I never run any anti-virus software beyond an off-line scan from time to time, and have Windows "online" protections disabled.
Companies just cannot stop themselves from being greedy. Same reason why I NEVER use Windows 11... You expect, if you paid for Windows, Office or whatever, not to be the product, but hey ...
You can stop ISP ad injection with solutions much less complex than WebPKI.
Simply using TOFU-certificates (Trust On First Use) would achieve this. It also gives you the "people who controlled this website the first time I visited it still control it" guarantee you mention in your last paragraph.
TOFU isn't ideal, but it's an easy counterexample to your claims.
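A TOFU pin store of the kind described can be sketched in a few lines (hostnames and key bytes below are invented for illustration):

```python
import hashlib

class TofuStore:
    """Trust-on-first-use pin store: remember the first key seen per host,
    reject any later connection presenting a different key."""
    def __init__(self):
        self.pins = {}  # hostname -> SHA-256 fingerprint of the cert/key

    def check(self, hostname: str, cert_der: bytes) -> bool:
        fp = hashlib.sha256(cert_der).hexdigest()
        if hostname not in self.pins:
            self.pins[hostname] = fp   # first use: trusted blindly
            return True
        return self.pins[hostname] == fp  # later uses: must match the pin

store = TofuStore()
assert store.check("example.test", b"original-key")      # first visit: accepted
assert store.check("example.test", b"original-key")      # same key: accepted
assert not store.check("example.test", b"attacker-key")  # key changed: rejected
```

The sketch shows both the upside (any later key swap is detected) and the weakness: whatever key arrives first is trusted blindly, so an interception on first contact poisons the pin.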
As a user how would I know if I should trust the website's public key on first use?
It's a counterexample, not a recommendation.
If you need this guarantee, use self-certifying hostnames like Tor *.onion sites do, where the URL carries the public key. More examples of this: https://codeberg.org/amjoseph/not-your-keys-not-your-name
I can set which CAs can sign certs for my domains, and monitor if any are issued that I didn't expect.
This only matters when your view of the entity isn't solely determined by the domain name. For example, you care when someone impersonates google.com because you expect it to belong to Alphabet Inc.; you perceive impersonation when the entity you are talking to changes. When some domain always resolves to your ISP, then that domain is owned by the ISP in your network.
But that's exactly what we're trying to avoid. When I want to visit my bank website, I don't want the ISP to become the real website.
I don't understand your comments. Your solution to how I trust the website on first use (TOFU) is to trust whatever public key the middle man (the ISP) serves? If you're okay with that, I guess you don't have a problem. But I'm not okay with that, so TOFU doesn't solve my problem.
If you were to redesign name and address resolution to enforce connecting to the real physical-world entity, this should happen out-of-band. Well, now that I think of it, I think that's what the GNU Name System is trying to address: https://www.gnunet.org/en/gns.html .
-----
The question seems to be: what do you consider to be the real website? The one that answers your request? Then the ISP is the real website. In your example you seem to have preconceptions about who might own the website, but these are outside of the network. TLS from Let's Encrypt only enforces that the entity never changes. There are validation schemes where the physical/legal entity is validated, but that is not the case here.
I think you've fundamentally misunderstood the problem. Nobody here is talking about the domain resolving to ISP.
We're talking about an MITM attack where a middle-man (doesn't even need to be the ISP, it could be a router in the middle acting like middle-man) intercepts our request and serving its own public key instead of the actual server's public key so that it can carry out an MITM attack.
With TOFU, there is no way to detect this attack the first time I am connecting to the website. Once you have foolishly trusted the middle-man's public key, you may not notice any problem for months or years. Then one day, the middle-man may decide to misuse your credentials that it has collected during MITM attack.
Still doesn't explain how I'll confirm that if the website has not been intercepted by a middle man the first time I visit it.
What about TLS certificates attested by CAs who validate the real world legal entity? Would you agree that this is a solved problem there?
They can MITM the connection between the host and LE (or any other CA resolver, ACME or non-ACME, doesn't matter). This was demonstrated by the attack against jabber.ru, at the time hosted in OVH. I recommend reading the writeup by the admin (second link from the top in TFA).
This worked, because no-one checked CT.
That said, I don't think there's a way to stop a nation state from seizing control of a domain they control the TLD name servers for without something like Namecoin where the whole DNS system is redesigned to be self-sovereign.
The system is tamper evident not tamper proof. A nation state adversary can indeed impersonate my web site and obtain a new certificate, but the Web Browser doesn't trust that certificate without seeing Proof it was in the CT logs. So, now the nation state adversary need Proof it was Logged.
Whoever issued them the proof has 24 hours to include that dodgy certificate in their public logs for everyone to see. If they lie and don't actually log it, the proof will be worthless and if shown to a trust root this bad proof will result in distrust of the log's operator. That's likely a six or seven figure investment thrown away, for each time this happens.
On the other hand if they do log it, everybody can see what was issued and when, which is inconvenient if you'd prefer to be subtle like the NSA and to some extent Mossad. If you're happy to advertise that you're the bad guys, like the Russians and North Koreans, you do have the small problem that of course nobody trusts you, so, you can't expect any co-operation from the other actors...
This isn't like a misissuance where you can blame the CA and remove them from the root stores; they'd just be following the normal domain validation processes prescribed in the BRs.
Going to Portland to check whether it's on fire would be a lot of effort - so to some extent I must take it on trust that it's not actually on fire despite Donald Trump's statement - whereas visiting crt.sh to check for the extra certificates somebody claims the US government issued is trivial.
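Checking crt.sh for unexpected issuance can be scripted against its JSON output; the sketch below runs on a canned, fabricated response rather than a live query to `https://crt.sh/?q=<domain>&output=json`:

```python
import json

# fabricated sample of crt.sh-style JSON records, for illustration only
sample = json.dumps([
    {"issuer_name": "C=US, O=Let's Encrypt, CN=R11",
     "common_name": "example.test", "not_before": "2025-09-01T00:00:00"},
    {"issuer_name": "C=XX, O=Unexpected CA",
     "common_name": "example.test", "not_before": "2025-10-01T00:00:00"},
])

def unexpected_issuers(crtsh_json: str, expected: set) -> list:
    # flag any logged certificate whose issuer is not on our expected list
    return [rec for rec in json.loads(crtsh_json)
            if not any(name in rec["issuer_name"] for name in expected)]

suspicious = unexpected_issuers(sample, {"Let's Encrypt"})
print([r["issuer_name"] for r in suspicious])  # the "Unexpected CA" record
```

This is the "trivial" check in practice: you only learn about the certificate because CT forced it into a public log.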
I'm not saying there's no value in being able to detect when you're compromised. I'm just saying it would be better if the compromise wasn't possible to begin with.
When I looked at this ~10 years ago it was overwhelmingly "Fuck it they'll click past the warning" and today that doesn't work† but I don't work in an industry where it's my job to go find out what's happening to valuable targets (in that case military and government systems, typically in Asia or Africa) any more.
† There are more obstacles, more awareness, and better tooling so "doesn't work" is over-stating it but I'd be very surprised if "fuck it" (ie just don't get certificates and impersonate an HTTP-only site instead) was enough today.
What would somewhat help would be a CAA record with a specified ACME account key. The attackers would then have to alter the DNS record, which would be harder, as you describe. (Or pull the key from the VM disk image, which would cross another line.)
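For reference, RFC 8657 defines the CAA parameters that bind issuance to a specific ACME account and validation method; a zone-file sketch (the account URI number is a placeholder):

```
; only Let's Encrypt may issue, and only via this specific ACME account
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
; optionally also pin the validation method
example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
```

With this in place, an on-path attacker who can answer challenges would also need control of the pinned account key (or of the DNS zone itself) to obtain a certificate.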
> the CA would be immediately distrusted by browsers, not as punishment but to deter state actors.
Do you think browsers operate outside of states?
> Compelling by the state to do something that destroys a company is illegal in many jurisdictions
How would it destroy the company? It might affect reputation, but as long as it wasn't the company doing it on its own, they can just claim to be the victim (which they are). It will only affect the company if it becomes public knowledge, which the state actor doesn't want anyway. I don't think a reputation for not responding to legal warrants is protected by the law. Also, for example, the USA is famous for installing malware on other countries' heads of state.
Honestly this is the kind of law enforcement, which is fair in my opinion. It is much more preferable to mandated scanning (EU Chat Control), making the knowledge or selling of math illegal or sabotaging public encryption standards. No general security is undermined. It's just classic breaking in into some system and intercepting. Granted I think states shouldn't do it outside of their jurisdiction, but that is basically intelligence services fighting with each other.
If you're in the business of selling X.509 certs trusted by browsers, then not being trusted by browsers kinda limits the marketability of your product.
I don't believe the browsers could be coerced to not distrust such a CA. In every root program I know there's a clause that membership to the program is at browser's pleasure. (Those that have public terms, i.e. not msft, but I'd assume those have similar language.)
Re: they can just do it, well, I think they'd be distrusted the same.
In Symantecgate one of the reasons for distrust was that they signed FPKI bridge, so I think no CA in the future will sign a subca that will sign FPKI certs.
> Also for example the USA is famous for installing malware on other countries head of state.
Yeah, exactly. I think they have more targeted ways that risk less detection and less collateral damage.
Do you think Google or Apple are going to care? They bowed down to China; I think the state they have their headquarters in has even more leverage. As for Mozilla Firefox on Linux, maybe, but I wouldn't trust that too much either.
> I think they have more targeted ways that risk less detection and less collateral damage.
I think they don't really need to care about this, it was quite clear that no other state is publicly doing anything against this.
This is not practically possible for browsers to do, as it would also cause all of the legitimate certificates signed by that CA to become distrusted and break large swathes of the internet. This was one of the main complaints Moxie Marlinspike had in his 2011 talk on TLS (the contents of which are sadly just as true today as they were then)[1].
In fact, there is fairly credible evidence that the NSA did actually do this already back in 2011 with the DigiNotar hack to steal the contents of Iranian emails[2]. This case was so egregious that DigiNotar did get distrusted by browsers, but other hacks like that of Comodo did not result in their CA certificates being distrusted.
The CAB does apparently block CAs more aggressively than they did a decade ago, but I wonder if they would actually block a big CA like LetsEncrypt if it came out they did something shady or got hacked. It just seems incredibly unlikely they would flip the "turn off >60% of the internet" switch regardless of what LetsEncrypt hypothetically did (for reference, in 2011 Comodo signed only 20-25% of website certificates).
[1]: https://www.youtube.com/watch?v=UawS3_iuHoA [2]: https://en.wikipedia.org/wiki/DigiNotar
This has to be a rage bait comment, but anyway, how do you expect 'injections' to show up on 'http-only' ?
"Don't mind us, we're just sitting in the middle of your traffic here and recording your logins in plaintext"
Kinda sorta. In transit, most email is encrypted; the big mail providers all both speak and expect TLS encryption when moving mail. Almost everybody configures TLS-encrypted IMAP if they use a client, or reads email over HTTPS.
> A public invitation to protest against my authoritarian government should not turn on total paranoia mode
The expectations ordinary people have for how the web works are not met by the basic HTTP protocol. They need HTTPS to deliver those basic assumptions. Who decides the hours of the local bakery? Is it Jeff Bezos? HTTP says that seems fine, but HTTPS says no, the bakery gets to decide, not Jeff.
While the situation with emails is worse it does not mean it should be like that.
Not a viable option in a lot of places. Nor does anyone really even want to consider this possibility of their ISP being able to MITM something in the first place.
I sure love when decisions reduce themselves to single points of consideration by virtue of them being discussed in a heated internet forum thread
That's the least of the problems: they (anyone with basic access to your network, actually) could easily overwrite every cookie or session on your machine to use their referral links. E.g. Honey & PayPal's fraud [0], without you having any idea. Now maybe you don't care, but it's stealing other people's potential earnings.
[0] https://www.theverge.com/24343913/paypal-honey-megalag-coupo...
HTTPS is a really important defense against this, but it's really hard to know when it worked or when it was relevant, because the wiretappers weren't announcing what they were doing and similarly don't usually announce when it's being thwarted.
There are lots of limitations there. For example, traffic analysis may sometimes allow identifying pairs of people who communicate with each other in low-latency ways like a real-time call, or possibly also those who communicate in a distinctive way in high-latency ways. It may also allow determining, for example, which Wikipedia page you looked at, because the pages are different sizes and contain different numbers of images, so the timing and volume of your browser communications could be distinctive depending on which page you browsed to. But, if you don't do the HTTPS part, then you're basically just saying "we're going to allow anyone who controls network infrastructure to permanently record 100% of all communications in an easily searchable way, if they so choose".
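The Wikipedia-page example can be made concrete with a toy sketch (the page names and byte counts are invented, and real traffic analysis must also cope with padding, caching, and timing):

```python
# invented catalog of page -> total transfer size in bytes
catalog = {
    "Influenza": 412_330,
    "Cryptography": 388_105,
    "Tor_(network)": 512_944,
}

def guess_page(observed_bytes, tolerance=1_000):
    # an eavesdropper sees only the encrypted volume, not the URL;
    # if sizes are distinctive, volume alone can identify the page
    matches = [p for p, size in catalog.items()
               if abs(size - observed_bytes) <= tolerance]
    return matches[0] if len(matches) == 1 else None

print(guess_page(512_500))  # close to only one catalog entry
```

The point cuts both ways: HTTPS hides the URL but not the volume, so it narrows rather than eliminates what a passive observer can learn.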
But the certificate is signed with Let's Encrypt's key over your own public key, and neither private key ever leaves its owner's server.
EDIT: I understand how it works. This wasn’t my point.
The point (I think) that TLA is trying to make is that encryption isn’t enough. It wouldn’t be a good situation where someone looks at their house burning and says “well at least nobody could ever read my https traffic.”
The browser not trusting the CA that signed the certificate prevents this. As the commenter said above, they would first need to install a certificate into your list of trusted certs for this to work. Your IT department can do that because they have root on your machine, vpn-du-jour.com can not, and neither can anybody else without root.
Also, I believe that when I download “Shoot Your Friends Online” and install that, it also asks for root privileges (in order to make sure that no cheating software runs on my computer that would allow me to “shoot more of my friends quicker.”)
I also think that when I install “Freecell Advanced,” it also comes with “Freecell Advanced Updater” that needs root privileges (in order to “update Freecell Advanced.”)
Do I understand correctly that there is nothing stopping all three of these — running with root privileges — from installing certificates?
It's fine to still run the software if you trust it, though.
How many website owners really do that? I mean, even Cloudflare hadn't been running a tight ship in this regard[0] until recently.
[0]: https://blog.cloudflare.com/unauthorized-issuance-of-certifi...
Being generous, I would say they are referring to the client having an invalid SSL cert approved on their local machine, in which case it's a client problem.
To ignore encryption altogether is a silly idea. Maybe it shouldn't be so centralised in one company, though.
Manual long-term keys are frowned upon due to potential key leaks, such as Heartbleed, or admin misuse, such as copies of keys ending up on lots of devices back when you were signing that 10-year key.
Automated and short lived keys are the solutions to these problems and they're pretty hard to argue against, especially as the key never leaves the server, so the security concerns are invalid.
That's not to say you can't levy valid criticism. I'm not sure if the author is entirely serious either though.
p.s. Certbot and Cert-manager are probably fine, but they're also fairly interesting attack vectors
This is completely backwards: TOFU schemes aren't acceptable for the public web because the average user (1) isn't equipped to compare certificate fingerprints for their bank, and (2) shouldn't be exposed to any MITM risk because they forget to. The entire point of a public key infrastructure like the Web PKI is to ensure that technical and non-technical people alike get transport security.
(The author appears to unwittingly concede this point with the SSH comparison -- asking my grandparents to learn SSH's host pinning behavior to manage their bank accounts would be elder abuse. It works great for nerds, and terribly for everyone else.)
Why is it reasonable to trust the key on first use? What if the first use itself has a man-in-the-middle that presents you the middle-man's key? Why should I trust it on first use? How do I tell if the key belongs to the real website or to a middle-man website?
No, but I was extending a charitable amount of credulousness :-)
Please do tell. I'm curious what forced him to join The Borg.
99 more comments available on Hacker News