HTTPS by Default
Posted 2 months ago · Active about 2 months ago
Source: security.googleblog.com · Tech story · High profile · Controversial / mixed · Debate · Score: 80/100
Key topics: HTTPS, Web Security, Browser Policy, Certificate Management
Google announces plans to enable HTTPS by default in Chrome, sparking debate among developers and users about the implications for security, usability, and certificate management.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 41m after posting
Peak period: 101 comments in 0-12h · Avg per period: 26.7
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
01. Story posted: Oct 28, 2025 at 2:04 PM EDT (2 months ago)
02. First comment: Oct 28, 2025 at 2:44 PM EDT (41m after posting)
03. Peak activity: 101 comments in 0-12h (hottest window of the conversation)
04. Latest activity: Nov 5, 2025 at 10:59 PM EST (about 2 months ago)
ID: 45736499 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
My first reaction was along the lines of "What? That can't possibly be right..."
After testing a bit, it looks like you can load https://neverssl.com but it'll just redirect you to a non-https subdomain. OTOH, if the initial load before redirecting is HTTPS then it shouldn't work on hotel wifi or whatever, so still seems like it defeats the purpose.
Huh.
http.rip will probably show a "website unavailable" error at some point unless you manually type in the http:// prefix.
HSTS might also interact with this, but I'd expect an HSTS site to just cause Chrome to go for HTTPS (and then that connection would either succeed or fail).
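The HSTS mechanism mentioned here can be sketched roughly as follows — a toy parser for the `Strict-Transport-Security` header showing how a client decides to force HTTPS for a host (illustrative only; real browsers also consult a preload list and persist the policy per host):

```python
# Toy sketch: interpreting a Strict-Transport-Security header the way
# a browser's HSTS logic might, to decide whether to force HTTPS.

def parse_hsts(header: str) -> dict:
    """Parse an HSTS header value into its directives."""
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains"))
# {'max_age': 31536000, 'include_subdomains': True}
```

A site on the HSTS list is rewritten to HTTPS before any network request is made, so the connection either succeeds over TLS or fails outright, exactly as described above.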
> to force network-level auth flows (which don't always fire correctly when hitting HTTPS)
The whole point of HTTPS is that these shouldn't work. Vendors need to stop implementing weird network-level auth by MitM'ing the connection; DHCP has an option to signal to someone joining a network that they need to visit a URL to authenticate. These MitM-ers are a scourge, and often cause a litany of poor behavior in applications…
(But also at some point that seems like a bug in Android.)
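The DHCP option alluded to above is the captive-portal option from RFC 8910: DHCPv4 option 114 carries nothing more than the URI of the portal's API, so a client can be pointed at the auth page without any interception. A minimal sketch of how that option is encoded on the wire (the portal URI below is hypothetical):

```python
# Sketch of RFC 8910's DHCPv4 option 114 ("captive-portal"): a plain
# type-length-value triplet whose value is the portal API URI.

def encode_captive_portal_option(uri: str) -> bytes:
    """Encode DHCPv4 option 114 as a type-length-value triplet."""
    data = uri.encode("ascii")
    if len(data) > 255:
        raise ValueError("option payload too long for a single option")
    return bytes([114, len(data)]) + data

opt = encode_captive_portal_option("https://portal.example.net/api")
assert opt[0] == 114                      # option code
assert opt[1] == len(opt) - 2             # payload length
```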
Why is Linux adoption at 80% when MacOS/Android/Windows are at 95%? Quite unexpected.
It means that if someone has patched into your local network they can access anything in there, but they have to get in first, right? So how concerned should one be in these scenarios
(a) one has wifi with WPA2 enabled
(b) there's a Verizon-style router to the outside world but everything is wired on the house side?
Public CAs don't issue (free) certificates for internal hostnames, and running your own CA has the drawback that Android doesn't allow you to "properly" use a personal CA without root, splitting its CA list between the automatically trusted system CA list and the per-application opt-in user CA list. (It ought to be noted that Apple's personal CA installation method uses MDM, which is treated like a system CA list.) There are also random/weird one-offs like how Firefox doesn't respect the system certificate store, so you need to import your CA certificate separately in Firefox.
The only real option without running into all those problems is to get a regular (sub)domain name and issue certificates for that, but that usually isn't free or easy. Not to mention that if you do the SSL flow "properly", you need to issue one certificate for each device, which leaks your entire intranet to the certificate transparency log (this is the problem with Tailscale's MagicDNS as a solution). Alternatively you need to issue a wildcard certificate for your domains, but that means that every device in your intranet can have a valid SSL certificate for any other domain name on your certificate.
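The Certificate Transparency leak described above is easy to see for yourself: every publicly issued certificate lands in the CT logs, which anyone can search. For example, crt.sh (a real public CT search service) exposes a simple JSON endpoint; the domain below is hypothetical:

```python
# Sketch of why per-host certificates leak intranet names: anyone can
# query the CT logs for every certificate ever issued under a domain.
from urllib.parse import urlencode

def ct_search_url(domain: str) -> str:
    """Build a crt.sh query listing every cert issued under a domain."""
    return "https://crt.sh/?" + urlencode({"q": f"%.{domain}", "output": "json"})

print(ct_search_url("home.example.com"))
# If each device got its own certificate, the result would enumerate
# nas.home.example.com, printer.home.example.com, ... whereas a
# wildcard cert reveals only the single entry *.home.example.com.
```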
You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...
> which leaks your entire intranet to the certificate transparency log
Not necessarily, you don't route the domain externally, and use offline DNS challenge/request to renew the certificate.
You can, but as stated - that's not free (or easy). That's still yet another fee you have to pay for... which hurts adoption of HTTPS for intranets (not to mention it's not really an intranet if it's reliant on something entirely outside of that intranet.)
If LetsEncrypt charged $1 to issue/renew a certificate, they wouldn't have made a dent in the public adoption of HTTPS certificates.
> Not necessarily, you don't route the domain externally, and use offline DNS challenge/request to renew the certificate.
I already mentioned that one, that's the wildcard method.
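For reference, the wildcard method boils down to a single DNS-01 issuance. A hedged sketch (assumes acme.sh with a DNS provider hook; `dns_cf` is the Cloudflare hook shipped with acme.sh — substitute your own, and the domain is hypothetical):

```shell
# One wildcard cert via DNS-01 covers every intranet host without
# putting individual hostnames into the CT logs:
acme.sh --issue --dns dns_cf -d '*.home.example.com'
```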
Also, if WPA2 ever becomes extremely broken. There was a period of 3-5 yrs where WEP was taking forever to die at the same time that https was taking forever to become commonplace and you could easily join networks and steal facebook credentials out of the air. If you lived in an apartment building and had an account get hacked between maybe 2008-2011, you were probably affected by this.
> If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only.
Sounds like it's just because a large chunk of Linux usage is for web interfaces on the local machine or network, rather than everyday web browsing.
The answer is probably that people that run Linux are far more likely to run a homelab intranet that isn't secured by HTTPS, because internal IP addresses and hostnames are a hassle to get certificates for. (Not to mention that it's slightly pointless on most intranets to use HTTPS.)
I think it's important to emphasise that although Tim's toy hypermedia system (the "World Wide Web") didn't come with baked in security, ordinary users have never really understood that. It seems to them as though http://foo.example/ must be guaranteed to be foo.example, just making that true by upgrading to HTTPS is way easier than somehow teaching billions of people that it wasn't true and then what they ought to do about that.
I am reminded of the UK's APP scams. "Authorized Push Payment" was a situation where ordinary people think they're paying say, "Big Law Firm" but actually a scammer persuaded them to give money to an account they control because historically the UK's payment systems didn't care about names, so to it a payment to "Big Law Firm" acct #123456789 is the same as a payment to "Jane Smith" acct #123456789 even though you'd never get a bank to open you an account in the name of "Big Law Firm" without documents the scammer doesn't have. To fix this, today's UK payment systems treat the name as a required match not merely for your records, so when you say "Big Law Firm" and try to pay Jane's account because you've been scammed, the software says "Wrong, are you being defrauded?" and so you're safe 'cos you have no reason to fill out "Jane Smith" as that's not who you're intending to give money to.
We could have tried to teach all the tens of millions of UK residents that the name was ignored and so they need other safeguards, but that's not practical. Upgrading payment systems to check the name was difficult but possible.
And I noticed that Whatsapp is even worse than Chrome, it opens HTTPS even if I share HTTP links.
Probably a low-threat security risk for a blog.
But indeed, the ability to publish on my own outweighs the risk of someone modifying my content.
Most of us here read their news from work laptops, where the employer and their MiTM supplier are a much bigger threat even for HTTPS websites.
Their client will complain loudly until and unless they install it, but then for those who care you could offer the best of both worlds.
Almost certainly more trouble than it's worth. G'ah, and me without any free time to pursue a weekend hobby project!
You're not really offering that, because the first connection could've been intercepted.
I can imagine alternate approaches (service that stores personal keys on an HTTPS server signed via a public cert, keys in peer-to-peer filesharing with the checksum provided side-channel), but that gets increasingly more elaborate for diminishing return.
There are ways to remove that dependency, but it's going to involve a decentralized DNS replacement like Namecoin or Handshake, many of which include their own built-in alternatives to the CA system too so if "no third parties" is something you truly care about you can probably kill two birds with one stone here.
What does this mean? Is that encryption not reliant on any third parties, or is it just relying on different third parties?
Proton Mail burned CPU time until they found a public key that started the way they wanted it to.
So that is the public key for an HTTPS equivalent as part of the tor protocol.
You can ALSO get an HTTPS certificate for an onion URL; a few providers offer it. But it’s not necessary for security - it does provide some additional verification (perhaps).
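The vanity-key search mentioned above has a simple brute-force shape: keep generating keys until the derived address starts with the string you want. Real .onion addresses are derived from ed25519 keys; the sketch below substitutes SHA-256 of random bytes purely to show the mechanics (each extra base32 character multiplies the expected work by 32):

```python
# Toy model of a vanity-address search (NOT real onion-address
# derivation): regenerate a key until the address has the prefix.
import base64, hashlib, os

def find_vanity(prefix: str) -> str:
    while True:
        key = os.urandom(32)               # stand-in for a fresh keypair
        addr = base64.b32encode(hashlib.sha256(key).digest()).decode().lower()
        if addr.startswith(prefix):
            return addr

addr = find_vanity("a")                    # ~32 tries on average
assert addr.startswith("a")
```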
It's a shame these didn't put in a better built-in human-readable URL system. Maybe a free-form text field 15-20 characters long appended to the public key and somehow made part of that key. Maybe the key contains a checksum of those letters to verify the text field. So something like protonmail.rmez3lotcciphtkl+checksum.
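That proposal could be sketched like this — an entirely hypothetical scheme in which a short checksum binds the human-readable label to the key, so the label can't be swapped for a different one:

```python
# Sketch of the commenter's idea (hypothetical scheme): a checksum
# over (key + label) ties the readable label to the public key.
import hashlib

def labeled_address(pubkey_b32: str, label: str) -> str:
    check = hashlib.sha256((pubkey_b32 + label).encode()).hexdigest()[:4]
    return f"{label}.{pubkey_b32}+{check}"

def verify(address: str) -> bool:
    label, rest = address.split(".", 1)
    pubkey_b32, check = rest.rsplit("+", 1)
    return hashlib.sha256((pubkey_b32 + label).encode()).hexdigest()[:4] == check

addr = labeled_address("rmez3lotcciphtkl", "protonmail")
assert verify(addr)   # a tampered label would fail the checksum
```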
But this being said, I think the sort of independent 'not needing third parties' ethic just isn't realistic. It's the libertarian housecat meme writ large. Once you're communicating with others and taking part in a shared communal system, you lose that independence. Keeping a personal diary is independent. Anything past that is naturally communal and involves some level of sharing, cooperation, and dependency on others.
I think this sort of anti-communal attitude is rooted in a lot of regressive stuff and myths of the 'man is an island' and 'great man' nonsense. Then leads to weird stuff like bizarre domain names and services no one likes to use. Outside of very limited use cases, tor just can't compete.
Could we finally stop acting like we know how other people's energy is being produced?
There is no magic do it all yourself. Communicating with people implies dependence.
I know about acme.sh, but still...
Like, the default for cars almost everywhere is you buy one made by some car manufacturer like Ford or Toyota or somebody, but usually making your own car is legal, it's just annoyingly difficult and so you don't do that.
It may be legal but good luck ever getting registration for it.
Now, getting required insurance coverage, that can be a different story. But even there, many states allow you to post a bond in lieu of an insurance policy meeting state minimums.
It’s trying to make and sell three or four that is nearly impossible.
https://en.wikipedia.org/wiki/Local_Motors
So, what you've said is true today, but historically Certbot's origin is tied to Let's Encrypt, which makes sense because initially ACME wasn't a standard protocol: it was designed to become one, but it was still under development and the only practical server implementations were both developed by ISRG / Let's Encrypt. RFC 8555 took years.
And I can't praise acme.sh enough; by contrast, it's simple, dependency-free, and reliable!
I've used their stuff since it came out and never used certbot, FWIW. If I were to set something up today, I'd probably use https://github.com/dehydrated-io/dehydrated.
So you're absolutely not dependent on the client software, or indeed anyone else's client software.
My hosting provider may accidentally fuck up, but they'll apologise and fix it.
My CA fucks up, they e-mail me at 7pm telling me I've got to fix their fuck-up for them by jumping through a bunch of hoops they have erected, and they'll only give me 16 hours to do it.
Of course, you might argue my hosting provider has a much higher chance of fucking up....
Mark my words, some day soon an enterprising politician will notice the CA system can be drawn into trade sanctions against the enemy of the day....
If you're required to (or choose to) not tell us about it, then when we notice via active monitoring, it's likely your CA will be distrusted for not telling us. This is easier because there's a mechanism to tell us about it - the same way there's a way to officially notify the US that you're a spy, so when you don't (because, duh, you're a spy) you're screwed 'cos you didn't follow the rules.
The tech centralization under the US government does mean there's a vulnerability on the browser side, but I wouldn't speculate about how long that would last if there's a big problem.
Except (a) your website doesn't let users create custom subdomains; (b) as the certificate is now in use, you the certificate holder have demonstrated control over the web server as surely as a HTTP-01 challenge would; (c) you have accounts and contracts and payment information all confirming you are who you say you are; and (d) there is no suggestion whatsoever that the certificate was issued to the wrong person.
And you could have gotten a certificate for free from Lets Encrypt, if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't.
An organisation with common sense policies might not need to revoke such a certificate at all, let alone revoke it with only hours of notice.
And have you seen how many actual security problems CAs have refused to revoke in the last few years? Holding them to their agreements is important, even if a specific mistake isn't a security problem [for specific clients]. Letting them haggle over the security impact of every mistake is much more hassle than it's worth.
> if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't
Then in this hypothetical I made a mistake and I should fix it for next time.
And I should be pretty mad at my CA for giving me an invalid certificate. Was there an SLA?
Fortunately, one can publish on the www without using ICANN DNS
For example http://199.233.217.201 or https://199.233.217.201
1. I have run own root server for over 15 years
An individual cannot even mention choosing to publish a personal blog over HTTP without being subjected to a kneejerk barrage of inane blather. This is truly a sad state of affairs
I'm experimenting with non-TLS, per packet encryption with a mechanism for built-in virtual hosting (no SNI) and collision-proof "domainnames" on the home network as a reminder that TLS is not the only way to do HTTPS
It's true we depend on ISPs for internet service but that's not a reason to let an unlimited number of _additional_ third parties intermediate and surveil everything we do over the internet
One advertising company through its popular "free browser", a Trojan Horse to collect data for its own purposes, may attempt to "deprecate" an internet protocol by using its influence
But at least in theory such advertising companies are not in charge of such protocols, and whether the public, including people who write server software or client software, can use them or not
Authoritative DNS nameserver that serves root.zone, e.g., the one provided by ICANN, or maybe a customised one
In own case it is served only to me on local network
Many years ago, one of the former ICANN board members mentioned on his personal blog running his own root
And this is why it's a good thing that every major browser will make it more and more painful, precisely so that instead of arguments about it, we'll just have people deciding whether they want their sites accessible by others or not.
Unencrypted protocols are being successfully deprecated.
Depending on yet another third party to provide what is IMHO a luxury should not be required, and I have been continually confused as to why it is being forced down everyone's throat.
Kinda like how Wikipedia benefits Google. Or public roads benefit Uber. Or clean water benefits restaurants
Not just Google: AI bots could use the information to look for juicy new data to scrape and ingest.
Probably not a significant thing, the information can be derived in other ways too if someone wants to track these things, but it is a thing.
Man in the...?
My navigation habits are boring but they are mine, not anyone else's to see.
A server has no way to know whether the user cares or not, so they are not in a position to choose the user's privacy preferences.
Also: a page might be fully static, but I wouldn't want $GOVERNMENT or $ISP or $UNIVERSITY_IT_DEPARTMENT to inject propaganda, censor... Just because it's safe for you doesn't mean it's safe for everyone.
It does MITM between you and the HTTPS websites you browse.
In fact it's just a regular laptop that I fully control and installed from scratch, straight out of Apple's store. As all my company laptops have been.
And if it was company policy I would refuse indeed. I would probably not work there in the first place, huge red flag. If I really had to work there for very pressing reasons I would do zero personal browsing (which I don't do anyways).
Not even when I was an intern at random corpo my laptop was MITMed.
I could maybe understand it for non-tech people (virus scanning yadda yadda) but for a tech person it's a nuisance at best.
Edit: I'm not saying I like it this way... but that's what you get when working in a small org in a larger org in a govt office. When I worked in a security team for a bank, we actually were on a separate domain and network. I generally prefer to work untrusted, externally and rely on another team for production deployment workflows, data, etc.
I'm lucky to be a dev both by trade and passion. I like my job, it's cozy, and we're still scarce enough that my employer and I are in a business relationship as equals: I'm just a business selling my services to another business under common terms (which in my case include trusting each other).
So to echo a sister comment: while sadly it is common in some jurisdictions, it is definitely not normal.
I've also seen similar configurations in Banking environments having done work for three major banking establishments over the years. The exception was when I was on a platform security team that managed access controls. Similarly at a couple of large airlines.
But this is mostly a waste of time; these days companies just install agents on each laptop to monitor activity. If you do not own the machine/network you are using, then don't visit sites that you don't want them to see.
For things other than work for my employer? Yes.
And work stuff doesn't touch my personal equipment, with the exception that I can connect to the company VPN from my personal laptop to remote to a work machine if I need to do DayJob work remote in an emergency when I don't have the company laptop with me.
> It does MITM between you and the HTTPS websites you browse.
My employer doesn't. Many don't.
Of course many do, but that is them controlling what happens on their equipment and they are usually up front about it. This is quite different to an ISP, shady WiFi operator, or other adversarial network node, inspecting and perhaps modifying what I look at behind my back.
"I want my communications to be as secure as practical."
"Ah, but they're not totally secure! Which means they're totally insecure! Which means you might as well write your bank statements on postcards and mail them to the town gossip!"
It amazes me how anti-HTTPS some people can be.
If that were the universal state, then it would be easy to tell when someone was visiting a site that mattered, and you could probably infer a lot about it by looking at the cleartext of the non-HTTPS side they were viewing right before they went to it.
AFAIK it's still not that widely adopted or can be easily blocked/disabled on a network though.
However, the page you're fetching from that domain is encrypted, and that's vastly more sensitive. It's no big deal to visit somemedicinewebsite.com in a theocratic region like Iran or Texas. It may be a very big deal to be caught visiting somemedicinewebsite.com/effective-abortion-meds/buy. TLS blocks that bit of information. Today, it still exposes that you're looking at plannedparenthood.com, until if/when TLS_ECH catches on and becomes pervasive. That's a bummer. But you still have plausible deniability to say "I was just looking at it so I could see how evil it was", rather than having to explain why you were checking out "/schedule-an-appointment".
[0]https://developers.cloudflare.com/ssl/edge-certificates/ech/
Most of the site hosted general information about the agency and its functions, but they also had a section where you could provide information.
TLS traffic analysis can still reveal which pages you accessed with some degree of confidence, based on packet sizes, timings, external resources that differ between pages (e.g. images)
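A toy illustration of that size-based fingerprinting: TLS hides content but not (much of) its length, so an observer who has crawled a site can often match an encrypted transfer to a specific page by total bytes alone. The page names and sizes below are made up:

```python
# Toy traffic-analysis sketch: match an observed encrypted transfer
# size against known page sizes (hypothetical site inventory).
page_sizes = {
    "/": 48_211,
    "/about": 12_904,
    "/schedule-an-appointment": 91_330,
}

def guess_page(observed_bytes: int) -> str:
    """Pick the known page whose size is closest to what was seen."""
    return min(page_sizes, key=lambda p: abs(page_sizes[p] - observed_bytes))

assert guess_page(91_002) == "/schedule-an-appointment"
```

Real attacks refine this with packet timings and the distinct subresources each page pulls in, which is why padding alone doesn't fully solve it.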
Surprised they're still posting, with their employers being shut down at the moment and all.
No, it's a warning sign that you may be an active victim of an HTTPS downgrade attack where an attacker is blocking HTTPS communication and presenting you with an HTTP version of the website that you intended to visit, capturing and modifying any information you transmit and receive.
> By throwing scary warnings in front of users when there is no actual security threat
Most of these situations may be innocent but the problem is that they look identical to "actual security threats" so you don't have a choice. If there was a way to distinguish between them we/they would be doing it already.
With http it is trivial.
So you say you don't care if my ISP injects a whole bunch of ads, and I don't even see your content but only the ads, and I blame you for duping me into watching them.
Nowadays VPN providers are popular; what if someone buys VPN service from one of the shitty ones, gets treated like I wrote above, and it's the reputation of your blog that's devastated?
And while at it, lobby to make corporate MiTM tools illegal as well.
Because if you are bothered about my little blog, you should be bothered that your employer can inspect all your HTTPS traffic.
More to the point: serving your blog with HTTPS via Let's Encrypt does not in any way forbid you from also serving it with HTTP without "depending on third parties to publish content online". It would take away from the drama of the statement though, I suppose.
Shine on you crazy diamond, and all that, but...
> I have been continually confused as to why it is being forced down everyone's throat.
Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse). Even residential ISPs that one pays for cannot be trusted not to inject content, if given the opportunity, because they noticed that they are monopolies and most users cannot do anything about it.
You don't get to choose the threat model of those who visit your site.
I honestly don't remember a single case where that happened to me. Internet user since 1997.
They've taken that strategy with newer enhancements (for instance, you can't use passkeys over non-secured channels), but the bar for widespread breakage of existing deployments is pretty high - even if changes like this make it harder to navigate to those existing deployments.
You’re exaggerating a bit. I have a static website that hasn’t changed in over 15 years. Okay, not completely static, as one page has a (static) HTML form that creates some file templates as a utility, but everything is working like it did in 2010. Except that I added TLS support at some point so that people don’t get scary warnings.
What is funny about HTTPS is that early arguments for its existence IIRC were often along the lines of protecting credit card numbers and personal information that needed to be sent during e-commerce
HTTPS may have delivered on this promise. Of course HTTPS is needed for e-commerce. But not all web use is commercial transactions
Today, it's unclear who or what^2 HTTPS is really protecting anymore
For example,
- web users' credit card numbers are widely available, sold on black markets to anyone; "data breaches" have become so common that few people ask why the information was being collected and stored in the first place nor do they seek recourse
- web users' personal information is routinely exfiltrated during web use that is not e-commerce, often to be used in association with advertising services; perhaps the third parties conducting this data collection do not want the traffic to be optionally inspected by web users or competitors in the ad services business
- web users' personal information is shared from one third party to another, e.g., to "data brokers", who operate in relative obscurity, working against the interests of the web users
All this despite "widespread use of encryption", at least for data in transit, where the encryption is generally managed by third parties
When the primary use of third-party mediated HTTPS is to protect data collection, telemetry, surveillance and ad services delivery,^1 it is difficult for me to accept that HTTPS as implemented is primarily for protecting web users. It may benefit some third parties financially, e.g., CA and domainname profiteers, and it may protect the operations of so-called "tech" companies though
Personal information and behavioral data are surreptitiously exfiltrated by so-called "tech" companies whilst the so-called "tech" company's "secrets", e.g., what data they collect, generally remain protected. The companies deal in information they do not own yet operate in secrecy from its owners, relentlessly defending against any requests for transparency
1. One frequent argument for the use of HTTPS put forth by HN commenters has been that it prevents injection of ads into web pages by ISPs. Yet the so-called "tech" companies are making a "business" out of essentially the same thing: injecting ads, e.g., via real-time auctions, into web pages. It appears to this reader that in this context HTTPS is protecting the "business" of the so-called "tech" companies from competition by ISPs. Some web users do not want _any_ ads, whether from ISPs or so-called "tech" companies
2. I monitor all HTTPS traffic over the networks I own using a local forward proxy. There is no plaintext HTTP traffic leaving the network unless I permit it for a specific website in the proxy config. The proxy forces all traffic over HTTPS
If HTTPS were optionally under user control, certainly I would be monitoring HTTPS traffic being automatically sent from own computers on own network to Google by Chrome, Android, YouTube and so on. As I would for all so-called "tech" companies doing data collection, surveillance and/or ad services as a "business"
Ideally one would be able to make an informed decision whether they want to send certain information to companies like Google. But as it stands, with the traffic sometimes being protected from inspection _by the computer owner_, through use of third party-mediated certificates, the computer owner is prevented from knowing what information is being sent
In own case, that traffic just gets blocked
Whenever I visit a HTTP-only site, I assume the administrator is either old and does not understand how to set up SSL, or it's an unmaintained/forgotten web server that hasn't been touched in about a decade.
If it's (1) obviously recent content*, and (2) something that needs little security - a city council member's blog, or recipes - then how much do you care that it's HTTP-only?
*Or just date-insensitive
As can every recipe site with httpS - but a vulnerable WordPress plugin, or too-easy admin password, or malvertising, or a zillion other things.
But conveniently, "all sites gotta be httpS" puts the biggest part of the blame/load on the littlest little guys - who want to make and post good, unmonetized content. But don't have an IT skill set, nor want to deal with yet more admin overhead & costs.
- Massive government spying programs, people forget that Chat Control used to be the standard, everything you ever browsed, posted or said online could be monitored
- Tracking that you could not disable, where your ISP would work with publishers appending http headers to every request that uniquely identified you.
- Not only little guys, as you say, were using http, it was government sites, news sites, a huge part of the internet was unencrypted and vulnerable to mitm. As you say, yes, it's not the only attack vector but it was one of the easiest to exploit, where any random wifi access point you're connected to could steal your credentials.
Sure, but if you don't have the skills to self-host, you are using an online service, and ~100% of them will do HTTPS for you.
If you are self hosting, HTTPS can take as little as zero configuration - I use Caddy and it does it for me.
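For what "zero configuration" looks like in practice, a minimal Caddyfile sketch (hypothetical domain; with a public DNS name, Caddy obtains and renews the certificate automatically):

```
blog.example.com

root * /var/www/blog
file_server
```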
Firefox does this when I type in a URL and the server is down. I absolutely hate this behaviour, because I run a bunch of services inside my network.
If I tell my browser 'fetch http://site.example', I mean for it to connect to site.example over HTTP on port 80, nothing more. If there is a web server running which wants to redirect me to https://site.example, awesome, but my browser should never assume I mean anything I did not say.
Equally your preference for HTTP should not stand in the way of a more secure default for the average person.
Honestly I'd prefer that my mom didn't browse any http sites, it's just safer that way. But that doesn't detract from your ability to serve unencrypted pages which can easily be intercepted or modified by an ISP (or worse.)
https://multiplayeronlinestandard.com/goto.html (the reason for the domain is I will never waste time on HTTPS but github does it automatically for free up to 100GB/month)
It's not a strawman, it's a real attack that we've seen for decades.
The entire guidance of "don't connect to an open wireless AP"? That's because a malicious actor who controlled the AP could read and modify your HTTP traffic - inject ads, read your passwords, update the account number you requested your money be transferred to. The vast majority of that threat is gone if you're using HTTPS instead of HTTP.
Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
The problem in the above was not actually caused by the AP being open, nor is it just limited to APs in the path between you and whatever you're trying to connect to on the internet. Another common example is ISPs which inject content banners into unencrypted pages (sometimes for billing/usage alerts, other times for ads). Again, this is just another example - you aren't going to whack-a-mole an answer to trusting everything a user might transit on the internet, that's how we came to HTTPS instead.
> There are still legitimate uses for HTTP including reading static content.
There are valid reasons to do a lot of things which don't end up making sense to support in the overall view.
> Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
There are at least 2 other decent sized independent ACME operators at this point, but say all of the certificate authority corps merge but we planned ahead and kept HTTP support: our banking/payments, sites with passwords, sites with PII, medical sites, etc is in a stranglehold but someone's plain text blog post about it will be accessible without a warning message. Not exactly a great victory, we'll still need to solve the actual problem just as desperately at that point.
The biggest gripe I have with the way browsers go about this is that they only half consider the private use cases, and you get stuck with the rough edges. E.g. here they exempt private addresses from the warning, but my (fully in-browser, single-page) tech support dump reader can't work when opened as a file:/// because the browser built-in for calculating an HMAC (part of WebCrypto) is restricted to secure contexts, and file:/// doesn't qualify. That's stupid twice over: they aren't going to remove JavaScript support from file:/// origins short of removing file:/// entirely, so it just means I need a shim; and file:/// is no less a secure origin than localhost.
I'd like for every possible "unsecure" private use case to work, but I (and the majority of those who use a browser) also have a conflicting desire to connect to public websites securely. The options and impacts for these conflicting desires have to be weighed and thought through.
At least mongoose will serve stuff in 100KB.
This can still be MITM'd. Maybe they can't drain your bank account by the nature of the content, but they can still lie or something. And that's not good.
It would be ideal if people only browsed from trusted networks, but telling people "don't do the convenient, useful, obvious thing" only goes so far. Hence the desire to secure connections from another angle.
Just switch to ZeroSSL - it's the default certificate provider for the acme.sh script now.
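The switch is a one-liner (real acme.sh flags; acme.sh has defaulted to ZeroSSL since v3.0):

```shell
acme.sh --set-default-ca --server zerossl
```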
95 more comments available on Hacker News