SSL Certificate Requirements Are Becoming Obnoxious
Key topics
The increasingly stringent SSL certificate requirements are sparking heated debate. Some commenters hail the changes as a necessary push toward better security practices, while others lament the added complexity and frustration. At the center of the controversy is the shift toward shorter certificate lifespans: some argue that a month is a reasonable compromise between security and manageability, while others advocate for even shorter or longer validity periods. Proponents of shorter lifespans call them a necessary evil given the historical failure of certificate revocation mechanisms, while skeptics question the relative impact of stolen or fraudulent certs compared to other security threats. As the discussion unfolds, it becomes clear that the "sweet spot" for certificate validity is still up for debate, with some stakeholders pushing for more aggressive automation and others prioritizing human manageability.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 23m after posting
- Peak period: 149 comments in 0-12h
- Avg / period: 32
- Based on 160 loaded comments
Key moments
- Story posted: Aug 26, 2025 at 8:50 AM EDT (4 months ago)
- First comment: Aug 26, 2025 at 9:12 AM EDT (23m after posting)
- Peak activity: 149 comments in 0-12h, the hottest window of the conversation
- Latest activity: Aug 31, 2025 at 12:45 PM EDT (4 months ago)
For all the annoyance of SOC2 audits, it sure does make my manager actually spend time and money on following the rules. Without any kind of external pressure I (as a security-minded engineer) would struggle to convince senior leadership that anything matters beyond shipping features.
Why wouldn't you go with a week or a day? Isn't that better than a whole month?
Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?
Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?
Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.
A month is better than a year because we never ever ever managed to make revocation work, and so the only thing we can do is reduce the length of certs so that stolen or fraudulently obtained certs can be used for less time.
https://www.darkreading.com/endpoint-security/china-based-bi...
I'm sure that you are perfectly able to do your own research, why are you trying to push that work onto some stranger on the internet?
A whole month puts you in the zone of "if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but still enough to make 'let's fully automate this' an option worth considering".
Hence why it's better than a week or a day (too much pressure for small companies) and better than hours/minutes/seconds (which would mean going from one year straight to 'it must be fully automated right now!').
A year or two years was not a good idea, because you lose knowledge and it creates pressure ("oh no... not the scary yearly certificate renewal; I remember we broke something last year, but I don't remember what...").
With a month, you either start to fully document it or at least keep it fresh in your mind. A month also gives you time, every cycle, to think "OK, we have 30 certificates; can't we use a wildcard, or a certificate with several domains in it?"
> Perhaps it's time to go with another method entirely.
I think that's the way forward, it's just that it will not happen in one step, and going to one month is a first step.
Source: we have to manage a lot of certificates for a lot of different use cases (SSH, mutual TLS for authentication, classical HTTPS certificates, etc.), and we learned the hard way that no, two years is not better than one, and I agree that one month would be better.
also https://www.digicert.com/blog/tls-certificate-lifetimes-will...
Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.
Monthly expiration is a simple way to force you to automate something. Everyone benefits from automating it, too.
(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)
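For context, the automation being argued for here is usually nothing more than a scheduled renewal job. A minimal sketch, assuming certbot with an already-issued certificate and nginx; the schedule and paths are illustrative, not from the thread:

```bash
# /etc/cron.d/certbot-renew (illustrative schedule):
#   17 3,15 * * * root /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"
#
# certbot only renews certificates that are inside their renewal window,
# so running it twice a day is safe regardless of lifetime (90, 47 or 6 days).

# Verify the automation end to end without issuing a real certificate:
certbot renew --dry-run
```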
> Perhaps it's time to go with another method entirely.
What method would you suggest here?
Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.
Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.
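A rough sketch of the "long-lived credential signs ephemeral certificates" idea, using plain OpenSSL. This is only a hypothetical illustration: publicly trusted end-entity certificates cannot sign other certificates (no CA:TRUE), so in practice this requires a private CA or delegated intermediate that clients are configured to trust. All names and paths below are placeholders.

```bash
# Key and CSR for the ephemeral certificate (example.internal is a placeholder).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout ephemeral.key -out ephemeral.csr -subj "/CN=example.internal"

# Sign it with the longer-lived signing cert/key you control, valid for one day.
openssl x509 -req -in ephemeral.csr \
  -CA signing-ca.crt -CAkey signing-ca.key -CAcreateserial \
  -days 1 -out ephemeral.crt

# Inspect the resulting validity window.
openssl x509 -in ephemeral.crt -noout -dates
```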
Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.
https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...
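A sketch of that dual-CA approach with acme.sh (the domain and webroot are placeholders; ZeroSSL requires registering an account email once before first issuance):

```bash
# One-time ZeroSSL account registration (required before issuing from it):
# acme.sh --register-account -m you@example.com --server zerossl

# Try Let's Encrypt first, fall back to ZeroSSL if issuance fails.
acme.sh --issue -d example.com -w /var/www/example --server letsencrypt \
  || acme.sh --issue -d example.com -w /var/www/example --server zerossl

# Install the certificate where the web server expects it and reload.
acme.sh --install-cert -d example.com \
  --key-file       /etc/ssl/private/example.com.key \
  --fullchain-file /etc/ssl/certs/example.com.fullchain.pem \
  --reloadcmd      "systemctl reload nginx"
```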
>If you need a stronger guarantee of uptime, reach for the paid options.
We don't. If we had 1 minute or 1 second lifetimes, we would.
He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.
It'll take about fifteen minutes of time, and the executive level won't ever have to concern themselves with something as mundane as TLS certificates again.
Business culture devaluing security is the root of this, and I hope people see the above example of everything that's wrong with how some technology companies operate; "just throw money at the problem because security is an annoying cost center" is super bad leadership. I'm going to guess this guy also has an MFA exception on his account and a 7-character password because "it just works! It just makes sense, nerds!" I've worked with these kinds of execs all my career and they are absolutely the problem here.
I completely agree with you, but you would be astonished by how many companies, even small/medium companies that use recent technologies and are otherwise pretty lean, still think that restarting/redeploying/renewing as little as possible is the best way to go, instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.
And not even at the "math" level. I mean, like, how to get them into a Java keystore. Or how to get Apache or nginx to use them. That you need to include the intermediate certificate. How to get multiple SANs instead of a wildcard certificate. How to use certbot (with HTTP requests or DNS verification). How to get your client to trust a custom CA. How to troubleshoot what's wrong from a client.
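For illustration, two of those items as concrete commands (hostnames, passwords and file paths are placeholders): loading a key plus full chain into a Java keystore, and checking from a client what chain a server actually sends.

```bash
# Bundle the key and the full chain (leaf + intermediate) into PKCS#12,
# then import it into a Java keystore.
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem \
  -name mysite -passout pass:changeit -out mysite.p12
keytool -importkeystore -srckeystore mysite.p12 -srcstoretype PKCS12 \
  -srcstorepass changeit -destkeystore keystore.jks -deststorepass changeit

# Troubleshoot from a client: dump what the server presents and its validity.
openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null \
  | openssl x509 -noout -subject -issuer -dates
```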
I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.
Where would that kind of thinking lead us..? Most medical procedures are too complex for someone untrained to understand. Does that mean clinics should just not offer those procedures anymore, or should they rather make sure to train their physicians appropriately so they’re able to… do their job properly?
Even if your server admins fully understand TLS, there are still issues like clock skew on clients breaking things, old cipher suites needing to be reviewed / sunset, users clicking past certificate warnings despite training, and the list of (sometimes questionable) globally trusted CAs that the security of the Internet depends upon.
Of course they should do their job properly, but I'm skeptical that we (as software developers) can't come up with something that can more reliably work well.
I actually watched for crashes (thank you, inventory control department shenanigans) so that I could sneak in changes during a reset.
I mean… There's a tradeoff to be sure. I also have a list of things that could be solved properly, but I can't justify the time expense of doing so compared to repeating the shortcut every so often.
It's like that expensive espresso machine I've been drooling over for years—I can go out and grab a lot of great coffee at a barista shop before the machine would have saved me money.
But in this particular instance, sure; once you factor the operational risk in, proper automation often is a no-brainer.
Most solutions: make the peons watch a training video or attend a training session about how they should speak up more.
Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."
But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?
All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.
We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."
And, if you haven't been using a reverse proxy before, or for business/risk reasons don't want to use your main site's infrastructure to proxy the inherited site, and had been handling certificates in your host's cPanel with something like https://www.wpzoom.com/blog/add-ssl-to-wordpress/ - it is indeed a dedicated project to install a reverse proxy!
Now they are doing the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals, but that last part I am just imagining.
But CRL sizes are also partly controlled by expiry time; shorter lifetimes produce smaller CRLs.
There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
> Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?
> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?
Eventually the overhead actually does start to matter.
> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.
Like what?
A short cycle ensures either automation or keeping memory fresh.
Automation can of course also be forgotten and break, but at least it's written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to upload certs to some CA website for manual signing, etc.
Then you have to follow the stricter rules only for the public-facing certs.
https://github.com/linsomniac/lessencrypt
I've toyed with the idea of adding the ability for the server component to request certs from LetsEncrypt via DNS validation, acting as a clearing house so that individual internal hosts don't need a DNS secret to get certs. However, we also put IP addresses and localhost on our internal certs, so we'd have to stop doing that to be able to get them from LetsEncrypt.
(You say "hijacking the HTTP port", but I don't let the ACME client take over 80/443; I make my reverse proxy point the expected path at a folder the ACME client writes to. I'm not asking for a comparison with a setup where the ACME client takes over the reverse proxy and edits its configuration by itself, which I don't like.)
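That webroot pattern looks roughly like this (paths and domain are placeholders; the nginx location is shown as a comment rather than as the poster's actual config):

```bash
# In the reverse proxy config, point the challenge path at a plain directory:
#   location /.well-known/acme-challenge/ { root /var/www/acme; }
# The ACME client never touches ports 80/443 or the proxy's configuration.

mkdir -p /var/www/acme
certbot certonly --webroot -w /var/www/acme -d example.com \
  --deploy-hook "systemctl reload nginx"
```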
Active Directory Certificate Services is a fickle beast but it's about to get a lot more popular again.
Automated renewal is... probably about a decade or two from being supported well enough to be an actual answer.
In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.
???
All my servers use certbot and it works fine. There's also no shortage of SaaS/PaaS that offer free ssl with their service, and presumably they've got that automated as well.
It may help you to understand that it is not an assumption any given product even supports HTTPS well in the first place, and a lot of vendors look at you weird when you express that you intend to enable it. One piece of software requires rerunning the installer to change the certificate.
Yeah, there are also some very expensive vendors out there to manage this for big companies with big dollars.
Plus, how would you ever get enterprise tool vendors to add support if not for customers pestering them with support requests because manual certificate renewal has gotten too painful?
> I do not think PKI will survive the 47 day change. […] In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.
Maybe PKI will die… or you will. Progress doesn't treat dinosaurs too well usually.
> In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.
Good. A certificate being publicly trusted is a liability, which is why there are all these stringent requirements around it. If your certificates do not in fact need to be trusted by random internet users, then the CA/B wants you to stop relying on the Web PKI, because that reduces the extent to which your maintenance costs have to be balanced against everybody else's security.
As I said in another comment, private CAs aren't that popular right now in the kinds of organizations that have a hard time keeping up with these changes, because configuring clients is too painful. But if you can do it, then by all means, do!
I suspect when companies who are members actually realize what happened, CA/B members will be told to reverse the 47 day lifetime or be fired and replaced by people who will. This is a group of people incredibly detached from reality, but that reality is going to come crashing through to their employers as 2029 approaches.
> Good.
You may assume that most organizations will implement private CAs in these scenarios. I suspect the use of encryption internally will just fall. And it will be far easier for attackers to move around inside a network, and take over the handful of fancy auto-renewing public-facing servers with PKI anyways.
If an org is tech-forward enough to have bothered setting up HTTPS for internal use cases on their own initiative, just because it was good for security, then they're not going to have major problems adapting to the 47-day lifetime. The orgs that will struggle to deal with this are the ones that did the bare minimum HTTPS setup because some external factor forced them to (with the most obvious candidate being browsers gradually restricting what can be done over unencrypted HTTP). Those external factors presumably haven't gone anywhere, so the orgs will have to set up private CAs even if they'd rather not bother.
Most of the other forum members either won't oppose longer lifetimes (every cert vendor would be happy) or will bow to the only two companies that matter.
And I really hope you are wrong that it will not get reversed. (I hope I am wrong about the above, but I doubt it.)
When the Internet breaks, people die. It's all fun and games to talk about hypothetical security problems that you aren't actually solving as an excuse to make the Internet incredibly transient and fragile, but it has a real human cost.
Right now, over 80% of organizations have outages due to a certificate issue every year. That's really bad, and already due to the CA/B's poor decisionmaking. But at the existing certificate lifetimes, at least it's predictable. Now the CA/B wants to multiply the possible problem occurrences by a factor of ten. And an organization can't even just be concerned with their own certificates, because any layer of their stack's software or infrastructure having a certificate error can have downstream effects.
The reason I believe this change will be undone, is because ultimately it will have to. It will be so obviously wrong if it goes into effect that people opposed to undoing it will get removed from the decisionmaking until it is undone.
It is short enough to force teams to automate the process.
You're not supposed to be human-actioning something every month.
But yes, it'll be a huge headache for teams that stick their head in the sand and think, "We don't need to automate this, it's just 6 months".
As the window decreases to 3 months it'll be even more frustrating, and then will come a breaking point when it finally rests at 47 days.
But the schedule is well advertised. The time to get automation into your certificate renewal is now.
In the real world, however, this will be a LOT of teams. I think the organisations defining this have missed just how much legacy and how many manual processes are out there, and the impact that this has on them.
I don't think this post makes that argument well enough, instead trying to argue the technical aspect of ACME not being good enough.
ACME is irrelevant in the face of organisations not even trying, and wondering why they have a pain every 6 weeks.
The solution is just like with any other automation - document it.
Even your unrelated question is another argument for shortened certificate lifetimes. :-)
What typically does work for this kind of thing is finding a hook to artificially, rather than technically, necessitate it, while not breaking legacy.
For example, while I hate the monopoly that Google has on search, it was incredibly effective when they down-ranked HTTP sites in favour of HTTPS sites.
( In 2014: See https://developers.google.com/search/blog/2014/08/https-as-r... )
Almost overnight, organisations that never gave a shit suddenly found themselves rushing through whatever tech debt was required to get SSL certs and HTTPS in place.
It was only after that drove HTTPS up to a critical mass that Google had the confidence to further nudge things along with bigger warnings in Chrome (2018).
Perhaps ChatGPT has impacted Google's monopoly too much to try again, but they could easily rank results based on certificate validity length and try the same trick again.
CRLs become gigantic and impractical at the sizes of the modern internet, and OCSP has privacy issues. And there's the issue of applications never checking for revocation at all.
So the obvious solution was just to make cert lifetimes really short. No gigantic CRLs, no reaching out to the registrar for every connection. All the required data is right there in the cert.
And if you thought 47 days was unreasonable, Let's Encrypt is trying 6 days. Which IMO on the whole is a great idea. Yearly, or even monthly intervals are long enough that you know a bunch of people will do it by hand, or have their renewal process break and not be noticed for months. 6 days is short enough that automation is basically a must and has to work reliably.
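Whatever the lifetime, the failure mode everyone worries about is renewal automation silently breaking. A small, hypothetical external check catches that (host, threshold and alerting are placeholders):

```bash
#!/bin/sh
# Warn if the certificate actually being served expires within 14 days.
HOST=example.com
if ! openssl s_client -connect "$HOST:443" -servername "$HOST" </dev/null 2>/dev/null \
     | openssl x509 -noout -checkend $((14 * 24 * 3600)); then
  echo "WARNING: certificate for $HOST expires within 14 days"  # hook your alerting here
fi
```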
It's really annoying because I have to maintain carve-outs for browsers and other software that refuse to connect to things with unverifiable certs, and adding my CA to some software or devices is either a pain or impossible.
It's created a hodge podge of systems and policies and made our security posture full of holes. Back when we just did a fully delegated digicert wildcard (big expense) on a 3 or 5 year expiration, it was easy to manage. Now, I've got execs in other depts asking about crazy long expirations because of the hassle.
Pick your poison.
Plenty of people leave these devices without encrypted connections, because they are in a "secure network", but you should never rely on such a thing.
We used to use Firefox solely for internal problem devices with IP and subnet exclusions but even that is becoming difficult.
Why not encode that TXT record value into the CA-signed certificate metadata? And then at runtime, when a browser requests the page, the browser can verify the TXT record as well, and cache that result for an hour or whatever you like?
Or another set of TXT records for revocation, TXT _acme-challenge-revoked.<YOUR_DOMAIN> etc?
It's not perfect, DNS is not at all secure / relatively easy to spoof for a single client on your LAN, I know that. But realistically, if someone has control of your DNS, they can just issue themselves a legit certificate anyway.
Also, I don't see how that last paragraph follows; is your argument just that client-side DNS poisoning is an attack not worth defending against?
Also, there's maybe not much value in solving this for DNS-01 if you don't also solve it for the other, more commonly used challenge types.
[0]: https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
[1]: https://github.com/mozilla/clubcard
Everywhere I've read, one "must validate domain control using multiple independent network perspectives", e.g. multiple points on the internet, for DNS validation.
Yet there is not one place I can find a very specific "this is what this means". What is a "network perspective"? Searching shows it means "geographically independent regions". What's a region? How big? How far apart from your existing infra qualifies? How is it calculated?
Anyone know? Because apparently none of the bodies know, or wish to tell.
Also, there are loads of other requirements besides this one, and they are there for good reasons. It's not easy to get your root certificate accepted by Firefox/Google/Microsoft/Apple, and it shouldn't be.
https://cabforum.org/working-groups/server/baseline-requirem...
You can also just search the document for the word "Perspective" to find most references to it.
"Effective December 15, 2026, the CA MUST implement Multi-Perspective Issuance Corroboration using at least five (5) remote Network Perspectives. The CA MUST ensure that [...] the remote Network Perspectives that corroborate the Primary Network Perspective fall within the service regions of at least two (2) distinct Regional Internet Registries."
"Network Perspectives are considered distinct when the straight-line distance between them is at least 500 km."
I.e. they check from multiple network locations in case an attacker has messed with network routing in some way. This is reasonable and imposes no extra load on the domain needing the certificate; all the extra work falls on the CA, and if Let's Encrypt can get this right, there is no major reason why "Joe's garage certs" can't do the same thing.
This is outrage porn.
What does this even mean? Does he check the certificates for typos, or that they have the correct security algorithm or something?
I'm pretty sure such an "approval" could be replaced by an automatic security scanner or even a small shell script.
FWIW the idea of inspecting the certificate "for typos" or similar doesn't make sense. What you're getting from the CA wasn't really the certificate but the act of signing it, which they've already done. Except in some very niche situations your certificate is always already publicly available when you receive it, what you've got back is in some sense a courtesy copy. So it's too late to "approve" this document or not, the thing worth approving already happened.
Also the issuing CA was required by the rules to have done a whole bunch of automated checks far beyond what a human would reasonably do by hand. They're going to have checked your public keys don't have any of a set of undesirable mathematical properties (especially for RSA keys) for example and don't match various "known bad" keys. Can you do better? With good tooling yeah, by hand, not a chance.
But then beyond this, modern "SSL certificates" are just really boring. They're 10% boilerplate 90% random numbers. It's like tasking a child with keeping a tally of what colour cars they saw. "Another red one? Wow".
The CA is going to look at the requested names (to check they were authorized) and they'll also copy the requested public key; this combination is what's certified. But if your antiquated gear spits out a CSR which also gives a (possibly bogus) company name and a (maybe invalid) street address, "checking" that won't matter, because the CA will just throw it away. The certificate they issue you isn't allowed to contain information they didn't check, so that part of your CSR is just tossed away without reading it.
So even reviewing CSRs won't help you.
(The solution of course is to automate your cert request/issuance, which has the side effect of ensuring no human is involved in the cert process)
Yes, it's insane, but it sure makes fault analysis easier when the environment is that locked down and documented.
Side note, at some point I got an email telling me to stop issuing public certificates and only issue private certs. I had to get on a call with someone and explain PKI. To someone on the security team!
Many things need to be run and automated when operating infrastructure; I don't understand what makes SSL certificates special in this.
For a hobbyist, setting up certbot or acme.sh is pretty much fire and forget. For more complex settings well… you already have this complexity to manage and therefore the people managing this complexity.
You'll need to pick a client and approve it, sure, but that's once, and that's true for any tool you already use. (edit: and nginx is getting ACME support, so you might already be using this tool)
It's not the first time I encounter them, but I really don't get the complaints. Sure, the setup may take longer. But the day to day operations are then easier.
For new certificate you can keep the existing amount of human oversight in place so nothing changes on that front.
With manual renewals, the cert either wouldn't get renewed and would become naturally invalid or the notification that the cert expired would prompt someone to finish the cleanup.
There are environments and devices where automation is not possible: not everything that needs a cert is a Linux server, or a system where you can run your own code. (I initially got ACME/LE working on a previous job's F5s because it was RH underneath, so I could get Dehydrated working (it only needs bash, cURL, and OpenSSL); not all appliances even allow that.)
I'm afraid that with the 47-day mandate we'll see the return of self-signed certs, and folks will be trained to "just accept it the first time".
* https://news.ycombinator.com/item?id=43693900
You linked to a whole thread in which the top comment asks a question that's a slippery slope, and of which the top answer lists advantages of a reduced validity time (while pointing out that too short like 30 seconds poses reliability and scale risks, to address the slippery slope argument).
What did you mean to point out?
When I saw the 47-day expiration period, it made me wonder if someone is trying to force everyone onto cloud solutions like what Azure provides.
The old geezer in me is disappointed that it's increasingly harder to host a site on a cable modem at home. (But I haven't done that in over two decades.)
> The old geezer in me is disappointed that it's increasingly harder to host a site on a cable modem at home. (But I haven't done that in over two decades.)
It might be harder to host at home, but only for network reasons. It is perfectly straightforward to use letsencrypt and your choice of ACME client to do certificates; I really don't think that's a meaningful point of friction even with the shorter certificate lifetimes.
And it's not like the automation is hard (when I first did letsencrypt certs I did a misguidedly paranoid offline-key thing; for my second attempt, the only reason I had to do any work at all, instead of letting the prepackaged automation work, was to support a messy podman setup, and even that ended up mostly being "systemd is more work than crontab").
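For reference, the systemd-timer equivalent of a renewal cron entry is only a few lines; the unit names and schedule below are illustrative (many distro certbot packages already ship a certbot.timer that does this for you):

```bash
cat >/etc/systemd/system/certbot-renew.service <<'EOF'
[Unit]
Description=Renew TLS certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"
EOF

cat >/etc/systemd/system/certbot-renew.timer <<'EOF'
[Unit]
Description=Run certbot renew twice a day

[Timer]
OnCalendar=*-*-* 03,15:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now certbot-renew.timer
```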
The second side is that if it's so tedious to approve and install, use solutions that require neither. Surely you don't need to have some artisanal certificate installation process that involves a human if you already admit that stricter issuance reduces no risk of yours. Thus, simplify your processes.
There are automated solutions to pretty much all platforms both free and paid. Nginx has it, I just checked and Apache has a module for this as well. Could the author write a blog post about what's stopping them from adopting these solutions?
In the end I can think of *extremely* few and niche cases where any changes to a computer system are actually (human) time-consuming due to regulatory reasons that at the same time require public trust.
Probably because making sure that clients trust the right set of non-public CAs is currently too much of a pain in the ass. Possibly an underrated investment in the security of the internet would be inventing better solutions to make this process easier, the way Certbot made certificate renewal easier (though it'd be a harder problem as the environment is more heterogeneous). This might reduce the extent of conservative stakeholders crankily demanding that the public CA infrastructure accommodate their non-public-facing embedded systems that can't keep up with the constantly evolving security requirements that are part and parcel of existing on the public internet.
I don't see a reason why that should be a problem to solve for public CAs and rest of the internet? Complaining about multi-perspective validation or lifetime is silly if the hindrance is someone's own business needs and requirements.
From advertising companies, search engines (ok, sometimes both), certificate peddlers and other 'service' (I use the term lightly here) providers there are just too many of these maggots that we don't actually need. We mostly need them to manage the maggots! If they would all fuck off the web would instantly be a better place.
But it seems apparent to me that it will have to work over HTTP/QUIC, and TCP port 443.
Which prompts the obvious question ...
The question I was alluding to is: if it's HTTP-ish over tcp/443, wouldn't it still be the web anyway?
But thinking about it more, the server could easily select a protocol based on the first chunk of the client request. And the example of RTP suggests that maybe even TCP would be optional.
Desktop app development gets increasingly hostile and OSes introduce more and more TCC modals; you pretty much need a certificate to codesign an app if you sideload (and app stores have a lot of hassle involved). Mobile clients had it bad for a while (and it was just announced that Android will require a dev certificate for sideloading as well).
Edit: also, another comment is correct; the reason it is like that is because it has the most eyes on it. In the past it was desktop apps, which made them worse.
I'm not sure why many people are still dealing with legacy manual certificate renewal. Maybe some regulatory requirements? I even have a wildcard cert that covers my entire local network which is generated and deployed automatically by a cron job I wrote about 5 years ago. It's working perfectly and it would probably take me longer to track down exactly what it's doing than to re-write it from scratch.
For 99.something% of use cases, this is a solved problem.
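The poster's script isn't shown, but a cron-driven wildcard setup of that shape can be sketched with acme.sh and DNS-01 (the DNS plugin, token, domain and paths are placeholders):

```bash
# API token for the DNS provider plugin (Cloudflare shown as an example).
export CF_Token="placeholder-token"

# Issue a wildcard certificate for the internal zone via DNS-01.
acme.sh --issue --dns dns_cf -d home.example.com -d '*.home.example.com'

# Deploy it and reload the web server; acme.sh installs its own cron entry,
# so subsequent renewals happen automatically.
acme.sh --install-cert -d home.example.com \
  --key-file       /etc/ssl/private/home.example.com.key \
  --fullchain-file /etc/ssl/certs/home.example.com.pem \
  --reloadcmd      "systemctl reload nginx"
```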
Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses. Believe me, I’ve been fighting this battle internally my entire career and I hate it. I hate the shitty state of PKI today, and how the sole focus seems to be public-facing web services instead of, y’know, the other 90% of a network’s devices and resources.
PKI isn’t a solved problem.
Also, I used to do IT, so I get it, but what do you think the fix here is? You could also run your own CA that you push to all the devices, and then you can cut certificates that last as long as you want.
> PKI isn’t a solved problem.
PKI is largely a solved issue nowadays. Software like Vault from hashicorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) lets you create a cryptographically-strong CA and build the automation you need.
It's been out for years now; integrating the root CA shouldn't be much of an issue via group policies (in windows, there are equivalents for mac os and gnu/linux i guess).
> Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses.
Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.
Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.
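For readers who haven't seen it, the Vault PKI workflow referred to above is roughly the following sketch (mount path, TTLs, domains and role names are illustrative, not a recommendation for any particular environment):

```bash
# Enable the PKI secrets engine and generate an internal root CA.
vault secrets enable -path=pki pki
vault secrets tune -max-lease-ttl=87600h pki
vault write pki/root/generate/internal common_name="corp internal CA" ttl=87600h

# Define a role allowed to issue certs for an internal zone.
vault write pki/roles/internal \
  allowed_domains="corp.example" allow_subdomains=true max_ttl=720h

# Issue a certificate; this call is what gets automated (agent, cron, CI, etc.).
vault write pki/issue/internal common_name="app01.corp.example" ttl=720h
```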
You have no idea the environment they work in. The "skill issue" here is you thinking your basic knowledge of Vault matters.
> Software like Vault from hashicorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) lets you create a cryptographically-strong CA and build the automation you need.
They didn't tell you their needs, but you're convinced this vendor product solves it.
Are you a non-technical CTO by chance?
> there are equivalents for mac os and gnu/linux i guess
You guess? I'm sensing a skill issue. Why would you say it's solved for their environment, "I guess??"
> Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.
I'm sensing you work in a low skill environment if you think "home lab trivial" translates to enterprise and defense.
> Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.
Absolutely meaningless statement.
I've deployed Vault both at home and in two different companies, using it for everything from PKI and mutual TLS to secret storage and other things.
> > Software like Vault from hashicorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) lets you create a cryptographically-strong CA and build the automation you need.
> They didn't tell you their needs, but you're convinced this vendor product solves it.
It was an example from the ecosystem of available tools, but in general yes, Vault can do that. Mentioning FIPS compliance was about letting you know that the software can also be used in government environments. It's not just a "homelab toy".
> Are you a non-technical CTO by chance?
Senior cloud engineer here. Worked anywhere from not-so-small companies (250 people, 100 engineers) to faangs (tens of thousands of engineers).
> > there are equivalents for mac os and gnu/linux i guess
> You guess? I'm sensing a skill issue.
You're attacking me on a personal level because you can't argue otherwise. That's a common logical fallacy ("Ad Hominem" - https://www.britannica.com/topic/ad-hominem). You basically have skill issue at debating =)
> Why would you say it's solved for their environment, "I guess??"
When you account for Windows, Mac OS and Linux, you're covering pretty much the totality of the desktop computing landscape. The last two macbooks I had for work came with the mac os equivalent of group policies, with certificates installed, etc. Enterprise-tier Linux distributions can do that as well (e.g. Red Hat Enterprise Linux).
> I'm sensing you work in a low skill environment if you think "home lab trivial" translates to enterprise and defense.
Again, worked anywhere from companies with 250 people to FAANGs. You have skill issue at sensing, it seems.
To get back to the point: homelab "triviality". In a way, yes. Large enterprises, and defense even more so, can spend all the money not just on software but also on consulting from various companies that can bring the skills to implement and maintain all the services that are needed, and train your people on them. Things become non-trivial not because of technical issues, but because of organizational issues...
If we talk government and defense... Do you know the US government has dedicated cloud regions (eg: https://aws.amazon.com/govcloud-us/)? Do you really think that cloud providers offer those services at loss? Do you really think a few vault enterprise licenses are the issue there?
And by the way, Vault is just an example of one of the possible solutions. It was meant to be an example but you clearly missed the point.
> > Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.
> Absolutely meaningless statement.
I think it's very meaningful.
It's not 1995; cryptography isn't arcane anymore. We've had hardware crypto acceleration in CPUs since at least 2010 (AES-NI). The tooling is well established on both servers and clients. The skills are on the market, ready to be hired (either via employment or via contracting).
The issue is not technical in nature.
Oh, and by the way: I've worked closely with engineers working for the US government. I wasn't close to the US government myself (because I am not a US citizen), but they were. They were "close enough" that they had to work in a SCIF and could only interact with me via phone. The systems they were working on... Those systems had their own private CA (among other things).
It's feasible. It's not a technical issue. If it's not done then it's an organizational issue.
My username is literally a cryptographic mode of operation. But you didn't know that, because you have a low skill issue.
> Do you know the US government has dedicated cloud regions (eg:
This is a joke, right? You're just an LLM going through training.
* Yes, I have experience with Vault. I have deployed it internally, used it, loathed it, and shelved it. It’s entirely too cumbersome for basic PKI and secrets management in non-programmatic environments, which is the bulk of enterprise and business IT in my experience.
* You’re right, the organization is the problem. Let me just take that enlightened statement to my leadership and get my ass fired for insubordination, again, because I have literally tried this before with that outcome. Just because I know better doesn’t mean the org has to respect that knowledge or expertise. Meritocracies aren’t real.
* The reason I don’t solve my own PKI issues with Caddy in my homelab is because that’s an irrelevant skill for my actual day job, which - see the point above - doesn’t actually respect the skills and knowledge of the engineers doing the work, only the opinions of the C-suite and whatever Gartner report they’re foisting upon the board. Hence why we have outdated equipment on outdated technologies that don’t meet modern guidelines, which is most enterprises today. Outside of the tech world, you’re dealing with comparable dinosaurs (no relation) who see neither the value nor the need for such slick, simplified solutions, especially when they prevent politicians inside the org from pulling crap.
I’ve been in these trenches for fifteen years. I’ve worked in small businesses, MSPs, school campuses, non-profits, major enterprises, manufacturing concerns, and a household name on par with FAANG. Nobody had this solved, anywhere, except for the non-profit and a software company that both went all-in on AD CA early-on and threw anything that couldn’t use a cert from there off the network.
This is why I storm into the comments on blogs like these to champion their cause.
PKI sucks ass, and I’m tired of letting DevOps people claim otherwise because of Let’s Encrypt and ACME.
137 more comments available on Hacker News