Gpg.fail
Key topics
The cryptography world is abuzz with the revelation that GPG, a stalwart encryption tool, is facing a crisis of trust, with some declaring it a "lost cause" due to longstanding issues and a perceived lack of responsiveness from its maintainers. At the heart of the debate is a talk at the Chaos Communication Congress highlighting zero-day vulnerabilities and the GPG team's handling of reported issues, with some commenters pointing out that the "team" is essentially Werner Koch, the project's lead maintainer. As the discussion unfolds, proponents of alternative tools like Sequoia, age, and minisign tout their improved security and adherence to current standards. Meanwhile, others wonder why high-profile projects like Linux and QEMU still rely on GPG for signing, sparking a lively debate about the future of cryptography and GPG's role within it.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 14m after posting
- Peak period: 94 comments in 0-6h
- Avg / period: 17.8 comments
Based on 160 loaded comments
Key moments
1. Story posted: Dec 27, 2025 at 12:05 PM EST (9 days ago)
2. First comment: Dec 27, 2025 at 12:19 PM EST (14m after posting)
3. Peak activity: 94 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Dec 30, 2025 at 1:23 PM EST (5d ago)
People who are serious about security use newer, better tools that replace GPG. But keep in mind, there’s no “one ring to rule them all”.
This page is a pretty direct indicator that GPG's foundation is fundamentally broken: you're not going to get to a good outcome trying to renovate the 2nd story.
Why do high-profile projects, such as Linux and QEMU, still use GPG for signing pull requests / tags?
https://docs.kernel.org/process/maintainer-pgp-guide.html
https://www.qemu.org/docs/master/devel/submitting-a-pull-req...
Why does Fedora / RPM still rely on GPG keys for verifying packages?
This is a staggering ecosystem failure. If GPG has been a known-lost cause for decades, then why haven't alternatives ^W replacements been produced for decades?
GPG is what GP is referring to as a lost cause. Now, it can be debated whether PGP-in-general is a lost cause too, but that's not what GP is claiming.
It is though what both the fine article, and tptacek in these comments, are claiming!
Also no, the gpg.fail site makes no such claims. Now, tptacek has said that, but he didn't write the comment you were replying to.
> then why haven't alternatives ^W replacements been produced for decades?
Actually, we do have alternatives for it.
For example, Git supports S/MIME, which could absolutely be used to sign commits and tags. Even just using self-signed certificates wouldn't be far off from what GPG offers. And if people used the digital IDs that many countries offer, mission-critical code could carry signatures backed by verifiably strong identities.
There are other approaches as well, both for signing and for encrypting. It's more that people haven't really considered migrating.
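For illustration, a minimal sketch of what S/MIME-signed commits look like in practice, assuming the gpgsm backend and a certificate that has already been imported (the signing key ID is a placeholder):

```
# Sketch: S/MIME (X.509) commit signing in Git, assuming the gpgsm backend
# and an already-imported certificate. The signing key ID is a placeholder.
git config --global gpg.format x509
git config --global gpg.x509.program gpgsm      # gpgsm is the default x509 backend
git config --global user.signingkey 0xA1B2C3D4  # placeholder key ID
git commit -S -m "signed commit"
git verify-commit HEAD                          # check the signature
```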
Valid question though, I want it answered by them.
As they said, they were on it...
From what I can piece together while the site is down, it seems like they've uncovered 14 exploitable vulnerabilities in GnuPG, most of which remain unpatched. Some of them have apparently been met with a refusal to patch by the maintainer. Maybe there are good reasons for this refusal; maybe someone else can chime in on that?
Is this another case of XKCD-2347? Or is there something else going on? Pretty much every Linux distro depends on PGP being pretty secure. Surely IBM & co have a couple of spare developers or spare cash to contribute?
I haven't seen those outside of old mailing list archives. Everyone uses detached signatures nowadays, e.g. PGP/MIME for emails.
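For readers unfamiliar with the term, a detached signature is just a signature kept in a separate file from the data it covers; a minimal illustration (filenames are hypothetical):

```
# Illustration of a detached signature: the signature lives in its own file,
# separate from the data it covers (filenames are hypothetical).
gpg --armor --detach-sign release.tar.gz        # writes release.tar.gz.asc
gpg --verify release.tar.gz.asc release.tar.gz  # verifies the data against the signature
```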
If there were a PGP vulnerability that actually made it possible to push unauthorized updates to RHEL or Fedora systems, then probably IBM would care, but if they concluded that PGP's security problems were a serious threat then I suspect they'd be more likely to start a migration away from PGP than to start investing in making PGP secure; the former seems more tractable and would have maintainability benefits besides.
That's mostly incorrect on both counts. One is that lots of mirrors are still HTTP-only or default to HTTP: https://launchpad.net/ubuntu/+archivemirrors
The other is that if you get access to one of the mirrors and replace a package, it's the signature that stops you. HTTPS is only relevant for MITM attacks.
Debian, and indeed most projects, do not control the download servers you use. This is why security is end-to-end where packages are signed at creation and verified at installation, the actual files can then pass through several untrusted servers and proxies. This was sound design in the 90s and is sound design today.
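As one concrete illustration of how that end-to-end model surfaces on the client side, assuming an RPM-based system (the package name is hypothetical):

```
# Sketch of client-side verification in the end-to-end model described above
# (RPM-based example; the package name is hypothetical).
rpm --checksig foo-1.0-1.x86_64.rpm   # verifies digests and the GPG signature
rpm -q gpg-pubkey                     # lists the signing keys the system trusts
```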
A major part of the problem is that GPG's issues aren't a lack of cash or developer time. It's fundamentally a bad design for cryptographic usage. It's so busy trying to be a generic Swiss Army knife for every possible user and use case that it's basically made of developer and user footguns.
The way you secure this is by moving to alternative, purpose-built tools. Signal/WhatsApp for messaging, age for file encryption, minisign for signatures, etc.
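A rough sketch of what the purpose-built tools look like in use (recipient strings and filenames are placeholders):

```
# Sketch of the purpose-built tools mentioned above
# (recipient strings and filenames are placeholders).

# age: file encryption to a recipient's public key
age-keygen -o key.txt                            # writes the identity file, prints the public key
age -r age1examplerecipient... -o secrets.tar.age secrets.tar
age -d -i key.txt secrets.tar.age > secrets.tar

# minisign: detached signatures for files
minisign -G                                      # generate a keypair
minisign -Sm release.tar.gz                      # sign -> release.tar.gz.minisig
minisign -Vm release.tar.gz -p minisign.pub      # verify
```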
In all other cases, they only used Sequoia as a tool to build data for demonstrating GPG vulnerabilities.
Fortunately, it turned out that there wasn't anything particularly wrong with the current standards so we can just do that for now and avoid the standards war entirely. Then we will have interoperability across the various implementations. If some weakness comes up that actually requires a standards change then I suspect that consensus will be much easier to find.
[1] https://articles.59.ca/doku.php?id=pgpfan:schism
Archive link: https://web.archive.org/web/20251227174414/https://www.gnupg...
(PGP/GPG are of course hamstrung by their own decision to be a Swiss Army knife/only loosely coupled to the secure operation itself. So the even more responsible thing to do is to discard them for purposes that they can’t offer security properties for, which is the vast majority of things they get used for.)
(I think you already know this, but want to relitigate something that’s not meaningfully controversial in Python.)
(I think you already know this as well)
This is exactly analogous to the Web PKI, where you trust CAs to identify individual websites, but the websites themselves control their keypairs. The CA's presence intermediates the trust but does not somehow imply that the CA itself does the signing for TLS traffic.
Again, I must emphasize that this is identical in construction to the Web PKI; that was intentional. There are good criticisms of PKIs on grounds of centrality, etc., but “the end entity doesn’t control the private key” is facially untrue and sounds more like conspiracy than anything else.
On my web server, where the certificate is signed by Let's Encrypt, I do have a file which contains a private key. On PyPI there is no such thing. I don't think the parallel is correct.
With attestations on PyPI, the issuance window is 15 minutes instead of 90 days. So the private key is kept in memory and discarded as soon as the signing operation is complete, since the next signing flow will create a new one.
At no point does the private key leave your machine. The only salient differences between the two are file versus memory and the validity window, but in both cases PyPI’s implementation of attestations prefers the more ideal thing with respect to reducing the likelihood of local private key disclosure.
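Purely as a conceptual sketch of the ephemeral-key idea (this is not PyPI's actual attestation flow, which is built on Sigstore-issued short-lived certificates):

```
# Conceptual sketch only, NOT PyPI's actual attestation flow: an ephemeral
# signing key that exists just long enough to produce one signature.
openssl genpkey -algorithm ed25519 -out /dev/shm/ephemeral.pem
openssl pkeyutl -sign -rawin -inkey /dev/shm/ephemeral.pem \
    -in dist.tar.gz -out dist.tar.gz.sig
shred -u /dev/shm/ephemeral.pem      # the key is discarded immediately after use
```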
Most people have never heard of it and never used it.
(So I agree that it’s de facto dead, but that’s not the same thing as formal deprecation. The latter is what you do explicitly to responsibly move people away from something that’s not suitable for use anymore.)
Anything can be "considered harmful" at any time if you choose to, as an exercise. The title summarizes the article's content, not the author's opinions.
"Here is what comes out when considering X harmful" vs "People consider X to be harmful".
It's the GnuPG blog on gnupg.org, with multiple authors.
This is a post by Werner Koch, not his blog.
But Werner at this point has a history of irresponsible decisions like this, so it's sadly par for the course by now.
Another particularly egregious example: https://dev.gnupg.org/T4493
Keychain and 1Password are doing variants of the same thing here: both store an encrypted vault and then give you credentials by decrypting the contents of that vault.
In any case I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough. The fact the private key never leaves the 1Password vault unencrypted and is synced between my devices is pretty neat. From a security standpoint it is indeed a step down from having my key on a physical key device, but it is lots better than private keys on disk.
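For reference, the wiring for this is just an IdentityAgent line in ssh_config pointing at 1Password's agent socket (the path below is the documented macOS location; other platforms differ):

```
# ~/.ssh/config: point OpenSSH at the 1Password SSH agent socket.
# The socket path below is the documented macOS location; it differs elsewhere.
Host *
  IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"
```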
Did you try to SSH in verbose mode to ascertain any errors? Why did you assume the hardware "broke" without any objective indication of an actual failure condition?
> I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough
How is trusting a closed-source, for-profit, subscription-based application with your SSH credential "secure enough"?
Choosing convenience over security is certainly not unreasonable, but claiming both are achieved without any compromise borders on ludicrous.
I’m still working through how to use this, but I have it basically set up and it’s great!
If you use cryptographic command-line tools to verify data sent to you, be mindful of what you are doing and make sure to understand the attacks presented here. One of the slides is titled "should we even use command line tools", and yes, we should, because the alternative is worse; but we must be diligent in treating all untrusted data as adversarial.
Handling untrusted input is core to that.
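One habit that helps, assuming a pinned keyring distributed out of band (keyring and filenames are hypothetical): verify with gpgv against that explicit keyring rather than whatever happens to be in your default one.

```
# Sketch: verify a download against an explicit, pinned keyring instead of
# the default one (keyring and filenames are hypothetical).
gpgv --keyring ./release-signing-keys.gpg release.tar.gz.sig release.tar.gz
```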
The problem with PGP is that it's a Swiss Army Knife. It does too many things. The scissors on a Swiss Army Knife are useful in a pinch if you don't have real scissors, but tailors use real scissors.
Whatever it is you're trying to do with encryption, you should use the real tool designed for that task. Different tasks want altogether different cryptosystems with different tradeoffs. There's no one perfect multitasking tool.
All that is before you get to the absolutely deranged 1990s design of PGP, which is a complex state machine that switches between different modes of operation based on attacker-controlled records (which are mostly invisible to users). Nothing modern looks like PGP, because PGP's underlying design predates modern cryptography. It survives only because nerds have a parasocial relationship with it.
And there are lots of tools for file encryption anyways. I have a bash function using openssh, sometimes I use croc (also uses PAKE), etc.
I need an alternative to "gpg --armor --encrypt --recipient". :)
That's literally age.
https://github.com/FiloSottile/age
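The rough age equivalent of that invocation, with a placeholder recipient:

```
# Rough age equivalent of `gpg --armor --encrypt --recipient`
# (the recipient string is a placeholder).
age --armor -r age1examplerecipient... -o message.txt.age message.txt
age -d -i key.txt message.txt.age > message.txt   # decrypt with your identity file
```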
Would "fetch a short-lived age public key" serve your use case? If so, then an age plugin that build atop the AuxData feature in my Fediverse Public Key Directory spec might be a solution. https://github.com/fedi-e2ee/public-key-directory-specificat...
But either way, you shouldn't have long-lived public keys used for confidentiality. It's a bad design to do that.
I have an actual commercial use case that receives messages (which are, awkwardly, files sent over various FTP-like protocols, sigh), decrypts and verifies them, and further processes them. This is fully automated and runs as a service. For horrible legacy reasons, the files are in PGP format. I know the public key with which they are signed (provisioned out of band) and I have the private key for decryption (again, provisioned out of band).
This would be approximately two lines of code using any sane crypto library [0], but there really isn’t an amazing GnuPG alternative that’s compatible enough.
But GnuPG has keyrings, and it really wants to use them and to find them in some home directory. And it wants to identify keys by 32-bit truncated hashes. And it wants to use Web of Trust. And it wants to support a zillion awful formats from the nineties using wildly insecure C code. All of this is actively counterproductive. Even ignoring potential implementation bugs, I have far more code to deal with key rings than actual gpg invocation for useful crypto.
[0] I should really not have to even think about the interaction between decryption and verification. Authenticated decryption should be one operation, or possibly two. But if it’s two, it’s one operation to decapsulate a session key and a second operation to perform authenticated decryption using that key.
As a followup, is there anything in existence that supports "large-scale public key management (with unknown recipients)"? Or "automatic discovery, trust management"? Even X.509 PKI at its most delusional doesn't claim to be able to do that.
GPG's keyring handling has also been a source of exploits. It's much safer to directly specify the recipient rather than rely on things like short key IDs, which can be brute-forced.
Automatic discovery simply isn't secure if you don't have an associated trust anchor. You need something similar to keybase or another form of PKI to do that. GPG's key servers are dangerous.
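In practice that means addressing recipients by full fingerprint rather than a short key ID, for example (the fingerprint is a made-up placeholder):

```
# Specify recipients by full 40-hex-character fingerprint, never by short
# key ID (the fingerprint below is a made-up placeholder).
gpg --encrypt --recipient 0123456789ABCDEF0123456789ABCDEF01234567 file.txt
```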
You technically can sign with age, but otherwise there's minisign and the SSH spec signing function
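A sketch of the SSH signing mechanism, assuming OpenSSH 8.0+ (filenames and the identity string are illustrative):

```
# Sketch: file signing with OpenSSH's built-in signature scheme (OpenSSH 8.0+).
# Filenames and the identity string are illustrative.
ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file release.tar.gz   # writes release.tar.gz.sig
# Verification needs an allowed_signers file mapping identities to public keys.
echo "dev@example.com $(cat ~/.ssh/id_ed25519.pub)" > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I dev@example.com -n file \
    -s release.tar.gz.sig < release.tar.gz
```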
That's a "great" idea considering the recent legal developments in the EU, which OpenPGP, as bad as it is, doesn't suffer from. It would be great if the author updated his advice into something more future-proof.
If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
The available open-source options come nowhere close to the messaging security that Signal/WhatsApp provide. So you're left with either "find a way to access Signal after they pull out of whatever region has made it a crime for them to operate without a backdoor on comms" or "pick any option that doesn't actually have strong messaging security".
And that's even if you are innocent on the underlying charge or search.
Encryption is a pick-your-poison situation:
- Either you go to jail for years, but you know your government and other actors have no access to your data,
- or you store data in remote/proprietary apps and stay free, but your government or other actors may or may not have access to it.
[1]: https://en.wikipedia.org/wiki/Key_disclosure_law
The answer is not to live with it, but become politically active to try to support your principles. No software can save you from an authoritarian government - you can let that fantasy die.
There are many reasons, but one of them is that for the overwhelming majority of humans on the planet, their apps aren't being compiled from source on their device. So since you have to account for the fact that the app in the App Store may not be what's in some git repo, you may as well just start with the compiled/distributed app.
I want secure messaging, not encrypted SMS. I want my messages to sync properly between an arbitrary number of devices. I want my messaging history not to be lost when I lose a device. I want not losing my messaging history to not be a paid feature. I want to not depend on a shady crypto company to send a message.
I send long messages via Signal, typed on a desktop computer, all the time. (In fact, I almost exclusively use Signal through my desktop app.)
You don't have to use it like "encrypted SMS"! You're free.
As a recent, dabbling reader of introductory pop-sci content on cryptography, I've been wondering: how are expert roles segmented in the field?
E.g. in Filippo's post about Age he clarified that he's not a cryptographer but rather a cryptography engineer. Is that also your role, what are the concrete divisions of labor, and what other related but separate positions exist in the overall landscape?
Where is the cutoff point of "don't roll your own crypto" at the different levels of expertise?
I do not have a Ph.D in Cryptography (not even an honorary one), so I do not call myself a Cryptographer. (Though I sometimes use "Cryptografur" in informal contexts for the sake of the pun.)
Ideally, this person will also design the system that uses the crypto, because no matter how skilled the people on a standards committee might be, their product will always be, at best, a baroque nightmare with near-infinite attack surface, and at worst an unusable pile of crap. IPsec vs. WireGuard is a prime example, but there are many others.
"don't" roll your own cover everything from "don't design your own primitive" to "don't make your own encryption algorithm/mode" to "don't make your own encryption protocol", to "don't reimplement an existing version of any of the above and just use an encryption library"
(and it's mostly "don't deploy your own", if you want to experiment that's fine)
(That's nothing to do with how hard cryptography is, just with how little demand there is for serious cryptography engineering, especially compared with the population of people who have done serious academic study of it.)
Which, from where I stand, means that PGP is the only viable solution because I don't have a choice. I can't replace PGP with Sigstore when publishing to Maven. It's nice to tell me I'm dumb because I use PGP, but really it's not my choice.
> Use SSH Signatures, not PGP signatures.
Here I guess it's just me being dumb on my own. Using SSH signatures with my Yubikeys (FIDO2) is very inconvenient. Using PGP signatures with my Yubikeys literally just works.
> Encrypted Email: Don’t encrypt email.
I like this one, I keep seeing it. Sounds like Apple's developer support: if I need to do something and ask for help, the answer is often: "Don't do it. We suggest you only use the stuff that just works and be happy about it".
Sometimes I have to use emails, and cryptographers say "in that case just send everything in plaintext because anyway eventually some of your emails will be sent in cleartext". Isn't it like saying "no need to use Signal, eventually the phone of one of your contacts will be compromised anyway"?
You don't have a choice today. You could have a choice tomorrow if enough people demanded it.
Don't let PGP's convenience (in this context) pacify you from making a better world possible.
To me, there will be no reason to use PGP the day I find practical alternatives for the remaining use-cases I have. And I feel like signing git commits is not a weird use-case...
This is true both for GPG and S/MIME
Email encryption self-compromises itself in a way Signal doesn't
https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/
I really would like to replace PGP with the "better" tool, but:
* Using my Yubikey for signing (e.g. for git) has a better UX with PGP than with SSH
* I have to use PGP to sign packages I send to Maven
Maybe I am a nerd emotionally attached to PGP, but after a year signing with SSH, I went back to PGP and it was so much better...
This might be true of comparing GPG to SSH-via-PIV, but there's a better way with far superior UX: derive an SSH key from a FIDO2 slot on the YubiKey.
With GPG it just works.
(see "uv" option here https://fidoalliance.org/specs/fido-v2.0-ps-20190130/fido-cl... - the -sk key types in SSH are just a clever way of abusing the FIDO protocol to create a signing primitive)
189 more comments available on Hacker News