About KeePassXC's Code Quality Control
Posted 2 months ago · Active 2 months ago
keepassxc.org · Tech · story
heated · mixed
Debate: 80/100
Key topics:
- AI in Software Development
- Code Quality Control
- Password Manager Security
KeePassXC's announcement about using AI for code contributions sparks debate about the role of AI in security-critical software development, with some users losing trust and others defending the project's approach.
Snapshot generated from the HN discussion
Discussion Activity
Active discussionFirst comment
1h
Peak period
20
3-6h
Avg / period
4.7
Comment distribution42 data points
Loading chart...
Based on 42 loaded comments
Key moments
- Story posted: Nov 9, 2025 at 9:44 AM EST (2 months ago)
- First comment: Nov 9, 2025 at 10:57 AM EST (1h after posting)
- Peak activity: 20 comments in 3-6h, the hottest window of the conversation
- Latest activity: Nov 11, 2025 at 3:13 AM EST (2 months ago)
ID: 45865921 · Type: story · Last synced: 11/20/2025, 3:41:08 PM
I mean... they are
isn't that the point? it's not as if "AI" leads to higher quality, is it
> Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.
if this were true, why the need to point out "we're not vibe coding" and create this process around it?
fork and move on
No, some projects take fundamental issue with AI, be it ethical or copyright-related, or they raise doubts over whether people even understand the code they're submitting and whether it'll be maintainable long term or even work.
There was some drama around that with GZDoom: https://arstechnica.com/gaming/2025/10/civil-war-gzdoom-fan-... (although that was a particularly messy case where the code broke things because the dev couldn't even test it and also straight up merged it, so probably governance problems in the project as well)
But the bottom line is that some projects will disallow AI on a principled basis and they don't care just about the quality of the code, rather that it was written by an actual person. Whether it's possible to just not care about that and sneak stuff in regardless (e.g. using autocomplete and so on, maybe vibe coding a prototype and then making it your own to some degree), or whether it's possible to use it as any other tool in development, that's another story.
Edit: to clarify my personal stance, I'm largely in the "code is code" camp - either it meets some standard, or it doesn't. It's a bit like with art - whether you prefer something with soul or mindless slop, unfortunately for some the reckoning is that the purse holders often really do not care.
These issues are no different for normal submissions.
You are responsible for taking ownership and having sorted out copyright. You may, through prior knowledge, accidentally write something identical to pre-existing code with pre-existing copyright. Or steal it straight off StackOverflow. Same for an LLM - at least GitHub Copilot has a feature to detect literal duplicates.
You are responsible for ensuring the code you submit makes sense and is maintainable, and the reviewer will question this. Many submit hand-written, unmaintainable garbage. This is not an LLM specific issue.
Ethics is another thing, but I don't agree with any proposed issues. Learning from the works of others is an extremely human thing, and I don't see a problem being created by the fact that the experience was contained in an intermediate box.
The real problem is that there are a lot of extremely lazy individuals thinking that they are now developers because they can make ChatGPT/Claude write them a PR, and throw a tantrum over how it's discriminating against them to disallow the work on the basis that they don't understand it.
That is: The problem is people, as it always has been. Not LLMs.
No. These systems are still so mind-bogglingly bad at anything that involves manual memory management and pointers that even entertaining the idea of using them for something as critical as a non-trivial, large C++ codebase, for a password manager no less, is nuts. It displays a lack of concern for security and such a propensity for shortcuts that I don't want to touch anything by people who even remotely consider this appropriate.
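(To make the failure mode concrete, here is a minimal, hypothetical C++17 sketch of the kind of plausible-looking lifetime bug being described; the helper function is invented for illustration and is not taken from KeePassXC.)

```cpp
#include <iostream>
#include <string>
#include <string_view>

// Hypothetical helper: compiles cleanly, looks idiomatic, and returns
// a view into the caller's string rather than a copy.
std::string_view firstLine(const std::string& text) {
    return std::string_view(text).substr(0, text.find('\n'));
}

int main() {
    // Bug: the temporary std::string produced by operator+ is destroyed
    // at the end of this full expression, leaving `line` dangling.
    std::string_view line = firstLine(std::string("hunter2") + "\nnotes");
    std::cout << line << '\n';  // undefined behavior: reads freed memory
    return 0;
}
```

Nothing in a default build flags this; in a memory-unsafe language, review has to catch such lifetime errors by eye.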
An obvious sign that something is going horribly wrong in this project.
In fact, I think this kind of news is enough to garner a huge influx of international hackers all targeting this package now, if they weren't already. They will be looking closely at the supply chain, phishing the hell out of the developers, and attempting physical intrusions where they can. It's a hint that the developers might be stressed and making poor decisions, with a huge payoff for infiltrating.
Now, is there hard evidence that AI use does lead to this in all cases? Not that I'm aware of. Just as there's no easy way to prove the difference between "I don't think this is impacting me, but it is" and "it really isn't".
It comes down to two unevidenced assertions - "this will reduce attentiveness" vs "no it won't". But I don't feel great about a project like this just going straight for "no it won't" as though that's something they feel with high confidence.
From where does that confidence come?
From decades of experience, quite honestly.
glad I don't work where you do!
it's actually even worse than that: the learning process to produce it doesn't care about correctness at all, not even slightly
the only thing that matters is producing plausible enough looking output to con the human into pressing "accept"
(can you see why people would be upset about feeding output generated by this process into a security critical piece of software?)
this statement is objectively false.
me too! what do I know?
(at least now we know where the push for this dreadful policy is coming from)
Followed by shortcuts
> As such, they are a net benefit and make KeePassXC strictly safer.
They can also waste the author's/reviewer's time chasing imaginary ends, taking time away from the "regular" review, or, with some level of trust, add some plausibly explained vulnerability. Nothing is strict here.
I'm sure if you ask your favorite AI bot, he'll come up with a few more reasons why the statement is overconfidently wrong.
It does not look like the original KeePass project is doing this, which would make it the easiest migration away, but I will check their commits a bit more deeply to be sure.
I have dug around a bit and found a Mastodon thread that doesn't inspire confidence [1]. KeePassXC seems completely untrustworthy at this point: not only have they jumped on the AI bandwagon, they also seemingly don't know what a zero-day is. I genuinely liked KeePassXC and used it for years; now I am spending my Sunday evening researching alternatives.
[1] https://fosstodon.org/@2something@transfem.social/1148367097...
Oh, you can tell.
KeePassXC does not store any data. Nor does it receive connections from the Internet, like a server. Thus the risk is structurally lower than a commercial client-server application like LastPass or 1Password, which is actually in possession of your password data.
I use 1Password at work for its excellent collaboration features and good-enough security. For most people it replaces a post-it note or Excel file. It’s way better than those.
But for my passwords I use KeePass (the file format) and a variety of clients including KeePassXC. This statement about AI won’t change that, unless someone can give me a reason other than vague “AI bad” or “no vibe coding” like most comments so far.
Pushing a KeePass vault to cloud storage:
* No per-item synchronisation
* Full control over encryption of the database
* Choice of cloud storage to trust with vault
* Free as in beer if not using cloud storage (or using a free/already-paid-for offering)
1Password:
* Per-item sync and collaboration
* Full trust in the (closed-source) client apps over encryption of the vault
* No choice of cloud
* No choice of encryption
* Mandatory paid subscription
Hell, if you leave your computer unlocked, a rubber ducky could replace your executable and middleman your master password.
AI is just another way to write code. At the end of the day code is just text. It still needs to be reviewed - nothing about that is changing.
I choose not to use a vibe coded password manager, rigorous review or not, to protect my entire digital existence, monetary assets and reputation.
It's the pinnacle of safety requirements: a memory-unsafe language, cryptography, incredibly high stakes.
I have the distinct displeasure of having to review LLM output in pull requests, and unfailingly it contains code the submitter doesn't fully understand.
I think there's an analogous subset: "LLM security theater".
There's so much pearl-clutching, pedantry, and noise from people who are obviously 1) not contributing to KeePassXC AND 2) never would contribute AND 3) are unaware of EXISTING bugs/issues/CVEs with KeePassXC. All they provide are vague abstract arguments from their own experience with LLMs, and they argue with the maintainers of KeePassXC without giving specifics, as though they have the right to tell others how to run their repo when they're unable to link a single concrete problematic issue or PR.
Instead, all they have are "vibes", which is ironic.
1 more comment available on Hacker News