Debian's Git Transition
Key topics
Debian's ambitious effort to transition to Git is sparking lively debate about the potential benefits and challenges of this monumental shift. As some commenters point out, a Git-based workflow could simplify the process of building Debian packages from personal repositories, while others suggest that alternative package management systems like pkgsrc offer valuable lessons. The discussion reveals a mix of optimism and skepticism, with some worrying that the transition is taking too long, while others highlight the nuance and complexity of the task at hand. Amidst the discussion, a pressing concern emerges: Debian's declining number of new developers, making the success of this effort crucial for the project's long-term viability.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 5h after posting
Peak period: 40 comments (6-12h window)
Avg / period: 13.1 comments
Based on 144 loaded comments
Key moments
- Story posted: Dec 22, 2025 at 3:24 AM EST (19 days ago)
- First comment: Dec 22, 2025 at 8:23 AM EST (5h after posting)
- Peak activity: 40 comments in the 6-12h window, the hottest stretch of the conversation
- Latest activity: Dec 25, 2025 at 3:43 AM EST (16 days ago)
At the moment, getting even a local build of a package working is nothing but pain unless one is already accustomed to building Debian packages.
[1] https://www.pkgsrc.org/
If you want a "simple custom repository" you likely want to go in a different direction and explicitly do things that wouldn't be allowed in the official Debian repositories.
For example, dynamic linking is easy when you only support a single Debian release, or when the Debian build/packaging infrastructure handles it for you. But if you run a custom repository, you either need a package for each Debian release you care about, plus an understanding of version suffixes like `~deb13u1` to make sure your upgrade paths work correctly, or you need to use static binaries (which is what I do for my custom repository).
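For anyone unfamiliar with that suffix convention, a quick way to sanity-check an upgrade path is `dpkg --compare-versions` (the version strings below are illustrative, not from any real package):

```
# ~deb12u1 sorts below ~deb13u1, so the bookworm build upgrades cleanly to
# the trixie build of the same upstream version; exit status 0 means the
# stated relation holds.
dpkg --compare-versions "1.2.3-1~deb12u1" lt "1.2.3-1~deb13u1" && echo "upgrade path OK"
```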
I would recommend looking into chroot-based build tools like pbuilder (.deb) and mock (.rpm).
They greatly simplify the local setup, including targeting different distributions or even architectures (<3 binfmt).
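A minimal sketch of each, assuming the package names are placeholders (pbuilder needs a one-time chroot creation first):

```
# Debian: create a base chroot once, then build from a .dsc inside it.
sudo pbuilder create --distribution bookworm
sudo pbuilder build foo_1.2.3-1.dsc

# Fedora: mock manages its own chroots per configured target.
mock -r fedora-41-x86_64 foo-1.2.3-1.fc41.src.rpm
```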
But I tend to agree: these tools are not easy to remember, especially for occasional use. And packaging complex software can be a pain if you fall down the dependency rabbit hole while trying to honor the distros' rules.
That's why I ended up spending quite a bit of time tweaking this set of ugly Makefiles: https://kakwa.github.io/pakste/ and why I often relax things, allowing network access during the build and the bundling of dependencies, especially for Rust, Go, or Node projects.
I don’t think it’s a bad move, but it also seems like they were getting by with patches and tarballs.
Debian may still be "getting by" but if they don't make changes like this Git transition they will eventually stop getting by.
(1) "Should be able to" does not imply "must"; people are free to continue to use whatever tools they see fit.
(2) Most of Debian work is of course already git-based, via Salsa [1], Debian's self-hosted GitLab instance. This is more about what is stored in git, how it relates to a source package (= what .debs are built from). For example, currently most Debian git repositories base their work in "pristine-tar" branches built from upstream tarball releases, rather than using upstream branches directly.
[1]: https://salsa.debian.org
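For readers who haven't seen it, the pristine-tar round trip mentioned above looks roughly like this (tarball and tag names are hypothetical):

```
# Store a small binary delta on the pristine-tar branch, tied to the
# upstream tag, so the exact tarball can be regenerated later.
pristine-tar commit ../foo_1.2.3.orig.tar.gz upstream/1.2.3

# Regenerate a byte-identical copy of the original tarball from git alone.
pristine-tar checkout ../foo_1.2.3.orig.tar.gz
```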
I prefer rebasing git histories over messing with the patch quilting that debian packaging standards use(d to use). Though last I had to use the debian packaging mechanisms, I roundtripped them into git for working on them. I lost nothing during the export.
I really wish all the various open source packaging systems would get rid of the concept of source tarballs to the extent possible, especially when those tarballs are not sourced directly from upstream. For example:
- Fedora has a “lookaside cache”, and packagers upload tarballs to it. In theory they come from git as indicated by the source rpm, but I don’t think anything verifies this.
- Python packages build a source tarball. In theory, the new best practice is for a GitHub Action to build the package and for a complex mess to attest that it really came from GitHub Actions.
- I’ve never made a Debian package, but AFAICT the maintainer kind of does whatever they want.
IMO this is all absurd. If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that. If it stores a copy of the source, that copy should be cryptographically traceable to the commit in question, which is straightforward: the commit hash is a hash over a bunch of data including the full source!
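A hedged sketch of the kind of byte-for-byte check this would enable (repository URL, commit, and tarball name are all placeholders):

```
# Check out the claimed upstream commit...
git clone https://example.org/upstream/foo.git foo-git
git -C foo-git checkout --quiet 0123abc

# ...unpack the distributed "source tarball"...
tar -xzf foo-1.2.3.tar.gz    # unpacks to foo-1.2.3/

# ...and require the trees to match exactly (modulo git metadata).
diff -r --exclude=.git foo-git foo-1.2.3 && echo "source matches commit"
```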
Perhaps, in the rather narrow sense that you can download a Fedora source tarball and look inside yourself.
My claim is that upstream developers produce actual official outputs: git commits and sometimes release tarballs. (Note that release tarballs on GitHub are often a mess and not really desired by the developer.) And I further think that verification that a system like Fedora or Debian or PyPI is building from correct sources should involve byte-for-byte comparison of the source tree, and that, at least in the common case, there should be no opportunity for a user of one of these systems to upload sources that do not match the claimed upstream sources.
The sadly common workflow where a packager clones a source tree, runs some scripts, and uploads the result as a “source tarball” is, IMO, wrong.
In Python, there is a somewhat clearly defined source tarball. uv build will happily build the source tarball and the wheel from the source tree, and uv build --from <appropriate parameter here> will build the wheel from the source tarball.
And I think it’s disappointing that one uploads source tarballs and wheels to PyPI instead of uploading an attested source tree and having PyPI do the build, at least in simple cases.
SUSE and Fedora both do something similar I believe, but I'm not really familiar with the implementation details of those two systems.
It’s not so hard to do a pretty good job, and you can have layers of security. Start with a throwaway VM, which highly competent vendors like AWS will sell you at a somewhat reasonable price. Run as a locked-down unprivileged user inside it. Then use a tool like gVisor.
Also… most pure Python packages can, in theory, be built without executing any code. The artifacts just have some files globbed up as configured in pyproject.toml. Unfortunately, the spec defines the process in terms of installing a build backend and then running it, but one could pin a couple of trustworthy build backend versions and constrain them to configurations where they literally just copy things. I think uv-build might be in this category. At the very least, I haven’t found any evidence that current uv-build versions can do anything nontrivial unless generation of .pyc files is enabled.
For Debian, that's what tag2upload is doing, so kudos to its authors.
Sincere question. I haven't interacted with it much in ages.
1. Send an empty email to a special address for the bug.
2. Wait 15-30 minutes for Debian's graylisting mail server to accept your email and reply with a confirmation email.
3. Reply to the confirmation email.
The last time I tried to follow a bug, I never got the confirmation email.
In practically every other bug tracker, following a bug is just pressing a button.
Like most of Debian's developer tooling, the bug tracker gets the job done (most of the time) but it's many times more inconvenient than it needs to be.
Also the hoop can be as simple as "click here to sign in with <other account you already have>".
https://tracker.debian.org/pkg/reportbug
As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.
Again, because of the email-only design, one must memorise commands or reference a text file to do things with bugs. This may be decent for power users but it’s horrible UX. I can only assume that there is some analogue to the `bugreport` command I don’t know of for maintainers that actually offers some amount of UI assistance, because having to copy and paste bug IDs and hand-write commands into my mail client just to do basic work sounds like a recipe for burnout.
Debian’s BTS was quirky in 1999. In 2025, it is awful.
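For readers who have never driven it, the maintainer side is a mail to control@bugs.debian.org whose body is a list of commands; this is a sketch with a hypothetical bug number and title (the command names and the `#`-comment syntax are real BTS control conventions):

```
# Mail body sent to control@bugs.debian.org:
severity 123456 important
tags 123456 + patch
retitle 123456 foo: crashes on startup with empty config
thanks
```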
Do the emails from the BTS come from a consistent source? If so, it's not a good solution, but you could sign up with a unique alias that blackholes anything that isn't from the BTS.
Also, patching reportbug to support XDG base directory spec is a chore (since maintainers don't want to accept the fix for it).
How many Debian packages have patches applied to upstream?
https://research.swtch.com/openssl
There seems to be a serious issue with Debian (and by extension, the tens of distros based on it) having no respect whatsoever for the developers of the software their OS is based on, which ends up hurting users the most. Not sure why they cannot just be respectful, but I am afraid they are digging Debian's grave, as people are abandoning stale and broken Debian-based distros in droves.
I didn't know about this. Link?
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=819703#158
Needless to say, Zawinski was more than a little frustrated with how the Debian maintainers do things.
But honestly, this took 30 seconds to Google and was highly publicized at the time. This whole "I never heard of this, link??" approach to defend a lost argument when the point made is easily verifiable serves to do nothing but detract from discussion. Which, you know, is what this place is for.
I genuinely wanted to know what this was about.
The longer answer is that a lot of people already use Git for Debian version control, and the article expands on how this will be better-integrated in the future. But what goes into the archive (for building) is fundamentally just a source package with a version number. There's a changelog, but you're free to lie in it if you so wish.
Obligatory XKCD reference: https://xkcd.com/927/
> Debian guarantees every binary package can be built from the available source packages for licensing and security reasons. For example, if your build system downloaded dependencies from an external site, the owner of the project could release a new version of that dependency with a different license. An attacker could even serve a malicious version of the dependency when the request comes from Debian's build servers. [1]
[1] https://wiki.debian.org/UpstreamGuide#:~:text=make%20V=1-,Su...
With Nix, any fetcher will download the source. It does so in a way that guarantees the shasum of what is fetched is identical, and if you already have something in the nix store with that shasum, it won't have to fetch it.
However, with just a mirror of the debian source tree, you can build everything without hitting the internet. This is assuredly not true with just a mirror of nixpkgs.
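To make the pinning in the previous point concrete: the hash that a fetcher pins against is usually obtained with nix-prefetch-url (the URL below is a placeholder):

```
# Downloads once into /nix/store and prints the hash the fetcher must pin;
# any later fetch whose contents don't match this hash fails the build.
nix-prefetch-url https://example.org/foo-1.0.tar.gz
```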
Nix isn't functional: it's a functional core that moved every bit of the imperative part to an even less parseable stage, labelled it "evaluation", and then ignored any sense of hygiene about it.
No: your dependency tree for packaging should absolutely not include an opaque binary from a cache server, or a link to a years old patch posted on someone else's bugzilla instance (frequently link rotted as well).
So long as the NAR files in cache.nixos.org exist, everything will work - that's not a problem. But if you actually choose to exercise that traceability - which is what I've been working on - suddenly you start finding all this stuff. The problem is nixpkgs doesn't expose or archive the code: it archives a reference to code that existed somewhere at some time, and worse it obfuscates what the code was - I can obviously still go get it from the NAR files, but I can't get any of the context surrounding it.
By contrast, things like the Fedora and Debian patching systems have - crucially - actual archives of what they're building, the patches they're building them with, and the commit messages or other notes on why those patches are being applied and the change record of them. With NixOS you get a bunch of hashes that terminates on "wefu123r23hjcowiejcwe.nar" and you don't know what that is until nixpkgs happens to evaluate it and calculate it, which means it's impossible to even know up-front what's going to be pulled in.
Then of course you get to practical matters: just because you can exactly specify dependencies doesn't mean you should. We all realized with containers that having a couple dozen versions of libraries kicking around is a bad idea (and lo and behold, that's what traditional distro packaging tries to minimize), and that's where all those calculated paths burn you anyway. Nix is a fairly freeform programming language, so it's nigh impossible to stop some snowflake package from pulling in a different version of a compiler or library even if I can see it happening (example I currently have: 5 different versions of Rust, 5 different versions of Golang, when the invariant I want is "no, it's this version and you deal with it" — but there are a lot of ways Nix will let you do this that are very resistant to static analysis or automated correction).
It's sad how much Linux stuff is moving away from apt to systems like snap and flatpak that ship directly from upstream.
What the OP was referring to, is that Debian's tooling stores the upstream code along with the debian build code. There is support tooling for downloading new upstream versions (uscan) and for incorporating the upstream changes into Debian's version control (uupdate) to manage this complexity, but it does mean that Debian effectively mirrors the upstream code twice: in its source management system (mostly salsa.debian.org nowadays), and in its archive, as Debian source archives.
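As a rough sketch of the tooling just named (package and version are hypothetical):

```
# uscan reads debian/watch to find and download a newer upstream release.
uscan --verbose --download

# uupdate merges the downloaded tarball into the packaging tree as a new
# upstream version.
uupdate ../foo_1.2.4.orig.tar.gz
```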
Many packages have stopped shipping the whole source and just keep the debian directory in Git.
Notable examples are
- gcc-*
- openjdk-*
- llvm-toolchain-*
and many more.
But it's still nice to have when an upstream source goes dark unexpectedly, as does occasionally still happen.
On the other hand, it makes for a far easier life when bumping compile or run time dependency versions. There's only one single source of truth providing both the application and the packaging.
It's just the same with Docker and Helm charts. So many projects insist on keeping sometimes all of them in different repositories, making change proposals an utter PITA.
I kind of appreciate that debian put FOSS at a core value very early on; in fact, it was the first distribution I used that forced me to learn the commandline. The xorg-server or rather X11 server back then was not working so I only had the commandline, and a lean debian handbook. I typed in the commands and learned from that. Before this I had SUSE and it had a much thicker book, with a fancypants GUI - and it was utterly useless. But that was in 2005 or so.
Now, in 2025, I have not used Debian or any Debian-based distribution in a long time. I either compile from source, loosely inspired by LFS/BLFS, or I typically use Manjaro these days, simply because it is the closest to a modern Slackware variant (despite systemd). I used Slackware for a long time, but sadly it slowed down too much in the last 10 years, even with modern variants such as alienbob's; Manjaro moves forward like 100x faster and it also works at the same time, including when I want to compile from source. For some reason, many older distributions failed to adapt to the modern era. Systemd may be one barrier here, but the issue is more fundamental than that: there are many more packages now, and many things take longer to compile, e.g. LLVM and whatnot, which in turn is needed for Mesa; then we have CMake, Meson/Ninja, and so forth. A lot more software to handle nowadays.
Yeah definitely. I guess this is a result of their weird idea that they have to own the entire world. Every bit of open source Linux software ever made must be in Debian.
If you have to upgrade the entire world it's going to take a while...
Please, please, stop the nonsense with the patch quilting -- it's really cumbersome, it adds unnecessary cognitive load, it raises the bar to contributions, it makes maintenance harder, and it adds _zero value_. Patch quilting is a lose-lose proposition.
I've tried it
(I know that's not quite the Greenspun quote)
Really, Git has a solution to this. If you insist that it doesn't without looking, you'll just keep re-inventing the wheel badly.
Mercurial has a patch queue extension (mq) that married it to quilt, and it was very easy to use.
E.g.,
At Mozilla some developers used quilt for local development back when the Mozilla Suite source code was kept in a CVS repository. CVS had terrible support for branches. Creating a branch required writing to each individual ,v file on the server (and there was one for every file that had existed in the repository, plus more for the ones that had been deleted). It was so slow that it basically prevented anyone from committing anything for hours while it happened (because otherwise the branch wouldn’t necessarily get a consistent set of versions across the commit), so feature branches were effectively impossible. Instead, some developers used quilt to make stacks of patches that they shared amongst their group when they were working on larger features.
Personally I didn’t really see the benefit back then. I was only just starting my career, fresh out of university, and hadn’t actually worked on any features large enough to require months of work, multiple rounds of review, or even multiple smaller commits that you would rebase and apply fixups to. All I could see back then were the hoops that those guys were jumping through. The hoops were real, but so were the benefits.
So it's clearly a way better solution and it's disappointing that they still haven't switched to it after 20 years? I dunno what else to say...
I'd say that `quilt` the utility is pretty much abandoned at this point. The name `quilt` remains in the format name, but otherwise is not relevant.
Nowadays people who maintain patches do it via `gbp-pq` (the "patch queue" subcommand of the badly named `git-buildpackage` software). `gbp-pq switch` reads the patches stored in `debian/patches/`, creates an ephemeral branch on top of HEAD, and replays them there. Any changes made to this branch (new commits, removed commits, amended commits) are transformed by `gbp-pq export` into a valid set of patches that replaces `debian/patches/`.
This mechanism introduces two extra commands (one to "enter" and one to "exit" the patch-applied view) but it allows Debian to easily maintain a mergeable Git repo with floating patches on top of the upstream sources. That's impossible to do with plain Git and needs extra tools or special workflows even outside of Debian.
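In command form, the round trip described above looks like this (run inside a package repository):

```
gbp pq import    # replay debian/patches/ as commits on an ephemeral branch
# ...add, amend, reorder, or drop commits on the patch-queue branch...
gbp pq export    # regenerate debian/patches/ from those commits
gbp pq switch    # hop between the packaging branch and the patch queue
```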
Rebase.
If your patches never touch the same files as others, I think it doesn't matter. But, IIRC, if patch A and patch B both touch file F, and the changes in patch A fall within the diff context of patch B, rebasing always fails when patch A changes patch B's context; merging incorporates all changes at once, so these separate context changes don't conflict.
It's been a while, but it might be only when you need to manually resolve patch A, then you also have to manually resolve patch B even if you wouldn't have had to touch it in a merge scenario.
You're referring to having to do conflict resolution for each commit in the rebase series, as opposed to all at once for a merge. Either way if the upstream has added thousands of commits since the last time, you're in for no fun.
This is a case where Git could be better, but as I responded to u/gioele there exist tools that greatly help with the conflict resolution issue, such as this one that I wrote myself:
https://gist.github.com/nicowilliams/ea2fa2b445c2db50d2ee650...
which basically bisects to find the upstream commit that introduces a conflict with each commit in the rebase series.
This has one major advantage over merge-workflow conflict resolution: you get the most possible context for each manual conflict resolution you have to do! And you still get clean, linear history when you're done.
Merges would avoid those problems, but are harder to do if there are lots of conflicts, as you can't fix conflicts patch by patch.
Perhaps a workflow based on merges-of-rebases or rebase-and-overwrite-merge would work, but I don't think it's fair to say "oh just rebase".
Cherry-picks preserve that Commit-Id. And so do rebases, because they're just text in a commit message.
So you can track history of patches that way, if you needed to. Which you won't.
(PS some team at google didn't understand git or their true requirements, so they wasted SWE-decades at that point on some rebasing bullshit; I was at least able to help them make it slightly less bad and prevent other teams from copying it)
> Which you won't.
Why not? Doesn't it make sense to be able to track the history of what patches have been applied for a debian package?
> Doesn't it make sense to be able to track the history of what patches have been applied for a debian package?
... no. Each patch has a purpose, which will be described in the commit message. Hopefully it does what it says it does, which you can compare with its current diff.
If it was upstreamed with minimal changes, then the diff is near-empty. Drop it.
If it was upstreamed with significant changes, then the diff will be highly redundant. Drop it.
If the diff appears to do what the commit message says it does, then it probably does what it says.
If the diff is empty, either it was upstreamed or you fucked up rebasing. Don't be negligent when rebasing.
Let's say you have these version tags upstream: foo-1.0.1, foo-1.1.0, foo-1.3.0, and corresponding Debian releases 1.0.1-0, 1.1.0-0, 1.1.0-1, 1.3.0-0, 1.3.0-1, and 1.3.0-2, and the same 3 patches in all cases, except slightly different in each case. Then to see the several different versions of these patches you'd just `git log --oneline foo-${version}..debian-${version}-${deb_version}`.
For example, it's trivial, from a web browser with a couple of clicks, to find all the downstream changes to a package. For instance, to see how glibc is currently customized in Debian testing/unstable you can just navigate this page:
https://sources.debian.org/src/glibc/2.42-6/debian/patches
If everything gets merged in the same git tree it's way harder. Harder but doable with a rebase+force push workflow, which makes collaboration way harder. Just impossible with a merge workflow.
As an upstream maintainer of several projects, being able to tell at a glance, with a few clicks, how one of my projects is patched in a distribution is immensely useful when bug reports are opened.
In a past job it also literally saved a ton of money because we could show legal how various upstreams were customized by providing the content of a few .debian.tar.gz tarballs with a few small, detached patches that could be analyzed, instead of massive upstream trees that would take orders of magnitude more time to go through.
How is this not also true for Git? Just put all the Debian commits "on top" and use an appropriate naming convention for your branches and tags.
> If everything gets merged in the same git tree it's way harder.
Yes, so don't merge, just rebase.
> Harder but doable with a rebase+force push workflow, which makes collaboration way harder.
No force pushes, just use new branch/tag names for new releases.
> Just impossible with a merge workflow.
Not impossible but dumb. Don't use merge workflows!
> As an upstream maintainer of several project, being able to tell at a glance and with a few clicks how one of my projects is patched in a distribution is immensely useful when bug reports are opened.
Git with a suitable web front-end gives you exactly that.
> In a past job it also literally saved a ton of money because we could show legal how various upstreams were customized by providing the content of a few .debian.tar.gz tarballs with a few small, detached patches that could be analyzed, instead of massive upstream trees that would take orders of magnitude more time to go through.
`git format-patch` and related can do the moral equivalent.
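For instance, assuming tags named like the ones sketched earlier in the thread:

```
# Emit one detached, reviewable patch file per downstream commit.
git format-patch --output-directory patches/ foo-1.3.0..debian-1.3.0-2
```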
After I left that company I ended up at a larger company (~14k employees) in part because I'd worked on SVN-to-Git migrations before. Definitely a different beast, since there were a huge amount of workflows that needed changing, importing 10 years of SVN history (some of which used to be CVS history), pruning out VM images and ISOs that had been inadvertently added, rewriting tons of code in their Jenkins instance, etc.
All this on top of installing, configuring, and managing a geographically distributed internal Gitlab instance with multiple repositories in the tens or hundreds of gigabytes.
It was a heck of a ride and took years, but it was a lot of fun at the same time. Thankfully, 'the guy who suggested the transition' was the CEO (at the first company) or CTO (at the second), so nothing went wrong, no one got thrown under buses, and both companies are still doing a-okay (as far as source control goes).
Huh. I just learned to use quilt this year as part of learning debian packaging. I've started using it in some of my own forks so I could eventually, maybe, contribute back.
I guess the old quilt/etc. recommendation in the Debian build docs is part of the documentation update work that the linked page says is needed.
I see a lot of value in how Steam helped communicate which software was and wasn't ready to run on their new gaming platform. Tools like verification ticks and defined statuses for packages are very useful for communicating progress and motivating maintainers to upgrade. Consider designing a similar verification approach that helps the community easily track progress and nudge slow players. If it's all too technical, the community can't help move things along.
https://www.steamdeck.com/en/verified
Is that a fair general read of the situation? (I have further comments to make but wanted to check my basic assumptions first).