Building Small Docker Images Faster
Key topics
The quest for leaner Docker images has sparked a lively debate, with commenters weighing in on the best approaches to building and optimizing containerized applications. While some argue that building containers isn't always the best practice, others highlight the benefits of using containers for repeatable builds and streamlined development workflows. Notably, multi-stage builds and tools like Nix are emerging as top strategies for reducing image sizes, with some commenters pointing out that many developers are still unaware of these techniques. As the discussion unfolds, it becomes clear that the containerization landscape is evolving, with a growing emphasis on reproducibility, determinism, and supply chain security.
Snapshot generated from the HN discussion
Discussion Activity
- First comment: 14h after posting
- Peak period: 14 comments in the 12-24h window
- Average per period: 6.8 comments
- Based on 27 loaded comments
Key moments
- Story posted: Dec 12, 2025 at 5:23 AM EST (22 days ago)
- First comment: Dec 12, 2025 at 6:57 PM EST (14h after posting)
- Peak activity: 14 comments in the 12-24h window, the hottest stretch of the conversation
- Latest activity: Dec 18, 2025 at 12:17 PM EST (16 days ago)
Containers nicely solve this problem. Then your builds get a little slow, so you want to cache things. Now your Dockerfile looks like this. You want to run some tests - now it's even more complicated. How do you debug those tests? How do those tests communicate with external systems (database/Redis)? Eventually you end up back at "let's just containerise the packaging".
Not dismissing it, but there are caveats every which way. In an ideal world I just want Bazel or Nix without the baggage that comes with them - Docker comes so close yet falls short of the finish line.
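The Dockerfile the comment alludes to didn't survive extraction; here's a minimal sketch of the usual layer-caching shape it's pointing at (Node is purely illustrative, not the commenter's stack):

```dockerfile
FROM node:20
WORKDIR /app
# Copy only the dependency manifests first, so this layer stays cached
# until the manifests themselves change
COPY package.json package-lock.json ./
RUN npm ci
# Source changes invalidate only the layers from here down
COPY . .
RUN npm run build
```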
Here's an example of that from the Docker Maven image docs.
`docker run -it --rm --name my-maven-project -v "$(pwd)":/usr/src/mymaven -w /usr/src/mymaven maven:3.3-jdk-8 mvn clean install`
You can get as fancy as you like with things like your `.m2` directory; this just gives you the basics of how you'd do that.
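For instance, the commonly documented way to persist the local Maven repository across runs is to mount your host `~/.m2` as well (paths assume the image's default root user):

```sh
docker run -it --rm --name my-maven-project \
  -v "$(pwd)":/usr/src/mymaven \
  -v "$HOME/.m2":/root/.m2 \
  -w /usr/src/mymaven \
  maven:3.3-jdk-8 mvn clean install
```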
The benefit of this approach is that it's a lot easier to make sure dependencies end up on the build node, so you aren't redownloading and caching the same dependency for multiple artifacts. But then you don't get to take advantage of Docker build caching to speed things up when nothing has changed.
That's the part about Docker I don't love. I get why it's this way, but I wish there were a better way to have it reuse files between images. The best you can do is a cache mount, but that can run into size issues as time goes on, which is annoying.
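A cache mount here means BuildKit's `RUN --mount=type=cache`; it looks roughly like this, sticking with Maven for continuity (the cache lives on the builder, not in the image, and the image tag is illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM maven:3.9-eclipse-temurin-17
WORKDIR /usr/src/mymaven
COPY . .
# ~/.m2 persists across builds on this builder without being baked into a layer
RUN --mount=type=cache,target=/root/.m2 mvn clean install
```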
> without having to figure out per-language caching systems
But most companies, even large ones, tend to standardize on no more than a handful of languages: TypeScript, Python, Go, Java... I don't need something that'll handle caching for PHP or Erlang or Nix (not that you can really work easily with Nix inside a container...) or OCaml or Haskell. Yeah, I do think there's a lot of room for companies to say: this is the standardized, supported stack, and we put in some time to optimize the shit out of it, because the DX dividends are incredible.
A lot of teams should think long and hard about just taking build artifacts, dropping them into their expected places in a directory that stands in for a chroot, generating a manifest JSON, and wrapping everything in a tar - which is, in essence, a container image.
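A rough sketch of that idea, using `docker import` as the quick-and-dirty stand-in for writing the manifest yourself (all paths and names here are illustrative):

```sh
# Lay out the artifacts where they should live at the container's root
mkdir -p rootfs/usr/local/bin
cp build/myapp rootfs/usr/local/bin/myapp

# Tar the directory and turn it into an image in one shot
tar -C rootfs -c . | docker import - myapp:latest
```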
We have our base images, in which we install dependencies pinned by version (since apt seemingly has no lock-file support?). That image then serves as the base for our code build.
In the subsequent build EVERYTHING is versioned, which allows us to establish provenance all the way up to the base image.
On top of that, when we promote images from PR -> main we don't even rebuild the code; it's the same image that gets retagged. All in the name of preserving provenance.
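Pinning apt packages to exact versions in a Dockerfile looks roughly like this - apt does support `package=version`, just not a lockfile (package names and versions below are illustrative):

```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
      curl=7.88.1-10+deb12u5 \
      ca-certificates=20230311 \
    && rm -rf /var/lib/apt/lists/*
```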
Once you have your container image, how you decide to promote it is a piece of cake: skopeo doesn't require root and often doesn't require re-pulling the full tar. Containerization is great; I'm specifically trying to point out that there are alternatives to Docker.
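A retag-without-rebuild promotion with skopeo can be as simple as copying between tags, no local daemon needed (registry and repository names are illustrative):

```sh
skopeo copy \
  docker://registry.example.com/myapp:pr-123 \
  docker://registry.example.com/myapp:main
```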
Multi-gigabyte containers everywhere.
Also, I took a quick look and I don't understand how your tool could possibly produce "even smaller images". The article is using multi-stage builds to produce a final Docker image that is quite literally just the target binary in question (based on the scratch image), whereas your tool appears to be a whole Linux distribution.
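The article's exact Dockerfile isn't reproduced in the thread; the pattern being described is roughly this, assuming a statically linked Go binary (package path and versions illustrative):

```dockerfile
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary so it can run with no libc in the final image
RUN CGO_ENABLED=0 go build -o /app ./cmd/app

# Final stage: nothing but the binary
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```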
This would be a much smaller drop-in replacement for the base images used in the post, giving full-source-bootstrapped final binaries.
You can of course still use scratch for the final layer, though to your point that would be unlikely to change the size much.
Doesn’t even need Docker, just writes the image files with a small Python script.
Can build from scratch, or use the very small Distroless images.