I Ditched Docker for Podman
Posted 4 months ago · Active 4 months ago
codesmash.dev · Tech · Story · High profile
Controversial / mixed · Debate · 80/100
Key topics
Containerization
Podman
Docker
Devops
The author shares their experience of switching from Docker to Podman, sparking a discussion on the pros and cons of both containerization tools.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 16m after posting
Peak period: 114 comments in 0-6h
Avg / period: 17.8
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Sep 5, 2025 at 7:56 AM EDT (4 months ago)
- First comment: Sep 5, 2025 at 8:12 AM EDT (16m after posting)
- Peak activity: 114 comments in 0-6h, the hottest window of the conversation
- Latest activity: Sep 9, 2025 at 11:15 AM EDT (4 months ago)
ID: 45137525 · Type: story · Last synced: 11/26/2025, 1:00:33 PM
We ditched it for EC2s which were faster and more reliable while being cheaper, but that's beside the point.
Locally I use OrbStack by the way, much less intrusive than Docker Desktop.
Containers are the packaging format, EC2 is the infrastructure. (docker, crio, podman, kata, etc are the runtime)
When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.
You're basically splitting the process of building and distributing your application into: write the software, build the image, deploy the image.
Everyone who uses these tools, which is most people at this point, will understand these steps. Additionally, any framework or cloud provider that speaks container images, like ECS, Kubernetes, or Docker Desktop, can manage your deployments for you. Also, the API of your container image (e.g. the environment variables, entrypoint flags, and mounted volumes it expects) communicates to whoever deploys your application what you expect them to provide during deployment.
Without all this, whoever or whatever is deploying your application has to know every little detail and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.
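To make that concrete, here's a minimal sketch (the image name, variable, and paths are hypothetical) of how a run command spells out exactly what an image expects its deployer to provide:

# hypothetical image that documents its expectations: it reads DATABASE_URL,
# persists state under /var/lib/myapp, and listens on 8080
docker run --rm \
  -e DATABASE_URL=postgres://db.internal:5432/app \
  -v /srv/myapp-data:/var/lib/myapp \
  -p 8080:8080 \
  registry.example.com/myapp:1.2.3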
Though as someone who's used a lot of Azure infrastructure as code with Bicep and has also done the K8s YAMLs, I'm not sure which is more complicated at this point, to be honest. I suspect that depends on your k8s setup, of course.
In general we do actually try to provide full context for errors from dockerd. Some things can be cryptic because, frankly, they are cryptic and require digging into what really happened (typical of errors from runc), but we do tend to wrap things so at least you know where the call site was.
There's also tracing data you can hook into, which could definitely be improved (some legacy issues around context propagation that need to be solved).
I've definitely seen, in the past, my fair share of errors that simply say "invalid argument" (typically this is a kernel message) without any context, but we have worked to inject context everywhere we can, or at least to handle those errors better.
So definitely interested in anything you've seen that could be improved because no one likes to get an error message that you can't understand.
Taking this further (self-plug), you can automatically map your Compose config into a NixOS config that runs your Compose project on systemd!
https://github.com/aksiksi/compose2nix
The new docs split that out into separate podman-container/volume/etc.unit(5) pages, with quadlet.7 being the index page. So they're still linking to the same documentation, just the organization happened to change underneath them.
If you must see what they linked to originally, the versioned docs still use the original organization (i.e. all on one page): https://docs.podman.io/en/v5.6.0/markdown/podman-systemd.uni...
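For anyone who hasn't run into quadlet yet, here's a minimal sketch of what those pages document (assuming Podman 4.4+ with quadlet support; the unit name and image are arbitrary):

# rootless quadlet: drop a .container file in the user unit directory
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF
# quadlet generates web.service from the .container file
systemctl --user daemon-reload
systemctl --user start web.service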
On the contrary, docker documentation *is* stable. I had bookmarks from 10 years ago on the *latest* editions that still work today. The final link may have changed, but at least there is a redirect (or a note saying the page has moved) instead of a plain 404/not-found.
This is a crucial part of the quality applications offer. There have probably been hundreds of podmans since Docker was launched more than 10 years ago, but none came close to maintaining the same high quality of documentation and user interface (i.e. CLI commands, switches), especially in a backward-compatible way.
It's a different style of documentation organization: if you want to link to a specific version, you should link to that specific version, not latest. I won't argue it's necessarily a better way of doing things than Docker's, but knowing it's the same thing as what ships with the package is nice.
Just put this thread into whatever LLM. Overall I see two major themes here: compatibility and stability issues, all over the place. Not just documentation, but with other tools. The Compose schema v2 does not match the current/latest one, there is missing functionality (although that one is acceptable at a certain level), etc.
Also, as soon as the docs were "posted", they became obsolete/useless/deprecated. I mean, what sort of quality are we talking about here?
https://github.com/containers/buildah/issues/4325#issuecomme...
https://docs.orbstack.dev/features/debug
Let alone the local resource monitor, increased performance, automated local domains (no more complicated docker network settings to get your app working with localhost), and more.
I basically use (orbstack) docker containers as lightweight VMs, easily accessible through multiple shells, and they shut down when nothing is running anymore.
I use them for development isolation, or when I need to run some tool. It mounts the current directory, so your container is chrooted to that project.
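Roughly this pattern, as a sketch (the image and mount paths are just placeholders):

# throwaway container "chrooted" to the current project directory
docker run --rm -it -v "$PWD":/work -w /work docker.io/library/alpine:latest sh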
Idk what the problem is, but it's ugly. I switched to orbstack because there was something like a memory leak happening with docker desktop, just using waaaaay too many resources all the time, sometimes it would just crash. I just started using docker desktop from the get-go because when I came on I had multiple people with more experience say 'oh, you're coming from linux? Don't even try to use the docker daemon, just download docker desktop'.
On Linux, for development, podman and docker are pretty similar but I prefer the k8s yaml approach vs compose so tend to use podman.
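That workflow, as a rough sketch (assuming a reasonably recent Podman 4.x; the container and file names are arbitrary):

# capture a running container as k8s-style YAML, then replay or tear it down later
podman generate kube mycontainer > dev-pod.yaml
podman kube play dev-pod.yaml
podman kube down dev-pod.yaml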
I don't think Apple really cares about dev use cases anymore so I haven't used a Mac for development in a while (I would for iOS development of course if that ever came up).
It's much more than a GUI, for it supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain name support with SSL (mycontainer.orb), a debug shell that lets you install packages not available in the image by default, much better and automated volume mounting, viewing every container in Finder, the ability to query logs, and an amazing UI; plus it is much, much faster and more resource efficient.
I am normally with you that the terminal is usually enough, but the above features really do make it worth it, especially when using existing services that have complicated failure logs or are resource intensive (like redis, postgres, livekit, etc.), or when you have a lot of ports running and want to call your service without having to remember port numbers or wrestle with docker network configuration.
It feels a little hypocritical for us to feed our families through our tech talent and then complain that someone else is doing the same.
Remove layers, keep things simple.
That being said, it is here to stay. So any alternative tooling that forces Docker to get its act together is welcome.
> Remove layers, keep things simple.
Due to the first line above, I'm not sure if I'm reading the second line correctly. But I'm going to assume that you're referring to the OCI image layers. I feel your pain. But honestly, I don't think that image layers are such a bad idea. It's just that the best practices for those layers are not well defined and some of the early tooling propagated some sub-optimal uses of those layers.
I'll just start with when you might find layers useful. Flatpak's sandboxing engine is bubblewrap (bwrap). It's also a container runtime that uses namespaces, cgroups and seccomp like OCI runtimes do. The difference is that it has more secure seccomp defaults and it doesn't use layers (though mounts are available). I have a tool that uses bwrap to create isolated build and packaging environments. It has a single root fs image (no layers). There are two annoyances with a single layer like this:
1. If you have separate environments for multiple applications/packages, you may want to share the base OS filesystem. You instead end up replicating the same file system redundantly.
2. If you want to collect the artifacts from each step (like source download, extract and build, 'make install', etc) into a separate directory/archive, you'll find yourself reaching for layers.
I have implemented this and the solutions look almost identical to what OCI runtimes do with OCI image layers - use either overlayfs or btrfs/zfs subvolume mounts.
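For reference, the kind of single-rootfs bwrap invocation described above looks roughly like this sketch (all paths are illustrative):

# single-layer sandbox: bind a prepared rootfs read-only; no overlay or layers involved
bwrap --ro-bind /srv/buildroot / \
      --dev /dev --proc /proc --tmpfs /tmp \
      --bind "$PWD" /work --chdir /work \
      --unshare-all --die-with-parent \
      /bin/sh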
So if that's the case, then what's the problem with layers? Here are a few:
1. Some tools like the image builders that use Dockerfile/Containerfile create a separate layer for every operation. Some layers are empty (WORKDIR, CMD, etc). But others may contain the results of a single RUN command. This is very unnecessary and the work-arounds are inelegant. You'll need to use caches to remove temporary artifacts, and chain shell commands into a single RUN command (using semicolons).
2. You can't manage layers like files. The chain of layers are managed by manifests and the entire thing needs a protocol, servers and clients to transfer images around. (There are ways to archive them. But it's so hackish.)
So, here are some solutions/mitigations:
1. There are other build tools like buildah and packer that don't create additional layers unless specified. Buildah, a sister project of Podman, is a very interesting tool. It uses regular (shell) commands to build the image. However, those commands closely resemble the Dockerfile commands, making it easy to learn. Thus you can write a shell script to build an image instead of a Dockerfile (see the sketch after this list). It won't create additional layers unless you specify. It also has some nifty features not found in Dockerfiles.
Newer Dockerfile builders (I think buildkit) have options to avoid creating additional layers. Another option is to use dedicated tools to inspect those layers and split/merge them on demand.
2. While a protocol and client/servers are rather inconvenient for lugging images around, they did make themselves useful in other ways too. Container registries these days don't host just images. They can host any OCI artifact. And you can practically pack any sort of data into such an artifact. They are also used for hosting/transferring a lot of other artifacts like helm charts, OPA policies, kubectl plugins, argo templates, etc.
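As mentioned in (1) above, a rough sketch of the buildah scripting style (the base image, package, and tag are arbitrary):

#!/bin/sh
# build an image with ordinary shell commands; only the final commit produces a layer
ctr=$(buildah from docker.io/library/alpine:latest)
buildah run "$ctr" -- apk add --no-cache python3
buildah copy "$ctr" ./app /opt/app
buildah config --entrypoint '["python3","/opt/app/main.py"]' --env PORT=8080 "$ctr"
buildah commit "$ctr" localhost/myapp:latest
buildah rm "$ctr"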
> So any alternative tooling that forces Docker to get its act together is welcome
What else do you consider as some bad/sub-optimal design choices of Docker? (including those already solved by podman)
Firstly, podman had much worse performance compared to docker on my small cloud VPS. Can't really go into details though.
Secondly, the development ecosystem isn't really fully there yet. Many tools that use Docker via its socket fail to work reliably with podman, either because the API differs or because of permission limitations. Sure, the tools could probably work around those limitations, but they haven't, and podman isn't a direct 1:1 drop-in replacement.
https://www.redhat.com/en/blog/generate-selinux-policies-con...
Are you using rootless podman? Then network redirection is done using user-mode networking, which has two modes: slirp4netns, which is very slow, and pasta, which is the newer and faster one.
Docker is always set up from the privileged daemon; if you're running podman from the root user there should be no difference.
Comparing root docker with rootless podman performance is apples to oranges. However, even for rootless, pasta does have good performance.
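A quick sketch for checking or forcing the backend (assuming Podman 5.x, where pasta is the default; config paths can differ per distro):

# which rootless network command is configured ([network] section of containers.conf)
grep -h default_rootless_network_cmd /usr/share/containers/containers.conf ~/.config/containers/containers.conf 2>/dev/null

# force pasta for a single rootless container
podman run --rm --network=pasta docker.io/library/alpine:latest ip addr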
[1]. https://docs.podman.io/en/latest/markdown/podman-system-serv...
If you're running rootless Podman containers then the Podman API is only running with user privileges. And, because Podman uses socket activation, it only runs when something is actively talking to it.
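For example, a sketch using the stock units that ship with Podman on most distros:

# enable the user-level, socket-activated API endpoint
systemctl --user enable --now podman.socket

# anything that expects the Docker socket can be pointed at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock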
I often try to run something using podman, then find strange errors, then switch back to docker. Typically this is with some large container, like gitlab, which probably relies on the entirety of the history of docker and its quirks. When I build something myself, most of the time I can get it working under podman.
This situation where any random container does not work has forced me to spin up a VM under incus and run certain troublesome containers inside that. This isn't optimal, but keeps my sanity. I know incus now permits running docker containers and I wonder if you can swap in podman as a replacement. If I could run both at the same time, that would be magical and solve a lot of problems.
There definitely is no consistency regarding GPU access in the podman and docker commands and that is frustrating.
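One hedged sketch of the divergence (assuming the NVIDIA Container Toolkit is installed and, for Podman, a CDI spec has been generated; the CUDA image tag is just an example):

# Docker's flag
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

# Podman usually goes through CDI instead
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
podman run --rm --device nvidia.com/gpu=all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi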
But, all in all, I would say I do prefer podman over docker and this article is worth reading. Rootless is a big deal.
That's good to know it works well for you, because I would prefer not to use docker.
We had some similar issues, and it was due to containers running out of resources (mainly RAM/memory, by a lot, but only for a short amount of time). And it happens that in rootless mode this was correctly detected and enforced, but non-rootless docker (in that case on a Mac dev laptop) didn't detect these resource spikes, and hence things "happened to work" even though they shouldn't have.
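For example, a minimal sketch of that kind of limit being enforced (the values and names are arbitrary):

# cap a container at 256 MiB; rootless Podman on cgroups v2 enforces this
podman run -d --name memtest --memory=256m docker.io/library/alpine:latest sleep 300
podman stats --no-stream memtest    # the MEM USAGE / LIMIT column should show .../256MiB
podman rm -f memtest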
What you can do if you don't want to use Docker and don't want to maintain these images yourself is have two Podman machines running: one in rootful mode and another in rootless mode. You can then use the `--connection` global flag to specify the machine you want your container to run in. Podman can also create those VMs for you if you want it to (I use lima and spin them up myself). I recommend using --cap-drop/--cap-add to limit these containers' capabilities, out of caution.
Podman Desktop also installs a Docker compatibility layer to smooth over these incompatibilities.
You can also use this to create a VM for Podman that runs on Fedora, rootful by default: https://github.com/carlosonunez/bash-dotfiles/blob/main/lima...
If you go the Lima approach, use `podman system connection add` to add rootful and rootless VMs, then use the `--connection` flag to specify which you want to use. You can alias them to make that easier; for instance, use plain `podman` for rootless stuff (assuming the rootless VM is your default) and `alias rpodman='podman --connection rootful'` for rootful stuff. I'll write a post describing how to set all of that up soon!
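In the meantime, a rough sketch of that setup (the SSH ports, user, and identity path are placeholders; `limactl show-ssh <vm>` reveals the real values for your VMs):

# register both Lima VMs as named connections
podman system connection add rootless --default \
  ssh://user@127.0.0.1:60022/run/user/1000/podman/podman.sock --identity ~/.lima/_config/user
podman system connection add rootful \
  ssh://user@127.0.0.1:60023/run/podman/podman.sock --identity ~/.lima/_config/user

# default (rootless) connection vs. an explicit rootful one
podman ps
alias rpodman='podman --connection rootful'
rpodman ps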
Which is probably one of the motivations for the blog post. Compatibility will only be there once a large enough share of users use podman that it becomes something that is checked before publish.
Plus, I don’t see the point in babysitting a separate copy of a user space if systemd has `DynamicUser`.
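That is, roughly this sketch (the unit and binary names are hypothetical):

# a plain systemd service with an ephemeral user; no separate user space to babysit
cat <<'EOF' | sudo tee /etc/systemd/system/myapp.service
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes
StateDirectory=myapp
ProtectSystem=strict
EOF
sudo systemctl daemon-reload
sudo systemctl start myapp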
Podman rocks for me!
I find docker hard to use and full of pitfalls and podman isn't any worse. On the plus side, any company I work for doesn't have to worry about licences. Win win!
But Docker Engine, the core component which works on Linux, Mac, and Windows through WSL2, is completely and 1000% free to use.
Check it out https://docs.orbstack.dev/
I've been using an Arch Linux VM for all my development over the past year and a half and I couldn't be happier.
But it is not cross-platform, so we settled on Podman instead, which came a (distant) second in my tests. The UI is horrible, IMO, but hey… compromises.
I use OrbStack for my personal stuff, though.
It costs about $100/year per seat for commercial use, IIRC. But it is significantly faster than Docker Desktop at literally everything, has a way better UI, and a bunch of QoL features that are nice. Plus Linux virtualization that is both better and (repeating on this theme) significantly more performant than Parallels or VMWare Fusion or UTM.
[1]: https://github.com/microsoft/winget-pkgs/tree/master/manifes...
(base) kord@DESKTOP-QPLEI6S:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/37c7f28..blah..blah$ podman
Command 'podman' not found, but can be installed with:
sudo apt install podman
>This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available for Windows, macOS, and Linux, through Docker Desktop.
https://docs.docker.com/engine/install/
I'm not an expert, but everything I read online says that Docker runs on Linux, so on a Mac you need a virtualized environment like Docker Desktop, Colima, or Podman to run it.
If you're building really arch-specific stuff, then I could see not wanting to go there, but Rosetta support is pretty much seamless. It's just slower.
And then there's the windowing system of macOS that feels like it's straight from the 90s. "System tray" icons that accumulate over time and are distracting, awful window management with clunky animations, the near-useless dock (clicking on VS Code shows all my 6 IDEs, why?). Windows and Linux are much more modern in that regard.
The Mac hardware is amazing, well worth its price, but the OS feels like it's from a decade ago.
I use WSL for work because we have no Linux client options. It's generally fine, but both forced Windows Update reboots and seemingly random WSL restarts (presumably because of some component update?) can really bite you if you're in the middle of something.
And sharing files from the host, IDE integration, etc.
Not that it can't be done. But doing it is not just 'run it'. Now you're managing a VM, changing your workflow, etc.
Having used Docker Desktop on a Mac myself, it seems... fine? It does the job well enough, and it’s part of the development rather than production flow so it doesn’t need to be perfect, just unobtrusive.
Was this a deal breaker for any company?
I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.
If you have a dev team of 10 people and are extremely profitable to where you need licenses, you'd end up paying $9 a year per developer for the license. So $90 / year for everyone, but if you have US developers your all-in payroll is probably going to be over $200,000 per developer, or roughly $2 million total. In that context $90 is practically nothing. A single lunch for the dev team could cost almost double that.
To me that is a bargain, you're getting an officially supported tool that "just works" on all operating systems.
You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.
An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.
Even if it's not automated, it's normal for a team to email IT / HR with new hire requirements. Having a list of tools that need licenses in that email is something I've seen at plenty of places.
I would say there's lots of other tools where onboarding is more complicated from a license perspective because it might depend on if a developer wants to use that tool and then keeping tabs on if they are still using it. At least with Docker Desktop it's safe to say if you're on macOS you're using it.
I guess I'm not on board with this being a major conflict point.
But I have to feed my family.
What could possibly go wrong?
Also, I don't want to have to troubleshoot why the docker daemon isn't running every time I need it
Where do you work ? Is that even possible in 2025?
There is no bottom to the barrel, and incompetence and insensitivity can rise quite high in some cases.
Or so it seems to me whenever I have to deal with them. We ended up with Microsoft defender on our corp Macs even.. :|
Or so I was told when I made the monumental mistake of trying to fight such a policy once.
So now we just have a don't ask don't tell kind of gig going on.
I don't really know what the solution is, but dev laptops are goldmines for haxxors, and locking them down stops them from really being dev machines. shrug
It's pretty stupid, because the same curl | bash that could have done that could have just posted the same contents directly to the internet without the container. The best chance you actually have is to do as much development as possible inside a sealed environment like ... a container, where at least you have some way to limit what partially trusted code can see of your file system.
Or when your IT department is prohibited from purchasing anything that doesn't come from Microsoft or CDW.
Correct, but every additional software package and each additional license adds more to track.
Every new software license requires legal to review it.
These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)
Then they start investigating how often people use software packages and realize most people aren't actually using most software they have seats for. This happens because when software feels 'free' people request it for one-time use for a thing or to try it out and then forget about it, so you have low utilization across the board.
So they start making it harder to add new software. They start auditing usage. They may want reports on why software is still needed and who uses it.
It all adds up. I understand you don't think it should be this way, but it is at big companies. You're right that the $24/user per month isn't much, but it's one of dozens of fees that get added, multiplied by every employee in the company, and now they need someone to maintain licenses, get them reviewed, interact with the rep every year, do the negotiation battles, and so on. It adds up fast.
This is going to differ company to company but since we're narrowing it to large companies I disagree. Usually there's a TPM that tracks license distribution and usage. Most companies provide that kind of information as part of their licensing program (and Docker certainly does.)
> Every new software license requires legal to review it.
Yes, but this is like 90% of what legal does - contract review. It's also what managers do but more on the negotiation end. Most average software engineers probably don't realize it but a lot of cloud services, even within a managed cloud provider like AWS, require contract and pricing negotiation.
> These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)
As I said earlier, I can't speak for other companies but at large companies I've worked at this just simply isn't true. There's metrics for when the software isn't being used because the corporation is financially incentivized to shrink those numbers or consolidate on software that achieves similar goals. They're certainly individually tracked fairly far up the chain even if they do appear as a big number somewhere.
Also, by 20 employees or computers at the latest, someone in charge of IT (a sysadmin or IT department) would decide to use a software asset management tool (aka a software inventory system) to automatically track, roll out, uninstall, and monitor vetted software. Anything else is just unprofessional.
The business world is full of things that "should" be a certain way, but aren't.
For the technology world, double the number.
We'd all like to live in some magical imaginary HN "should" world, but none of us do. We all work in companies that are flawed, and sometimes those flaws get in the way of our work.
If you've never run into this, buy a lottery ticket.
I'm in IT consulting. If most companies could even get the basic best practices of the field implemented, I wouldn't have a job.
Large companies do have ways to deal with this: they negotiate flat rates or true-up cadences with vendors. But now you’ve raised the bar way higher than “just use podman”.
OT because not docker
In the realm of artistic software (thinking Ableton Live and the Adobe suites), licensing hell is a real thing. In my recent experience it sorts the amateurs from the pros, in favour of the amateurs.
The time spent learning the closed system includes hours and dollars wrestling licenses. Pain++. Not just the unaffordable price, but time that could be spent creating
But for an aspiring professional it is the cost of entry. These tools must be mastered (if not paid for, ripping is common) as they have become a key part of the mandated tool chains, to the point of enshittification
The amateur is able to just get on with it, and produce what they want when they want with a dizzying array of possible tools
I don't quite get this argument. How is that different from any piece of software that an employee will want in any sort of enterprise setting? From an IT operations perspective it is true that Docker Desktop on Windows is a little more annoying than something like an Adobe product, because Docker Desktop users need their local user to be part of their local docker security group on their specific machine. Aside from that I would argue that Docker Desktop is by far one of the easiest developer tools (and do note that I said developer tools) to track licenses for.
In non-enterprise setups I can see why it would be annoying but I suspect that's why it's free for companies with fewer than 250 people and 10 million in revenue.
Open source is different in exactly that, no procurement.
Finance makes procurement annoying so people are not motivated to go through it.
Not that this should be an argument for docker. The idea that having someone to call makes a piece of software "safer" is as ridiculous as it sounds. Especially if you've ever tried "calling" a company you buy 20 licenses from, and when I say call what I really mean is talking with a chatbot and then waiting a month for them to get back to you via email. But IT's gonna IT.
I could reverse engineer all the cool user land stuff it does to make things seamless ... but who has the time ;-)
I know a lot of Kubernetes fans migrate to podman, but it's a different story if you use dev stacks.
For use in dev with devcontainers, podman can't replace docker!
494 more comments available on Hacker News