Why Did Containers Happen?
Posted 3 months ago · Active 2 months ago
Source: buttondown.com · Tech story · High profile
Key topics: Containers, Docker, DevOps
The article discusses the history and evolution of containers, sparking a discussion on HN about the reasons behind their adoption and the impact on development workflows.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 9h after posting. Peak period: 104 comments in Day 1. Average per period: 26.7. Based on 160 loaded comments.
Key moments
- Story posted: Oct 13, 2025 at 7:37 AM EDT (3 months ago)
- First comment: Oct 13, 2025 at 4:08 PM EDT (9h after posting)
- Peak activity: 104 comments in Day 1, the hottest window of the conversation
- Latest activity: Oct 22, 2025 at 1:31 PM EDT (2 months ago)
ID: 45567241 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
In a past life, I remember having to juggle third-party repositories in order to get very specific versions of various services, which resulted in more than a few instances of hair-pull-inducing untangling of dependency weirdness.
This might be controversial, but I personally think that distro repos being the assumed first resort of software distribution on Linux has done untold amounts of damage to the software ecosystem on Linux. Containers, alongside Flatpak and Steam, are thankfully undoing the damage.
Linux is just a kernel - you need to ship your own userland with it. Therefore, early distros had to assemble an entire OS around this newfangled kernel from bits and pieces, and those bits and pieces needed a way to be installed and removed at will. Eventually this installation mechanism got scope creep, and suddenly things like FreeCiv and XBill are distributed using the same underlying system that bash and cron use.
This system of distro packaging might be good as a selling point for a distro - so people can brag that their distro comes with 10,000 packages or whatever. That said, I can think of no other operating system out there where the happiest path of releasing software is to simply release a tarball of the source, hope a distro maintainer packages it for you, hope they do it properly, and hope that nobody runs into a bug due to a newer or older version of a dependency you didn't test against.
Instead of designing a solution and perfecting it over time, it's endless tweaking where there's a new redesign every year. And you're supposed to use the exact same computer as the dev to get their code to work.
Kubernetes was also not the obvious winner in its time, with Mesos in particular seeming like a possible alternative back when it wasn't yet clear whether orchestration and resource management were even separate product categories.
I was at Red Hat at the time and my impression was they did a pretty good job of jumping onto where the community momentum was at the time - while doubtless also influencing that momentum.
Hard agree. After getting used to "system updates are... system updates; user software that's not part of the base system is managed by a separate package manager from system updates, doesn't need root, and approximately never breaks the base system (to include the graphical environment); development/project dependencies are not and should not be managed by either of those but through project-specific means" on macOS, the standard Linux "one package manager does everything" approach feels simply wrong.
This predates macOS. The mainframe folks did this separation eons ago (see IBM VM/CMS).
On Unix, it's mostly the result of getting rid of your sysadmins who actually had a clue. Even in Unix-land in the Bad Old Days(tm), we used to have "/usr/local" for a reason. You didn't want the system updating your Perl version and bringing everything to a screeching halt; you used the version of Perl in /usr/local that was under your control.
Yes, there was an idea of creating bespoke filesystems for apps - the custom mount structures that Plan9 had - and containers did something semi-parallel to that. But container images as read-only overlays (with a final read-write top overlay) feel like a very narrow craft. Plan9 had a lot more to it (everything as a file), and containers have a lot more to them (process, user, and net namespaces; container images as pre-assembled layers).
I can see some shared territory but these concerns feel mostly orthogonal. I could easily imagine a Plan9-like entity arising amid the containerized world: these aren't really in tension with each other. There's also a decade-and-a-half-plus gap between Plan9's heyday and the rise of containers.
https://en.wikipedia.org/wiki/Cgroups
(arguably FreeBSD jails and various mainframe operating systems preceded Linux containers but not by that name)
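For the curious, the raw interface behind that link is just a filesystem. A minimal sketch of driving cgroup v2 by hand, assuming it is mounted at /sys/fs/cgroup as on modern systemd distros ("demo" is an arbitrary group name); this is the kind of plumbing container runtimes hide from you:

```sh
# Create a cgroup, cap its memory, and move the current shell (and its children) into it.
sudo mkdir /sys/fs/cgroup/demo
echo "256M" | sudo tee /sys/fs/cgroup/demo/memory.max    # memory limit for the group
echo $$     | sudo tee /sys/fs/cgroup/demo/cgroup.procs  # enroll this shell in the group
```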
Other companies like Yahoo, Whatsapp, Netflix also followed interesting patterns of using strong understanding of how to be efficient on cheap hardware. Notably those three all were FreeBSD users at least in their early days.
Anyways digging it up, looks like the primary author was at Facebook for a year before cgroupsv2, redhat for three years before that, and Google before that. So... I don't know haha you'd have to ask him.
Some highlights:
- How far behind Kubernetes was at the time of launch. Docker Swarm was significantly simpler to use, and the Apache Mesos scheduler could already handle 10,000 nodes (and was being used by Netflix).
- Red Hat's early contributions were key, despite having the semi-competing project of OpenShift.
- The decision to open source K8S came down to one brief meeting at Google. Many of the senior engineers attended remotely from Seattle, not bothering to fly out because they thought their request to go open source was going to get shut down.
- A brief part at the end where Kelsey Hightower talks about what he thinks might come after Kubernetes. He mentions, and I thought this was very interesting... serverless making a return. It really seemed like serverless would be "the thing" in 2016-2017 but containers were too powerful. Maybe now with Knative or some future fusing of container orchestration + K8S?
[1] - https://youtu.be/BE77h7dmoQU
Cgroups and namespaces were added to Linux in an attempt to add security to a design (UNIX) which has a fundamentally poor approach to security (shared global namespace, users, etc.).
It's really not going all that well, and I hope something like SEL4 can replace Linux for cloud server workloads eventually. Most applications use almost none of the Linux kernel's features. We could have very secure, high performance web servers, which get capabilities to the network stack as initial arguments, and don't have access to anything more.
Drivers for virtual devices are simple, we don't need Linux's vast driver support for cloud VMs. We essentially need a virtual ethernet device driver for SEL4, a network stack that runs on SEL4, and a simple init process that loads the network stack with capabilities for the network device, and loads the application with a capability to the network stack. Make building an image for that as easy as compiling a binary, and you could eliminate maybe 10s of millions of lines of complexity from the deployment of most server applications. No Linux, no docker.
Because SEL4 is actually well designed, you can run a sub kernel as a process on SEL4 relatively easily. Tada, now you can get rid of K8s too.
All of the hassle of installing things was captured in the Dockerfile, and it ran in containers, so it was more reliable.
What did it let people do that they couldn't already do with static linking?
Good luck making _that_ static.
- things like "a network port" can also be a dependency, but can't be "linked". And so on for all sorts of software that expects particular files to be in particular places, or requires deploying multiple communicating executables
- Linux requires that you be root to open a port below 1024, a security disaster
- some dependencies really do not like being statically linked (this includes the GNU standard library!), for things like nsswitch
Oh, and the layer caching made iterative development with _very_ rapid cycles possible. That lowered the bar for entry and made it easier for everyone to get going.
But back to Dockerfiles. The configuration language used made it possible for anyone[tm] to build a container image, to ship a container image and to run the container. Fire-and-forget style. (Operating the things in practice and at any scale was left as an exercise for the reader.)
And because Anyone[tm] could do it, pretty much anyone did. For good and ill alike.
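As a concrete illustration of that build/ship/run loop (everything here is hypothetical - image name, registry, app file - and it's only a sketch of the workflow, not anyone's actual setup):

```sh
cat > Dockerfile <<'EOF'
# Base layer; cached after the first build
FROM python:3.12-slim
# Dependency layer; only rebuilt when requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# App layer; rebuilt on every code change, but the layers above stay cached
COPY app.py .
CMD ["python", "app.py"]
EOF

docker build -t myorg/hello:1.0 .           # build: unchanged layers come from the cache
docker push myorg/hello:1.0                 # ship (assumes you are logged in to a registry)
docker run -d -p 80:8080 myorg/hello:1.0    # run, fire-and-forget
```

The layer cache is what makes the iteration loop fast, and the -p mapping also sidesteps the "root to bind ports below 1024" point above: the process inside listens on an unprivileged port and the runtime forwards the privileged one.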
Trying to get the versions of software you needed to use all running on the same server was an exercise in fiddling.
For me, it was avoiding dependencies and making it easier to deploy programs (not services) to different servers w/o needing to install dependencies.
I seem to remember a meetup in SF around 2013 where Docker (was it still dotCloud back then?) was describing that a primary use-case was easier deployment of services.
I'm sure for someone else, it was deployment/coordination of related services.
No more handmade scripts (or, worse, fully manual operations) - just stupid-simple Dockerfile scripts that any employee can understand and that groups can organize around.
docker-compose tying services into their own subnet was really a cool thing though
edit: came back in to add reference to LXC, it's been probably 2 decades since I've thought about that.
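A minimal sketch of that compose-level subnetting (service and image names are made up):

```sh
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: myorg/web:1.0
    ports:
      - "8080:8080"
  db:
    image: postgres:16
networks:
  default:
    driver: bridge   # Compose puts both services on their own isolated bridge network
EOF
docker compose up -d   # "web" can reach "db" by service name; nothing else on the host shares that subnet
```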
Was it always so hard to build the software you needed on a single system?
Ironically one of the arguments for dynamic linking is memory efficiency and small exec size (the other is around ease of centrally updating - say if you needed to eliminate a security bug).
X was (in)famous for memory use (see the chapter in the 'Unix-Hater's Handbook'); and shared libs was the consensus as to how to make the best of a difficult situation, see:
http://harmful.cat-v.org/software/dynamic-linking/
My preference is to bring dependencies in at the source code level and compile them into the app - it stops the massive library-level dependency trees (A needs part of B, but because some other part of B needs C, our dependency tool brings in C, and then D, and so on).
You could see that history repeat itself in Python - "pip install something" is way easier than messing with virtualenvs, and it even works pretty well as long as the number of packages is small, so it was the recommendation for a long time. Over time, as the number of Python apps on the same PC grew, and as libraries gained incompatible versions, people realized it's a much better idea to keep everything isolated in its own virtualenv, and now there are tools (like "uv" and "pipx") which make it trivial to do.
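The same dependency across three eras of Python packaging, as a rough sketch (the package names are just examples):

```sh
pip install requests                 # old advice: install into the shared interpreter; fine until two apps disagree

python -m venv .venv                 # the virtualenv era: isolate per project, activate by hand
. .venv/bin/activate
pip install requests

uv venv && uv pip install requests   # uv: same isolation, much less ceremony
pipx install httpie                  # pipx: each CLI tool gets its own hidden virtualenv
```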
But there are no default "virtualenvs" for a regular OS. Containers get closest. Nix tries hard, but it is facing an uphill battle - it goes very much "against the grain" of *nix systems, so the build script of every app used needs to be updated to work with it. Docker is just so much easier to use.
Golang has no dynamic code loading, so a lot of times it can be used without containers. But there is still global state (/etc/pki, /etc/timezone, mime.types , /usr/share/, random Linux tools the app might call on, etc...) so some people still package it in docker.
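A hedged sketch of how that leftover global state is often handled for a static Go binary (file and image names are illustrative; the CA bundle is assumed to already be in the build context, and timezone data can alternatively be embedded via Go's time/tzdata package):

```sh
CGO_ENABLED=0 go build -o server .    # fully static binary, no libc dependency

cat > Dockerfile <<'EOF'
FROM scratch
# The binary is static, but TLS verification still wants a CA bundle on disk;
# timezone data can be avoided by importing time/tzdata in the Go program instead.
COPY ca-certificates.crt /etc/ssl/certs/
COPY server /server
ENTRYPOINT ["/server"]
EOF
docker build -t myorg/server:1.0 .
```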
There were many use cases that rapidly emerged, but this eclipsed the rest.
Docker Hub then made it incredibly easy to find and distribute base images.
Google also made it “cool” by going big with it.
You’re talking about the needs it solves, but I think others were talking about the developments that made it possible.
My understanding is that Docker brought features to the server and desktop (dependency management, similarity of dev machine and production, etc), by building on top of namespacing capabilities of Linux with a usability layer on top.
Docker couldn’t have existed until those features were in place and once they existed it was an inevitability for them to be leveraged.
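To see how thin the kernel side of that is, here is roughly the un-sugared version using util-linux tools (a sketch only, assuming a root filesystem has already been unpacked at ./rootfs):

```sh
# New mount, UTS, IPC, network and PID namespaces, then chroot into a prepared rootfs.
sudo unshare --mount --uts --ipc --net --pid --fork \
     chroot ./rootfs /bin/sh
# Docker's "usability layer" is everything around this: images, layers, a registry
# protocol, port mapping, volumes, and a CLI that hides the plumbing.
```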
As for SEL4 - it is so elegant because it leaves all the difficult problems to the upper layer (coincidentally making them much more difficult).
I completely buy this as an explanation for why SEL4 for user environments hasn't (and probably will never) take off. But there's just not that much to do to connect a server application to the network, where it can access all of its resources. I think a better explanation for the lack of server side adoption is poor marketing, lack of good documentation, and no company selling support for it as a best practice.
Using sel4 on a server requires complex software development to produce an operating environment in which you can actually do anything.
I'm not speaking ill of sel4; I'm a huge fan, and things like its take-grant capability model are extremely interesting and valuable contributions.
It’s just not a usable standalone operating system. It’s a tool kit for purpose-built appliances, or something that you could, with an enormous amount of effort, build a complete operating system on top of.
I'd love to work on this. It'd be a fun problem!
Are there any projects like that going on? It feels like an obvious thing.
There is work within major consumer product companies building such things (either with sel4, or things based on sel4's ideas), and there's Genode on seL4.
If you only care to run stateless stuff that never writes anything (or at least never reads what it wrote) - it's comparatively easy. Still gotta deal with the thousand drivers - even on the server there is a lot of quirky stuff. But then you gotta run the database somewhere. And once you run a database you get all the problems Linus warned about. So you gotta run the database on a separate Linux box (at that point - what do you win vs. using Linux for everything?) or develop a new database tailored for SeL4 (and that's quite a bit more complex than an OS kernel). An elegant solution that only solves a narrow set of cases stands no chance against a crude solution that solves every case.
Also, with the current sexy containerized stacks it's easy to forget, but having the same kind of environment on the programmer's workbench and on the server was once Unix's main selling point. It's kinda expensive to support a separate abstraction stack for a single purpose.
True. Yet containers, or more precisely the immutable images endemic to container systems, directly address the hardest part of application security: the supply chain. Between the low effort and risk entailed when revising images to address endlessly emerging vulnerabilities, and enabling systematized auditing of immutable images, container images provide invaluable tools for security processes.
I know about Nix and other such approaches. I also know these are more fragile than the deeply self-contained nature of containers and their images. That's why containers and their image paradigm have won, despite all the well-meaning and admirable alternatives.
> A bypassable security mechanism is worse than useless
Also true. Yet this is orthogonal to the issues of supply chain management. If tomorrow, all the problems of escapable containers were somehow solved, whether by virtual machines on flawless hypervisors, or formally verified microkernels, or any other conceivable isolation mechanism, one would still need some means to manage the "content" of disparate applications, and container systems and the image paradigm would still be applicable.
Not really. People only use Nix because it doesn't randomly break, bitrot or require arcane system setup.
Unlike containers. You really need k8s or something like it to mould Docker containers into something manageable.
Containers is "run these random shell commands I copy pasted from the internet on top of this random OS image I pulled from the internet, #yolo".
People copy and paste nix code all the damn time because it's downright unparseable and inscrutable to the majority of users. Just import <module>, set some attrs and hit build. #yolo
You see the difference?
I'll stipulate this, despite knowing and appreciating the much greater value Nix has.
Then, the problem that Nix solves isn't something container users care about. At scale, the bare metal OS hosting containers is among the least of one's problems: typically a host image is some actively maintained, rigorously tested artifact provided by one of a couple different reliable sources. Ideally container users are indifferent to it, and they experience few if any surprises using them, including taking frequent updates to close vulnerabilities.
> Unlike containers.
Containers randomly break or bitrot? I've never encountered that view. They don't do this as far as I'm aware. Container images incorporate layer hashing that ensures integrity: they do not "bitrot." Image immutability delivers highly consistent behavior, as opposed to "randomly break." The self-contained nature of containers delivers high portability, despite differences in "system setup." I fail to find any agreement with these claims. Today, people think nothing of developing images using one set of tools (Docker or what have you) and running these images using entirely distinct runtimes (containerd, cloud service runtimes, etc.) This is taken entirely for granted, and it works well.
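To make the layer-hashing point concrete, the usual way to rely on it is to pin images by digest rather than by mutable tag (the digest below is a placeholder, not a real value):

```sh
docker pull nginx@sha256:<digest-of-the-exact-image-you-validated>   # content-addressed: you get exactly those bytes or an error
docker image inspect --format '{{.RepoDigests}}' nginx               # show the digest(s) the local image resolves to
```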
> Arcane system setup.
I don't know what is meant by "system setup" here, and "arcane" is subjective. What I do know is that the popular container systems are successfully and routinely used by neophytes, and that this doesn't happen when the "system setup" is too demanding and arcane. The other certainty I have is that whatever cost there is in acquiring the rather minimal knowledge needed to operate containers is vastly smaller than achieving the same ends without containers: the moment a system involves more than 2-3 runtime components, containers start paying off versus running the same components natively.
All the fucking time. Maybe it's possible to control your supply chain properly with containers, but nobody actually does that. 99% of the time they're pulling in some random "latest image" and applying bespoke shell commands on top.
> I don't know what is meant by "system setup" here, and "arcane" is subjective.
Clearly you've never debugged container network problems before.
They do. I assure you.
> they're pulling in some random "latest image"
Hardly random. Vendoring validated images from designated publishers into secured private repos is the first step on the supply chain road.
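A sketch of what that first step typically looks like (registry host and names are hypothetical):

```sh
docker pull nginx:1.27                                   # pull the upstream image you have validated
docker tag  nginx:1.27 registry.internal.example/base/nginx:1.27
docker push registry.internal.example/base/nginx:1.27    # builds now reference only the private copy
```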
> Clearly you've never debugged container network problems before.
Configuring Traefik ingress to forward TCP connections to pods was literally the last thing I did yesterday. At one time or another I've debugged all the container network problems for every widely used protocol in existence, and a number of not so common ones.
99 percent of Docker container users aren't on the supply chain road. They just want to "docker pull", #yolo.
> Configuring Traefik ingress to forward TCP connections to pods was literally the last thing I did yesterday
Docker does crazy insane modifications to your system settings behind the scenes. (Of which turning off the system firewall is the least crazy.)
Have fun when the magic Docker IP addresses happen to conflict with your corporate LAN.
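For what it's worth, the daemon can be told which ranges it is allowed to carve networks out of, so the default 172.17.0.0/16 bridge (or whatever Compose picks) doesn't collide with the corporate LAN. A sketch only; the ranges below are examples, not a recommendation:

```sh
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
EOF
sudo systemctl restart docker   # existing networks may need to be recreated to pick this up
```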
People keep saying that, but I do not get it. If an attack that would work without a container fails from inside a container (e.g. because it cannot read or write a particular file), it is better security.
> A bypassable security mechanism is worse than useless.
It needs the bypass to exist, and it needs an extra step to actually bypass it.
Any security mechanism (short of air gaps) might have a bypass.
> even if a malicious program is still able to detect it's not the true root.
Also true for security unless it can read or write to the true root.
I use containers as an extra security measure. i.e. as a way of reducing the chance that a compromise of one process will lead to a compromise of the rest of the system.
That said, I would guess that providers of container hosting must be fairly confident that they can keep them secure. I do not know what extra precautions they take though.
An escape from properly configured container/namespaces is a kernel 0day. Or a 0day in whatever protocol the isolated workload talks to the outside with.
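One version of what "properly configured" can look like in practice, using standard docker run hardening flags (the image name is a placeholder, and which flags apply depends on the workload):

```sh
docker run -d \
  --read-only --tmpfs /tmp \             # immutable rootfs, scratch space only in /tmp
  --cap-drop=ALL \                       # no Linux capabilities
  --security-opt no-new-privileges \     # block setuid-style privilege escalation
  --pids-limit 256 \                     # contain fork bombs
  --memory 512m --cpus 1 \               # resource ceilings via cgroups
  myorg/service:1.0
```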
Looks like the Nirvana fallacy.
k8s is about managing clusters of machines as if they were a single resource. Hence the name "borg" of its predecessor.
AFAIK, this isn't a use case handled by SEL4?
If you are already running SEL4 and you want to spawn an application that is totally isolated, or even an entire sub-kernel it's not different than spawning a process on UNIX. There is no need for the containerization plugins on SEL4. Additionally the isolation for the storage and networking plugins would be much better on SEL4, and wouldn't even really require additional specialized code. A reasonable init system would be all you need to wire up isolated components that provide storage and networking.
Kubernetes is seen as this complicated and impressive piece of software, but it's only impressive given the complexity of the APIs it is built on. Providing K8s functionality on top of SEL4 would be trivial in comparison.
There are other reasons it's impressive. Its API and core design is incredibly well-designed and general, something many other projects could and should learn from.
But the fact that it's impressive because of the complexity of the APIs it's built on is certainly a big part of its value. It means you can use a common declarative definition to define and deploy entire distributed systems, across large clusters, handling everything from ingress via load balancers to scaling and dynamic provisioning at the node level. It's essentially a high-level abstraction for entire data centers.
seL4 overlaps with that in a pretty minimal way. Would it be better as underlying infrastructure than the Linux kernel? Perhaps, but "providing K8s functionality on top of SEL4" would require reimplementing much of what Linux and various systems on top of it currently provide. Hardly "trivial in comparison".
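For readers who haven't used it, the "common declarative definition" in its smallest form looks something like this (names are illustrative); the same YAML-driven model extends to ingress, storage, and autoscaling:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                      # desired state; the control loop keeps 3 pods running
  selector:
    matchLabels: { app: hello }
  template:
    metadata:
      labels: { app: hello }
    spec:
      containers:
      - name: hello
        image: myorg/hello:1.0
        ports:
        - containerPort: 8080
EOF
```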
Containerization is after all, as you mentioned, a plugin. As is network behavior. These are things that k8s doesn't have a strong opinion on beyond compliance with the required interface. You can switch container plugin and barely notice the difference. The job of k8s is to have control loops that manage fleets of resources.
That's why containers are called "containers". They're for shipping services around like containers on boats. Isolation, especially security isolation, isn't (or at least wasn't originally) the main idea.
You manage a fleet of machines and a fleet of apps. k8s is what orchestrates that. SEL4 is a microkernel -- it runs on a single machine. From the point of view of k8s, a single machine is disposable. From the point of view of SEL4, the machine is its whole world.
So while I see your point that SEL4 could be used on k8s nodes, it performs a very different function than k8s.
As others mentioned containers aren’t about security either, I think you’re rather missing the whole purpose of the cloud native ecosystem here.
Namespaces were not an attempt to add security, but just grew out of work to make interfaces more flexible, like bind mounts. And Unix security is fundamentally good, not having namespaces isn't much of a point against it in the first place, but now it does have them.
And it's going pretty well indeed. All applications use many kernel features, and we do have very secure high performance web and other servers.
L4 systems have been around for as long as Linux, and SEL4 in particular for 2 decades. They haven't moved the needle much so I'd say it's not really going all that well for them so far. SEL4 is a great project that has done some important things don't get me wrong, but it doesn't seem to be a unix replacement poised for a coup.
L. Ron Hubbard is fundamentally good!
I kid, but seriously, good how? Because it ensures cybersecurity engineers will always have a job?
seL4 is not the final answer, but something close to it absolutely will be. Capability-based security is an irreducible concept at a mathematical level, meaning you can’t do better than it, at best you can match it, and its certainly not matched by anything else we’ve discovered in this space.
Good because it is simple both in terms of understanding it and implementing it, and sufficient in a lot of cases.
> seL4 is not the final answer, but something close to it absolutely will be. Capability-based security is an irreducible concept at a mathematical level, meaning you can’t do better than it, at best you can match it, and its certainly not matched by anything else we’ve discovered in this space.
Security is not pure math though, it's systems and people and systems of people.
That might be why Docker was originally implemented, but why it "happened" is because everyone wanted to deploy Python and pre-uv Python package management sucks so bad that Docker was the least bad way to do that. Even pre-kubernetes, most people using Docker weren't using it for sandboxing, they were using it as fat jars for Python.
Even with Java, where fat JARs exist, you at some point end up with OS-level dependencies like "this logging thing needs to be set up, these dirs need these rights, and this user needs to be in place", etc. Nowadays you can shove all that into a container.
I think the whole thing has been levels of abstraction around a runtime environment.
in the beginning we had the filesystem. We had /usr/bin, /usr/local/bin, etc.
then chroot where we could run an environment
then cgroups/namespaces
then docker build and docker run
then swarm/k8s/etc
I think there was a parallel evolution around administration, like configure/make, then apt/yum/pacman, then ansible/puppet/chef and then finally dockerfile/yaml
I've always wondered if there could be something built in to accomplish something similar. As long as I never have to worry about configure snippets that deal with Sun's CC compiler from the 1990s, or with gcc-3, I will be happy.
In terms of security, I think even more secure than SEL4 or containers or VMs would be having a separate physical server for each application and not sharing CPUs or memory at all. Then you have a security boundary between applications that is based in physics.
Of course, that is too expensive for most business use cases, which is why people do not use it. I think using SEL4 will run into the same problem - you will get worse utilization out of the server compared to containers, so it is more expensive for business use cases and not attractive. If we want something to replace containers that thing would have to be both cheaper and more secure. And I'm not sure what that would be
Wasn't this what unikernels were attempting a decade ago? I always thought they were neat but they never really took off.
I would totally be onboard with moving to seL4 for most cloud applications. I think Linux would be nearly impossible to get into a formally-verified state like seL4, and as you said most cloud stuff doesn't need most of the features of Linux.
Also seL4 is just cool.
For anyone doing deployments in managed languages, regardless of whether they are AOT-compiled or use a JIT, the underlying operating system is mostly irrelevant, with the exception of some corner cases regarding performance tweaks and such.
Even if those type 1 hypervisors happen to depend on Linux kernel for their implementation, it is pretty much transparent when using something like Vercel, or Lambda.
Docker's claim to fame was connecting that existing stuff with layered filesystem images and packaging based off that. Docker even started off using LXC to cover those container runtime parts.
Unix was not designed to be convenient for VPS providers. It was designed to allow a single computer to serve an entire floor of a single company. The security approach is appropriate for the deployment strategy.
As it did with all OSes, the Internet showed up, and promptly ruined everything.
If the "fundamentally poor approach to security" is a shared global namespace, why are namespaces not just a fix that means the fundamental approach to security is no longer poor?
Systemd does this and it is widely used.
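A sketch of the kind of per-service confinement systemd offers (the directive names are real systemd options; the unit, binary path, and service name are hypothetical):

```sh
sudo tee /etc/systemd/system/myapp.service <<'EOF'
[Service]
ExecStart=/opt/myapp/bin/server
DynamicUser=yes                               # ephemeral unprivileged user
ProtectSystem=strict                          # read-only /usr, /etc, ...
ProtectHome=yes
PrivateTmp=yes                                # private /tmp namespace
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
```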
https://archive.kernel.org/oldwiki/tiny.wiki.kernel.org/
Why not 9front and diskless Linux microVMs, Firecracker/Kata-containers style?
Filesystem and process isolation in one, on an OS that's smaller than K8s?
Keep it simple and Unixy. Keep the existing binaries. Keep plain-text config and repos and images. Just replace the bottom layer of the stack, and migrate stuff to the host OS as and when it's convenient.
Namespacing of all resources (no restriction to a shared global namespace) was actually taken directly from plan9. It does enable better security but it's about more than that; it also sets up a principled foundation for distributed compute. You can see this in how containerization enables the low-level layers of something like k8s - setting aside for the sake of argument the whole higher-level adaptive deployment and management that it's actually most well-known for.
They did have what you could call userspace container management via application servers, though.
Java at least uses binary dependencies very rarely, and they usually have the decency of bundling the compiled dependencies... But it seems Java and Go just saw the writing on the wall and mostly reimplement everything. I did have problems with the Snappy compression in the Kafka libraries, though, for instance.
If you look at most projects in the C world, they only provide the list of dependencies and some build config Makefile/Meson/Cmake/... But the latter is more of a sample and if your platform is not common or differs from the developer, you have the option to modify it (which is what most distros and port systems do).
But good luck doing that with the sprawling tree of modern packages managers. Where there's multiple copies of the same libraries inside the same project just because.
The switch was often much more than a minor upgrade, because it often made splitting up monoliths possible in ways that the Java ecosystem itself didn't have good support for.
The reason Spring includes those libraries is partly historical - Spring is old, and dates from the applications server days. Newer frameworks like Micronaut and Quarkus use more focused and performant libraries like Netty, Vert.x, and Undertow instead.
Unless you just mean that using Kubernetes at all is replicating application servers, which was my point. Kubernetes makes language-specific application servers like Wildfly/JBoss or Websphere obsolete, and is much more powerful, generic, and an improvement in just about every respect.
As for the question, I mean the startups trying to sell the idea of using WebAssembly-based pods as the next big thing.
It was pretty old, and required a very specific version of java, not available on modern systems. Plus some config files in global locations.
Packaging it in the docker container made it so much easier to use.
Basically the Linux world was actively designed to make apps difficult to distribute.
For a sysadmin, distros like Debian were an innovative godsend for installing and patching stuff. Especially compared to the hell that was Windows server sysadmin back in the 90s.
The developer oriented language ecosystem dependency explosion was a more recent thing. When the core distros started, apps were distributed as tarballs of source code. The distros were the next step in distribution - hence the name.
You should be installing it from a distro package!!
What about security updates of dependencies??
And so on. Docker basically overrules these impractical ideas.
You make software harder to distribute (so inconvenient for developers and distributors) but gain better security updates and lower resource usage.
Containers are a related (as the GP comment says) thing, but offer a different and varied set of tradeoffs.
Those tradeoffs also depend on what you are using containers for. Scaling by deploying large numbers of containers on a cloud provider? Applications with bundled dependencies on the same physical server? As a way of providing a uniform development environment?
Those are all pretty much the same thing. I want to distribute programs and have them work reliably. Think about how they would work if Linux apps were portable as standard:
> Scaling by deploying large numbers of containers on a cloud providers?
You would just rsync your deployment and run it (sketched after this list).
> Applications with bundled dependencies on the same physical server?
Just unzip each app in its own folder.
> As a way of providing a uniform development environment?
Just provide a zip with all the required development tools.
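In the world being imagined, deployment really would collapse to something like this (host and paths are hypothetical):

```sh
rsync -az ./dist/ deploy@host1:/srv/myapp/     # copy the self-contained app directory
ssh deploy@host1 '/srv/myapp/bin/start'        # run it; another app is just another directory under /srv
```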
Yes, they are very similar in some ways, but the tradeoffs (compared to using containers) would be very different.
> You would just rsync your deployment and run it.
If you are scaling horizontally and not using containers you are already probably automating provisioning and maintenance of VMs, so you can just use the same tools to automate deployment. You would also be running one application per VM so you do not need to worry about portability.
> Just unzip each app in its own folder.
What is stopping people from doing this? You can use an existing system like Appimage, or write a windows like installer (Komodo used to have one). The main barrier as far as I can see is that users do not like it.
> Just provide a zip with all the required development tools.
Versus a container, you still have to configure it, and isolation can be nice to have in a development environment.
Versus installing what you need with a package manager, it would be less hassle in some cases, but this is a problem that is largely solved by things like language package managers.
Most Linux apps do not bundle their dependencies, don't provide binary downloads, and aren't portable (they use absolute paths). Some dependencies are especially awkward like glibc and Python.
It is improving with programs written in Rust and Go which tend to a) be statically linked, and b) are more modern so they are less likely to make the mistake of using absolute paths.
Incidentally this is also the reason Nix has to install everything globally in a single root-owned directory.
> The main barrier as far as I can see is that users do not like it.
I don't think so. They've never been given the option.
If you are considering bare-metal servers with deb files, you compare them to bare-metal servers with docker containers. And in the latter case, you immediately get all the compatibility, reproducibility, ease of deployment, ease of testing, etc... and there is no need for a single YAML file.
> If you need a reliable deployment without catching 500 errors from Docker Hub, then you need a local registry.
Yes, and with debs you need a local apt repository
> If you need a secure system without accumulating tons of CVEs in your base images, then you need to rebuild your images regularly, so you need a build pipeline.
Presumably you were building your deb with a build pipeline as well, so the only real change is that the pipeline now needs a timer as well, not just "on demand".
> To reliably automate image updates, you need an orchestrator or switch to podman with `podman auto-update` because Docker can't replace a container with a new image in place.
With debs you only have automatic-updates, which is not sufficient for deployments. So either way, you need _some_ system to deploy the images and monitor the servers.
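What the podman route looks like, as a rough sketch (the image name is illustrative; it assumes the container is managed by systemd, e.g. via a generated unit or a Quadlet):

```sh
sudo podman run -d --name web \
  --label io.containers.autoupdate=registry \           # opt this container into auto-update
  registry.internal.example/web:stable
sudo systemctl enable --now podman-auto-update.timer    # periodic check shipped with podman
sudo podman auto-update --dry-run                       # show what would be pulled and restarted
```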
> To keep your service running, you again need an orchestrator because Docker somehow occasionally fails to start containers even with --restart=always. If you need dependencies between services, you need at least Docker Compose and YAML or a full orchestrator, or wrap each service in a systemd unit and switch all restart policies to systemd.
deb files have the same problems, but here dockerfiles have an actual advantage: if you run supervisor _inside_ docker, then you can actually debug this locally on your machine!
No more "we use fancy systemd / ansible setups for prod, but on dev machines here are some junky shell scripts" - you can poke the things locally.
> And you need a log collection service because the default Docker driver sucks and blocks on log writes or drops messages otherwise. This is just the minimum for production use.
What about deb files? I remember the bad old pre-systemd days where each app had to do its own logs, as well as handle rotation, or log directly to a third-party collection server. If that's your cup of tea, you can totally do this in the docker world as well; no changes for you here!
With systemd's arrival, the logs actually got much better, so it's feasible to use systemd's logs. But here is great news: docker has a "journald" driver, so it can send its logs to systemd as well. So there is feature parity there too.
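That parity in practice (the container name is illustrative):

```sh
docker run -d --log-driver=journald --name web myorg/web:1.0
journalctl CONTAINER_NAME=web -f   # container logs land in the same journal as everything else
```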
The key point is there are all sorts of so-called "best practices" and new microservice-y way of doing things, but they are all optional. If you don't like them, you are totally free to use traditional methods with Docker! You still get to keep your automation, but you no longer have to worry about your entire infra breaking, with no easy revert button, because your upstream released broken package.
It has "too many experts", meaning that everyone has too much decision making power to force their own tiny variations into existing tools. So you end up needing 5+ different Python versions spread all over the file system just to run basic programs.
>hosting provider's ... desire to establish a clean, clear-cut separation between their own services and those of their customers
https://en.wikipedia.org/wiki/FreeBSD_jail
My guess is that Linux had been getting requests from various orgs for a while, so in true Linux fashion, we got a few different container-type methods years later.
I still think Jails are the best of the bunch, but they can be a bit hard to set up. Once set up, Jails work great.
So here we are :)
120 more comments available on Hacker News