About Containers and VMs
Key topics
The discussion revolves around the differences between containers and VMs, with the submission linking to a documentation page on Incus, a container management project, and commenters sharing their insights and concerns about the technology.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3d after posting
- Peak period: 49 comments (Day 3)
- Avg / period: 12.2
Based on 61 loaded comments
Key moments
- Story posted: Aug 25, 2025 at 3:58 AM EDT (4 months ago)
- First comment: Aug 27, 2025 at 9:52 PM EDT (3d after posting)
- Peak activity: 49 comments in Day 3 (hottest window of the conversation)
- Latest activity: Sep 7, 2025 at 4:13 PM EDT (4 months ago)
"Can only host Linux" -- Windows Containers are a thing too: https://learn.microsoft.com/en-us/virtualization/windowscont...
"Can host a single app" -- not true either. It's just bad practice to host multiple apps in a single container, but it's definitely possible.
IMHO it's not very nice to use the generic-sounding "linuxcontainers.org" domain exclusively for LXC-related content there.
Not sure about the one-app thing, but that's the general design of those as well, I suppose.
The Docker folks could have done their work under this umbrella and (maybe for good reasons) chose not to. For later container runtimes, idk the story.
But this project/community definitely laid the groundwork for all of those later Linux container runtimes.
windows containers only run on windows hosts.
when you run a linux container on a windows host, you're actually running a linux container inside of a linux vm on top of a windows host.
containers share the host operating system's kernel. it is impossible for a linux container (which is just a linux process) to execute and share the windows kernel. the reverse is true, a windows container (which is just a process) cannot execute and share the linux kernel
the article is correct, linux containers can only execute on a linux host
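A quick way to see the kernel sharing for yourself, assuming Docker and any small image such as alpine:

    # on a Linux host, both commands report the same kernel release,
    # because the container is just a process using the host's kernel
    uname -r
    docker run --rm alpine uname -r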
The NT kernel originally had the Microsoft POSIX subsystem[0], which was discontinued and replaced with Windows Services for UNIX[1], which was then replaced with Windows Subsystem for Linux[2]. WSL has had two versions:
WSL 1 implemented a subset of Linux syscalls directly in the Windows kernel. It was discontinued and replaced with WSL 2.
WSL 2 is running, you guessed it, a Linux VM[3].
> The original version, WSL 1, differs significantly from the second major version, WSL 2. WSL 1 (released August 2, 2016), acted as a compatibility layer for running Linux binary executables (in ELF format) by implementing Linux system calls in the Windows kernel. WSL 2 (announced May 2019), introduced a real Linux kernel – a managed virtual machine (via Hyper-V) that implements the full Linux kernel. As a result, WSL 2 is compatible with more Linux binaries as not all system calls were implemented in WSL 1.
> Version 2 introduces changes in the architecture. Microsoft has opted for virtualization through a highly optimized subset of Hyper-V features, in order to run the kernel and distributions
> The distribution installation resides inside an ext4-formatted filesystem inside a virtual disk, and the host file system is transparently accessible through the 9P protocol
When you run linux containers on a windows host, you're running those containers inside of a linux vm.
0: https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem
1: https://en.wikipedia.org/wiki/Windows_Services_for_UNIX
2: https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux
3: https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux#WS...
I can't find great docs for it, but it's in last year's release notes: https://linuxcontainers.org/incus/news/2024_07_12_05_07.html
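From those release notes, the rough shape of it is adding an OCI registry as a remote and launching from it; treat the exact flags below as an approximation and check `incus remote add --help` on your version:

    # add Docker Hub as an OCI remote, then run an application container from it
    incus remote add docker https://docker.io --protocol=oci
    incus launch docker:nginx web
    incus list web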
You can 100% host "system containers" on Docker and you can host "applications" on LXC.
Like if I want an entire OS with its own init system and users and so on and so forth, I can do it with OCI images.
In fact I use it every single day with distrobox on top of Podman using OCI container images.
And it works a hell of a lot better than if I tried to do it on LXC.
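For anyone curious, the distrobox flow is roughly this (the image is just an example):

    # create a full Ubuntu userland on top of Podman, sharing your $HOME
    distrobox create --name dev --image docker.io/library/ubuntu:24.04
    distrobox enter dev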
Incus, which is named after the Cumulonimbus incus or anvil cloud, started as a community fork of Canonical's LXD following Canonical's takeover of the LXD project from the Linux Containers community.
The project was then adopted by the Linux Containers community, taking back the spot left empty by LXD's departure.
Incus is a true open source community project, free of any CLA and remains released under the Apache 2.0 license. It's maintained by the same team of developers that first created LXD.
LXD users wishing to migrate to Incus can easily do so through a migration tool called lxd-to-incus.
https://github.com/lxc/incus
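Roughly what the migration looks like, with the exact install steps varying by distro (see the Incus docs):

    # with Incus installed and running alongside the existing LXD:
    sudo lxd-to-incus     # moves instances, storage pools, networks and profiles over
    sudo incus list       # sanity-check everything arrived before removing LXD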
As the others have mentioned, Incus is the community fork led by former members of the LXD team.
I prefer Incus, because you can't do ad-hoc patching with Docker. Instead you have to rebuild the images, and that becomes a hassle quickly in a homelab setting. Incus has a VM feel while having Docker's management UX.
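Concretely, the day-to-day difference looks something like this (instance and image names made up):

    # Incus system container: patch it in place, like a lightweight VM
    incus exec web01 -- apt-get update
    incus exec web01 -- apt-get -y upgrade

    # typical Docker workflow: bake a new image and replace the container
    docker build -t myapp:patched .
    docker stop myapp && docker rm myapp
    docker run -d --name myapp myapp:patched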
As an engineer this page has a real "trust me bro" feel to it. Maybe fine as a marketing and product positioning thing, but not interesting for HN.
As a stack, Incus has been exceptional; it has largely replaced Proxmox and Podman Quadlets for me. For context, I homelab, so I cannot generalize my claim to SMB or enterprise.
But the documentation has been very end-user oriented. Information on specifics like seccomp, as you mentioned, is only discoverable with the search bar, and that leads to various disparate locations; and that isn't taking into account that some of the more nitty-gritty information isn't on the Incus portion of linuxcontainers.org. See the LXC security page, for example: https://linuxcontainers.org/lxc/security/
If privilege isolation is a priority but you want to use containers, gVisor and Firecracker are way ahead of anything else. The Linux kernel API has proved to be very hard to secure, and not for lack of trying.
Containers just leverage existing Linux namespace isolation techniques to isolate applications.
A good way to think about it is that they act like blinders on a horse. If applications can't "see stuff" or reference items outside of the container then they don't know it exists and don't know how to interact with it.
"application containers" can take advantage of more then just namespaces to isolate applications, such as running them as unprivileged users inside the container's context and thus limiting them from the sort of kernel features that get exposed inside the containers. Or cgroups to limit resource usage and other smaller things like that.
Regardless "Security" and "Containers" really shouldn't be written about in the same paragraph without MAC framework like SELinux in place or additional isolation techniques like VMs.
Although VMs are a lot more like containers then people realize.
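To make the namespace/cgroup point above concrete, you can poke at the "blinders" with plain util-linux and systemd, no container runtime involved (a rough sketch; ./my-app is a placeholder):

    # new PID and mount namespaces: the shell only "sees" its own process tree
    sudo unshare --pid --fork --mount-proc /bin/sh
    ps aux    # inside: just this shell and ps

    # resource limits come from cgroups rather than namespaces, e.g. via systemd-run
    systemd-run --scope -p MemoryMax=256M -p CPUQuota=50% ./my-app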
Incus and LXC internally use umoci to manipulate the OCI tarball to conform to how LXC runs containers.
See:
- https://umo.ci/
- https://github.com/lxc/lxc/blob/lxc-4.0.2/templates/lxc-oci....
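For reference, the umoci side of that boils down to unpacking an OCI image into a plain rootfs that LXC can point at (paths here are illustrative):

    # pull an image into an OCI layout, then unpack it into a runtime bundle
    skopeo copy docker://docker.io/library/alpine:latest oci:alpine:latest
    sudo umoci unpack --image alpine:latest alpine-bundle
    ls alpine-bundle/rootfs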
Any shared resource between containers or the kernel itself is an attack surface.
Both options have a very wide attack surface - the kernel api.
Nothing really beats virtualization in security: the surface shrinks to pretty much just the virtualization bits in the kernel and some user-space bits in the VMM.
My understanding with Incus (the OP link) is that it's the same virtualization system, so there is no real difference, security-wise, between the two.
The question then becomes can they get out from under the virtualization and can they get access to other machines, containers, etc.
Docker's virtualization system has been very weak security-wise. So a system container would be more secure than Docker's virtualization system.
I still want capsicum to give me sane defaults, so the incentive for sandbox security theater goes away.
It is application containers which maybe should be replaced by better kernel security, not system containers.
This is not a big change implementation-wise, but it completely changes the programming model. Instead of dreaming up endless new sandboxing strategies, we just give processes exactly what they need, no more, no less.
In my experience it has gotta be Docker. For these reasons:
1. I said so
2. I'm the boss
3. Goto 1.