Adventures in upgrading Proxmox

38 points · 30 comments
Mood: thoughtful
Sentiment: neutral
Category: tech
Key topics: Proxmox, virtualization, system administration

The author shares their experience upgrading Proxmox, a virtualization platform, likely discussing challenges and lessons learned.

Snapshot generated from the HN discussion

Discussion Activity: Active discussion
First comment: 42m after posting
Peak period: 15 comments in Hour 2
Avg / period: 8.7
Comment distribution: 26 data points (based on 26 loaded comments)

Key moments

  1. Story posted: 11/19/2025, 4:40:24 PM (2h ago)
  2. First comment: 11/19/2025, 5:22:39 PM (42m after posting)
  3. Peak activity: 15 comments in Hour 2, the hottest window of the conversation
  4. Latest activity: 11/19/2025, 7:22:41 PM (5m ago)

Discussion (30 comments)
Showing 26 comments of 30
zer00eyz
1h ago
6 replies
> Running docker inside LXC is weird.

Knowing when to use a VM and when to use a container is sometimes an opaque problem.

This is one of those cases where a VM is a much better choice.

poisonborz
1h ago
1 reply
This seems like a niche issue; I've been running Docker in LXC for years with dozens of images without a problem.
SirMaster
6m ago
Guessing you are only running a single node though, not a cluster with HA and live migration and all that.
selectodude
1h ago
2 replies
Am I crazy or is converting a dockerfile into LXC something that should be possible?
tharos47
28m ago
It should in an ideal world, but Docker is a very leaky abstraction imho and you will run into a number of problems.

It has improved as of newer kernel and Docker versions, but there were problems (overlayfs/ZFS incompatibilities, UID mapping problems in Docker images, capabilities requested by Docker not available in LXC, rootless Docker problems, ...).
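
For reference, the usual starting point for Docker inside a Proxmox LXC container is enabling the nesting and keyctl features on the container. A minimal sketch follows; the container ID 101 is a placeholder, and unprivileged containers on ZFS-backed storage can still hit the overlayfs issues mentioned above:

    # Enable the container features Docker generally needs (run on the PVE host)
    pct set 101 --features nesting=1,keyctl=1
    # Equivalent line in /etc/pve/lxc/101.conf:
    #   features: nesting=1,keyctl=1
    pct reboot 101   # restart the container so the features take effect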

mzsl
1h ago
In the new Proxmox VE 9.1 release this should be possible, from the changelog:

> OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.

evanjrowley
1h ago
2 replies
I can't speak for the author, but they said they have a Coral TPU passed into the LXC & container, which I also have on my Proxmox setup for Frigate NVR.

Depending on your hardware platform, there could be valid reasons why you wouldn't want to run Frigate NVR in a VM. Frigate NVR works best when it can leverage the GPU for video transcoding and the TPU for object detection. If you pass the GPU to the VM, then the Proxmox host no longer has video output (without a secondary GPU).

Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM. This is a non-starter for systems where there is no extra PCIe slot for a graphics card, such as the many power-efficient Intel N100 systems that do a good job running Frigate.

The reason why you'd put Docker into LXC is that's the best supported way to get docker engine working on Proxmox without a VM. You'd want to do it on Proxmox because it brings other benefits like a familiar interface, clustering, Proxmox Backup Server, and a great community. You'd want to run Frigate NVR within Docker because it is the best supported way to run it.

At least, this was the case in Proxmox 8. I haven't checked what advancements in Proxmox 9 may have changed this.
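
For context on what sharing the iGPU and Coral with an LXC container (rather than dedicating them to a VM) looks like in practice, it usually comes down to a few raw LXC config lines like the following sketch. The container ID, USB bus/device path, and device numbers are placeholders for one particular machine, and unprivileged containers need matching permission tweaks on top:

    # /etc/pve/lxc/101.conf (illustrative; IDs and paths vary per host)
    # Intel iGPU render nodes for transcoding inside the container:
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    # Coral USB accelerator (character major 189 covers USB devices):
    lxc.cgroup2.devices.allow: c 189:* rwm
    lxc.mount.entry: /dev/bus/usb/003 dev/bus/usb/003 none bind,optional,create=dir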

roger_
1h ago
I have Frigate and a Coral USB running happily in a VM on an N97. GPU pass through is slightly annoying (need to use a custom ROM from here: https://github.com/LongQT-sea/intel-igpu-passthru). I think SRIOV works but haven’t tried. And Coral only works in USB3 mode if you pass the whole PCIe controller.
jakogut
16m ago
> Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM.

This is changing, specifically on QEMU with virtio-gpu, virgl, and Venus.

Virgl exposes a virtualized GPU in the guest that serializes OpenGL commands and sends them to the host for rendering. Venus is similar, but exposes Vulkan in the guest. Both of these work without dedicating the host GPU to the guest, it gives mediated access to the GPU without any specific hardware.

There's also another path known as vDRM/host native context that proxies the direct rendering manager (DRM) uAPI from the guest to the host over virtio-gpu, which allows the guest to use the native mesa driver for lower overhead compared to virgl/Venus. This does, however, require a small amount of code to support per driver in virglrenderer. There are patches that have been on the QEMU mailing list to add this since earlier this year, while crosvm already supports it.
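
For anyone who wants to experiment with the virgl path described above, a minimal sketch with stock QEMU (not Proxmox's own VM configuration) might look like this; guest.qcow2 is a placeholder disk image, and the guest needs Mesa's virtio-gpu driver:

    qemu-system-x86_64 \
      -enable-kvm -m 4G \
      -device virtio-vga-gl \
      -display gtk,gl=on \
      -drive file=guest.qcow2,if=virtio
    # Venus (Vulkan) and the vDRM/native-context path need newer QEMU and
    # virglrenderer builds plus extra device options; check your version's docs.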

0x1ch
1h ago
The way I understand it is that Docker with LXC allows for compute / resource sharing, whereas dedicated VMs will require passing through the entire discrete GPU. So, the VMs would require a total passthrough of those Zigbees, but a container wouldn't?

I'm not exactly sure how the outcome would have changed here though.

szszrk
1h ago
It's not always better. Docker on lxc has a lot of advantages. I would rather use plain lxc on production systems, but I've been homelabbing on lxc+docker for years.

It's blazing fast and I cut down around 60% of my RAM consumption. It's easy to manage, boots instantly, allows for more elastic separation while still using docker and/or k8s. I love that it allows me to keep using Proxmox Backup Server.

I'm postponing homelab upgrade for a few years thanks to that.

itopaloglu83
1h ago
Proxmox FAQ calls running Docker on LXC a tech preview and “kind of” recommends VMs. At the very bottom of the page.

https://pve.proxmox.com/wiki/FAQ

> While it can be convenient to run “Application Containers” directly as Proxmox Containers, doing so is currently a tech preview. For use cases requiring container orchestration or live migration, it is still recommended to run them inside a Proxmox QEMU virtual machine.

generalizations
1h ago
2 replies
> As an aside... Because one node didn't start, and my Proxmox cluster has only two nodes, it can't reach quorum, meaning I can't really make any changes to my other node, and I can't start any containers that are stopped. I've recently added another Zigbee dongle, that supports Thread, and it happens to share same VID:PID combo as the old dongle, so due to how these were mapped into guest OS, all my light switches stopped working. I had to fix the issue fast.

Lesson in here somewhere. Something about a toaster representing the local intelligence maxima?
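
For the immediate "the surviving node refuses to start anything" situation described in that quote, Proxmox does ship an escape hatch: temporarily lowering the expected vote count on the remaining node. It bypasses the protection quorum is there to provide, so treat it as a recovery step, not a configuration:

    # On the surviving node only: tell corosync to expect a single vote
    pvecm expected 1
    # Stopped containers/VMs on that node can then be started again until the
    # second node is back and quorum is restored normally.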

speed_spread
1h ago
3 replies
Lesson 1: clusters should have an odd number of nodes.
nightpool
59m ago
I really, really think there are better lessons there. Maybe more like "Lesson 0. Don't put distributed clusters in control of your light switches"
Spivak
45m ago
Two node / even node clusters can work fine.

For even n>2 you define a tie breaker node in advance and only the partition connected to that node can make a quorum at 50%. For n=2 going from no quorum to quorum requires both nodes but losing a node doesn't lose quorum, and when you lose a node you stop, shoot the other node, and continue. For split brain the fastest draw wins the shootout.
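
The mechanics described here map onto corosync's votequorum options. A sketch of the relevant section follows; Proxmox generates /etc/pve/corosync.conf itself, so this is illustrative rather than something to paste in verbatim:

    quorum {
      provider: corosync_votequorum
      two_node: 1          # 2-node special case: quorum survives losing one node
      wait_for_all: 1      # implied by two_node; both nodes must be seen at startup
      # For larger even-sized clusters, a deterministic tie breaker instead:
      # auto_tie_breaker: 1
      # auto_tie_breaker_node: lowest
    }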

znpy
27m ago
In fairness to proxmox, that's the recommended way.

Most homelabbers ignore recommendations because if anything breaks nothing of corporate value is lost and no one's gonna lose their job.

RedShift1
5m ago
The lesson is use dumb light switches and have a shotgun ready if the printer starts to act up.
danishSuri1994
50m ago
1 reply
It seems like a lot of the pain comes from the fact that hardware passthrough behaves so differently under LXC vs VMs.

Has anyone here found a stable way to handle USB / PCIe device identity changes across updates or reboots?

That part always feels like the weak point in otherwise solid Proxmox setups.

adamweld
36m ago
I just use UUID to make sure the mountpoint for each device stays the same across reboots.
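
For serial devices such as Zigbee dongles, the analogous trick is to match on something more specific than the VID:PID pair, for example the device serial number, and hand the guest a stable symlink. A sketch follows; the rule filename, IDs, and serial are placeholders for whatever udevadm info reports on the host:

    # /etc/udev/rules.d/99-zigbee.rules (illustrative values)
    SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", ATTRS{serial}=="ZB-COORD-01", SYMLINK+="zigbee-coordinator"
    # Then map /dev/zigbee-coordinator (or the existing /dev/serial/by-id/... link)
    # into the guest instead of a bare /dev/ttyUSB0.
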
znpy
26m ago
1 reply
Btw, the issues that the author encountered are not really with Proxmox itself but with an out-of-tree kernel driver they installed.

Any Debian system (Proxmox is based on Debian) would have broken in a similar (if not the exact same) way.

_rs
20m ago
Not to mention, Proxmox does not support running Docker in an LXC officially (of course many users still do it). It is not a supported configuration as of now.
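
When the culprit is a DKMS-managed out-of-tree module (a common setup for drivers like the Coral gasket module), the post-upgrade check is roughly the following; a sketch that assumes the matching kernel headers for the new kernel are already installed:

    # See which DKMS modules built (or failed to build) against the new kernel
    dkms status
    # Rebuild anything missing for the running kernel
    dkms autoinstall -k $(uname -r)
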
throw0101c
1h ago
See also "Upgrade from 8 to 9":

* https://pve.proxmox.com/wiki/Upgrade_from_8_to_9

And "Known Issues & Breaking Changes (9.1)":

* https://pve.proxmox.com/wiki/Roadmap#9.1-known-issues
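
For anyone who hasn't done a major PVE upgrade before, the broad shape of that wiki page is roughly the following. This is a sketch only; the repository changes and the known-issues list on the actual page are the part that matters:

    pve8to9 --full       # checklist script shipped with PVE 8.x; fix its warnings first
    # switch the Debian and Proxmox APT repositories from bookworm to trixie
    apt update
    apt dist-upgrade     # the actual 8 -> 9 upgrade
    # reboot into the new kernel afterwards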

dang
1h ago
Related ongoing thread:

Proxmox virtual environment 9.1 available - https://news.ycombinator.com/item?id=45980005 - Nov 2025 (56 comments)

evanjrowley
2h ago
I ran into the same issue over the weekend. The end-goal for my Proxmox setup is basically the same deployment you have. It's good to see the issue was addressed quickly by the community.
4fterd4rk
1h ago
Man Proxmox... I love it, I use it, but I swear there has to be a more straightforward way to implement this technology.

4 more comments available on Hacker News

ID: 45981666 · Type: story · Last synced: 11/19/2025, 7:26:56 PM
