Adventures in upgrading Proxmox
Mood: thoughtful
Sentiment: neutral
Category: tech
Key topics: Proxmox, virtualization, system administration
The author shares their experience upgrading Proxmox, a virtualization platform, likely discussing challenges and lessons learned.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion. First comment: 42m after posting. Peak period: 15 comments in Hour 2. Average per period: 8.7. Based on 26 loaded comments.
Key moments
1. Story posted: 11/19/2025, 4:40:24 PM (2h ago)
2. First comment: 11/19/2025, 5:22:39 PM (42m after posting)
3. Peak activity: 15 comments in Hour 2 (hottest window of the conversation)
4. Latest activity: 11/19/2025, 7:22:41 PM (5m ago)
Knowing when to use a VM and when to use a container is sometimes an opaque problem.
This is one of those cases where a VM is a much better choice.
It has improved with newer kernel and Docker versions, but there were problems (overlayfs/ZFS incompatibilities, UID mapping problems in Docker images, capabilities requested by Docker not available in LXC, rootless Docker problems, ...)
> OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
Depending on your hardware platform, there could be valid reasons why you wouldn't want to run Frigate NVR in a VM. Frigate NVR works best when it can leverage the GPU for video transcoding and a TPU for object detection. If you pass the GPU to the VM, then the Proxmox host no longer has video output (without a secondary GPU).
Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM. This is a non-starter for systems where there is no extra PCIe slot for a graphics card, such as the many power-efficient Intel N100 systems that do a good job running Frigate.
The reason why you'd put Docker into LXC is that's the best supported way to get docker engine working on Proxmox without a VM. You'd want to do it on Proxmox because it brings other benefits like a familiar interface, clustering, Proxmox Backup Server, and a great community. You'd want to run Frigate NVR within Docker because it is the best supported way to run it.
At least, this was the case in Proxmox 8. I haven't checked what advancements in Proxmox 9 may have changed this.
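As a sketch of what that setup involves (the container ID is a placeholder), Docker inside an LXC container on Proxmox typically requires enabling nesting and keyctl in the container's config:

```
# /etc/pve/lxc/101.conf  -- 101 is a placeholder container ID
features: keyctl=1,nesting=1
unprivileged: 1
```

The same options can be toggled in the Proxmox web UI under the container's Options > Features.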
This is changing, specifically on QEMU with virtio-gpu, virgl, and Venus.
Virgl exposes a virtualized GPU in the guest that serializes OpenGL commands and sends them to the host for rendering. Venus is similar, but exposes Vulkan in the guest. Both of these work without dedicating the host GPU to the guest, it gives mediated access to the GPU without any specific hardware.
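A minimal sketch of booting a guest with mediated (non-passthrough) GPU access via virgl; the disk image name is a placeholder, and exact flags vary by QEMU version:

```shell
# virtio-vga-gl exposes a virtualized GPU to the guest; gl=on enables
# host-side OpenGL rendering, so the host keeps its own video output.
qemu-system-x86_64 \
  -enable-kvm -m 4G \
  -device virtio-vga-gl \
  -display gtk,gl=on \
  -drive file=guest.qcow2,format=qcow2
```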
There's also another path known as vDRM/host native context that proxies the direct rendering manager (DRM) uAPI from the guest to the host over virtio-gpu, which allows the guest to use the native mesa driver for lower overhead compared to virgl/Venus. This does, however, require a small amount of code to support per driver in virglrenderer. There are patches that have been on the QEMU mailing list to add this since earlier this year, while crosvm already supports it.
I'm not exactly sure how the outcome would have changed here though.
It's blazing fast and I cut down around 60% of my RAM consumption. It's easy to manage, boots instantly, allows for more elastic separation while still using docker and/or k8s. I love that it allows me to keep using Proxmox Backup Server.
I'm postponing homelab upgrade for a few years thanks to that.
https://pve.proxmox.com/wiki/FAQ
> While it can be convenient to run “Application Containers” directly as Proxmox Containers, doing so is currently a tech preview. For use cases requiring container orchestration or live migration, it is still recommended to run them inside a Proxmox QEMU virtual machine.
Lesson in here somewhere. Something about a toaster representing the local intelligence maxima?
For even n>2, you define a tie-breaker node in advance, and only the partition connected to that node can reach quorum at 50%. For n=2, going from no quorum to quorum requires both nodes, but losing a node doesn't lose quorum; when you lose a node you stop, shoot the other node, and continue. In a split brain, the fastest draw wins the shootout.
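The tie-breaker rule can be sketched as a small decision function (a hypothetical illustration, not corosync's actual implementation):

```python
def has_quorum(alive, n, tiebreaker):
    """Decide whether a partition of `alive` node ids (out of n total
    cluster nodes) may continue. `tiebreaker` is a node id agreed in
    advance; it decides exact 50/50 splits."""
    if len(alive) * 2 > n:        # strict majority always wins
        return True
    if len(alive) * 2 == n:       # exact 50% split: tie-breaker decides
        return tiebreaker in alive
    return False                  # minority partition must stop
```

With n=4 and tie-breaker node 1, a 2/2 split leaves only the partition holding node 1 in quorum; with n=2, the partition holding the tie-breaker keeps running after the other node is fenced.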
Most homelabbers ignore recommendations because if anything breaks nothing of corporate value is lost and no one's gonna lose their job.
Has anyone here found a stable way to handle USB / PCIe device identity changes across updates or reboots?
That part always feels like the weak point in otherwise solid Proxmox setups.
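One common approach for USB devices is to pin a stable symlink by vendor/product attributes with a udev rule, so the device path survives reboots and re-enumeration. A sketch (the IDs and symlink name are placeholders; check yours with `lsusb`):

```
# /etc/udev/rules.d/99-usb-stable.rules -- example IDs, substitute your own
SUBSYSTEM=="usb", ATTRS{idVendor}=="1a6e", ATTRS{idProduct}=="089a", SYMLINK+="mydevice", MODE="0666"
```

You can then reference `/dev/mydevice` in VM or container configs instead of a bus address that may change.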
Any Debian system (Proxmox is based on Debian) would have broken in a similar (if not the exact same) way.
* https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
And "Known Issues & Breaking Changes (9.1)":
Proxmox virtual environment 9.1 available - https://news.ycombinator.com/item?id=45980005 - Nov 2025 (56 comments)