Proxmox
I've wanted to build a three-node cluster out of some low-end hardware to expand my knowledge of it. Now that they have a datacenter controller, I'd need to build twice the nodes to try it properly.
Question: Does anyone know large businesses that utilize proxmox for datacenter operations?
You can migrate a three node cluster from VMware to PVE using the same hardware if you have a proper n+1 cluster.
iSCSI SANs don't (yet) do snapshots on PVE. I did take a three-node Dell cluster + flash SAN, plus an additional temporary box with rather a lot of RAM and disc (ZFS), pulled the SSDs out of the SAN, and whistled up a Ceph cluster on the hosts.
Another customer, I simply migrated their elderly VMware based cluster (a bit of a mess with an Equallogic) to a smart new set of HPEs with flash on board - Ceph cluster. That was about two years ago. I patched it today, as it turns out. Zero downtime.
PVE's high availability will auto evacuate a box when you put it into maintenance mode, so you get something akin to VMware's DRS out of the box, for free.
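For the curious, the maintenance-mode evacuation described above is driven from the CLI roughly like this (a sketch based on PVE 7.3+; the node name is a placeholder):

```shell
# Put a node into maintenance mode: the HA manager live-migrates its
# HA-managed guests to other cluster members before you patch/reboot it.
ha-manager crm-command node-maintenance enable pve-node2

# ...patch and reboot the node, then bring it back into rotation:
ha-manager crm-command node-maintenance disable pve-node2
```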
PDM is rather handy for the likes of me that have loads of disparate systems down the end of VPNs. You do have to take security rather seriously and it has things like MFA built in out of the box, as does PVE itself.
PVE and PDM support ACME too and have done for years. VMware ... doesn't.
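As a sketch of how little ceremony the ACME support needs (commands from PVE's `pvenode` tool; the email and domain are placeholders):

```shell
# One-time: register an ACME account (defaults to Let's Encrypt)
pvenode acme account register default admin@example.com

# Tell the node which domain its certificate should cover
pvenode config set --acme domains=pve1.example.com

# Order (and later renew) the certificate; the web UI picks it up automatically
pvenode acme cert order
```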
I could go on somewhat about what I think of "Enterprise" with a capital E software. I won't but I was a VMware fanboi for over 20 years. I put up with it now. I also look after quite a bit of Hyper-V (I was clearly a very bad boy in a former life).
There seems to be a mechanism for that since version 9.0 (August 2025), does that not do what you need?
I think the missing datacenter manager was causing a lot of hesitation for those who don't manage via automation.
You can set up a cluster to play with multiple nodes without the just-announced PDM 1.0. Or you can use PDM to manage three stand-alone nodes.
If you want to do both, perhaps a 3-node cluster plus a 1-node stand-alone with a PDM 'overlay'. So just a +1 versus a 2x.
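For reference, standing up the 3-node half of that is only a couple of commands per node (the cluster name and IP are placeholders):

```shell
# On the first node: create the cluster
pvecm create homelab

# On each additional node: join using the first node's address
pvecm add 192.0.2.10

# Verify quorum and membership from any member
pvecm status
```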
Why twice the nodes? The manager is optional -- but do you need multiples?
Also, when I looked into clusters (which I haven't implemented), I did see QDevices: a way to have a cheap and weak third node just to break ties.
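A sketch of the QDevice setup per the PVE docs (the IP is a placeholder for the cheap tie-breaker box, which only needs the qnetd daemon, not Proxmox itself):

```shell
# On the tie-breaker box (any small Debian machine):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# Then, from one cluster node, wire it up:
pvecm qdevice setup 192.0.2.50
```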
Inside the Modern Data Center! SuperClusters at Applied Digital https://youtu.be/zcwqTkbaZ0o?si=V2uPScjyN_sJcIh7&t=696
If it scales and the Proxmox team can grow their support organization, they'll have a real shot at capturing significant VMware market share.
But a great step nonetheless! Hope they grow too.
I (we) have several customers with PVE deployments and VPNs etc to manage them. PDM allows me to use a single pane of glass to manage the lot, with no loss of security. My PDM does need to be properly secured and I need to ensure that each customer is properly separated from each other (minimal IPSEC P2s and also firewall ingress and egress rules at all ends for good measure).
I should also point out that a vCenter is a Linux box with two Tomcat deployments and 15 virty discs. One Tomcat is the management and monitoring system for the actual vCenter effort. Each one is a monster. Then you slap on all the other bits and pieces - their SDN efforts have probably improved since I laughed at them 10+ years ago. VMware encourage you to run a separate management cluster, which is a bit crap for any org below, say, 5000 users.
PDM is just a controller of controllers and that's all you need. Small, fast and a bit lovely.
Proxmox Datacenter Manager = VMware vCenter
Proxmox VE = VMware ESXi
VE can be a cluster of nodes that you can still manage via the same UI. ESXi can't do that: the ESXi UI is a single node, and doesn't even cover everything a single node can do with vCenter added.
Proxmox VE is both ESXi and some/most of vCenter.
* https://en.wikipedia.org/wiki/XCP-ng
(There's also OpenStack.)
Xen Orchestra appears to be open source:
* https://github.com/vatesfr/xen-orchestra
* https://docs.xen-orchestra.com/installation#from-the-sources
See also perhaps:
* https://github.com/ronivay/XenOrchestraInstallerUpdater
* https://hub.docker.com/r/ronivay/xen-orchestra
* Via: https://forums.lawrencesystems.com/t/how-to-build-xen-orches...
I don't change the pools enough to make it worth automating the management.
Though I don't quite get the requirement for a hardware server - wouldn't it make much more sense to run this in a VM? Or is this just worded poorly?
...ahem...
When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that falls apart very quickly when you get off the happy path.
OTOH, opinions on Proxmox were very measured.
And according to every ex-Amazonian I've met: the core of AWS is a bunch of Perl scripts glued together.
I find that the main paradigms are:
1. Run something in a VM
2. Run something in a container (docker compose on Portainer or something similar)
3. Run a Kubernetes cluster.
Then if you need something that Amazon offers, you don't reimplement it like OpenStack does; you just run that specific service on options #1-3.
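As a concrete (hypothetical) instance of paradigm #2: if what you miss from AWS is S3, you can run an S3-compatible store such as MinIO from a small compose file rather than deploying OpenStack's Swift:

```yaml
# docker-compose.yml - minimal MinIO sketch; credentials are placeholders
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: change-me
    volumes:
      - minio-data:/data
volumes:
  minio-data:
```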
Kubernetes clusters don't really solve the storage plane issue, or provide a unified dashboard for users to interact with easily.
Something like Harvester is pretty close, IMO, to being a Kubernetes-based alternative to Proxmox/open cloud platforms.
A 'childish set of glued-together scripts' that manages (as of 2020) a few hundred thousand cores, 7,700 hypervisors, and 54,000 VMs at CERN:
* https://superuser.openinfra.org/articles/cern-openstack-upda...
The Proxmox folks themselves know (as of 2023) of Proxmox clusters as large as 51 nodes:
* https://forum.proxmox.com/threads/the-maximum-number-of-node...
So what scale do you need?
Yes, but even the Proxmox folks themselves say the most they've seen is 51:
* https://forum.proxmox.com/threads/the-maximum-number-of-node...
I'm happily running some Proxmox now, and wouldn't want to go beyond a dozen hypervisors or so. At least not in one cluster: that's partially what PDM 1.0 is probably about.
I have run OpenStack with many dozens of hypervisors (plus dedicated, non-hyperconverged Ceph servers) though.
Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
I currently work in AI/ML HPC, and we use Proxmox for our non-compute infrastructure (LDAP, SMTP, SSH jump boxes). I used to work in cancer-research HPC, and we used OpenStack with several dozen hypervisors to run a lot of infra/service VMs.
I think there are two things that determine which system should be looked at first: scale and (multi-)tenancy. Beyond one (maybe two) dozen hypervisors, I could really see scaling/management issues; I personally wouldn't want to do it (though I'm sure many have). If you have a number of internal groups that need allocated/limited resource assignments, then OpenStack tenants are a good way to do this.
Companies like Metacloud or Mirantis made a ton of money with little more than OpenStack installers and a good out-of-the-box default config, with some solid monitoring and management tooling.
Maybe I'm wrong - but where I'm from, companies with fewer than 500 employees are like 95% of the workforce of the country. That's big enough for a small cluster (in-house/colocation), but too small for anything bigger.
Proxmox has very little overhead. I've since moved to Incus. There are some really decent options out there, although Incus still has some gaps in the functionality Proxmox fills out of the box.
That said, I still run K8S in my homelab. It's an (unfortunately) important skill to maintain, and operators for Ceph and databases are worth the up-front trouble for ease of management and consumption.
Do you have a point of reference? This would definitely change some architecture items I've got on my list.
Some instructions for Windows 11: https://kubevirt.io/2022/KubeVirt-installing_Microsoft_Windo...
> Off-site replication of guests for manual recovery in case of datacenter failure.
which would've been an actual killer feature
javascript:(function(){document.head.insertAdjacentHTML('beforeend','<meta name="viewport" content="width=device-width"/><style>body{word-break:break-word;-webkit-text-size-adjust:none;text-size-adjust:none;}</style>');})();
It does three things: it adds a viewport meta tag for proper mobile scaling, prevents long words/URLs from breaking the page layout, and disables automatic font-size adjustment in Safari's landscape mode.
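For readability, the same bookmarklet unrolled (browser-only - it relies on the DOM, so it won't run outside a page):

```javascript
// Readable version of the one-line bookmarklet above.
(function () {
  document.head.insertAdjacentHTML(
    'beforeend',
    // 1) viewport meta tag so mobile browsers scale the page properly
    '<meta name="viewport" content="width=device-width"/>' +
    '<style>body{' +
    // 2) break long words/URLs instead of stretching the layout
    'word-break:break-word;' +
    // 3) stop Safari from inflating font sizes in landscape
    '-webkit-text-size-adjust:none;text-size-adjust:none;' +
    '}</style>'
  );
})();
```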
Honestly the whole process was incredibly smooth, loving the web management, native ZFS. Wouldn't consider anything else as a type 1 hypervisor at this stage - and really unless I needed live VM migrations I can't see a future where I'd need anything else.
Managed to get rid of a few docker cloud VPS servers and my TrueNAS box at the same time.
I'd prefer if it was BSD based, but I'm just getting picky now.