Kubernetes Is Your Private Cloud
Mood: heated
Sentiment: negative
Category: tech
Key topics: Kubernetes, Cloud Computing, DevOps
The article claims Kubernetes can be used as a private cloud, but commenters heavily criticize its complexity, maintenance, and scalability issues, sparking a debate about its practicality.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 24m after posting
Peak period: 85 comments (Day 1)
Avg / period: 29.3 comments
Based on 88 loaded comments
Key moments
- 01 Story posted: 11/12/2025, 4:07:30 PM (6d ago)
- 02 First comment: 11/12/2025, 4:31:27 PM (24m after posting)
- 03 Peak activity: 85 comments in Day 1 (the hottest window of the conversation)
- 04 Latest activity: 11/18/2025, 7:24:06 AM (1d ago)
I don't need to autoscale my home lab...
I want a better UI/DX/Interface than Kubernetes...
I need to be able to do things "by hand" as well as "automated" at home...
There is a reason I use Proxmox at home: it is a joy to work with for the simple needs of my home lab.
This doesn't seem to be aimed at homelabs but at small teams.
Except you own ops, management, extension, interoperability, access, security, scalability, redundancy… words cannot express how ridiculous all of the koober propaganda is
Onboarding new team members? A disaster. The design? All done by one dumpster diver with nobody to call them out on the mistakes because they have no idea what the hell is happening.
I’ve never seen a k8s shop where there weren’t a few principal engineers being roped from incident to incident because teams couldn’t manage their own.
With cloud providers, even ones we don't use, people have a general idea of how the primitives work. With any reasonably skilled team you get back to talking about how your product works instead of talking about persistent volume replication for Postgres backups, or the differing file system behavior, or what CLI everyone should be using to manage your entire company's software, or how to stop someone from deleting the entirety of everything you own, or or or
Cloud’s big promise was speed to market and price, and let’s be honest, price is no longer there compared to a decent operation.
The one thing where clouds remain kings is speed for small teams. Any large enough company should probably ask itself whether running its own operation on IaaS would be a better choice.
Because on-prem is inelastic, we are at sub-10% peak utilization of compute resources. Factoring in the much higher utilization rate the cloud makes possible, we are talking 30%+ savings over on-prem.
so... you bought way too much hardware?
If you don't think IAM is a better set of semantics for securing your infrastructure than nightmarish k8s RBAC, I really don't know what to tell you.
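For a taste of those semantics, here is roughly the smallest RBAC grant that lets one user read pods in one namespace; the namespace and user names are made up for illustration:

```yaml
# Hypothetical namespace and names, for illustration only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: app-team
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-team
subjects:
  - kind: User
    name: jane               # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

And that is one verb set, on one resource, in one namespace; multiply by every team and workload and the comparison with IAM writes itself.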
Can you think back to your batch and honestly tell me that the companies that used k8s are in a better place?
What every company seems to want: multisite, multimaster immediate failover, no.
Also kubernetes buys you scaling. Compute. Disk. Database (with help). Etc.
Now I rail against companies for wanting that, and I think you're right. Your webshop does not need that. It has so many moving parts the redundancy will cause more outages than it solves. But you can do this, and so people will pay for it.
It is a technical accomplishment.
And with sufficiently good sysadmins, it can work well, and scale.
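Picking up the scaling point above: the compute half of it is typically handled by a HorizontalPodAutoscaler. A minimal sketch, with a made-up deployment name and thresholds:

```yaml
# Hypothetical deployment name and thresholds, for illustration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webshop
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webshop
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```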
Oh, and all the documentation for that YAML assumes you've memorized as much vocabulary as a Foreign Language 101 class.
(And there is a mad god that says: if you try to use click-ops to get around this without knowing the vocabulary, you're going to have a bad time.)
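To make the vocabulary point concrete, here is about the smallest Deployment you can write; nearly every keyword in it (Deployment, selector, matchLabels, template) is part of that Foreign Language 101 syllabus. Names and image are placeholders:

```yaml
# Placeholder names and image, for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello             # must match the pod template's labels
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.27  # placeholder image
          ports:
            - containerPort: 80
```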
But on the other hand, to put it in terms of the "3 servers": the moment you think you'll have 3 servers and any level of uptime expectations, you'll inevitably have to rebuild them, services and logging and all, from scratch often enough and quickly enough that you might as well have 20 servers, given how stressful that rebuild will be.
k8s can be a saving grace there, and I recommend it to anyone with the time and interest in how cluster best practices work! But it's not a free option or a weekend skill-up.
Also, there are vendors renting out datacenter space, so you do less of the hardware management.
Having worked at two companies spending $250M+ on cloud bills alone: they try hard to decouple from the cloud, but many things are vendor-locked.
Hybrid has been the answer to both. It shouldn't be a binary decision. Stateless compute workloads can fairly easily be offloaded to a private cloud.
Genuine question out of curiosity (I have a master's in finance, but never practiced it): aren't both the cloud bill and depreciation tax deductible, eventually? The bill 100% in that year, and the depreciation spread over multiple years?
> Hybrid has been the answer to both. It shouldn't be a binary decision. Stateless compute workloads can fairly easily be offloaded to a private cloud.
Can you elaborate on that? I'm studying for the SAA-C03, and I was shocked by how expensive egress out of AWS can be.
- Upgrading a Kubernetes cluster may as well be an Olympic sport. It's so draconian that most best-practice documentation insists you build a second cluster for A/B deployment.
- Load balancers come in half a dozen flavours, with the default options bolted at the hip to the cloud cartel. MetalLB is an option (see the sketch below), but your admin doesn't understand subnets, let alone BGP.
- It is infested with the cult of immutability. Pod not working? Destroy it. Network traffic acting up? Destroy the node. Container not working? Time to destroy it. Cluster down? Rebuild the entire thing. At no point does the "devops practitioner" stop to consider why or how a thing of Kubernetes has betrayed them. It is assumed you have a football field of fresh bare metal to reinitialize everything onto at a moment's notice, failure modes be damned.
What your company likely needs is some implementation of libvirtd or Proxmox. Run your workloads on rootless podman or (god forbid) deploy to a single VM.
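For reference, MetalLB's layer-2 mode sidesteps BGP entirely; a minimal sketch looks something like this (the address range is an assumption, adjust it to your LAN):

```yaml
# Hypothetical address pool, for illustration; pick a range your
# router won't hand out via DHCP.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```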
MetalLB is good, yes, and admins should have IP networking knowledge. I ask about this in interviews.
Yes, cattle not pets is the term here. Self-healing is wonderful. There's plenty to dig into if you run into the same problem repeatedly. Being able to yank out a node that's misbehaving is very nice from a maintenance POV.
Talos on bare metal to get kubernetes features is pretty good. That's what my homelab is. I hated managing VMs before that.
The complaint isn't immutability; the complaint is that k8s does immutability in a broken, way-too-granular fashion.
I know that is the whole point of cattle vs. pets, but it somehow became the "did you restart the PC" version for operations.
Maybe get someone competent, then? Why are you tasking someone who doesn't understand basic networking with running an on-prem setup?
Even with a single VM, someone's company will probably also want a reverse proxy and certificate management (assuming web services), automated deployments, a way to provide secrets to services, storage volumes, health checks with automatic restarts, the ability to wire logs and metrics into some type of monitoring system, etc. All of this is possible with scripts and config management tools, but then the complexity is just being managed in different ways. Alternatively, use K3s and Flux to end up with a solution that checks all of those boxes while keeping the option to use the same k8s manifests in public clouds.
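As one data point, the reverse-proxy-plus-certificates box collapses to a single manifest once an ingress controller and cert-manager are running; a sketch, where the hostname, service name, and issuer name are all assumptions:

```yaml
# Assumes cert-manager is installed and a ClusterIssuer named
# "letsencrypt" exists; hostname and service name are made up.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts: [app.example.com]
      secretName: app-tls        # cert-manager populates this secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```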
Immutability is like violence: if it doesn't solve your problem, you aren't using enough of it.
Longhorn just kinda worked out of the box with a couple of kernel/system settings, though. No S3 API, however.
But this isn't k8s's fault at all.
And in related news, Proxmox VE is often a more sensible thing to use for a private cloud environment, because it is far more flexible and easier to use than Kubernetes.
I suspect both of them will go down together if/when they do.
That being said, once it was set up, there was not a lot of maintenance. Kubernetes is quite resilient when set up properly, and the cost savings were significant.
And while k8s can do all the same things and much more with a bit of trying, it requires mission control the second you add a second developer; you will have built-in primitives constantly competing with the ones you bolt on, etc. Nomad feels much more opinionated, and in a good way.
Nomad is one of those things that gets you 90% of the way with 20% of the effort, and only then, if you need something, do you add things to it. K8s is great: way more flexible, there are managed options out there, massive ecosystem. But it always feels like, out of the box, you need to glue five different tools to it just to get it going.
Also Incus. Stéphane Graber is doing the lord's work by sticking to his thing. That's also super fun to mess with.
At home I am using this approach. Dumb, but it works well. https://royportas.com/posts/simple-gitops-with-nomad
Kubernetes is a rat's nest, and I have long hoped for Kubernetes to be simpler (who needs this Gateway API?), but devs keep building crazier and crazier solutions, so we have to pivot to keep up.
What's needed isn't rambling YAML and immense resource consumption; it's IaC built into the system that can do what's necessary, not for an IT giant that runs others' services in-house for a living, but for me, a private citizen with just a few of my own services and little time to manage them. I need to be able to replicate the infrastructure quickly, because I don't have infinite data centers: if the homeserver dies, I buy another cheap desktop and restore everything on the fly. If I'm offline for a few hours, nothing happens, but hardware costs money, so I need to use it well, and so on.
The giants' solutions are not one-size-fits-all.
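A minimal sketch of that spirit, assuming a single box running Docker Compose (a stand-in technique, not something the commenter names; images, paths, and service names are placeholders): rebuilding the dead homeserver means installing Docker, restoring one file plus the data directory, and bringing it back up.

```yaml
# Hypothetical single-box setup; images and paths are placeholders.
# Disaster recovery: install Docker on the new machine, restore ./data
# from backup, copy this file, then run `docker compose up -d`.
services:
  proxy:
    image: caddy:2                    # terminates TLS for everything behind it
    ports: ["80:80", "443:443"]
    volumes:
      - ./caddy:/etc/caddy
    restart: unless-stopped
  notes:
    image: example/notes-app:latest   # placeholder application image
    volumes:
      - ./data/notes:/data
    restart: unless-stopped
```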
K8s is complicated as hell to learn to use. Its learning curve is very shallow. Yes, you can get a "hello world" running quickly, but that is not the benchmark for whether you actually understand what's going on, or how to make it do what you need.
But once you do learn it thoroughly, it's ridiculously fast to ramp up on it and use it for very complex things, very quickly. As a developer medium, as an operational medium, it accelerates the kind of modern practices (that for some reason most people still don't know about?) that can produce a lot of value.
But that's if someone else is building and maintaining the underlying "guts" of it for you. If you try to build it from scratch and maintain it on bare metal, suddenly it's incredibly complicated again with another shallow learning curve and a lot of effort. And that effort continues, because they keep changing the fucking thing every 6-12 months...
It's like learning to drive a car or ride a bike. At first it's difficult and time-consuming. But then you get it, and you can ride around pretty easily, and it can take you far. However, that does not mean you understand car/bike mechanics, or can take it apart and rebuild it when something breaks. So be prepared to pay someone for that maintenance, or be prepared to become a mechanic (and do that work while also doing your day job). This analogy is stretched thinner by the fact that nobody's constantly changing how your car works...
This is not a good reason to have done it. To me this means that the expectations and outcomes were flawed as they are solving a problem that shouldn't have existed. I can't really agree with the sentiment or overview of this post
Just deploy Rook and Ceph? ARE YOU BLEEPING KIDDING ME?!?
There's a job description called "Storage Engineer". These people know a little bit about Kubernetes, but are mostly specialized in everything Ceph. That tells you everything about how hard it is to keep Ceph humming along in production. As a side note: if you want to make really good money, there's also somebody called a "Ceph consultant" who is called in when the SHTF. And if the SHTF in a Ceph cluster, it really does.
And that's besides all the crap it takes to get and keep Kubernetes running smoothly: Kernel Optimization. Networking. Security. Storage integration. Observability. And the list goes on...
In other words, unless you are VERY well versed in a variety of topics ranging from server architecture to deep Linux knowledge and are knee deep in the usual day to day operations stuff already you are better off running Kubernetes in the cloud and leaving all the intricacies to the likes of Google, Microsoft and Amazon than trying to run a well designed cluster architecture yourself. It just isn't worth it.
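For a sense of scale on "just deploy Rook and Ceph": even a deliberately minimal CephCluster spec, before any of the tuning a real deployment needs, looks like this (image tag and host path are assumptions):

```yaml
# A deliberately minimal sketch; real clusters need far more tuning.
# Image tag and host path are assumptions.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # assumed release tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                       # monitors need an odd quorum
  storage:
    useAllNodes: true
    useAllDevices: true            # claims every raw disk on every node
```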
High Scale/Revenue
│
│
Managed Services │ Self-hosted K8s
(Overpaying but │ (article is
no choice) │ pitched here)
│
────────────────────────────┼────────────────────────────
Low capacity │ High capacity
│
│
Managed Services │ Managed Services
(Right choice - │ (Wasting money on
focus on product) │ platform team)
│
Low Scale/Revenue
Or something like that. Maybe as a function of time as well, but that might be harder to do in 2D.

Sure, I can absolutely manage my own k8s, but there is no doubt it's easier for me to spin up Postgres and ship faster on my own. At enterprise scale it's definitely a lot easier to do everything in k8s and manage as many aspects as possible. I have experience of both.