How I think about Kubernetes
I really like the idea of something like Firebase, but it never seems to work out, or it just moves the complexity to the vendor, which is fine, but I like knowing I can roll my own.
Remove abstractions like CNI and CRI; just make these things built-in.
Remove unnecessary things like Ingress, etc.; you can always just deploy nginx or whatever reverse proxy directly. Also probably remove persistent volumes; they add a lot of complexity.
Use an automatically working database, not a separate etcd installation.
Get rid of the control plane. Every node should be both control plane and worker node. Or maybe 3 worker nodes should be the control plane; whatever, the deployer should not think about it.
Add stuff that everyone needs: centralised log storage, centralised metric scraping and storage, some simple web UI, central authentication. It's reimplemented in every Kubernetes cluster.
The problem is that it won't be serious enough and people will choose Kubernetes over simpler solutions.
Think about a Linux installation: I don't need to add an IdP to create unix users for various people.
Having SSO is fine as long as it's built-in. Installing and configuring separate SSO software is not fine.
It's k3s. You drop a single binary onto the node, run it, and you have a fully functional one-node k8s cluster.
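A sketch, assuming current k3s defaults (the installer registers it as a service):
curl -sfL https://get.k3s.io | sh -    # install and start k3s
sudo k3s kubectl get nodes             # bundled kubectl; should show one Ready node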
Considering these companies make money when you use their hosted solution, this is not surprising, and it just goes to show TANSTAAFL: there ain't no such thing as a free lunch.
If you want better monitoring, metrics, availability, orchestration, logging, and so on, you pay for it with time, money, and complexity.
If you can't justify that cost, you're free to use simpler tools.
Just because everyone sets up a Kubernetes / Prometheus / ELK stack to host a web app that would happily run on a single VPS doesn't mean you need to do the same, or that nowadays this is the baseline for running something.
- Docker Compose running on a single server (see the sketch just after this list)
- Docker Swarm cluster (typically multiple nodes, can be one)
- Hashicorp Nomad, K3s, or other lightweight Kubernetes distros
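A sketch of the Docker Compose option; image names and credentials here are hypothetical (run it with "docker compose up -d"):
# docker-compose.yml
services:
  app:
    image: registry.example.com/myapp:latest    # hypothetical app image
    ports:
      - "80:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me    # use real secrets management in practice
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata: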
I think K8s couples two concepts: declarative-style cluster management, and infrastructure + container orchestration. Keep CRDs, remove everything else, and implement the business-specific stuff on top of the CRD-only layer.
This would give something like DBus, except cluster-wide, with declarative features. Then, container orchestration would be an application you install on top of that.
> That’s why I like to think of Kubernetes as a runtime for declarative infrastructure with a type system.
You can go build a simple way to deploy containers or ship apps, but you are missing what I think allows Kubernetes to be such a big tent: that it's a core useful platform for so many. Kubernetes works the same for all types, for everything you want to manage. It's the same desired-state management + autonomic-systems patterns, whatever you are doing. An extensible platform with a very simple common core.
There are other takes and other tries, but managing desired state for any kind of type is a huge win that lets many people find their own uses for kube; that is absolutely the cornerstone of its popularity.
If you do want less, the one project I'd point to that is Kubernetes without the Kubernetes complexity is KCP. It's just the control plane. It doesn't do anything at all. This to me is much simpler. It's not finding a narrowly defined use case to focus on; it's distilling the general system into its simplest parts. Rebuilding a good, simple, bespoke app-container launching platform around KCP would be doable, and would maintain the overarching principles that make Kube actually interesting.
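To make the "any kind of type" point concrete: registering a new type is itself just desired state. A minimal CRD, with a hypothetical group and kind, looks like:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com    # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size: {type: integer}
After a kubectl apply, "kubectl get widgets" behaves like any built-in type; the controller that reconciles Widgets is the part you bring yourself.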
I seriously think there is something deeply rotten with our striving for simplicity. I know we've all been burned, and so often we want to throw up our hands, and I get it. But the way out is through. I'd rather dance the dance & try to scout for better further futures, than reject & try to walk back.
Kubernetes means everything to everyone. At its core, I think it's being able to read/write distributed state (which doesn't need to be etcd) and having all the components (especially container hosts) follow said state. But the ecosystem has expanded significantly beyond that.
Unpopular opinion, but the source of most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach.
Problems usually come when we use tools to solve things they weren't made for. That is why, in my opinion, it is super important to treat a container orchestrator as a container orchestrator.
Yes, and 99% of the companies do this. It is quite common to use Terraform/AWS CDK/Pulumi/etc to provision the infrastructure, and ArgoCD/Helm/etc to manage the resources on Kubernetes. There is nothing wrong with it.
> it is super important to treat a container orchestrator as a container orchestrator.
Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.
The way something describes the desired state (declaratively, for example) has nothing to do with whether it is a container orchestrator or not.
If you open the Kubernetes website, the first thing you will see is "Production-Grade Container Orchestration". Even according to their own docs, Kubernetes is a container orchestrator.
The mental gymnastics required to express oneself in yaml, rather than, say, literally anything else, invariably generates a horror show of extremely verbose boilerplate, duplication, bloat, delays and pain.
If you're not Google, please for the love of god, please consider just launching a monolith and database on a Linux box (or two) in the corner and see how beautifully simple life can be.
They'll hum along quietly serving many thousands of actual customers and likely cost less to purchase than a single month (or at worst, quarter) of today's cloud-based muggings.
When you pay, you'll pay for bandwidth and that's real value that also happens to make your work environment more efficient.
You can literally get a Linux box (or two) in the corner and run:
curl -sfL https://get.k3s.io | sh -
cat <<EOF | kubectl apply -f -
...(json/yaml here)
EOF
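(where the heredoc might contain, say, a minimal Deployment; the name and image below are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
        - name: app
          image: registry.example.com/monolith:latest
          ports:
            - containerPort: 8080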
How am I installing a monolith and a database on this Linux box without Kubernetes? Be specific, just show the commands for me to run. Kubernetes, meanwhile, will work for ~anything. HNers spend more tokens complaining about the complexity than it takes to set up.
> The mental gymnastics required to express oneself in yaml, rather than, say, literally anything else
Like, Brainfuck? Like bash? Like Terraform HCL, Puppet, Chef, Ansible, a pile-o-scripts? The effort required to output your desired infrastructure's definition as JSON shouldn't really be that gargantuan. You can express yourself in anything else, but it can't be dumped to JSON?
Just because you can install it with one command doesn't mean it's not complex; it's just been made easier, not simpler.
I’ve seen teams waste many months refining k8s deployments only to find that local development isn’t even possible anymore.
This massive investment often happens before any business value has been uncovered.
My assertion, having spent 3 decades building startups, is that these big co infra tools are functionally a psyop to squash potential competitors before they can find PMF.
If you’re running things differently and getting tons of value with little investment, kudos! Keep on keeping on!
What I’ve seen is that the vast majority of teams that pick up k8s also drink the micro service kool-aid and build a mountain of bullshit that costs far more than it creates.
This is a pretty good definition.
I think part of the challenge is the evolution of K8s over time sometimes makes it feel less like a coherent runtime and more like a pile of glue amalgamated from several different components all stuck together. That and you will have to be aware of how those abstractions stick together with the abstractions from your cloud provider, etc...
For my infrastructure definition repo, I will apply it, watch, and then merge the PR/commit to master. I often need to do this progressively just to roll back if I see resource consumption or other issues, it would be quite dangerous to let the CI pipeline apply everything and then for me to try and change declarations whilst the control plane API is totally starved for resources.
Also (and maybe this is me not doing "proper devops", I don't care), I will often want to tinker a bit with the declaration, trying a bunch of little changes, and then committing once all is satisfactory. That "dev loop" is less productive if I have to wait for a CI pipeline at every step.
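Concretely, that loop is something like the following (paths and deployment name hypothetical):
kubectl apply -f manifests/             # push the tweaked declaration
kubectl get pods -w                     # watch the rollout live
kubectl rollout status deploy/myapp     # or block until it settles
...and only commit once the result looks right.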
Or is it what I tend to call "intra-cluster infra" - DBs / Prometheus / Kafka etc.? Infra that supports apps?
The giant nested YAML you come across is the input (pre-deserialization)/output (post-serialization) for the declared types:
https://github.com/kubernetes/api/blob/master/core/v1/types....
Fortunately, or unfortunately, I am the only person that finds humor in this.
For instance, with Helm, I've had success using Helmfile's diffs (which in turn use https://github.com/databus23/helm-diff) to do this.
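e.g., roughly (release and chart names hypothetical):
helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade my-release ./chart    # preview what the upgrade would change in-cluster
helmfile diff                           # or the same across all releases, via Helmfile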
There's more of a spectrum between these than you think, in a way that can be agile for small teams without dedicated investment in gitops. Even with the messes that can occur, I'd take it over the Heroku CLI any day.
> that runtime continuously works to make the infrastructure match your intent.
The flipside is that the infrastructure, at any given time, might not match your intent, or might still be converging on it. The state of the infrastructure often does not match the state of its configuration, which is hell during an incident.
It makes me wonder if declarative, converging systems are actually what we want, or if they're what we ended up with, or if all the alternatives are just worse.
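(For what it's worth, kubectl can at least make that drift visible, e.g.:
kubectl diff -f manifests/    # live object state vs. the declared state on disk
though seeing the drift mid-incident and resolving it are two different things.)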