Helm 4.0
Mood: excited
Sentiment: positive
Category: tech
Key topics: Kubernetes, Helm, Software Release, DevOps
Helm, the package manager for Kubernetes, has released version 4.0, generating considerable interest and discussion within the tech community.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 24m after posting
Peak period: 133 comments (Day 1)
Avg / period: 40 comments
Based on 160 loaded comments
Key moments
- Story posted: 11/12/2025, 5:02:38 PM (6d ago)
- First comment: 11/12/2025, 5:26:15 PM (24m after posting)
- Peak activity: 133 comments in Day 1 (hottest window of the conversation)
- Latest activity: 11/17/2025, 9:07:51 AM (2d ago)
Also, please fix the "default" helm chart template, it's a nightmare of options and values no beginner understands. Make it basic and simple.
Nowadays I would very much prefer to just use Terraform for Kubernetes deployments, especially if you use Terraform anyway!
I'd love something that works more like Kustomize but with the other benefits of Helm charts (packaging, distribution via OCI, more straightforward value interpolation than overlays and patches, ...). So far none have ticked all my boxes.
https://fluxcd.io/flux/components/helm/helmreleases/#post-re...
In hindsight it would have been much faster to write the resources myself.
I wrote a tool called "easykubenix" that works in a similar way, render the chart in a derivation, convert the YAML to JSON, import JSON into the Nix module structure and now you're free to override, remove or add anything you want :)
It's still very CLI deploy centric using kluctl as the deployment engine, but there's nothing preventing dumping the generated JSON (or YAML) manifests into a GitOps loop.
It doesn't make the public charts you consume any less horrible, but you don't have to care as much about them at least
A single-purpose chart for your project is generally a lot easier to grok and consume than a chart that tries to cover everything that can be done.
I think the likes of "kustomize" is probably a more sane route to go down. But our entire infrastructure is already Helm, so it's hard to switch that all out.
With Kustomize, on the other hand, you just write the default as perfectly normal K8s manifests in YAML. You don't have to know or care what your users are going to do with it.
Then you write a `kustomization.yaml` that references those manifests somehow (could be in the same folder or you can use a URL). Kustomize simply concatenates everything together as its default behaviour. Run `kubectl kustomize` in the directory with `kustomization.yaml` to see the output. You can run `kubectl apply -k` to apply to your cluster (and `kubectl delete -k` to delete it all).
From there you just add what you need to `kustomization.yaml`. You can do a few basics easily like setting the namespace for it all, adding labels to everything and changing the image ref. Keep running `kubectl kustomize` to see how it's changing things. You can use configmap and secret generators to easily generate these with hashed names and it will make sure all references match the generated name. Then you have the all powerful YAML or JSON editing commands which allow you to selectively edit the manifests if you need to. Start small and add things when you need them. Keep running `kubectl kustomize` at every step until you get it.
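For illustration, a minimal kustomization.yaml along the lines described above might look like this (the resource file names, namespace, and patch target are assumptions for the example):

# kustomization.yaml -- render with `kubectl kustomize .`, apply with `kubectl apply -k .`
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# plain Kubernetes manifests, written without any templating
resources:
  - deployment.yaml
  - service.yaml

# the easy built-ins: set the namespace and add labels to everything
namespace: my-app
labels:
  - pairs:
      app.kubernetes.io/part-of: my-app

# generated ConfigMap gets a hashed name; references to it are rewritten to match
configMapGenerator:
  - name: my-app-config
    literals:
      - LOG_LEVEL=info

# the escape hatch: selectively edit rendered manifests
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3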
Does your Kubernetes configuration need to be installed by a stranger? Use Helm.
Does your Kubernetes configuration need to be installed by you and your organization alone? Use Kustomize.
It makes sense for Grafana to provide a Helm chart for Grafana Alloy that the employees of Random Corp can install on their servers. It doesn't make sense for my employer to make a Helm chart out of our SaaS application just so that we can have different prod/staging settings.
I think it is because most engineers learn to use Kubernetes by spinning up a cluster and then deploying a couple of helm charts. It makes it feel like that’s the natural way without understanding the pain and complexity of having to create and maintain those charts.
Then there are centralised ‘platform’ teams which use helm to try and enforce their own templating onto everything even small simple micro services. Functionally it works and can scale, so the centralised team can justify their existence but as a pattern it costs everyone a little bit of sanity.
Helm is not good enough to develop abstractions with. So go the opposite way: keep it stupid simple.
Pairing helm with Kustomize can help a lot as well. You do most of the templating in the helm chart but you have an escape hatch if you need more patches.
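As a sketch of that pairing (the chart name, repo, and patch target are hypothetical; the helmCharts field needs the `--enable-helm` flag):

# kustomization.yaml -- render with `kustomize build --enable-helm .`
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: some-app                 # the chart does the bulk of the templating
    repo: https://charts.example.com
    version: 1.2.3
    releaseName: some-app
    valuesInline:
      replicaCount: 2

# the escape hatch: patch whatever the chart does not expose as a value
patches:
  - target:
      kind: Deployment
      name: some-app
    patch: |-
      - op: add
        path: /spec/template/spec/priorityClassName
        value: high-priority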
Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and Yaml as needed from a single source of truth
I've been trying to apply CUE to my work, but the tooling just isn't there for much of what I need yet. It also seems really short-sighted that it is implemented in Go which is notoriously bad for embedding.
CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Also, so much of the k8s ecosystem is in Go that it was a natural choice.
Ah, that makes sense, I guess. I also get the feeling that the language itself is still under very active development, so until 1.0 is released I don't think it matters too much what it's implemented in.
> Also, so much of the k8s ecosystem is in Go that it was a natural choice.
That might turn out to be a costly decision, imho. I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
I figured I'd try and hack something together, but it was a complete non-starter since I don't work within the Go ecosystem.
Projects like the cue language live and breathe from an active community with related tooling, so the decision still really boggles my mind.
I'll stay optimistic and hope that once it reaches 1.0, someone will write an implementation that is easily embedded for my use-cases. I won't hold my breath though, since the scope is getting quite big.
> I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
Have you tried a Makefile to run cue? There should be no need to write code to do this
1. it seems like development has largely ceased since Sept
2. it looks to only handle helm, not terraform, I'm looking for something to unify both and deal with dependencies between charts (another thing helm is terrible at)
the tf is still in hcl form for now.
I’d love to dig a bit.
Is there a helm provider?
If not, what would be the right way to install messy stuff like nginx ingress, cert-manager, etc.?
People probably don't realize that helm is mostly templating for the YAML Kubernetes wants (plus a lot of other stuff that increases complexity).
So if you want to avoid helm, you gotta do a whole lot of reverse-engineering. You gotta render a chart, explore all the manifests, explore all the configuration options, find out if they're needed or not.
An alternative is to just use helm, invoking it and forgetting about it. You can't blame people for going the easy way, I guess...
Regarding dependencies: using some SaaS Kubernetes (Google GKE, for example), you'll typically use Terraform for SQL and other services anyway (at least we use Google CloudSQL and not some self-hosted Postgres in k8s).
I find it interesting that cert-manager points to kubectl for new users and not helm: https://cert-manager.io/docs/installation/
But, for sure, there may be reasons to use helm, as you said. I'm sure it is overused, though.
Network effect is a thing; Helm is the de facto "package manager" for Kubernetes program distribution. But this time there are generally no alternative instructions like:
tar xzf package.tar.gz; ./configure; make; adduser -u foo; chown -R foo /opt/foo
In my experience, it's best to bootstrap ArgoCD/Flux, RBAC, and the cloud permissions those services need in Terraform, and then move on to do everything else via Kustomize via GitOps. This keeps everything sane and relatively easy to debug on the fly, using the right tool for the job.
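The "everything else via Kustomize via GitOps" part is typically just a Flux Kustomization object pointing at a path in Git; a rough sketch (names, path, and interval are assumptions):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/production      # directory containing kustomization.yaml + manifests
  prune: true                  # delete cluster objects removed from Git
  sourceRef:
    kind: GitRepository
    name: platform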
I'm still using it without a single issue (except when it messes up the iptables rules).
I still confidently upgrade Docker across all the nodes, workers and managers, and it just works. Not once has it caused an issue.
I heard good things about Nomad (albeit from before Hashicorp changed their licenses): https://developer.hashicorp.com/nomad
I got the impression it was like a smaller, more opinionated k8s. Like a mix between Docker Swarm and k8s.
It's rare that I see it mentioned though, so I'm not sure how big the community is.
Everything else is composable from the rest of the HashiCorp stack: Consul (service mesh and discovery), Vault (secrets), allowing you to use as much or as little as you need, and it can truly scale to a large deployment as needed.
On the plus side, picking up its config/admin is intuitive in a way that helm/k8s never really is.
Philosophy-wise you can put it in the Unix way of doing things: it does one thing well and gets out of your way, and you add to it as you need/want. Whereas k8s/helm etc. are one way or the highway, leaving you fighting the deployment half the time.
It's a shame Nomad couldn't overcome the K8s hype-wagon, but either way IBM is destroying everything good about Hashicorp's products and I would proceed with extreme caution deploying any of their stuff net-new right now...
Using it in prod and also for my personal homelab needs - works pretty well!
At the scale you see over here (load typically served on single digit instances and pretty much never needing autoscaling), you really don't need Kubernetes unless you have operational benefits from it. The whole country having less than 2 million people also helps quite a bit.
K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.
- Kamal
- Docker compose with Caddy (lb_try_duration to hold requests while the HTTP container restarts; sketched below this list)
- Systemd using socket activation (same as Docker compose, it holds HTTP connections while the HTTP service restarts)
So you don't have to buy the whole pig and butcher it to eat bacon.
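A rough sketch of the Compose-plus-Caddy option from the list above (service names, image, and the Caddyfile contents are assumptions):

# docker-compose.yml
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    # the Caddyfile would use something like
    #   reverse_proxy app:8080 { lb_try_duration 30s }
    # so requests are held and retried while the app container restarts
  app:
    image: example.com/my-app:1.2.3
    expose: ["8080"]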
Nit: it holds the TCP connections while the HTTP service restarts. Any HTTP-level stuff would need to be restarted by the client. But that’s true of every “zero downtime” system I’m aware of.
It's far from table stakes and you can absolutely overengineer your product into the ground by chasing it.
"0 downtime" system << antifragile systems with low MttR.
Something can always break even if your system is "perfect". Utilities, local disasters, cloud dependencies.
I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.
If you used helm + terraform before, you'll have no problem understanding the terraform kubernetes provider (as opposed to the helm provider).
If you write your own tf definition of operator x v1, it can be tricky to upgrade to v2 - as you need to figure out what changes are needed in your tf config to go from v1 to v2.
You can install, update, and remove an app in your k8s cluster using helm.
And you release a new version of your app to a helm repository.
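The unit being versioned and pushed to that repository is just a chart; its Chart.yaml is small (names and versions here are illustrative):

# Chart.yaml
apiVersion: v2
name: my-app
description: Deploys my-app and its Service
type: application
version: 0.1.0        # chart version, bumped on every release to the repository
appVersion: "1.2.3"   # version of the application image the chart deploys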
This sounds okay in principle, but I far too often end up needing to look through the template files (what helm deploys) to understand what a config option actually does since documentation is hit or miss.
Gets the job done.
A one-time adoption from kubectl yaml or helm to terraform is doable - but syncing upstream updates is a chore.
If terraform (or another rich format) was popular as source of truth - then perhaps helm and kubectl yaml could be built from a terraform definition, with benefits like variable documentation, validation etc.
Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.
Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.
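For comparison, the Ansible approach keeps expressions inside YAML scalars, so the file itself still parses and can be run through a linter before any rendering happens; a hedged sketch using the kubernetes.core.k8s module (variable names are assumptions):

- name: Create app config
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "{{ app_name }}-config"
        namespace: "{{ app_namespace }}"
      data:
        LOG_LEVEL: "{{ app_log_level }}"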
Which is a thing with some Python IDEs, but it's maddening to work on anything that can't do this.
autocmd FileType yaml setlocal et ts=2 ai sw=2 nu sts=0
I'm sure Emacs and others have something similar.
There's lots of advice on StackOverflow against building your own JSON strings instead of using a library. But helm wants us to build our own YAML with Go templating. Make it make sense.
You define your data in the "pkl language", then it outputs it as yaml, json, xml, apple property list, or other formats.
You feed in something like:

apiVersion = "apps/v1"
kind = "Deployment"
metadata {
  name = "my-deployment"
  labels {
    ["app.kubernetes.io/name"] = "my-deployment"
    ["app.kubernetes.io/instance"] = "prod"
  }
}
spec {
  replicas = 3
  template {
    containers {
      new {
        name = "nginx"
      }
      new {
        name = "backend"
      }
    }
  }
}

And then you `pkl eval myfile.pkl -f yaml` and get back:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app.kubernetes.io/name: my-deployment
    app.kubernetes.io/instance: prod
spec:
  replicas: 3
  template:
    containers:
      - name: nginx
      - name: backend

The language supports templating (structurally, not textually), reuse/inheritance, typed properties with validation, and a bunch of other fun stuff. They also have built-in package management, and a generated package that provides resources for simplifying/validating most Kubernetes objects and generating manifests.
There's even a relatively easy path to converting existing YAML/JSON into pkl. Or the option to read an external YAML file and include it/pull values from it/etc (as data, not as text) within your pkl so you don't need to rebuild everything from the ground up day 1.
Aaaaand there's bindings for a bunch of languages so you can read pkl directly as the config for your app if you want rather than doing a round trip through YAML.
Aaaaand there's a full LSP available. Or a vscode extension. Or a neovim extension. Or an intellij extension.
The documentation leaves a bit to be desired, and the user base seems to be fairly small so examples are not the easiest to come by... but as far as I've used it so far it's a pretty huge improvement over helm.
but we don't have tons of infra so no idea how it would run for big thousands-of-employees corps.
Kustomize also seems like hell when a deployment reaches a certain level of complexity.
DevOps has more friction for tooling changes because of the large blast radius
Helm, and a lot of devops tooling, is fundamentally broken.
The core problem is that it is a templating language and not a fully functional programming language, or at least a DSL.
This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.
Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.
I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.
You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.
Maybe the answer is a CDK that outputs helm charts.
You say you want a functional DSL? Well, jq is a functional DSL!
I liked KRO's model a lot, but stringly typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kinda like jsonnet plus the Google CLI I forget the name of right now, and the abstraction the Grafana folks did too, but ultimately I decided to roll my own thing and leaned heavily into type safety for this. It's ideal. With any luck I can open source it. There are a few similar ideas floating around now - Scala Yaga is one.
I've used it in the past (for a quite small deployment I must say), but have been very happy with it. Specifically the diff mode is very powerful to see what changes you'll apply compared to what's currently deployed.
cdk8s.io
Of course most other programming languages will work just as well, it's just JavaScript being the most natural fit for JSON.
I don't really like this superficial reasoning. You can specify, generate, parse, and validate JSON in many common languages with similar levels of effort.
Saying you should use JavaScript to work with JSON because it has JavaScript in the acronym is about as relevant as comparing Java to JavaScript because both have Java in the name.
terraform with helm/kubernetes/kubectl providers is hit or miss. But I love it for simple things. For hairy things I will want full TypeScript with Pulumi.
E.g. these are the libs I use, generated from CRDs: https://github.com/Extrality/pulumi-crds
Now it’s not perfect either. It does have some issues with slow querying of the current state during planning, even when it has the tfstate as a cache, which is another source of errors.
There’s packages. You can write functions. You can write tests trivially (the output is basically a giant map that you just write out as yaml)…
I’m applying this to other areas too with great success, for example our snowflake IaC is “just python” that generates SQL. It’s great.
But really any kind of reconciler, e.g. Flux or Argo with helm, works very well. Helm is only used as a templating tool, i.e. helm template is the only thing allowed. It works very well and I've run production systems for years without major issues.
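For reference, the Argo CD flavour of that pattern is an Application that points at a chart and renders it with helm template; a rough sketch (repo URL, chart, and namespaces are assumptions):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com
    chart: my-app
    targetRevision: 1.2.3
    helm:
      values: |
        replicaCount: 2
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true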
I don't really understand how people have so much trouble with Helm. Granted, YAML whitespace + Go templating is sometimes awful, but it is the least bad tool out there that I have tried, and once you learn the arcane ways of {{- it's mostly a non-issue.
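A small illustration of that whitespace control inside a chart template (the value name is an assumption):

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  # `{{-` trims the whitespace and newline before the tag (and `-}}` the ones after it),
  # so the `if` block below leaves no stray blank lines in the rendered YAML
  {{- if .Values.extraConfig }}
  extra.conf: |-
    {{- .Values.extraConfig | nindent 4 }}
  {{- end }}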
I would recommend writing your own charts for the most part and using external charts when they are simple or well proven. Most applications you want to run aren't that complicated; they are mostly a collection of environment variables, config files, and arguments.
If I could wish for a replacement of helm, it would be helm template with the chart implemented in a typed language, e.g. TypeScript, instead of go template but backwards compatible with go template.
> Some common CLI flags are renamed:
> --atomic → --rollback-on-failure
> --force → --force-replace
> Update any automation that uses these renamed CLI flags.
I wish software providers like this would realize how fucking obnoxious this is. Why not support both? Seriously, leave the old, create a new one. Why put this burden on your users?
It doesn't sound like a big deal but in practice it's often a massive pain in the ass.
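Updating such automation is usually a one-line change per pipeline; for example, in a hypothetical CI job:

deploy:
  script:
    # Helm 3: helm upgrade --install my-app ./chart --atomic --force
    - helm upgrade --install my-app ./chart --rollback-on-failure --force-replace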
A Helm chart is often a poorly documented abstraction layer that makes it impossible to relate the managed application's original documentation back to the Helm chart's "interface". The number of times I've had to grep through the templates to figure out how to access a specific setting ...
What is the essence of the complaint here? That chart authors do poor jobs? That YAML sucks (it does! it so so does!)? Just that charting provides an abstraction you'd rather not have? (If so, why not just... not use Helm?) Something else?
As said: that I often cannot relate the managed application's documentation to the Helm chart's interface.
The reasons vary ... poor Helm chart documentation, poor Helm chart design, a Helm chart not in sync with application releases, ... The consequence is that I often need to grep through its templates and logic to figure out how to poke the chart's interface to achieve what I want. I don't think it's reasonable to say that's part of the end-user experience.
PS: I have no gripes with YAML
Why would I need a chart for a single container app? Making this simple is what Kubernetes is designed for. No, I don’t want your ServiceAccounts or PVs because I anyway need to grant and understand the permissions and select the size and SKU of the underlying disk.
Deploying an app in your own infrastructure has too many knobs that need to be turned so you need to expose all of them. Just spend a few minutes extra to write your own deployment manifest. While it’s a few more lines of code vs ”helm install”, you will not regret it and you’ll get a much better understanding of what’s actually running.
Now there are of course exceptions to this, like Prometheus or Ingress operators where more complex charts are warranted. What I’m talking about is those charts that just wrap what can be translated from docker-compose to k8s in two minutes.
I'd say the abstraction is not worth it when you have only a steady 2-3 servers worth of infrastructure. Don't do it at "Hello, world!" scale, you win nothing.
(I work for a company that helps other companies set up and secure larger projects into environments like Kubernetes.)
The alternatives to Helm are not that interesting to me: I still have nightmares from when I had to use jsonnet and kustomize just for istio, with upgrade hell.
So I am sticking with Helm, as it feels much more straightforward when you need to change just a few things from an upstream open source project: way fewer lines to maintain and change!
When you look into all the complaints one by one, they are exceptionally accurate.
* yaml has its quirks. - check
* text templating can't be validated for spec conformity - check
* helm has lots of complexity - check
* helm has dependency problems - check
* helm charts can have too many moving parts with edge cases causing deep dive in the chart - check
and many others. However, the proposed solutions fall short of providing the value Helm brings.
Helm is not just a templating engine to produce Kubernetes manifests. It's an application deployment and distribution ecosystem. Emphasis on the "ecosystem".
* It brings dependency management.
* It provides Kubernetes configuration management.
* It provides an abstraction over configuration, so you define applications rather than raw configuration.
* It provides an application packaging solution.
* It provides an application package management solution.
* There is community support with a huge library of packages.
* It’s relatively easy to create or understand charts with a varied experience level. A more robust and strictly typed templating system would remove at least half of this spectrum.
* The learning curve is flat.
When you put all of these into consideration, it's relatively easy to understand why it's this prominent in the Kubernetes ecosystem.
And that makes it wrong. YAML is a structured format, and proper templating should work with JSON-like data structures, not with text. Kustomize is a better example.
Helm's contribution (as horrible as text templating on YAML is) is, yes, to be a package manager. Part of a Helm chart includes jobs ("hooks") that can be run at different stages (pre-install, pre-upgrade, etc.) as well as a job to run when someone runs "helm test", and a way to rollback changes ("helm rollback"), which is more powerful than just rolling back a Deployment, because it will rollback changes to CRDs, give you hooks/jobs that can run pre- and post-rollback, etc.
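As an illustration of those hooks, a chart can ship a Job annotated to run before every install/upgrade (the image and command below are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migrate"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example.com/my-app:1.2.3
          command: ["/bin/sh", "-c", "./migrate up"]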
Helm charts are meant to be written by someone with the relevant skills sitting next to the developers, so that it can be handed off to another team to deploy into production. If that's not your organization or process, or if your developers are giving your ops teams Docker images instead of Helm charts, you're probably over-engineering by adopting it.
Also, I cannot count how many times I had to double- or triple-run charts because CRDs were in a circular dependency. In a perfect world this wouldn't be an issue, but if you want to be a user of an upstream chart it is a pain.
People then start creating tooling to mask some of the complexity, but then said tooling grows to support the full K8s feature set and then we're back to square one.
Because the rush to K8s was so fast (and arguably before it was ready) the tooling often became necessary.
> Helm charts are meant to be written by someone with the relevant skills sitting next to the developers.
That makes sense for large organizations, but it still gets complicated depending on how your service plugs into a greater mesh of services.
I currently treat helm the same way I treat Cloudformation on AWS (another horrid thing to deal with). If some third party has it so that I can easily take the template and launch it, then great. I don't want to go any further under the hood than that.
The last project I had to be involved with used kustomize for different environments, flux to deploy, helm to use a helmchart which took in a list of configmaps using "valuesFrom". Not only does kustomize template and merge together yaml but so does the valuesFrom thing, however at "runtime" in the cluster.
There's just not a single chance to get any coherent checking/linting or anything before deployment. I mean how could a language server even understand how all this spaghetti yaml merges together? And note that I was working on this as a developer in a very restricted environment/cluster.
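For readers who haven't seen it, the valuesFrom indirection described above looks roughly like this in a Flux HelmRelease (names are illustrative), with the merge happening at reconcile time in the cluster:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-app
spec:
  interval: 10m
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-charts
  valuesFrom:
    - kind: ConfigMap
      name: my-app-values-base     # merged first
    - kind: ConfigMap
      name: my-app-values-prod     # merged on top, only at reconcile time
      valuesKey: values.yaml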
Yaml is too permissive already, people really start programming with it. The thing is, kubernetes resources are already an abstraction. That's kind of the nice thing about it, you can create arbitrary resources and kubernetes is the management platform for them. But I think it becomes hairy already when we create resources that manage other resources.
And also, sure some infrastructure may be "cattle" but at some point in the equation there is state and complexity that has to be managed by someone who understands it. Kubernetes manifests are great for that, I think using a package manager to deploy resources is taking it too far. Inevitably helm charts and the schema of values change and then attention is needed anyway. It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?
Sorry for the rant but given my second paragraph I hope there is some understanding for my frustrations. Having all that said, I am glad they try to improve what has established itself now and still welcome these improvements.
> The thing is, kubernetes resources are already an abstraction.
Your first comment was more accurate - they’re heavily nested abstractions.
A container represents a namespace with a limited set of capabilities, resources, and a predefined root.
A Pod represents one or more containers, and pulls the aforementioned limitations up to that level.
A ReplicaSet represents a given generation of a set amount of Pods.
A Deployment represents a desired number of Pods, and pulls the ReplicaSet abstraction up to its level to manage the stated end state (and also manages their lifecycle).
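Those layers map directly onto the fields of a minimal Deployment manifest (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment               # desired state + rollout management
metadata:
  name: my-app
spec:
  replicas: 3                  # each generation is realised as a ReplicaSet
  selector:
    matchLabels:
      app: my-app
  template:                    # the Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:              # one or more containers sharing the Pod's namespaces
        - name: my-app
          image: example.com/my-app:1.0.0
          resources:
            limits:            # container-level resource isolation
              cpu: "500m"
              memory: 256Mi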
I think most infra-adjacent people I’ve worked with who use K8s could accurately describe these abstractions to the level of a Pod, but few could describe what a container actually is.
> It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?
It is not a good thing, no. There is an entire generation of infra folk who have absolutely no clue how computers actually work, and if given an empty bare metal server connected to a LAN with running servers, would be unable to get Linux up and running on the empty server.
I am not against K8s, nor am I against the cloud - I am against people using abstractions without understanding the underlying fundamentals.
The counter to this argument is always something along the lines of, “we build on abstractions to move faster, and build more powerful applications - you don’t need to understand electron flow to use EC2.” And yes, of course there’s a limit; it’s probably somewhere around understanding different CPU cache levels to be well-rounded. However, IME at the lower levels, the assumption that you don’t need to understand something to use it doesn’t hold true. For example, if you don’t understand PN junctions, you’re probably going to struggle to effectively use transistors. Sure, you could know that to turn a silicon BJT transistor on, you need to establish approximately 0.7 VDC between its base and emitter, but you wouldn’t understand why it’s much slower to turn off than to turn on, or why thermal runaway happens, etc.
What I meant by that is that kubernetes resources are generic. "Objects" in the cluster representing arbitrary things. And this makes sense because, it's okay if one doesn't know what cgroups and namespaces are to deploy a container/pod resource. What I'm trying to say is that this kind of arbitrary abstraction is what k8s brought to the table but people keep trying to abstract again on top of that which makes no sense. "Resource" is already generic.
[1]: https://github.com/bjw-s-labs/helm-charts/tree/main
See here for more examples on how people are using this chart:
A few years ago, the startup I worked at folded - just as the new CTO's mandate to move everything to K8s with Helm was coming into effect. Having to scramble for a new job sucked of course, but in retrospect, I honestly have good feelings associated with the whole debacle: A) I learned a lot about Helm, B) I no longer needed to work with Helm, and C) I'm now quite sure that I don't want to be part of any engineering org that makes the decision to use it.
This is not exactly a criticism of these technologies, but simply me discovering that I'm simply utterly incompatible with it. Whether it's a failing with the Cloud Native Stack, or a personal failing of mine, it doesn't matter - everyone's better off when I stay far away from it.
(Not all of them were written in a sane manner, but that's just how it goes)
At Dayjob in the past, we've debugged various Helm issues caused by the internal sprig library used. We fear updating Argo CD and Helm for what surprises are in store for us and we're starting to adopt the rendered manifests pattern for greater visibility to catch such changes.
14 more comments available on Hacker News