
Kubernetes Ingress Nginx is retiring

210 points
165 comments

Mood: thoughtful

Sentiment: mixed

Category: tech

Key topics: Kubernetes, Ingress Nginx, Cloud Native

Debate intensity: 60/100

The Kubernetes Ingress Nginx project is being retired, prompting discussions about its impact and alternatives.

Snapshot generated from the HN discussion

Discussion Activity: Very active discussion

First comment: 4h after posting

Peak period: 152 comments (Day 1)

Avg / period: 80 comments

Comment distribution: 160 data points (based on 160 loaded comments)

Key moments

  1. Story posted: 11/13/2025, 10:20:57 PM (5d ago)
  2. First comment: 11/14/2025, 2:50:09 AM (4h after posting)
  3. Peak activity: 152 comments in Day 1 (the hottest window of the conversation)
  4. Latest activity: 11/15/2025, 2:41:31 PM (3d ago)


Discussion (165 comments)
Showing 160 comments of 165
andix
5d ago
2 replies
Does anyone know good resources on how to migrate and which gateway controllers are suitable replacements?

Ingresses with custom nginx attributes might be tricky to migrate.
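
For readers who haven't run it, here is a minimal sketch of the kind of Ingress the comment means, using real ingress-nginx annotations but a hypothetical app name and host. The raw configuration snippet is exactly the sort of nginx-specific attribute with no one-to-one equivalent in other controllers:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                   # hypothetical
  annotations:
    # plain tuning knob; most controllers have some equivalent
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    # raw nginx config injected into the generated nginx.conf --
    # this is the part that is tricky to migrate
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com                        # hypothetical
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080
```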

seabombs
5d ago
2 replies
I've been using Envoy Gateway in my homelab and have found it to be good for my modest needs (single node k3s cluster running on an old PC). I needed to configure the underlying EnvoyProxy so that it would listen on specific IPs provided by MetalLB, and their docs were good enough to find my way through that.

https://gateway.envoyproxy.io/

imcritic
5d ago
4 replies
But Envoy configs are unreadable abominations, so why would you choose it? How did you even learn to configure it? Its documentation is so confusing.
trenchpilgrim
5d ago
1 reply
Envoy is designed with the intent that a machine is dynamically reconfiguring it at runtime. It is not designed to be configured directly by a human.

The tradeoff is that you can do truly zero downtime configuration changes. Granted, this is important to a very small number of companies, but if it's important to you, Envoy is great.

imcritic
4d ago
1 reply
This makes no sense.

Where would one take a machine to dynamically reconfigure envoy? How would one configure it?

> The tradeoff is that you can do truly zero downtime configuration changes.

So... just like with nginx?

trenchpilgrim
3d ago
> Where would one take a machine to dynamically reconfigure envoy? How would one configure it?

When I worked in this area a while back: Ingress controllers and Ingress / a custom type we made because Ingress was too limited.

We didn't use nginx because it would drop requests and mess up connections during certain config reloads. With a custom controller, Envoy never dropped a connection or request we didn't explicitly tell it to (excepting network reliability of course). For context a slow day for us was many billions of requests.

eddythompson80
5d ago
You don't. Envoy is great if you programmatically configure it, or if you have very small and simple configs. It can't be maintained by a human. But if you have tools that generate it programmatically based on other config, you can read through it.
andix
4d ago
Configuration of the proxy is done by the k8s Gateway controller, exactly like for the ingress controller. You just use standardized k8s CRDs to configure it.

The gateway/ingress controller takes the k8s resources and configures the proxy server accordingly. In some cases additional config snippets specific to the proxy (nginx, envoy, etc) are required, but it's usually just a few lines.

Which http server is used is not that important (the most common ones are all fine), it's more about how well the integration to k8s works.
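
As a rough sketch of those standardized CRDs (Gateway API v1; the class name and backend names are hypothetical), the controller watches these objects and renders the proxy configuration, whichever proxy that happens to be:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway                  # hypothetical
spec:
  gatewayClassName: envoy            # provided by whichever controller you install
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app                       # hypothetical
spec:
  parentRefs:
  - name: web-gateway                # attach the route to the Gateway above
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-app                   # the backing Service
      port: 8080
```

Swapping controllers then mostly means swapping the GatewayClass and controller installation, while the Gateway and HTTPRoute objects stay the same.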

arccy
4d ago
it's pretty straightforward if you think about it in terms of the networking layers involved in processing a request though
mzaccari
5d ago
^ I second Envoy Gateway! It has support for HTTPRoute like all the others, but also TCPRoute, UDPRoute, TLSRoute, GRPCRoute backed by Envoy and they have worked great for me on EKS clusters I manage for work. The migration from Ingress API to Gateway API hasn’t been bad, as you can have both running side-by-side (just not using the same LB) and the EnvoyPatchPolicy has been great for making advanced changes for things not covered by the manifests
PhilippGille
5d ago
1 reply
Literally the second link in the article is "migrating to API Gateway" and points to https://gateway-api.sigs.k8s.io/guides/

Which has this section about migration: https://gateway-api.sigs.k8s.io/guides/migrating-from-ingres...

And this list of Gateway controllers: https://gateway-api.sigs.k8s.io/implementations/

andix
4d ago
I was looking for something more opinionated and hands-on. This is reference documentation.
all_usernames
5d ago
1 reply
What's the security back-story here?
bennysaurus
5d ago
1 reply
Only a single maintainer for years, and it's fallen now to best-effort.
withinboredom
5d ago
1 reply
I (and others) have offered to create PRs for open issues; "just point us in the right direction," we asked. The maintainer always came back with "I fixed it for you".

The maintainer had plenty of people who wanted to help, but never spent the time to teach them.

mawadev
5d ago
1 reply
Are you blaming the maintainer? lol, lmao even
withinboredom
5d ago
Not exactly blaming them. But saying opportunities were missed, for sure.
SlavikCA
5d ago
4 replies
Reading a few blogs and forums about this today, people are talking about switching to the Gateway API (from the "legacy" Ingress).

And I do not understand it:

1. Ingress still works; it's not deprecated.

2. There are a lot of controllers that support both the Gateway API and Ingress (for example, Traefik).

So how is the Ingress Nginx retirement related to, or how does it affect, the switch to the Gateway API?

pronik
4d ago
Ingress as defined by Kubernetes is really restricted if you need to do rewriting, redirecting and basically all the stuff we've been doing in pre-Kubernetes times. Nginx Ingress Controller worked around that by supporting a ton of annotations which basically were ingested into nginx.conf, to the point that any time you had a need everyone just assumed you were using nginx-ingress and recommended an annotation or two.

In a way, it was a necessity, since Ingress was all you'd get, and without stuff like rewriting, doing gradual Kubernetes migrations would have been much more difficult, if not impossible. For that reason, every ingress controller tried to go a similar but distinctly different way, with vastly incompatible elements, failing to gain traction. In a way I'm thankful they didn't try to reimplement nginx annotations (apart from one attempt, I think), since we would have been stuck with those for the foreseeable future.

Gateway API is the next-gen standardized thing to do ingress, pluggable and upgradable without being bound to a Kubernetes version. It delivers _some_ of the most requested features for Ingress, extending on the ingress concept quite a bit. While there is also quite a bit of mental overhead and concepts only really needed by a handful of people, just getting everyone to use one concept is a big big win for the community.

Ingress might not be deprecated, but in a way it was late to the party back in the day (OpenShift still has Route objects from that era because ingress was missing) and has somewhat overstayed its welcome. You can redefine Ingress in terms of Gateway API and this is probably what all the implementers will do.
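
To make the contrast concrete, here is the classic ingress-nginx rewrite (the annotation and capture-group pattern come from the ingress-nginx docs) next to the same intent expressed as a typed Gateway API filter; names and ports are hypothetical:

```yaml
# ingress-nginx: rewrite via an annotation plus a regex capture group
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-app
            port:
              number: 8080
---
# Gateway API: the same intent as a standardized URLRewrite filter
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
  - name: web-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: my-app
      port: 8080
```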

nunez
5d ago
1) ingress still works but is on the path to deprecation. It's a super popular API, so this process will take a lot of time. That's why service meshes have been moving to Gateway API. Retiring ingress-nginx, the most popular ingress controller, is a very loud warning shot.

2) see (1).

stackskipton
5d ago
It doesn't, but the Kubernetes team was kind of like "Hey, while you're switching, maybe switch away from the Ingress API?"
cheriot
4d ago
I think it's that Gateway is new (relatively speaking) so there's a lot of places it's a good fit that haven't adopted it yet.
etchalon
5d ago
1 reply
Why would you kill a thing that works so well, is so flexible, and does not have an equal yet?

I do not understand.

seneca
5d ago
1 reply
There are no maintainers. It was maintained by one engineer for years, he stepped down, and F5 (who bought nginx) don't want to contribute since they have a competitor.
mt42or
5d ago
The project is still active, even if it's not pushing big new features.
seneca
5d ago
2 replies
Ingress nginx was the default ingress for pretty much the entire life of k8s. F5 bought nginx and made nginx ingress, which I've never met a user of.

Sad to see such a core component die, but I guess now everyone has to migrate to gateways.

elric
5d ago
1 reply
F5 bought nginx? Isn't (wasn't?) nginx a simple open source web server?
vbezhenar
5d ago
Nginx Inc was founded by Nginx developers in 2011. They were selling commercial support. They were bought by F5 in 2019 for $670M.
wg0
5d ago
And see how confusing the naming is.

ingress-nginx. nginx-ingress.

hombre_fatal
5d ago
1 reply
Another triumph for open source: popular project probably used by many megacorps only propped up by the weekend charity of a couple unpaid suckers over the years.
preisschild
4d ago
Yes, exactly. And not only megacorps are to blame, smaller businesses use it too without contributing anything back and then their devs complain here...
3ln00b
5d ago
18 replies
How do you people even keep up with this? I'm going back to cybersecurity after trying DevOps for a year, it's not for me. I miss my sysadmin days, things were simple back then and worked. Maybe I'm just getting old and my cognitive abilities are declining. It seems to me that the current tech scene doesn't reward simple.
gryfft
5d ago
3 replies
> It seems to me that the current tech scene doesn't reward simple.

A deal with the devil was made. The C suite gets to tell a story that k8s practices let you suck every penny out of the compute you already paid for. Modern devs get to do constant busy work adding complexity everywhere, creating job security and opportunities to use fun new toys. "Here's how we're using AI to right size our pods! Never mind the actual costs and reliability compared to traditional infrastructure, we only ever need to talk about the happy path/best case scenarios."

dangus
5d ago
4 replies
This just seems like sensationalist nonsense spoken by someone who hasn’t done a second of Ops work.

Kubernetes is incredibly reliable compared to traditional infrastructure. It eliminates a ton of the configuration management dependency hellscape and inconsistent application deployments that traditional infrastructure entails.

Immutable containers provide a major benefit to development velocity and deployment reliability. They are far faster to pull and start than deploying to VMs, which end up needing some kind of annoying deployment pipeline involving building images or having some kind of complex and failure-prone deployment system.

Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)

And who is really working for a company that has a small deployment? I’d say that most medium-sized tech companies can easily justify the complexity of running a kubernetes cluster.

Networking can be complex with Kubernetes, but it’s only as complex as your service architecture.

These days there are more solutions than ever that remove a lot of the management burden but leave you with all the benefits of having a cluster, e.g., Talos Linux.

vasco
5d ago
1 reply
It was clear they didn't know what they were talking about when they claimed the main reason for Kubernetes was to save money. Kubernetes is just easy to complain about.
imp0cat
5d ago
Exactly, if anything, Kubernetes will require a lot more money.
steve1977
5d ago
1 reply
The problem is that some Kubernetes features would have a positive impact on development velocity in theory. However, in my experience (25 years of ops and devops), the cost of keeping up often eats those benefits and results in a net negative.

This is not always a problem of Kubernetes itself though, but of teams always chasing after the latest shiny thing.

mlrtime
4d ago
2 replies
Also an old man from the VMS/SPARC days, I'm still doing "devops" and just deployed a realtime streaming webapp tool for our team to k8s pods in a few days. It was incredibly easy and I get so much for free.

Automatically created for me: Ingress, TLS, domain name, deployment strategy, dev/prod environments through Helm, single-repo configuration for source code, reproducible dev/prod build+run (Docker)...

If a company sets this up correctly developers can create tooling incredibly fast without any tickets from a core infra team. It's all stable and very performant.

I'd never go back to the old way of deploying applications after seeing it work well.

steve1977
4d ago
2 replies
> just deployed a realtime streaming webapp tool for our team in a few days to k8s pods.

How long would you estimate that deployment would have taken with a more "classic" approach? (e.g. deploying to a Java application server)

mlrtime
4d ago
Too open-ended a question, but in the 'old days' it would be a ticket for a new VM, then back and forth between dev and infra to set up the host, deploy the application, etc...
trenchpilgrim
4d ago
If you had a really good team, hours. At most companies, days to weeks. At worst, months.

With a well managed Kubernetes, around 5-15 minutes. Not a theoretical time, I have personally had thousands of devs launch that quickly on clusters I ran.

throw_away_341
4d ago
1 reply
> If a company sets this up correctly developers can create tooling incredibly fast

I find that it has its place in companies with lots of micro services. But I think that because it is made "easy" it encourages unnecessary fragmentation and one ends up with a distributed monolith.

In my opinion, unless you actually have separate products or a large engineering team, a monolith is the way to go. And in that case you get far with a standard CI/CD pipeline and "old school" deployments

But of course I will never voice my opinion in my current company to avoid the "boomer" comments behind my back. I want to stay employable and am happy to waste company resources to pad my resume. If the CTO doesn't care about reducing complexity and costs, why should I?

mlrtime
4d ago
1 reply
In my example it was a simple CRUD app, no microservice. It could just as easy been ran by scping the entire dev dir to a vm and ensuring a port is open. But I wouldn't get many of the things I described above and I don't need to monitor it at all.

Also a release is just a PR merge + helm upgrade.

throw_away_341
4d ago
1 reply
You had PR merge and automatic release before Kubernetes too, and it's not that hard to configure.

If one has a small project where a few seconds of downtime is acceptable, you can just set up a simple GitHub Action triggered on commit/merge. It can scp the file to the server and run "systemctl restart" automatically. I have used this approach for small side projects (even with external paying users).

And if you need a "no downtime" release, a proper CI/CD pipeline can handle a blue/green switch. I don't think you would spend much more time setting that up than setting up Kubernetes from scratch, unless you have extensive experience with Kubernetes.
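
A minimal sketch of that simple GitHub Action approach, assuming a GitHub-hosted repo, an SSH deploy key stored as a repository secret, and a systemd unit named myapp (all names are hypothetical, and the build step is elided):

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build the artifact into ./dist/myapp here ...
      - name: Copy the artifact and restart the service
        env:
          SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}   # hypothetical secret name
        run: |
          install -m 600 /dev/null key && printf '%s\n' "$SSH_KEY" > key
          scp -i key -o StrictHostKeyChecking=accept-new ./dist/myapp deploy@app.example.com:/opt/myapp/
          ssh -i key -o StrictHostKeyChecking=accept-new deploy@app.example.com 'sudo systemctl restart myapp'
```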

roryirvine
4d ago
You're not expecting them to set k8s up from scratch, just as you'd not expect the dev team to set up the datacentre power or networking from scratch for the server in your "scp and systemctl restart" scenario.

Typically, a k8s installation is looked after by a cross-functional Platform team, who look after not just the k8s cluster but also the gateways, service mesh, secrets management, observability and other common services, shared container images, CI/CD tooling, as well as platform security and governance.

These platform services then get consumed by the feature dev teams (of which there could be anywhere between half a dozen and multiple thousands). To deploy a new app, those dev teams need only create a repo and a helm chart, and the platform's self-service tooling will do the rest automatically. It really shouldn't take more than a few minutes for a team with some experience.

Yes, it's optimised for a very different scale of operation than a single server at a managed hosting provider. But there are plenty of situations in which that scale is required, and it's there that k8s shines.
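
As a sketch of how little a feature team has to supply in that model: a skeletal chart plus a handful of values, with TLS, DNS, gateway wiring and observability coming from the platform's shared templates. Every name here is hypothetical:

```yaml
# Chart.yaml
apiVersion: v2
name: my-app
version: 0.1.0
appVersion: "1.0.0"
---
# values.yaml -- the few knobs the dev team actually sets
image:
  repository: registry.example.com/team/my-app
  tag: "1.0.0"
replicaCount: 2
service:
  port: 8080
ingress:
  host: my-app.dev.example.com
```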

KaiserPro
4d ago
> Kubernetes is incredibly reliable compared to traditional infrastructure.

The fuck it is.

> It eliminates a ton of the configuration management

Have you used k8s recently? to get it secure and sane is a lot of work. Even if you buy in sensible defaults, its a huge amount of work to get a safe, low blast radius deployment pipeline working reliably

Like if you want vaguely secure secrets, that's an add-on. If you want decent, non-stupid networking, that's an add-on. Everything is split-horizon DNS.

That's before we get to state management: trying to play the PVC lottery is not fun, which means it's easier to use a clustered filesystem. That's how fucked it is.

> there’s a lot of complexity to configuration management on traditional VMs

Not really. You need at least terraform to spin up your k8s cluster in the first place; it's not that much harder to extend it to use real machines instead.

It is more expensive, unless you're binpacking with docker.

> cough…Chef

Chef can also fuck off. Although facebook use it on something like 8 million servers, somehow.

> Networking can be complex with Kubernetes

try making it use ipv6.

Look, what the industry needs is a simple orchestration layer that places docker containers according to a DAG. You can have dependencies, and, if you want, a plugin system to allow you to paint yourself into a corner.

Have some hooks so we can trigger actions based on backlog

Leave the networking to the network, because DHCP and DNS are a solved problem.

What I'm describing is basically ECS, but without the horrid config language.

tbrownaw
4d ago
> Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)

I have a small application running under single-node k3s. It's slightly (but not hugely) easier to work with than the prior version that I had running under IIS.

hexbin010
5d ago
Mhm! And Google just sit there laughing at everyone. Mission accomplished
Glamklo
5d ago
We scale our infra from 20 to 200 servers with k8s out of the box. All pod logs are shipped directly to central services and pods heal themselves constantly.

What we have today is 1000x more stable than all the custom VMs we had before.

And cheaper.

We are not idiots running k8s because some C-suite said so...

solatic
5d ago
3 replies
It's exactly why taking a trip through the ops/infra side is so important for people: you learn why LTS-style engineering matters. You learn to pick technologies that are stable, reliable, and well supported by a large enough group of people who are conservative in their approach, for anything foundational, because the alternative is migration pain again and again.
j-krieger
5d ago
1 reply
I also feel like we as an industry should steer towards a state of "doneness" for OSS solutions. As long as it works, it's fine to keep using technologies that are only sparsely maintained.
cyberpunk
5d ago
1 reply
Ingress-Nginx is commonly internet facing though; I think everyone wants at least base image and ssl upgrades on that component…
rcxdude
4d ago
In which case it's even more important that the updates are not a huge amount of work.
toredash
5d ago
2 replies
I often find myself trying to tell people that KISS is a good thing. If something is somewhat complex it will be really complex after a few years and a few rotations of personnel.
hoherd
4d ago
Another great one is PLOS, the Principle of Least Astonishment. Stable and reliable software and systems should avoid astonishing surprises.

https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...

friendzis
5d ago
Quite often the tradeoff is not between complexity (to cover a bunch of different cases) and simplicity (do one thing simply), but rather where that complexity lies. Do you have dependency fanout? It probably makes sense to shove all that complexity into the central component and manage it centrally. Otherwise it probably makes sense to make all the components a bit more complex than they could be, but still manageable.
oblio
4d ago
At least in the golden days of job hopping, not migrating was a way to hobble that job hopping and decrease your income growth prospects. Now that engineers are staying put more it's likely we'll start seeing what you're saying.

Though now AI slop is upon us so we'll probably be even worse off for a while.

Aeolun
5d ago
1 reply
I like devops. It means you get to get ahead of all the issues that you could potentially find in cybersecurity. Sure it's complicated, but at least you'll never be bored. I think the hardest part is that you always feel like you don't have enough time to do everything you need to.
dangus
5d ago
2 replies
DevOps teams are always running slightly behind and rarely getting ahead of technical debt because they are treated as cost centers by the business (perpetually understaffed) and as “last minute complicated requests that sound simple enough” and “oops our requirements changed” dumping grounds for engineering teams.

Plus, the ops side has a lot of challenges that can really be a different beast compared to the application side. The breadth of knowledge needed for the job is staggering and yet you also need depth in terms of knowing how operating systems and networks work.

immibis
5d ago
1 reply
> DevOps teams are always running slightly behind and rarely getting ahead of technical debt because they are treated as cost centers by the business

This is one of those explanations that sounds reasonable but when you actually experience it you realize the explanation makes no sense.

If you're "running behind of technical debt" you'll always feel understaffed no matter how much staffing you have. And adding more staffing will make your tech debt worse.

Plus, tech debt doesn't really exist. It's a metaphor for all the little annoyances in your system that add up, but the metaphor makes it sound like it's the problem of management or accounting to solve when it's actually created by developers and solved by developers.

dangus
3d ago
Hmm, no, you were changing the meaning of my comment.

> no matter how much staffing you have

That’s not what I said. I said that there tends to be not enough staff. Businesses are more willing to hire software engineers (shipping features = revenue) than hiring DevOps people (keeping the lights on).

> tech debt doesn’t really exist.

Well that’s news to me. I’m pretty sure it exists. It has an entire Wikipedia article, and that article doesn’t agree with your definition.

And yes, more staff would help. Hiring me literally helped my organization fix its lack of monitoring and alerting because nobody had time to address the problem during my team’s day to day responsibilities.

Your assertion that it's not management's fault is absurd. Management is by definition the bearer of ultimate responsibility. Every problem in any business is something where the buck stops at management.

If I ship something with long term problems because management told me to work faster and meet the deadline, that is directly management's fault. Even me shipping something bad of my own volition is management's fault indirectly: they hired the wrong talent (me), or maybe they assigned me to the wrong project where my expertise wasn't good enough, or they misjudged risks and didn't leave enough contingency buffer or didn't make a plan for what to do if we fail.

The way businesses view humans is as machine-like resources of labor (Human Resources); they don't view you as an individual with emotions and thoughts and feelings. When they hire someone they have quantitative measures surrounding that person: how likely are they to perform well, burn out and quit, steal from the company, get run over by a bus, etc. The corporate system actually dictates that management is responsible for the way it arranges and commands its human machines.

Aeolun
4d ago
1 reply
> Plus, the ops side has a lot of challenges that can really be a different beast compared to the application side.

That’s why we aim to call it DevOps, so that you can take all that into account from the start of the project?

dangus
3d ago
In practice, I’ve never seen a “DevOps” or “SRE” team that wasn’t just Ops.

Almost every company works with the “throw stuff over the wall to DevOps” mentality. The word “DevOps” is meaningless.

makeitdouble
5d ago
8 replies
> things were simple back then

If you were working in the orgs targeted by k8s, I think it was generally more of a mess. Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.

Now, k8s has engulfed a lot more than the primary target, but smaller shops go for it because they're also hoping to hit it big someday, I guess. Otherwise, there will be far easier solutions at lower scale.

dangus
5d ago
3 replies
Even after the bash script era, I don't think the configuration management landscape gets enough criticism for how bad it is. It never stopped feeling hacked together and unreliable to me.

E.g., Chef Software, especially after its acquisition, is just a dumpster fire of weird anti-patterns and seemingly incomplete, buggy implementations.

Ansible is more of the gold standard but I actually moved to Chef to gain a little more capability. But now I hate both of them.

When I just threw this all in the trash in my HomeLab and went to containerization it was a major breath of fresh air and resulted in getting a lot of time back.

For organizations, one of the best parts about Kubernetes is that it's so agnostic that you can drop in replacements with a level of ease that is just about unheard of in the Ops world.

If you are a small shop you can just start with something simpler and more manageable like k3s or Talos Linux and basically get all the benefits without the full blown k8s management burden.

Would it be simpler to use plain Docker, Docker Swarm, Portainer, something like that? Yeah, but the amount of effort saved versus your ability to adapt in the future seems to favor just choosing Kubernetes as a default option.

bostik
5d ago
1 reply
To quote an ex coworker: all configuration management systems are broken, in equal measure - just in different fashion. They are all trying to shoehorn fundamentally brittle, complex and often mutually exclusive goals behind a single facade.

If you are in the position to pick a config management system, the best you can do is to chart out your current and known upcoming use cases. Then choose the tool that sucks the least for your particular needs.

And three years down the line, pray that you made the right choice.

Yes, kube is hideously complex. Yes, it comes with enormous selection of footguns. But what it does do well, is to allow decoupling host behaviour from service/container behaviour more than 98% of the time. Combined with immutable infrastructure, it is possible to isolate host configuration management to the image pre-bake stage. Leave just the absolute minimum of post-launch config to the boot/provisioning logic, and you have at least a hope of running something solid.

Distributed systems are inherently complex. And the fundamental truth is that inherent complexity can never be eliminated, only moved around.

jitl
5d ago
With EKS and cloud-init these days I don't find any need to even bake AMIs anymore. Scaling / autoscaling is so easy now with Karpenter creating/destroying nodes to fit current demand. I think if you use Kubernetes in a very dumb way, to just run X copies of Y container behind an ALB with no funny business, it just works.
jabl
5d ago
2 replies
I have to say I hate ansible too (and puppet and cfengine that I have previously used). But it's unclear to me how containers fix the problems ansible solves.

So instead of an ansible playbook/role that installs, say, nginx from the distro package repository, and then pushes some specific configuration, I have a dockerfile that does the same thing? Woohoo?

fragmede
4d ago
You use docker to create a thing on your laptop that you know is good and works, then you send the Dockerfile into the system and that thing is a static blob of bits. Ansible/puppet/chef/cfengine modify a live thing from one state to another. Sure, you can use qcow VM disk images and VM snapshots to achieve the same thing, but it's a lot more cumbersome and feels slow and yuckier, and no one packaged it up into a neat little tool that got popular (which is to say, Vagrant is awesome but slow, so docker won out).
dangus
4d ago
I think the major important difference is that a dockerfile can’t really break after you get your deployment artifact, whereas configuration management can fail on your underlying nodes if they aren’t crafted perfectly and cause post-deployment failures.

Other issues, like secrets and environment management, are things I find way more annoying with a tool like Chef.

Try doing a chef policyfile bootstrap that gets some secrets using its own built in chef vault. You can’t do it without wild workarounds because the node isn’t granted access to secrets until it becomes a registered node, and it doesn’t register until a chef client run completes successfully. It’s a really dumb catch-22 design.

The solution is “just use a big 3 cloud secrets vault or Hashicorp vault” and that’s fine but it’s really strange that the tool can’t handle something so simple on its own.

reissbaker
5d ago
Yup. K8s is a bit of a pain to keep up with, but Chef and even Ansible are much more painful for other reasons once you have more than a handful of nodes to manage.

It's also basically a standard API that every cloud provider is forced to implement, meaning it's really easy to onboard new compute from almost anyone. Each K8s cloud provider has its own little quirks, but it's much simpler than the massive sea of difference that each cloud's unique API for VM management was (and the tools to paper over that were generally very leaky abstractions in the pre-K8s world).

mrweasel
5d ago
6 replies
You can manage and reason about ~2000+ servers without Kubernetes, even with a relatively small team, say about 100 - 150, depending on what kind of business you're in. I'd recommend either Puppet, Ansible (with AWX) and/or Ubuntu Landscape (assuming that you're in the Ubuntu ecosystem).

Kubernetes is for rather special case environments. I am coming around to the idea of using Kubernetes more, but I still think that if you're not provisioning bare-metal worker nodes, then don't bother with Kubernetes.

The problem is that Kubernetes provides orchestration which is missing, or at least limited, in the VM and bare-metal world, so I can understand reaching for Kubernetes, because it is providing a relatively uniform interface for your infrastructure. It just comes at the cost of additional complexity.

Generally speaking, I think people need to be more comfortable with building packages for their operating system of choice and installing applications that way. Then it's mostly configuration that needs to be pushed, and that simplifies things somewhat.

jitl
5d ago
1 reply
IMO if you are on a cloud like AWS and using a config management system for mutable infra like Puppet, you are taking on unnecessary complexity and living in the dark ages.

> Generally speaking, I think people need to be more comfortable with building packages for their operating system of choice and installing applications that way. Then it's mostly configuration that needs

Why? It's 2025, docker / containers make life so easy.

immibis
5d ago
1 reply
because programmers should be able to use computers
mlrtime
4d ago
1 reply
No, they should be able to take business requirements and create performant reliable applications.

They should understand CS/CE core fundamentals but they don't need to know how to admin.

immibis
4d ago
1 reply
You might not make it your day job but you should definitely understand the fundamentals of how your whole stack works. Everything from transistors to eyeballs.
jitl
4d ago
1 reply
Your original suggestion didn’t sound didactic in nature. I did enough deploying Perl apps that way to consider it a huge waste of time. No thanks!
immibis
4d ago
Automate it with a shell script
Grimburger
5d ago
4 replies
> You can manage and reason about ~2000+ servers without Kubernetes, even with a relatively small team, say about 100 - 150

Oh wow, so uh... I'm managing around 1000 nodes over 6 clusters, alone. There's others able to handle things when I'm not around or on leave and meticulously updated docs for them to do so but in general am the only one touching our infra.

I also do dev work the other half of the week for our company.

Ask your boss if he needs a hand :)

mrweasel
5d ago
That is actually very impressive :-) We have a small team to just handle the databases, but that's ~200 MariaDB and Oracle instances, and another to do networking.

How many different applications/services are you running?

In any case, absolutely amazing what one person can manage with modern infrastructure.

mlrtime
4d ago
Managed k8s (GKE/EKS) or self-administered k8s? If the former, no problem. If you're building your own clusters on raw cloud or bare metal compute, I'm skeptical about doing it solo. Kudos either way!
doubled112
4d ago
This sounds super familiar.

At one job I was the only IT person and we had ~250 plain boring VMs on some bare metal Linux/KVM hosts. No config management. No Kubernetes. I fixed that quickly. There was one other guy capable of taking a look at most of it.

I was also doing the software builds and client releases, client support, writing the documentation for the software, and fixing that software.

I suspect we would have had no problem scaling up with some better tooling. Imagine a team of 150? When people tell me things like that, it sounds more like the solution isn't much of a solution at all.

geodel
4d ago
> Ask your boss if he needs a hand :)

Hehe, you lack skill in empire building. You know, "leading a highly motivated team of 50+ devops engineers". The kind of talent that postpones patching until you are back from vacation. Or deploys a config change that needs at least two rollbacks before finally going in.

mlrtime
4d ago
2 replies
>for their operating system of choice...

Have you been in a company with ~2000+ servers where devs install their apps on these OSs and build packages that refuse to upgrade to the latest OS? I mean, even with LTS, a 20-year-old company may still have 3-4 LTS OSs because that last 5% refuse to or cannot upgrade their application to work with the new OS. Sure, you could VM the entire thing, but Docker + K8s removes that completely.

steve1977
4d ago
2 replies
If developers don't maintain their apps, it doesn't really matter that much how and where you deploy them. With Kubernetes, you just end up with unmaintained Docker images that potentially contain a ton of vulnerabilities.
conor-
4d ago
But with a containerized app image you can reduce the blast radius of the poorly maintained app compared to running it bare metal on a host with other services. Also you can still maintain base images to patch/try to reduce vulnerability surfaces
jitl
4d ago
Yeah but at least the fucked-ness is contained in the app layer and the infra layer can live in a happy and optimized modern world.

Also, intricate linkage between an app and the host OS means there's more work involved in upgrading.

KaiserPro
4d ago
> Have you been in a company with ~2000+ servers where devs install their apps on these OSs and building packages that refuse to upgrade to the latest OS

That's what LD_PRELOAD is for. But real talk, if you have 2k servers and you can't package your apps to run on your OS, then you need a different platform team.

We managed 36k servers using fucking salt and perl. We were packaging nvidia drivers and all sorts. One system that everyone used still needed the athena widget set.

But the main point is, if you're using old packages, then you're gonna get hacked. You either need to kill that app, fire that developer or virtualise it and fill out the risk register, and do monthly recovery tests.

Docker allows you to pack in CVEs like no tomorrow. So sure, k8s can let you do that, and given that hardly anyone properly enforces namespace isolation (so they can have a service mesh), you can still steal loads of data from a compromised container.

Glamklo
5d ago
Kubernetes is not for 'rather special cases'.

The whole ecosystem of kubernetes makes pod management so much easier. Logs get shipped automatically. Lots of good self service portals and tools are available for the teams so they can do things themselves.

Abstraction layers like Crossplane allow them access to cloud resources in a controlled manner.

ArgoCD alone is a dream.

The easiest way of managing a lot of servers in a high quality low effort fashion is kubernetes and you can do a lot more in IaC than before.

While you are playing around with the Ansible platform and some scripts healing your infrastructure after you wrote your own runbooks, k8s has restarted the pod and it's already running again.

Hikikomori
5d ago
Meanwhile we manage over 1200 instances with multiple kubernetes clusters with a team of 10, including complex mesh networking and everything else the team does. It might be complex but it also gives you so much for free that you don't have to deal with.
vidarh
4d ago
When you say 100-150, are you talking about the whole organisation? Not just devops?

Because 100-150 for the devops would be crazy for a mid-sized system like that.

Unless you're managing Windows servers or something.

PunchyHamster
4d ago
> If you were working in the orgs targeted by k8s, I think it was generally more of a mess. Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.

We have had configuration management systems like Puppet in a mature enough state for over a decade now.

I haven't installed a server manually or "with handmade scripts" in a good 12 years by now.

We have a park of around 100-200 servers, and actually managing the hardware is a tiny part of it.

> Now, k8s has engulfed a lot more than the primary target, but smaller shops go for it because they're also hoping to hit it big someday I guess. Otherwise, there will be far easier solutions at lower scale.

K8s is popular because it gives developers a lot of power to deploy stuff without caring much about the underlying systems and without bothering ops people too much. Cloud-wise there are a bunch of native ways to just run a few containers that don't involve it, but on-prem it is a nice way to get a faster iteration cycle on infrastructure, even if the complexity cost is high.

It is overkill for, I'd imagine, most stuff deployed on K8s, and half of the deployments are probably motivated by resume padding rather than actual need.

Stranger43
5d ago
I think you underestimate what can be done with actual code. The devops industry seems entirely code-averse and prefers an "infrastructure as data" paradigm instead, not even using good, well-tested and well-understood formats like SQL databases or object storage, but leaning towards more fragile formats like YAML.

Yes, the POSIX shell is not a good language, which is why things like Perl, Python, and even PHP or C got widely used. But there is an intermediate layer, with tools like Fabric (https://www.fabfile.org/) solving a lot of the problems of the fully homegrown approach without locking you into the "infrastructure as (manually edited) data" paradigm, which only really works for problems of big scale and low complexity, exactly the opposite of what you see in many enterprise environments.

vidarh
4d ago
I managed 1000+ VMs without k8s, with an orchestrator that was less code than most k8s manifests I've had to work with since.

I fully accept that there are sizes and complexities where k8s is a reasonable choice, and sometimes it's a reasonable choice because it's easier to hire for, but the bar should be a lot higher than what it currently is.

It's a reason why I'm putting together alternatives for those of my clients who want to avoid the complexity.

arkh
4d ago
> Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.

Not even that. One repository I checked this week had some commits whose messages were like "synchronize code with what is on production server". Awesome. And that's not counting the number of hidden ad-hoc cronjobs on multiple servers.

Also as a dev I like having a pool of "compute" where I can decide to start a new project whenever instead of having to ask some OPS team for servers, routing, DNS config.

cess11
5d ago
I've managed a couple of hundred virtual servers on vCenter with Ansible. It was fine. Syslog is your friend.
pbowyer
5d ago
> Otherwise, there will be far easier solutions at lower scale.

Which solutions do you have in mind?

- VPS with software installed on the host

- VPS(s) with Docker (or similar) running containers built on-host

- Server(s) with Docker Swarm running containers in a registry

- Something Kubernetes like k3s?

In a way there's two problems to solve for small organisations (often 1 server per app, but up to say 3): the server, monitoring it and keeping it up to date, and the app(s) running on each server and deploying and updating them. The app side has more solutions, so I'd rather focus on the server side here.

Like the sibling commenter, I strongly dislike the configuration management landscape (with particular dislike of Ansible and maintaining it; my takeaway is never use 3rd party playbooks, always write your own). As these servers are often set up, run for a bit, and then replaced by a new one with the app redeployed to it (easier than an OS upgrade in production), I've gone back to a bash provisioning script, slightly templated config files, and copying them into place. It sucks, but not as much as debugging Ansible does.

cmckn
5d ago
3 replies
The Ingress API has been on ice for like 5 years. The core Kubernetes API doesn't change that much, at least these days. There's an infinite number of (questionable) add-ons you can deploy in your cluster, and I think that's mostly where folks get stuck in the mud.
sph
5d ago
1 reply
> doesn’t change that much

Yet they are retiring a core Ingress that has been around for almost as long as Kubernetes has.

pestaa
5d ago
They are not retiring the API. Nginx Ingress is one of the many projects that implements this API, and you are free to migrate to another implementation.
cesnja
5d ago
1 reply
But the Gateway API has only been generally available for two years now. And the last time I checked, most managed K8S solutions recommend the Ingress API while Gateway support is still experimental.
p_l
5d ago
We also now have multiple full featured Ingress implementations that work better than the old nginx-ingress
yrro
4d ago
And here's me still using OpenShift routes... :)
nunez
5d ago
1 reply
/r/kubernetes had this announcement up about five mins after it dropped at Kubecon. It's a huge deal. So many tutorials and products used ingress-nginx for basic ingress, so them throwing in the towel (but not really) is big news.

That said, (a) the Gateway API supersedes Ingress and provides much more functionality without much more complexity, and (b) NGINX and HAProxy have Gateway controllers.

To generally answer your question, I use HN, /r/devops and /r/kubernetes to stay current. I'm also working on a weekly blog series wherein I'll be doing an overview and quick start guide for every CNCF project in their portfolio. There are hundreds (thousands?) of projects in the collection, so it will keep me busy until I retire, probably :)

locknitpicker
5d ago
1 reply
> /r/kubernetes had this announcement up about five mins after it dropped at Kubecon. It's a huge deal. So many tutorials and products used ingress-nginx for basic ingress, so them throwing in the towel (but not really) is big news.

I was one of those whose first reaction was surprise, because ingress was the most critical and hardest aspect of a kubernetes rollout to implement and get up and running on a vanilla deployment. It's what cloud providers offer out of the box as a major selling point to draw in customers.

But then I browsed through the Gateway API docs, and it is a world of difference. It turns a hard problem that required so many tutorials and products to help anyone get something running into a trivially solvable one. The improvements to the security model alone clearly justify getting rid of ingress.

Change might be inconvenient, but you need change to get rid of pain points.

nunez
4d ago
Gateway works at Layer 4 (TCPRoute, UDPRoute). Massive improvement over Services + service mesh hackery!
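For example, a plain TCP backend can be expressed as a route attached to a TCP listener on the Gateway (TCPRoute currently lives in the v1alpha2 experimental channel; listener and backend names here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: postgres
spec:
  parentRefs:
  - name: web-gateway
    sectionName: tcp          # a TCP listener defined on the Gateway
  rules:
  - backendRefs:
    - name: postgres          # the backing Service
      port: 5432
```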
jitl
5d ago
1 reply
I prefer the current era where I never have to SSH in to debug a node. If a node is misbehaving or even needs a patch, I destroy it. One command, works every time.
secondcoming
5d ago
2 replies
How can you not be interested in what took down your node???
otterley
4d ago
That’s what telemetry services are for. If you have all the logs and metrics from the host, then you can research and construct the story from those. You don’t necessarily need the host to be alive anymore.
jitl
5d ago
Oh, I am interested, but I can't remember the last time I needed SSH to figure out an issue, or needed to fix a node other than by destroying it. Last time it was a silly app deciding to use a host volume on the root partition to cache stuff, using all the disk space. Remediate in the moment by destroying the node; fix it forever by moving the app to a node type with an instance-attached NVMe device and putting the volume there, plus a container that nukes the data volume if it runs out of space.
szszrk
4d ago
1 reply
It's much less of a deal than it seems. Yeah, it is a popular project that has been around for a while, but this is just another day at work. Things evolve, and there are migration paths whether you want to stay with Ingresses or move on...

Kubernetes has been promoting the Gateway API for a while now. It's been GA for 2 years already (while Ingress went GA quite late, 2020 / K8s 1.19?).

Sun-setting ingress-nginx was not exactly a secret.

The whole Ingress API in k8s has been marked in the docs as "frozen" for a while as well. There are no radical steps yet, but it's clear that the Gateway API is something to get interested in.

Meanwhile, Nginx Gateway Fabric [1] (which implements the Gateway API) is there, still uses nginx under the hood, and remains open source. They even have a "migration tool" to convert objects [3].

There are still a few months of support and time to move on to a different controller. Kubernetes still continues support for ingress so if you want to switch and keep using Ingress, there are other controllers [2].

[1] https://gateway-api.sigs.k8s.io/implementations/#nginx-gatew...

[2] https://gateway-api.sigs.k8s.io/implementations/#gateway-con...

[3] https://docs.nginx.com/nginx-gateway-fabric/install/ingress-...

KaiserPro
4d ago
2 replies
> this is just another day at work.

But the point is this: it worked, it does work, and, given developer time, it will continue to work.

I now need to schedule time to test the changes, then adjust the metrics and alerting that we have.

For no gain.

It just feels like Kubernetes is carbon fibre programming.

szszrk
4d ago
I honestly don't get it. As a person who has managed k8s in teams of 1-3 people, it's not that much effort. Things get sunset all the time.
herzzolf
4d ago
> if given developer time continue to work.

Well, that's the root of the problem, no? There's no one who wants to maintain the complex Lua written to make nginx cloud native. They were looking for maintainers for quite some time, with no one stepping up.

And I'm not surprised; their issue tracker was always full of very entitled people, so you would be doing a stressful, thankless job... for what, exactly?

pjmlp
4d ago
We don't. I focus mainly on backend; DevOps happens because in many small teams someone has to take on multiple roles, and I end up taking DevOps responsibilities as well.

One thing that I push for nowadays, after a few scars, is managed platforms.

merb
5d ago
ingress-nginx is older than 5-7 years though. In that time frame you would've needed to update your Linux system, which most often gets hairy as well. The sad thing is just that the replacement is not quite there, and the Gateway API has a lot of drawbacks that might get fixed in the next release (working with cert-manager).
Daviey
5d ago
Not just that, but technologies which took me many months or even years to become an expert at, the latest generation of engineers seem to be able to pick up in weeks. It's scary how fast the world is moving.
wvh
4d ago
I feel the same, especially the feeling old and jaded part, but I disagree that things were easier. Systems such as Kubernetes are not worse than trying to administer a zillion servers and networks by hand in the late '90s (or with tools like Puppet and Ansible a bit later), let alone HA shenanigans; neither are they a magical solution, more of a side-step and necessary evolution of scale.

There is a wild-grow of 80% solved problems in the Kubernetes space though, and especially the DevOps landscape seems to be plagued by half-solutions at the moment.

I think part of the complexity arises from everything being interconnected services instead of simple stand-alone software binaries. Things talking with other things, not necessarily from the same maker or ecosystem.

I don't understand decisions such as these though, retiring de facto standards such as Ingress NGINX. I can't name a single one of our customers at $WORKPLACE that's running something else.

nonameiguess
5d ago
Honestly, a lot of the Hacker News discourse every single time anything having to do with Kubernetes comes up reads like uninformed, annoyed griping from people who have barely or never used it. Kubernetes itself has been around since 2014. ingress-nginx was the original example of how to implement an Ingress controller. Ingress itself is not going away, which seems to be a misconception in a lot of replies to your comment. A lot of tutorials use this because a lot of tutorials simply copied the Kubernetes upstream documentation's own tutorials, which used toy examples of how to do things, including ingress-nginx itself, which was meant to be a toy example of how to implement an Ingress controller.

Nonetheless, it was around a full decade before they finally decided to retire it. It's not like this is something they introduced, advertised as the ideal fit for all production use cases, and then promptly changed their minds. It's been over a decade.

Part of the problem here is the Kubernetes devs not really following their own advice: annotations are supposed to be notes that don't implement functionality, but ingress-nginx allowed you to inject arbitrary configuration with them, which ended up being a terrible idea for the main use case Kubernetes is really meant for, which is an organization running a multi-tenant platform offering application-layer services to other organizations. It is great for that, but Hacker News, with its "everything is either a week-one startup or a solo indie dev" mindset, is blind to it for whatever reason.

Nonetheless, they still kept it alive for over a decade. Hacker News also has the exact wrong idea about who does and should use Kubernetes. It's not FAANGs, which operate at a scale way too big for it and do this kind of thing using in-house tech they develop themselves. Even Google doesn't use it. It's more for the Home Depots and BMWs of the world, organizations which are large-scale but not primarily software companies, running thousands if not millions of applications in different physical locations run by different local teams, but not necessarily serving planet-scale web users. They can deal with changing providers once every ten years. I would invite everyone who thinks this is unmanageable complexity to try dipping their toes into the legal and accounting worlds that Fortune 500s have to deal with. They can handle some complexity.

Tractor8626
5d ago
Cybersecurity is easier? Isn't it all about constantly updating and patching obsolete vulnerable stuff, the most annoying part of ops?
steve1977
5d ago
In my experience, many teams keep up with this by spending a lot of time on keeping up and less time developing the actual product. Which, you probably guessed it, results in products much shittier than what we had 10 or 20 years ago.

But hey, it keeps a lot of people busy, which means it also keeps a lot of managers and consultants and trainers busy.

mmcnl
4d ago
Things weren't simpler. The complexity was simply not visible, because different teams/departments were each doing a small part of what a single team now does with Kubernetes. Yes, for that single team it is more complex. But now it's 1 team that does it all, instead of 5 separate teams responsible for development, storage, networking, disaster recovery, etc.

Kubernetes is a gift.

zzyzxd
4d ago
If your infrastructure can justify the complexity of Kubernetes, keeping up with Kubernetes-native software is extremely easy compared to anything else I have dealt with. I have some horror stories about managing nginx instances on 3 servers with Ansible. To me that's much harder than working with ingress controllers in Kubernetes.

Replacing an ingress controller in Kubernetes is also a well documented practice, with minimal or even zero downtime if you want.

Generally, if your engineering team can reasonably keep things simple, it's good. However, business needs to grow and infrastructure needs to scale out. Sometimes trying too hard to be simple is, in my experience, how things become unmanageably complex.

I find well-engineered complexity to be much more pleasant to work with.

brookritz
4d ago
I once installed some kubernetes based software by following the instructions and watching many unicode/ascii-art animations on the commandline. I've also learned that the 8 in k8s stands for 8 letters: 'ubernete'. I've decided that D4s is not for me.
seized
5d ago
2 replies
It's in beta, but HAProxy has a gateway product:

https://www.haproxy.com/blog/announcing-haproxy-unified-gate...

MrDarcy
5d ago
3 replies
Love haproxy but if we’re shilling projects istio is superior. Multi cluster, hbone, ambient.
nunez
5d ago
Lots more moving pieces though
runiq
5d ago
What is hbone? What is ambient?
Grimburger
5d ago
> istio is superior

It's also eating a significant amount of your compute and memory

PhilippGille
5d ago
There are many Gateway implementations: https://gateway-api.sigs.k8s.io/implementations/
mt42or
5d ago
3 replies
These are required steps, but the timing is bad. It looks like a Google product shutting down. Give people time to move out; 6 months is not enough.
Unroasted6154
5d ago
1 reply
It's not a service shutting down though. It will still work fine for a while, and if there is a critical security patch required, the community might still be able to add it.
mt42or
5d ago
1 reply
No, they are going to forbid people from committing anything to the project, so even security patches will be blocked.
jen20
5d ago
The chance of this not having a fork keeping security updates running is effectively zero.
pronik
5d ago
1 reply
To be fair, this is not the first time we've heard about this: https://github.com/kubernetes/ingress-nginx/issues/13002 has existed since March. However, I also thought that the timeline to a complete project halt would be much longer, considering the prevalence of the nginx ingress controller. It might also mean that InGate is dead, since it's not mentioned in this post and doesn't seem to be close to any kind of stable release.
nielsole
5d ago
1 reply
> InGate development never progressed far enough to create a mature replacement; it will also be retired
pronik
4d ago
I stand corrected. I had a feeling it was dead when I looked at the GitHub repository a couple of weeks back.
preisschild
4d ago
> Let people time to move out, 6 month is not enough.

Did you actually contribute? Either by donations or code? If not, beggars can't be choosers. You are not entitled to free maintenance for open source software you use.

kardianos
5d ago
2 replies
Traefik has nginx annotation compatibility as well, to make it easy to switch.
nielsole
5d ago
The list of supported annotations is quite short though
pronik
4d ago
Nginx annotations are not something we want to keep, though, and there is rarely such a thing as a seamless drop-in replacement. Every change needs to be intensively tested, and nobody in their right mind would replace one ingress controller with another in-place on a cluster of any size. The correct way will be to offer a new ingress class and nudge the cluster users to migrate to that one, one app at a time, which means you could just as well migrate to something completely different, depending on size and effort. Traefik tries very hard to be an option in the Kubernetes space, with some promising concepts, but supporting random annotations is not it.
rastignack
5d ago
2 replies
I have tens of clusters to maintain. Quite an advertisement for ECS!
Crowberry
5d ago
Inadvertently we migrated to ECS just last week
wg0
5d ago
Kubernetes behaves like a JavaScript framework. See what has been happening in React and Svelte over the past few years.

Infrastructure is the underlying fabric and it needs stability and maturity.

wg0
5d ago
6 replies
Kubernetes is never maturing. It keeps moving. An installation from just a year ago will have things that require significant planning to upgrade.

What is missing is an open source orchestrator that has a feature freeze and isn't Nomad or docker swarm.

lemontheme
5d ago
1 reply
Just out of curiosity, what's wrong with either of those two?
j0057
4d ago
1 reply
Docker is not for production. Nomad at scale in practice needs a lot of load-bearing Bash scripts around it: for managing certs, for external DNS, you need Consul for service discovery, Vault for secrets.

At that point, is Nomad still simple? If you're going to take on all of the essential complexity of deploying software at scale, just do it right and use Kubernetes.

Source: running thousands of containers in production.

steeleduncan
4d ago
1 reply
> you need Consul for service discovery

Kubernetes uses etcd for service discovery. It isn't that Nomad does things differently or less simply, it is just that they are more explicit about it.

The real difference is that Kubernetes has a wide array of cloud hosts that hide the complexity from users, whereas Nomad can realistically be self hosted

j0057
4d ago
I'm not saying that Kubernetes isn't complex, I'm saying it's a fallacy to claim that the Hashicorp stack in any way manages to be less complex in practice. All of these moving parts are unavoidable if you want to run software at scale, Kubernetes is just way better engineered than the Hashicorp stack, if only for not depending on dockerd.
jitl
4d ago
Wat, I did an upgrade of my year-old clusters in September and all I did was bump the version number and run terraform apply.
KronisLV
4d ago
> What is missing is an open source orchestrator that has a feature freeze and isn't Nomad or docker swarm.

Running Docker Swarm in production, can't really complain, at least for scales where you need a few steps up from a single node with Docker Compose, but not to the point where you'd need triple digits of nodes. I reckon that's most of the companies out there. The Compose specification is really simple and your ingress can be whatever web server you prefer configured as a reverse proxy.
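
For scale, a minimal sketch of a Swarm stack using the Compose spec, deployed with docker stack deploy (image names and replica counts are hypothetical):

```yaml
# stack.yml
services:
  proxy:
    image: nginx:1.27                 # whatever reverse proxy you prefer as ingress
    ports:
      - "80:80"
    deploy:
      replicas: 2
      update_config:
        order: start-first            # start the new task before stopping the old one
  app:
    image: registry.example.com/my-app:1.0.0
    deploy:
      replicas: 3
```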

throwaway838112
5d ago
hear hear!
blue_cookeh
4d ago
I don't really get this mentality targeting K8s specifically nowadays - perhaps that was true in the early days, but I'm managing several clusters that are all a few years old at this point. Cluster services like Cilium, Traefik, etc. are all managed through ArgoCD the same as our applications... every so often I go through the automated PRs for infra services, check for breaking changes and hit merge. They go to dev/staging/prod as tests pass.

I think services take me literally half an hour a month or so to deal with unless something major has changed, and a major K8s version upgrade where I roll all nodes is a few hours.

If people are deploying clusters and not touching them for a year+ then like any system you're going to end up with endless tech debt that takes "significant planning" to upgrade. I wouldn't do a distro upgrade between Ubuntu LTS releases without expecting a lot of work, in fact I'd probably just rebuild the server(s) using tool of choice.

PunchyHamster
4d ago
That entirely depends on which version change you hit.

But I'd love an LTS release chain that keeps the config the same for at least 2-3 years.

stackedinserter
5d ago
It wasn't the most loved part of k8s, to say the least.
vbezhenar
5d ago
When I was choosing an ingress controller a few years ago, it was the most popular ingress controller by far, according to various polls. As I didn't have any specific requirements, I chose it and it worked for me. Over the years I've used a few proprietary annotations, so migrating away is going to be a bit of a pain. Not awesome news.
figassis
5d ago
This is terrible. Of all things k8s, ingress was the part I just did not want to have to mess with. It just worked and was stable; this Gateway thing is completely unnecessary. And it seems to me that ingress-nginx retiring is just because people were pushing for the Gateway so much that they threw in the towel. Infra is not React; people need to leave it alone.
yahoozoo
5d ago
Great, another deprecation to address in my EKS clusters :(
jackhalford
5d ago
I don’t think this is the https://xkcd.com/2347/ of the ops world? People will usually use the ingress controller of their cloud provider. I’ve been using the tailscale ingresses for tailscale funnel. But the transition from ingress to gateway api is seeming to take forever so I’m just running a caddy pod with a static config until the dust settles.
sleazebreeze
5d ago
RIP, end of an era. Thank you everyone who worked on this, it was an extraordinarily useful and reliable project.

5 more comments available on Hacker News

ID: 45921431 | Type: story | Last synced: 11/16/2025, 9:42:57 PM
