Automatic K8s Pod Placement to Match External Service Zones
Posted 3 months ago · Active 3 months ago
Source: github.com · Tech · story
Key topics: Kubernetes, Cloud Computing, Network Topology
The post introduces a tool for automatic Kubernetes pod placement to match external service zones, sparking a discussion on its necessity, potential issues, and alternative solutions.
Snapshot generated from the HN discussion
Discussion activity: very active. 46 comments loaded; peak of 27 comments in the 156-168h window; average of 9.2 comments per period.
Key moments
- Story posted: Oct 8, 2025 at 1:21 AM EDT (3 months ago)
- First comment: Oct 8, 2025 at 1:21 AM EDT (0s after posting)
- Peak activity: 27 comments in the 156-168h window
- Latest activity: Oct 16, 2025 at 8:50 AM EDT (3 months ago)
ID: 45512351 · Type: story · Last synced: 11/20/2025, 5:27:03 PM
I wanted to share something I've been working on to solve a gap in Kubernetes: its scheduler has no awareness of the network topology of the external services that workloads communicate with. If a pod talks to a database (e.g. AWS RDS), K8s does not know it should schedule the pod in the same AZ as the database. A pod placed in the wrong AZ generates unnecessary cross-AZ network traffic, adding latency (and costs $).
I've built a tool called "Automatic Zone Placement", which automatically aligns pod placement with external dependencies.
Testing shows that placing the pod in the same AZ resulted in a ~175-375% performance increase, measured with small, frequent SQL requests. That's not really surprising: same-AZ latency is much lower than cross-AZ latency, and lower latency means higher throughput for chatty workloads.
The tool has two components:
1) A lightweight lookup service: A dependency-free Python service that takes a domain name (e.g., your RDS endpoint) and resolves its IP to a specific AZ.
2) A Kyverno mutating webhook: this policy intercepts pod creation requests. If a pod has a specific annotation, the webhook calls the lookup service and injects the required nodeAffinity to schedule the pod onto a node in the correct AZ.
The goal is to make this an automatic process; the alternative is to manually add a nodeAffinity spec to your workloads. But resources move between AZs, e.g. during maintenance events for RDS instances. I built this with AWS services in mind, but the concept is generic enough to be used in on-premise clusters to make scheduling decisions based on rack, row, or data center properties.
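For reference, a hand-written nodeAffinity of the kind the webhook injects could look like this (a minimal sketch: topology.kubernetes.io/zone is the standard Kubernetes label, while the pod name, image, and zone value are illustrative; as noted further down the thread, the tool uses the preferred, soft variant rather than required):

    apiVersion: v1
    kind: Pod
    metadata:
      name: app                      # example workload
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-east-1a         # the AZ the RDS writer currently resolves to
      containers:
      - name: app
        image: app:latest            # placeholder image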
I'd love some feedback on this, happy to answer questions :)
If yes, that's a simple update of the manifest to have 3 replicas with an affinity setting to spread them out over different AZs. Kyverno would use the internal Service object this service provides to have an HA endpoint to send queries to.
If we are not talking about this AZP service, I don't understand what we are talking about.
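For the spread-over-AZs idea above, a minimal sketch using standard topologySpreadConstraints (the deployment name and image are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: azp-lookup               # hypothetical name for the lookup service
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: azp-lookup
      template:
        metadata:
          labels:
            app: azp-lookup
        spec:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone
            whenUnsatisfiable: ScheduleAnyway   # prefer spreading, don't block scheduling
            labelSelector:
              matchLabels:
                app: azp-lookup
          containers:
          - name: lookup
            image: azp-lookup:latest            # placeholder image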
So, specifically for RDS, AWS provides two endpoints for the client application, a writer and a reader endpoint, similar to this: mydbcluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com (writer endpoint) and mydbcluster.cluster-ro-c7tj4example.us-east-1.rds.amazonaws.com (reader endpoint; notice the -ro part).
The writer endpoint always resolves to the active master, which is what the client application is configured to use, and that's the hostname my lookup service takes as input to determine the current location of the writer instance.
My solution only works for hostnames that return a single IP address, so it won't work for the reader endpoints. As I wrote in the repository, a requirement for this is that "The FQDN needs to return a single A record for the external resource".
I would create a similar policy where Kyverno, at intervals, checks the Deployment spec to see whether the endpoint has changed, and alters the affinity rules. It would then be a traditional update of the Deployment spec to reflect the desire to run in another AZ, if that makes sense?
How about, don't use Kubernetes? The lack of control over where the workload runs is a problem caused by Kubernetes. If you deploy an application as e.g. systemd services, you can pick the optimal host for the workload, and it will not suddenly jump around.
Being able to move workloads around is kinda the point. The need exists irrespective of what you use to deploy your app.
Any hostname for a service in AWS that can relocate to another AZ (for whatever reason), can use this.
Fine grained control over workload scheduling is one of the K8s core features?
Affinity, anti-affinity, priority classes, node selectors, scheduling gates - all of which affect scheduling for different use cases, and all under the operator's control.
You can specify just about anything, including exact nodes, for Kubernetes workloads.
This is just injecting some of that automatically.
I'm not knocking systemd, it's just not relevant.
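As an illustration of the "exact nodes" point, a pod can even bypass the scheduler entirely via nodeName (the node name here is made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-pod
    spec:
      nodeName: ip-10-0-1-23.ec2.internal   # lands directly on this node, no scheduling
      containers:
      - name: app
        image: app:latest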
This project literally sets the affinity. That's precisely the control you seem to negate.
> To gather zone information, use this command ...
Why couldn't most of this information be gathered by the lookup service itself? A point could be made about excessive IAM permissions, but the simple case of an RDS reader residing in a given AZ could be handled by listing the subnets and finding which one a given IP belongs to.
This service is published more as a concept to build on top of than a complete solution.
You wouldn't even need IAM rights to read RDS information; you need subnet information. As subnets are zonal, it does not matter whether the service is RDS or Redis/ElastiCache. The IP returned from the hostname lookup at the time your pod is scheduled determines which AZ that pod should (optimally) be deployed to.
This solution was created in a multi-account AWS environment. Doing DescribeSubnets API calls across multiple accounts is a hassle. It was "good enough" to have a static mapping of subnets, as they didn't change frequently.
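To illustrate, such a static mapping could live in something as simple as a ConfigMap (a hypothetical sketch; the name, key, and CIDRs are invented):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: subnet-zone-map
    data:
      subnets.yaml: |
        # subnet CIDR -> availability zone (example values)
        10.0.0.0/20: us-east-1a
        10.0.16.0/20: us-east-1b
        10.0.32.0/20: us-east-1c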
The Kyverno requirement makes it limited. There is no "automatic-zone-placement-disabled" option for when someone wants to temporarily disable zone placement without removing the label. How do we handle the RDS zone changing after the workload is scheduled? There is no automatic lookup of IPs and zones. What if we only have one node in a specific zone? Are we willing to handle EC2 failure, or should we trigger a scale-out?
You don't have to use Kyverno. You could use a standard mutating webhook, but you would have to generate your own certificate and mutate on every Pod CREATE operation. Not really a problem, but it depends.
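A minimal sketch of that registration, assuming a self-managed certificate and a hypothetical azp-webhook Service:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: azp-webhook
    webhooks:
    - name: azp.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Ignore            # don't block pod creation if the hook is down
      clientConfig:
        service:
          name: azp-webhook            # hypothetical service
          namespace: azp
          path: /mutate
        caBundle: <base64-encoded CA>  # the self-generated certificate
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]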
> There is no "automatic-zone-placement-disabled"
True. That's why I chose to use preferredDuringSchedulingIgnoredDuringExecution instead of requiredDuringSchedulingIgnoredDuringExecution. In my case, where this solution originated, Kubernetes was already a multi-AZ setup with at least one node in each AZ. It was nice if the pod could be scheduled into the same AZ, but it was not a hard requirement.
> No automatic look up of IPs and Zones.
Yup, it would generate a lot of extra "stuff" to mess with: IAM roles, how to look up IP/subnet information in a multi-account AWS setup with VPC peerings. In our case the static approach was "good enough"; the subnet/network topology didn't change frequently enough to justify another layer of complexity.
> What if we only have one node in specific zone?
That's why we defaulted to preferredDuringSchedulingIgnoredDuringExecution and not required.
In general the goal should be to deploy as much of the stack in one zone as possible, and have multiple zones for redundancy.
> In general the goal should be to deploy as much of the stack in one zone as possible
Agree. There can be a few downsides to consider if you have to fail over to another zone. Worst case, there isn't sufficient capacity available when you fail over, because everyone else is asking for capacity at the same time. If one uses e.g. Karpenter, you should be able to be very diverse in the instance selection process, so that you get at least some capacity, though maybe not the preferred type.
The lowest possible latency would of course come from running the client code on the same physical box as the SQL server, but that's hard to do.
This is precisely how Amazon's bread is buttered. An outage affecting an entire AZ is rare enough that I would feel pretty happy making all our clusters single-AZ, but it would be a fool's errand for me to convince management to go against Amazon's official recommendations.
It's a plugin that enables traffic redirection for any service that is using an IP in any given VPC. If you have, say, multiple RDS reader instances, it will attempt to use local-AZ instances first, but the other instances remain available if the local ones are non-functional. So you do not lose HA or failover features.
The plugin does not require any reconfiguration of your apps. It works similarly to Topology Aware Routing (https://kubernetes.io/docs/concepts/services-networking/topo...) in Kubernetes, but for services outside of Kubernetes. The plugin even works for non-Kubernetes setups.
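For comparison, in-cluster Topology Aware Routing is switched on per Service with an annotation (standard Kubernetes as of 1.27, where topology-mode replaced the older topology-aware-hints annotation; the service name is hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-backend
      annotations:
        service.kubernetes.io/topology-mode: Auto   # prefer same-zone endpoints
    spec:
      selector:
        app: my-backend
      ports:
      - port: 80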
This AZP solution is fine for services that have one IP or a primary instance, like the RDS writer instance. It does not work for anything that is "stateless" and multi-AZ, like RDS read-only instances or ALBs.
Most people should start with a single-zone setup and just accept that there's a risk associated with zone failure. If you have a single-zone setup, you have a node group in that one zone, you have the managed database in the same zone, and you're done. Zone-wide failure is extremely rare in practice, and you would be surprised at the number (and size) of companies that run single-zone production setups to save on cloud bills. Just write the zone label selector into the node affinity section by hand; you don't need a fancy admission webhook if you want to take chance out of the picture.
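That hand-written pin can be a hard requirement (a sketch; the pod name and zone value are examples):

    apiVersion: v1
    kind: Pod
    metadata:
      name: single-zone-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-east-1a       # same zone as the managed database
      containers:
      - name: app
        image: app:latest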
If you decide that you want to handle the additional complexity of supporting failover in case of zone failure, the easiest approach is to just set up another node group in the secondary zone. If the primary zone fails, manually scale up the node pool in the secondary zone. Kubernetes will automatically schedule all the pods on the scaled-up node pool (remember: primary zone failure, no healthy nodes in the primary zone), and you're done.
If you want to handle zone failover completely automatically, this tool represents additional cost, because it forces you to have nodes running in the secondary zone during normal usage. Hopefully you are not running a completely empty, redundant set of service VMs in normal operation, because that would be a colossal waste of money.
So you are presuming that, when RDS automatically fails over to zone b to account for a zone a failure, you will certainly be able to scale up a full-scale production environment in zone b as well, in spite of nearly every other AWS customer attempting more or less the same strategy: roughly half of zone a's traffic will spill over to zone b and roughly half to zone c, minus all the traffic that is zone-locked to a (e.g. single-zone databases without failover mechanisms). That is a big assumption to make. You run a serious risk of not getting sufficient capacity in what was basically an arbitrarily chosen zone (chosen without any context on whether it has sufficient capacity for the rest of your workloads) and being caught with zonal mismatches and no plan. You may well need to fail over to another region entirely to get sufficient capacity for your full workload.
If you are cost- and latency-sensitive enough to stick to a single zone, you're likely much better off coming up with a migration plan, writing an automated runbook/script to handle it, and testing it on gamedays.
I don't disagree, but there is one issue with this approach: RDS is a multi-AZ service by itself. That means that when a maintenance event occurs on your instance, AWS will start a new instance in a new zone and fail over to it.
You could of course manually fail RDS back over to your primary zone afterwards. Not sure if that is better than manually scaling up a node pool when a zone fails.
> So you are presuming that, when RDS automatically fails over to zone b to account for zone a failure, that you will certainly be able to scale up a full scale production environment in zone b as well, in spite of nearly every other AWS customer attempting more or less the same strategy;
That's up to the user to decide via the Kyverno policy. We used the preferredDuringSchedulingIgnoredDuringExecution affinity setting to instruct the scheduler to attempt to schedule the pods in the optimal zone.
I believe the only way to be 100% sure that you have compute capacity available in your AWS account is to use EC2 On-Demand Capacity Reservations (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capa...). If your current zone is at full capacity and, for some reason, the nodes your VMs are running on die, that capacity is lost, and you won't get it back either.
Not true for single-AZ deployments. There is downtime during the maintenance event, but this is also true in multi-AZ deployments when the instance in the second AZ is promoted; a multi-AZ maintenance window has slightly less downtime, but not much; downtime is downtime, but generally not enough to affect a 99.9% SLA anyway.
> EC2 On-Demand Capacity Reservations
Also quite expensive to maintain just for outage recovery events.
The point I'm trying to make is that formal risk analysis forces you to think about actual sources of risk, and SRE/FinOps principles force you to think about how much budget you are willing to spend to address those risks. And I don't understand how a tool like this fits into formal risk analysis, or where it presents an optimal solution for those risks.
Seems it does not fit your risk analysis?
They lay out the problem and solution pretty well in the link. If you still don't understand after reading it, then that's okay! It just means you're not having this problem and you're not in need of this tool, so go you! But at least you'll come away with the understanding that someone was having this problem and someone needed this tool to solve it, so win win win!
Since you mentioned it: what I've done before to improve CI builds is to use Karpenter + local SSD mounts with very large instance types and an idle timeout of ~1h. This allowed us to have very performant build machines at a low cost. The first build of the day took a while to get going, but from a price-to-benefit perspective it was great.
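A rough sketch of that kind of setup, assuming Karpenter's v1 NodePool API (pool name, instance types, and timeout are illustrative):

    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: ci-builders
    spec:
      template:
        spec:
          nodeClassRef:
            group: karpenter.k8s.aws
            kind: EC2NodeClass
            name: ci-builders        # a matching EC2NodeClass is assumed to exist
          requirements:
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["m6id.8xlarge", "m6id.16xlarge"]   # large types with local NVMe
      disruption:
        consolidationPolicy: WhenEmpty
        consolidateAfter: 1h         # scale to zero after ~1h idle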
Thanks; that sounds faster than most self-hosted CI services.
Or, if the resources the CI build is utilizing within the image (after the image is pulled and started) are AZ-bound, then yes, the build process would be improved, since the CI build would fetch AZ-local resources rather than crossing the AZ boundary.