Show HN: Fair CPU scheduling to run unlimited apps on one plan
miget.com
2. We track CPU usage in real time across all workloads and maintain a global usage map.
3. Idle CPU from any app/node becomes available for reuse by other workloads in the same resource plan.
4. CPU limits can be adjusted on the fly without restarts, enabling real-time response to changing load.
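The loop in steps 2-4 can be sketched as a redistribution pass over the plan's usage map. This is a minimal illustration, not Miget's actual code: the function names, the 1.2x headroom factor, and the 100m floor are all assumptions.

```python
# Hypothetical sketch of the idle-CPU redistribution step described above.
# Inputs are millicores; all names, thresholds, and ratios are illustrative.

def redistribute(plan_cpu_m, base_m, usage_m, headroom=1.2):
    """Return new per-workload CPU limits (millicores).

    Idle workloads shrink toward their current usage (plus headroom);
    the freed capacity is split among workloads pushing their limit.
    """
    limits = {}
    # Step 1: idle workloads keep usage * headroom, never above their base share.
    for w, base in base_m.items():
        want = int(usage_m.get(w, 0) * headroom)
        limits[w] = min(max(want, 100), base)  # keep a 100m floor
    spare = plan_cpu_m - sum(limits.values())
    # Step 2: hand the spare capacity to workloads running near their limit.
    hungry = [w for w in base_m if usage_m.get(w, 0) >= 0.9 * limits[w]]
    for w in hungry:
        limits[w] += spare // len(hungry)
    return limits
```

For example, with a 2000m plan split evenly between two apps where one is busy (950m used) and one is idle (50m used), the busy app's limit grows to 1900m while the idle one drops to its floor.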
If anyone wants to dig into topics like threshold algorithms, node-assignment heuristics, or Kubernetes API interactions, I'm happy to go deeper.
If you’re curious about how this stacks up against platforms like Heroku, Render or Railway, I can post a cost-comparison table.
1) What about memory - is it shared too? CPU is shared dynamically; memory is still hard-allocated as a guaranteed limit per workload. This was intentional: unlike CPU, memory oversubscription is significantly harder to mitigate safely at PaaS scale without introducing unpredictable latency and OOM risk. So: CPU = elastic, RAM = guaranteed / stable.
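The "CPU = elastic, RAM = guaranteed" split can be shown with a toy admission check (hypothetical names and a hypothetical 4x oversubscription ratio, not the platform's real numbers): memory guarantees must fit inside the plan exactly, while CPU limits may oversubscribe the pool because idle cycles get reclaimed at runtime.

```python
# Toy admission check illustrating "CPU = elastic, RAM = guaranteed".
# Memory is hard-allocated: the sum of guarantees can never exceed the plan.
# CPU limits may oversubscribe the pool (here up to 4x), since idle CPU is
# reclaimed and redistributed at runtime. Names and ratios are illustrative.

def admit(plan, workloads, cpu_oversub=4.0):
    """plan and each workload look like {"cpu_m": ..., "mem_mb": ...}."""
    mem_total = sum(w["mem_mb"] for w in workloads)
    cpu_total = sum(w["cpu_m"] for w in workloads)
    if mem_total > plan["mem_mb"]:
        return False  # RAM is guaranteed: never oversubscribed
    return cpu_total <= plan["cpu_m"] * cpu_oversub  # CPU may oversubscribe
```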
2) Is isolation compromised by this approach? No - apps don't run on the same container host. Every app runs on its own Kubernetes node (physical or VM). The Fair Scheduler coordinates CPU fairness across nodes under a single user resource plan. This eliminates noisy neighbors and keeps the blast radius contained at the app level.
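Because each app owns its node, applying a new fair-share limit reduces to updating one node's CPU quota. On Linux with cgroup v2, that is a write to the `cpu.max` file, which takes effect without restarting the workload. A hedged sketch (the 100ms period is the common default; the path and helper names are illustrative, not the platform's actual mechanism):

```python
# Sketch: translating a millicore limit into a cgroup v2 "cpu.max" value,
# which can be rewritten at runtime without restarting the workload.
# The 100ms period is the common default; paths and names are illustrative.

PERIOD_US = 100_000  # 100ms scheduling period

def cpu_max_value(millicores):
    """Convert millicores to a cgroup v2 'cpu.max' string: '<quota_us> <period_us>'."""
    quota_us = millicores * PERIOD_US // 1000  # 1000m == one full CPU per period
    return f"{quota_us} {PERIOD_US}"

def apply_limit(cgroup_path, millicores):
    # Rewriting the file adjusts the quota in place; no container restart needed.
    with open(f"{cgroup_path}/cpu.max", "w") as f:
        f.write(cpu_max_value(millicores))
```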