Preventing Kubernetes From Pulling the Pause Image From the Internet
kyle.cascade.family · Tech story · Posted about 2 months ago · Active about 2 months ago
Key topics: Kubernetes, Containerd, Pause Image
The article discusses how to prevent Kubernetes from pulling the pause image from the internet, sparking a heated discussion about the design and implementation of containerd and Kubernetes.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 46m after posting. Peak period: 38 comments in 0-12h. Average per period: 8.7. Based on 52 loaded comments.
Key moments
- Story posted: Nov 4, 2025 at 10:04 PM EST (about 2 months ago)
- First comment: Nov 4, 2025 at 10:51 PM EST (46m after posting)
- Peak activity: 38 comments in the 0-12h window
- Latest activity: Nov 11, 2025 at 8:07 AM EST
ID: 45818499 · Type: story · Last synced: 11/20/2025, 2:46:44 PM
Instead of just swapping out the registry, try baking it into your machine image.
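A minimal sketch of the bake-it-in approach, run as a provisioning step while building the machine image (e.g. from a Packer or cloud-init script). The pause tag below is an assumption; it should match the sandbox image your containerd config expects:

```shell
#!/bin/sh
# Pre-pull the pause image at machine-image build time so nodes never
# fetch it from the internet at runtime. The tag is an assumption;
# match it to the sandbox_image in your containerd config.
PAUSE_IMAGE="registry.k8s.io/pause:3.10"

# ctr is containerd's own CLI; the k8s.io namespace is the one the
# kubelet/CRI uses, so the image is visible to Kubernetes afterwards.
if command -v ctr >/dev/null 2>&1; then
  ctr --namespace k8s.io images pull "$PAUSE_IMAGE" \
    || echo "warn: pull failed; is containerd running in the build env?"
fi
echo "baked: $PAUSE_IMAGE"
```

Because the image is present in the content store before first boot, the kubelet never has a reason to reach out for it, regardless of registry configuration.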
> This should be part of the containerd distribution
containerd is not the only CRI runtime out there.
Right, that’s the point. A user of the CRI should not have to care about this implementation detail.
> containerd is not the only CRI runtime out there.
Any CRI that needs a pause executable should come with one.
This detail is implemented by K8s, not a container runtime. The user still has to care about its source because of supply-chain attacks, though.
> Any CRI that needs a pause executable should come with one.
Which would introduce a simultaneous update across different projects, which is a problem harder than a line of config.
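For reference, the "line of config" in question is containerd's sandbox image setting in the CRI plugin section. This is a hedged sketch: the path and section name are the common defaults, and `registry.example.internal` is a placeholder for a registry you control:

```toml
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
  # Point this at a mirror you control so nodes never reach
  # registry.k8s.io for the pause image. Restart containerd afterwards.
  sandbox_image = "registry.example.internal/pause:3.10"
```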
You can also set up a separate service to "push" images directly to your container runtime; someone even demoed one in a Show HN post some time ago, I think.
The nomad team made this configurable afterwards.
I am way more comfortable managing a system running k3s than something still using tmux sessions that get wiped every reboot.
Well... it's what I would have said until Bitnami pulled the rug and pretty much ruined the entire ecosystem. Now you don't have a way to pull something you know is trusted, with similar configuration and all, from a single repository, which makes deployments a pain in the ass.
On the plus side, I've just been creating my own every time I need one, with the help of Claude and using Bitnami as a reference. Honestly it doesn't take that much more time, and keeping them up to date is relatively easy as well with CI automations.
Thoughts on tmux-resurrect[1]? It can even resurrect programs running inside of it. It feels like it could reduce complexity from something like k3s back to tmux. What are your thoughts on it?
[1]:https://github.com/tmux-plugins/tmux-resurrect?tab=readme-ov...
I haven't used the tool itself, so I'm curious, as I was thinking of a similar workflow myself some time ago.
Now please answer the above questions. But even assuming you are right about tmux-resurrect, there are other ways of doing the same thing as well.
https://www.baeldung.com/linux/process-save-restore
This mentions either CRIU if you want a process to persist after a shutdown, or the shutdown utility's flags if you want to do it temporarily.
I have played around with CRIU and Docker; Docker can even use CRIU via docker checkpoint, and I have played with that as well (I used it to shut down mid-compression of a large file and resume compression exactly from where I left off).
What are your thoughts on using CRIU + Docker or CRIU + tmux? I think that by itself it might be an easier fit for your workflow than k3s.
Plus, I have seen some people mention VPSes where they run processes for 300 days or even more without a single shutdown, IIRC, and I feel like modern VPS providers are insanely good at uptime, sometimes even more so than cloud providers.
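The docker checkpoint flow mentioned above can be sketched like this. It assumes a dockerd started with experimental features enabled and CRIU installed; the container and checkpoint names are made up for illustration, and the whole thing is guarded so it is a no-op on a stock daemon:

```shell
#!/bin/sh
# Checkpoint a running container with CRIU, then resume it from the
# saved state. Only runs against an experimental Docker daemon.
if docker info 2>/dev/null | grep -qi 'experimental: *true'; then
  docker run -d --name crunch alpine sh -c 'sleep 1000'
  docker checkpoint create crunch snap1      # freeze process state to disk
  docker start --checkpoint snap1 crunch     # resume exactly where it stopped
  docker rm -f crunch                        # clean up the demo container
fi
STATUS="checkpoint/restore sketch done"
echo "$STATUS"
```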
Even using tmux-resurrect on my personal machine, I've had it fail to resurrect anything.
Again: lack of documentation and lossy tmux-resurrect state is not what I want to go through when working in unfamiliar environments.
Why are you getting downvoted?
Docker Compose also has issues, but at least it is well defined. Again, if you are managing 10+ machines, Docker becomes a challenge to maintain, especially when you have 4 to 5 clusters. Once you are familiar with Kubernetes, there's virtually no difference between Docker, tmux, or raw k8s, although I heavily recommend k3s due to its ability to maintain itself.
I knew Bitnami were trouble when I saw their paid tier prices. Relevant article: https://devoriales.com/post/402/from-free-to-fee-how-broadco...
Oh, and it's owned by Broadcom.
Very easy, reliable.
Without k3s I would have used Docker, but k3s really adds important features: easier-to-manage networking, more declarative configuration, bundled Traefik...
So, I'm convinced that quite a few people can happily and efficiently use k8s.
In the past I used another k8s distro (Harvester), which was much more complicated to use and fragile to maintain.
And because they are "immutable", I found it significantly more complicated to use, with no tangible benefits. I do not want to learn and deal with declarative machine configs, or learn how to create custom images with GPU drivers...
Quite a few things that I get done on Ubuntu / Debian in under 60 seconds take me half an hour to figure out with Talos.
It sounds like an immutable kubernetes distro doesn't solve any problems for you.
Other than that, most companies aren't web-scale enough to set up their full Kubernetes clusters with failover regions from scratch.
EDIT: I loaded the page from a cloud box, and wow, I'm getting MITMed! Seems to only be for this site, wonder if it's some kind of sensitivity to the .family TLD.
My team’s service implements a number of performance and functionality improvements on top of your typical registry to support the company’s needs.
I can’t say much more than that sadly.
A lot of security is posturing and posing to legally cover your ass by following an almost arbitrary set of regulations. In practice, most end up running the same code as the rest of us anyway. People need to get stuff done.
https://github.com/awslabs/amazon-eks-ami/pull/2000
There was a discussion open on containerd's GitHub about removing the dependency on the pause image, but it was closed as won't-fix: https://github.com/containerd/containerd/issues/10505
Also, if you are using kubeadm to create your cluster, beware that kubeadm may be pre-pulling a different pause image if it does not match your containerd configuration: https://github.com/kubernetes/kubeadm/issues/2020
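One way to catch that mismatch ahead of time is to compare what kubeadm plans to pull with what containerd is configured to use. Both commands exist in the respective CLIs, though their output formats vary by version; this is a sketch, guarded so it degrades gracefully when either tool is absent:

```shell
#!/bin/sh
# Compare kubeadm's planned pause image against containerd's configured
# sandbox_image; a mismatch means the kubelet may run a pause image that
# was never pre-pulled or mirrored.
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm config images list | grep pause
fi
if command -v containerd >/dev/null 2>&1; then
  containerd config dump | grep sandbox_image
fi
RESULT="comparison attempted"
echo "$RESULT"
```

If the two lines disagree, align kubeadm's configuration (or your containerd config) before joining nodes, so only one pause image is ever referenced.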