
How to escape the Linux networking stack

147 points
59 comments

Mood: thoughtful
Sentiment: positive
Category: tech
Key topics: Linux networking, network optimization, Cloudflare
Debate intensity: 20/100

Cloudflare's blog post explains how they optimized their Linux networking stack, sparking discussion on alternative approaches and the company's technology choices.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 3h after posting
Peak period: 47 comments (Day 1)
Avg / period: 27.5
Comment distribution: 55 data points (based on 55 loaded comments)

Key moments

  1. Story posted: 11/17/2025, 3:49:38 PM (2d ago)
  2. First comment: 11/17/2025, 6:52:49 PM (3h after posting)
  3. Peak activity: 47 comments in Day 1, the hottest window of the conversation
  4. Latest activity: 11/19/2025, 1:06:01 AM (18h ago)


Discussion (59 comments)
Showing 55 comments of 59
lazyeye
2d ago
1 reply
SLATFATF: "So long and thanks for all the fish", a Douglas Adams quote. https://en.wikipedia.org/wiki/So_Long,_and_Thanks_for_All_th...
cestith
1d ago
1 reply
A few things in the article are Douglas Adams quotes, and more specifically from the Hitchhiker’s Guide series.

Creating the universe being regarded as a mistake that made many people unhappy is from those books. So is the idea that whenever someone figures out the universe, it gets replaced with something stranger, along with the evidence that this has happened repeatedly. The Restaurant at the End of the Universe is referenced in the article too.

I’m a bit surprised nothing in the article was mentioned as being “mostly harmless”.

gishh
1d ago
One of these days I’ll figure out how to throw myself at the ground and miss.
notepad0x90
1d ago
2 replies
I'm slightly surprised cloudflare isn't using a userspace tcp/ip stack already (faster: fewer context switches and copies). It's the type of company I'd expect to actually need one.
Droobfest
1d ago
1 reply
notepad0x90
1d ago
3 replies
Nice, they know better. But it also makes me wonder, because they're saying "but what if you need to run another app". For things like load balancers, I'd expect you'd only run one app per server on the data plane: the user-space stack handles the data-plane NIC, while the kernel stack sits on a separate control-plane NIC so that boxes stay reachable even under link saturation, DDoS, etc.

It also makes me wonder: why is tcp/ip special? The kernel should expose a raw network device. I get physical or layer 2 configuration happening in the kernel, but if it is supposed to do IP, then why stop there, why not TLS as well? Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process? It sounds like a "that's just the way it's always been done" type of scenario.
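
For reference, Linux can already hand a userspace process something close to a raw layer 2 device: with CAP_NET_RAW, an AF_PACKET socket bound to an interface delivers whole Ethernet frames. A minimal sketch, assuming an interface named eth0:

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>        /* htons */
    #include <sys/socket.h>
    #include <linux/if_packet.h>  /* struct sockaddr_ll */
    #include <linux/if_ether.h>   /* ETH_P_ALL */
    #include <net/if.h>           /* if_nametoindex */

    int main(void)
    {
        /* SOCK_RAW + ETH_P_ALL: every frame, Ethernet header included. */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_ll sll = {0};
        sll.sll_family   = AF_PACKET;
        sll.sll_protocol = htons(ETH_P_ALL);
        sll.sll_ifindex  = if_nametoindex("eth0");  /* assumed name */
        if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
            perror("bind"); return 1;
        }

        unsigned char frame[2048];
        ssize_t n = recv(fd, frame, sizeof(frame), 0);  /* one whole L2 frame */
        if (n >= 14)
            printf("got %zd-byte frame, ethertype 0x%02x%02x\n",
                   n, frame[12], frame[13]);
        close(fd);
        return 0;
    }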

wmf
1d ago
1 reply
AFAIK Cloudflare runs their whole stack on every machine. I guess that gives them flexibility and maybe better load balancing. They also seem to use only one NIC.

> why is tcp/ip special? The kernel should expose a raw network device. ... Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process?

Check out the MIT Exokernel project and Solarflare OpenOnload, which used this approach. It never really caught on because the old-school way is good enough for almost everyone.

> why stop there, why not TLS as well?

kTLS is a thing now (mostly used by Netflix). Back in the day we also had kernel-mode Web servers to save every cycle.
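
For reference, the kTLS model keeps the handshake and all protocol management in userspace and only moves record encryption into the kernel. A minimal sketch of handing an established TLS 1.2 session's keys to a socket, with placeholder key material standing in for whatever the userspace TLS library negotiated:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>   /* IPPROTO_TCP */
    #include <netinet/tcp.h>  /* TCP_ULP on modern glibc */
    #include <linux/tls.h>    /* tls12_crypto_info_aes_gcm_128, TLS_TX */

    #ifndef TCP_ULP
    #define TCP_ULP 31   /* from linux/tcp.h */
    #endif
    #ifndef SOL_TLS
    #define SOL_TLS 282  /* from linux/socket.h */
    #endif

    /* 'fd' is a connected TCP socket on which the handshake already ran. */
    int enable_ktls_tx(int fd,
                       const unsigned char key[16], const unsigned char iv[8],
                       const unsigned char salt[4], const unsigned char seq[8])
    {
        /* Attach the "tls" upper-layer protocol to the socket. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        struct tls12_crypto_info_aes_gcm_128 ci;
        memset(&ci, 0, sizeof(ci));
        ci.info.version     = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key,     key,  TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv,      iv,   TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt,    salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, seq,  TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* From here on, plain send()/sendfile() on fd emits TLS records. */
        return setsockopt(fd, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }

(OpenSSL 3 can set this up itself via SSL_OP_ENABLE_KTLS when built with kTLS support.)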

bbarnett
1d ago
Was it Tux? I've only used it, a looong time ago, on load balancers.

https://en.wikipedia.org/wiki/TUX_web_server

hansvm
1d ago
2 replies
TCP/IP is, in theory (AFAIK all experiments related to this fizzled out a decade or two ago), a global resource once you factor in congestion control. TLS is less obviously something you'd want kernel involvement in, give or take the idea of outsourcing crypto to the kernel, or some small efficiency gains for some workloads from skipping userspace handoffs, with more gains possible with NIC support.
notepad0x90
1d ago
1 reply
Why can't it be global and in user space? DNS resolution, for example, is done in user space, and it is global.
1718627440
1d ago
1 reply
DNS isn't a shared resource that needs to be managed and distributed fairly among programs that don't trust or cooperate with each other.
notepad0x90
1d ago
1 reply
DNS resolution is a shared resource. The DNS client is typically a user-space OS service that resolves and caches DNS requests; what is resolved by one application is cached and reused by another. But at the app level there is no deconflicting happening like there is for transport-layer protocols. Then again, the same can be said about IP: IP addresses, like name servers, are configured system-wide and shared by all apps.
1718627440
1d ago
1 reply
It can be shared access to a cache, but this is an implementation detail for performance reasons. There is no problem with having different processes resolve DNS with different code. There is a problem if two processes want to control the same IP address, or manage the same TCP port.
notepad0x90
18h ago
Yeah, but there is still no reason why an "ip_stack" process can't ensure a conflicting IP isn't used, and a "gnu_tcp" or whatever process can't ensure TCP ports are assigned to only one calling process. An exclusive lock on the raw layer 2 device is what you're looking for, I think. I mean, right now applications can just open a raw socket and use a conflicting TCP port. I've done this to kill TCP connections matching some criteria, by sending the remote end an RST while pretending to be the real process (legit use case). Which approach is more performant, secure, and resilient? That's what I'm asking here.
Veserv
1d ago
You do want to offload crypto to dedicated hardware; otherwise your transport will get stuck at a paltry 40-50 Gb/s per core. However, you do not need more than block decryption; you can leave all of the crypto protocol management in userspace with no material performance impact.
rcxdude
1d ago
You can do that if you want, but I think part of why tcp/ip is a useful layer of abstraction is that it allows more robust boundaries between applications that may be running on the same machine. If you're just at layer 2, you are basically acting on behalf of the whole box.
nomel
1d ago
1 reply
> faster: fewer context switches and copies

Are either of those still required these days, with all the zero-copy interfaces now available?
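
For reference, the newest of those interfaces for plain TCP sends is MSG_ZEROCOPY (Linux 4.14+). A minimal sketch, assuming a connected TCP socket; note that the buffer must not be reused until the kernel posts a completion on the error queue, which is part of why zero-copy isn't free:

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #ifndef SO_ZEROCOPY
    #define SO_ZEROCOPY 60          /* from asm-generic/socket.h */
    #endif
    #ifndef MSG_ZEROCOPY
    #define MSG_ZEROCOPY 0x4000000  /* from linux/socket.h */
    #endif

    /* Send without copying: the kernel pins the pages of 'buf' instead
     * of copying them into skbs. 'buf' must stay untouched until the
     * kernel posts a completion on the socket's error queue. */
    ssize_t send_zerocopy(int fd, const char *buf, size_t len)
    {
        int one = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
            return -1;  /* kernel too old, or unsupported socket type */

        ssize_t n = send(fd, buf, len, MSG_ZEROCOPY);
        if (n < 0)
            return -1;

        /* Reap the completion. A real program would poll the error queue
         * asynchronously and only then reuse 'buf'; busy-waiting here
         * just keeps the sketch short. */
        char control[128];
        struct msghdr msg = {0};
        msg.msg_control    = control;
        msg.msg_controllen = sizeof(control);
        while (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0 && errno == EAGAIN)
            ;
        return n;
    }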

majke
1d ago
1 reply
> > faster: fewer context switches and copies

This is very much a newbie way of thinking. How do you know? Did you profile it?

It turns out there is surprisingly little dumb zero-copy potential at CF. Most of the traffic is TLS, so data needs to go through userspace anyway (kTLS exists, but I failed to actually use it, and what about QUIC?).

Most of the CPU is burned on dumb things, like application logic. It turns out data copying, encryption, and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization, but the majority of the cost was historically in much more obvious areas.

notepad0x90
1d ago
> This is very much a newbie way of thinking. How do you know? Did you profile it?

Does it matter? Fewer syscalls are better. Whatever is being done in kernel mode can be replicated (or improved upon) in a user-space stack. It is easier to add and manage APIs in user space than kernel APIs. You can debug, patch, etc. a user-space stack much more easily. You can have multiple processes for redundancy and ensure crashes don't take out the whole system. I've had situations where rebooting the system was the only solution to routing or ARP resolution issues (even after clearing caches). Same with netfilter/iptables "being stuck" or exhibiting performance degradation over time: if you're lucky a module reload can fix it, whereas if it were a process I could have just killed and restarted it with minimal disruption.

> Most of the CPU is burned on dumb things, like application logic. It turns out data copying, encryption, and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization, but the majority of the cost was historically in much more obvious areas.

I won't disagree with that, but one optimization does not preclude the other. If ip/tcp were user-space, they could be optimized better by engineers to fit their use cases. The type of load matters too: you can optimize your app well, but one corner case could tie up your app logic in CPU cycles, and if that happens to include a syscall, and there is no better way to handle it, those context-switch cycles might start mattering.

In general, I don't think it makes much difference, but I expected companies like CF that are performance and outage sensitive to squeeze every last drop of performance and reliability out of their systems.

alecco
1d ago
3 replies
Being a networking company, I always wondered why they picked Linux over FreeBSD.
HumanOstrich
1d ago
2 replies
Why does being a networking company suggest FreeBSD is the "right" pick?
password4321
1d ago
2 replies
Serving Netflix Video at 400Gb/s on FreeBSD [pdf]

https://news.ycombinator.com/item?id=28584738

(I don't consider this "the answer" as much as one example.)

victorbjorklund
1d ago
To be honest, when I heard them speak, I think they were kind of saying yes, FreeBSD is awesome, but the main reason is that the early people there liked FreeBSD, so they just stuck with it. It's a good choice, but they don't claim these are things that would be impossible to do with optimizations in Linux.
HumanOstrich
1d ago
I think they used FreeBSD because they were already using FreeBSD. The article doesn't even mention Linux.
alecco
1d ago
1 reply
Because FreeBSD is known for having the best network stack. The code is elegant and clean. And, at least until a few years ago, it was the preferred choice to build routers or firewalls.

AFAIK, they were the first to ship a production-ready BPF implementation, almost 3 decades ago.

https://en.wikipedia.org/wiki/Berkeley_Packet_Filter

But all this is opinion and anecdotal. Just pick a random network feature and compare the Linux and FreeBSD code yourself.

HumanOstrich
1d ago
1 reply
> But all this is opinion and anecdotal.

Exactly.

alecco
1d ago
1 reply
> But all this is opinion and anecdotal. Just pick a random network feature and compare the Linux and FreeBSD code yourself.

Why did you take my self-criticism out of context and omit the second part of the line, which shows how you can check this for yourself?

HumanOstrich
22h ago
"Go research it yourself" does not back up your claim that FreeBSD is the "best" for networking.
esseph
1d ago
BSD driver support lags behind pretty badly.
majke
1d ago
This happened before my watch, but I was always rooting for Linux. Linux is winning on many fronts. Consider the featureset of iptables (CF uses loads of its extensions, from "comment" to "tproxy"), BPF for metrics is a killer (ebpf_exporter), BPF for DDoS mitigation (XDP), TCP Fast Open, the UDP segmentation stuff, kTLS (arguably half-working). Then there are the non-networking things: Docker, the virtio ecosystem (vhost), seccomp, namespaces (a net namespace for testing network apps is awesome). And the list goes on. Not to mention hiring is easier for Linux admins.
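
For a taste of the XDP item on that list, this is roughly the smallest possible XDP program, a sketch assuming the libbpf headers; DDoS filters are elaborations of this shape:

    /* xdp_drop.c: drop every frame at the earliest possible point. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>  /* SEC(), from libbpf */

    SEC("xdp")
    int xdp_drop_all(struct xdp_md *ctx)
    {
        /* A real DDoS filter parses headers here and returns XDP_PASS
         * for legitimate traffic. XDP_DROP discards the frame before
         * the kernel stack ever allocates an skb for it. */
        return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";

A sketch like this would be compiled with clang -O2 -target bpf -c xdp_drop.c and attached with ip link set dev eth0 xdp obj xdp_drop.o sec xdp.
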
marginalia_nu
1d ago
11 replies
This is extremely tangential, but I was recently working on setting up some manual network namespaces, basically reproducing by hand what Docker does, to fix some of its faulty assumptions (containers having multiple IPs but a single name causes all sorts of jank). I had to freshen up on a lot of Linux virtual networking concepts (namespaces, veths, bridge networks, macvlans, and various other interface types), and made a ton of fairly informal notes to get familiar enough with it all to set it up.

Would anyone be interested if I polished them up and maybe added a refresher on the relevant layer 2 networking needed to reason about it? It's a fair bit of work and it's a niche topic, so I'm polling a bit to see if the juice is worth the squeeze.
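
For anyone who wants to poke at the primitive underneath those notes: unshare(2) with CLONE_NEWNET drops the calling process into a brand-new network namespace, the same trick ip netns and Docker build on. A minimal sketch, assuming root:

    #define _GNU_SOURCE
    #include <sched.h>   /* unshare, CLONE_NEWNET */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Needs CAP_SYS_ADMIN, like `ip netns add` or `unshare -n`. */
        if (unshare(CLONE_NEWNET) < 0) {
            perror("unshare(CLONE_NEWNET)");
            return 1;
        }
        /* This process now sees a fresh namespace: no eth0, no routes,
         * only a down loopback device, as the output will show. */
        execlp("ip", "ip", "link", "show", (char *)NULL);
        perror("execlp");
        return 1;
    }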

HumanOstrich
1d ago
2 replies
I was actually going down rabbit holes today trying to figure out how to do a sane Docker setup where the containers can't connect to each other. Your notes would be valuable at almost any level of polish. :)
esseph
1d ago
1 reply
If you create each container in its own network namespace, they won't be able to.
HumanOstrich
1d ago
2 replies
It's a little more complex than that for any non-trivial layout where some containers do need to talk to other containers, but most don't.
brirec
1d ago
2 replies
You could also create a network for each pair of containers that need to communicate with one another.
HumanOstrich
1d ago
1 reply
That would create an excessive number of bridges in my case. Also, this is another trivial suggestion that anyone can find with a quick search or by asking an LLM. Not helpful.

I'm not sure why people are replying to my comment with solutioning and trivial suggestions. All I did was encourage the thread OP to publish their notes. FWIW I've already been through a lot of options for solving my issue, and I've settled on one for now.

kortilla
1d ago
1 reply
> I'm not sure why people are replying to my comment with solutioning and trivial suggestions

Because your comment didn’t say you solved it and you asked for notes without any polish as if that would help.

HumanOstrich
1d ago
I said "for now". Meaning I'd be interested in alternatives. Jeez.
marginalia_nu
1d ago
If you want point-to-point communication between two network namespaces, you should use veths[1]. I think virtual patch cables are a good mental model for veths.

If you want multiple participants, you use bridges, which are roughly analogous to switches.

[1] https://man7.org/linux/man-pages/man4/veth.4.html

esseph
1d ago
1 reply
That's a change from what was asked, which was isolation between each container.

Yes, if they need to talk, share namespaces.

HumanOstrich
22h ago
I didn't ask a question. :-)
aryonoco
1d ago
I put each docker container in an LXC container, which effectively uses namespaces, cgroups, etc. to isolate them.
dfedbeef
1d ago
YES
sevg
1d ago
Yes please!
globalnode
1d ago
I await your write-up!
ambicapter
1d ago
I would absolutely be interested.
teleforce
1d ago
Looking forward to that.

It's about time someone wrote a new Linux networking book covering layers 2 and 3.

The existing books are already more than two decades old, namely Linux Routing and Linux Routers (2nd edition).

MrResearcher
1d ago
Don't forget to post the link here!
manuelangel99
1d ago
I would definitely be interested!
anbotero
1d ago
Most definitely. Not just for myself, but for some of my peers here too.
msbhvn
1d ago
Please do it. I'm very biased, but I think there would be lots of interest in seeing all of that explained in one place in a coherent fashion. (You will likely sharpen your own understanding in the process, and have the perfect resource for when you next need to revisit these topics.)
pmontra
1d ago
Yes of course. It would be great.
snvzz
1d ago
Tangentially related, seL4's LionsOS can now act as a router/firewall[0].

0. https://news.ycombinator.com/item?id=45959952

pjmlp
1d ago
I would expect them to do the same as the other big scalers and handle most of the networking in dedicated card firmware:

https://learn.microsoft.com/en-us/azure/azure-boost/overview...

https://learn.microsoft.com/en-us/azure/virtual-network/acce...

seabrookmx
2d ago
I had to read their article on "soft-unicast" before I could really grok this one: https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...

4 more comments available on Hacker News

ID: 45954638 | Type: story | Last synced: 11/19/2025, 6:00:00 PM
