Meta Is Using the Linux Scheduler Designed for Valve's Steam Deck on Its Servers
Key topics
The Linux scheduler designed for Valve's Steam Deck is now being used on Meta's servers, sparking a lively discussion about the unexpected crossover between gaming and enterprise tech. Commenters are abuzz about the implications, with some noting that Linux's default scheduler is "braindead by modern standards" and that Meta, Amazon, and Google have dedicated engineers fine-tuning performance for tiny efficiency gains. As one commenter pointed out, latency-aware scheduling is crucial in multiple domains, from gaming to voice and video processing, highlighting the surprising common ground between Steam Deck and Facebook's servers. The conversation also touches on the reciprocal nature of open-source innovation, with Meta's Kyber IO scheduler being adopted by various Linux distros to fix microstutter issues.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 13m after posting
- Peak period: 127 comments in the first 0-6h
- Average per period: 22.9 comments
Based on 160 loaded comments
Key moments
- Story posted: Dec 23, 2025 at 12:08 PM EST (12 days ago)
- First comment: Dec 23, 2025 at 12:21 PM EST, 13m after posting
- Peak activity: 127 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Dec 26, 2025 at 5:30 PM EST (9 days ago)
Want the full context? Read the primary article or dive into the live Hacker News thread.
> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
> Starting from version 6.6 of the Linux kernel, [CFS] was replaced by the EEVDF scheduler.[citation needed]
Traditionally, Linux schedulers have been rather esoteric to tune, and by default they've been optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.
[0] https://tinyurl.com/mw6uw9vh
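For readers curious what per-task latency tuning looks like at the syscall level, here is a minimal sketch (not from the thread, and not how scx_lavd works; sched_ext schedulers are BPF programs loaded into the kernel) that moves the calling thread onto a real-time policy using the classic POSIX knob:

```c
/* Minimal sketch: request a latency-oriented policy for the current thread.
 * This is the classic POSIX interface, not sched_ext/scx_lavd, which is
 * implemented as a BPF scheduler loaded into the kernel. */
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct sched_param p = { .sched_priority = 10 };

    /* SCHED_FIFO preempts normal (EEVDF/CFS) tasks; needs CAP_SYS_NICE
     * or a suitable RLIMIT_RTPRIO. */
    if (sched_setscheduler(0, SCHED_FIFO, &p) == -1) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }

    printf("now running under SCHED_FIFO, priority %d\n", p.sched_priority);
    return 0;
}
```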
And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.
There's nothing special or proprietary about the RHEL code. Access to the code isn't an issue, it's reconstructing an exact replica of RHEL from all of the different package versions.
https://gitlab.com/redhat/centos-stream/rpms
For any individual RHEL package, you can find the source code with barely any effort. If you have a list of the exact versions of every package used in RHEL, you could compose it without that much effort. It's just not served up to you on a silver platter unless you're a paying customer.
It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?
They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.
And then you consider it in context: a company with huge impact, brand recognition, and revenue (about $50M/employee in 2025). They’ve remained extremely small compared to how big they could grow.
There are not many tech companies with 50k+ employees, as a point of fact.
I’m not arguing just to argue - 300 people isn’t small by any measure. It’s absolutely not “extremely small” as was claimed. It’s not relatively small, it’s not “small for what they are doing”, it’s just not small at all.
300 people is a large company. The fact that a single digit number of ultrahuge companies exist doesn’t change that.
What does Germany have to do with Valve? It's not a German company. But if you want to use Germany for reference, here are the five German companies with revenues closest to Valve's:
- Infineon Technologies, $16.4B revenue, 57,000 employees
- Evonik Industries, $16B, 31,930 employees
- Covestro, $15.2B, 17,520 employees
- Commerzbank, $14.6B, 39,000 employees
- Zalando, $12.9B, 15,793 employees
Big, small, etc. are relative terms. There is no way to decide whether or not 300 is small without implicitly saying what it's small relative to. In context, it was obvious that the point being made was "valve is too small to have direct employees working on things other than the core business"
Yes, 300 is quite small.
Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.
If you have 30mins for a video I recommend People Make Games' documentary on it https://www.youtube.com/watch?v=eMmNy11Mn7g
Valve is chump change in this department. They allow the practice of purchasing loot boxes and items but don't analyze and manipulate behaviors. Valve is the least bad actor in this department.
I watched half the video and found it pretty biased compared to what's happening in the industry right now.
[Citation needed]
> I certainly don't think that Valve designed their systems to encourage gambling
Cases are literally slot machines.
> [section about third-party websites] I don't think Valve deliberately encouraged it.
OK, but they continue to allow it (through poor enforcement of their own ToS), and it continues to generate them obscene amounts of money?
> you guys are choosing to focus on the one company thats fighting against it.
Yes, we should let the billion dollar company get away with shovelling gambling to children.
Also, frankly speaking, other AAAs are less predatory with gambling. Fortnite, CoD, and VALORANT to pick some examples, are all just simple purchases from a store. Yes, they have issues with FOMO, and bullying for not buying skins [0], but oh my god, it isn't allowing children to literally do sports gambling (and I should know, I've actively gambled on esports while underage via CS, and I know people that have lost $600+ while underage on CS gambling).
[0]: https://www.polygon.com/2019/5/7/18534431/fortnite-rare-defa...
If you don't see it happening, the game is being played as intended.
When I worked in the HFC/Fiber plant design industry, the simple act of "Don't use the same boilerplate MSA for every type of vendor" and being more specific about project requirements in the RFP makes it very clear what is expected, and suddenly we'd get better bids, and would carefully review the bids to make sure that the response indicated they understood the work.
We also had our own 'internal' cost estimates (i.e. if we had the in house capacity, how long would it take to do and how much would it cost) which made it clear when a vendor was in over their head under-bidding just to get the work, which was never a good thing.
And, I've seen that done in the software industry as well, and it worked.
That said, the main 'extra' challenge in IT is that many of the good players aren't going to be the ones beating down your door the way the big 4 or a WITCH consultancy will.
But at the end of the day, the problem is that the business people who don't really know (or necessarily care) enough about the specifics are unfortunately the ones picking things like vendors.
And worse, sometimes they're the ones writing the spec and not letting engineers review it. [0]
[0] - This once led to an off-shore body shop getting a requirement along the lines of 'the stored procedures and SQL called should be configurable', and sure enough the web.config had ALL the SQL and stored procedures as XML elements, loaded from config just before the DB call. The thing was a bitch to debug, and their testing alone wreaked havoc on our dev DB.
But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.
The context of "I want this particular thing fixed in a popular open source codebase that there are existing people with expertise in", contracting makes a ton of sense, because you aren't the sole buyer of that expertise.
I don't remember all the details, but it doesn't seem like a great place to work, at least based on the horror stories I've read.
Valve does a lot of awesome things, but they also do a lot of shitty things, and I think their productivity is abysmal based on what you'd expect from a company with their market share. They have very successful products, but it's obvious that basically all of their income comes from rent-seeking from developers who want to (well, need to) publish on Steam.
That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.
For contextual, super specific, super specialized work (e.g. SCX-LAVD, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM based Steam Frame) they like to subcontract work to orgs like Igalia but that's about it.
There have been demands on HN lately to do more of that. This is what it looks like when it happens: a company paying for OSS development.
They needed Windows games to run on Linux, so we got massive Proton/Wine advancements. They needed better display output for the Deck, and we got HDR and VRR support in Wayland. They also needed smoother frame pacing, and we got a scheduler that Zuck is now using to run data centers.
It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickle-down.
"Slide left or right" CPU and GPU underclocking.
(And same for Windows to the degree it is more inconsistent on Windows than Mac)
The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.
Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.
[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...
The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.
The Linux model is the second one. The problem comes when the hardware vendors don't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.
… And that’s all fine, because this is a super niche need: effectively nobody needs Linux laptops and even fewer depend on sleep to work. If ‘Linux’ convinced itself it really really needed to solve this problem for whatever reason, it would do something that doesn’t look like its current development model, something outside that.
Regardless, the net result in the world today is that Linux sleep doesn’t work in general.
That's a vastly different statement.
Liquid Glass ruined multitasking UX on my iPad. :(
Also my macbook (m4 pro) has random freezes where finder becomes entirely unresponsive. Not sure yet why this happens but thankfully it’s pretty rare.
until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)
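As an aside for anyone wanting to check which suspend variant their own machine uses: the kernel exposes the active mode in /sys/power/mem_sleep, with the selected one in brackets (e.g. "[s2idle] deep"). A trivial sketch that just prints it:

```c
/* Sketch: print which suspend variant the kernel will use.
 * /sys/power/mem_sleep lists the supported modes with the active one
 * in brackets, e.g. "[s2idle] deep". */
#include <stdio.h>

int main(void)
{
    char buf[128];
    FILE *f = fopen("/sys/power/mem_sleep", "r");

    if (!f) {
        perror("fopen /sys/power/mem_sleep");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("supported sleep states: %s", buf);
    fclose(f);
    return 0;
}
```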
I think the reality is that Linux is ahead on a lot of kernel stuff.
IO_Uring is still a pale imitation :(
io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.
In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.
https://learn.microsoft.com/en-us/windows/win32/api/ioringap...
Although Windows registered network I/O (RIO) came before io_uring and for all I know might have been an inspiration:
https://learn.microsoft.com/en-us/previous-versions/windows/...
You can see shims for fork() to stop tanking performance so hard. io_uring doesn't map at all onto IOCP.
IOCP is much nicer from a dev point of view because your program can be signalled when a buffer has data on it but also with the information of how much data, everything else seems to fail at doing this properly.
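For what it's worth, io_uring completions do carry that information as well: each CQE's res field holds the number of bytes transferred (or a negative errno). A minimal liburing sketch, assuming liburing is installed (build with -luring):

```c
/* Minimal liburing sketch: submit one read and inspect the completion.
 * cqe->res carries the number of bytes actually read (or a negative errno),
 * analogous to the byte count IOCP hands back. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    char buf[4096];
    int fd = open("/etc/hostname", O_RDONLY);

    if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
        return 1;

    /* Queue a single read of up to sizeof(buf) bytes at offset 0. */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);

    /* Wait for the completion and report how much data arrived. */
    io_uring_wait_cqe(&ring, &cqe);
    printf("read completed: %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```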
On the surface, they are as simple as the Linux UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super specific permissions.
Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.
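For contrast, the Linux baseline being referred to really is just the user/group/other rwx bits returned by stat(); POSIX ACLs (getfacl/setfacl) add per-user and per-group allow entries plus a mask on top, but, as noted, no explicit deny entries. A small sketch of that baseline:

```c
/* Sketch: print the classic user/group/other permission bits for a path.
 * POSIX ACLs extend this with extra allow entries and a mask, but unlike
 * Windows ACLs there is no deny entry. */
#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = argc > 1 ? argv[1] : "/etc/passwd";

    if (stat(path, &st) == -1) {
        perror("stat");
        return 1;
    }

    printf("%s: owner %c%c%c  group %c%c%c  other %c%c%c\n", path,
           (st.st_mode & S_IRUSR) ? 'r' : '-',
           (st.st_mode & S_IWUSR) ? 'w' : '-',
           (st.st_mode & S_IXUSR) ? 'x' : '-',
           (st.st_mode & S_IRGRP) ? 'r' : '-',
           (st.st_mode & S_IWGRP) ? 'w' : '-',
           (st.st_mode & S_IXGRP) ? 'x' : '-',
           (st.st_mode & S_IROTH) ? 'r' : '-',
           (st.st_mode & S_IWOTH) ? 'w' : '-',
           (st.st_mode & S_IXOTH) ? 'x' : '-');
    return 0;
}
```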
Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
- There does not seem to be a way to determine which machines in the fleet have successfully applied. If you need a policy to be active before doing deployment of something (via a different method), or things break, what do you do?
- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.
What?! I was doing kickstart on Red Hat (wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.
BTW, we managed to get the earliest history of the project written down here by one of the earliest contributors, for anyone who might be interested:
https://anaconda-installer.readthedocs.io/en/latest/intro.ht...
As for how the automated installation on RHEL, Fedora and related distros works - it is indeed via kickstart:
https://pykickstart.readthedocs.io/en/latest/
Note how some commands were introduced way back in the single digit Fedora/Fedora Core age - that was from about 2003 to 2008. Latest Fedora is Fedora 43. :)
Preseed is not new at all:
https://wiki.debian.org/DebianInstaller/Preseed
RH has also had kickstart since basically forever now.
I've been using both preseeds and kickstart professionally for over a decade. Maybe you're thinking of the graphical installer?
You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You can not try to pass this off to the vendor, it is on you to fix it. Go.
And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access and then try to write to it, you end up with permission errors during the write (and not the open), and you end up debugging for hours on end only to discover that some shitty security product is doing stupid stuff...
Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.
This makes writing robust code under those systems a lot easier, which in turn makes debugging nicer when things go wrong. Now, I'm not going to say debugging those systems is great - SELinux errors are still an inscrutable mess and writing SELinux policy is fairly painful.
But there is real value in limiting where errors can crop up, and how they can happen.
Of course, there is stuff like FUSE that can throw a wrench into this: instead of an LSM, a Linux security product could write their own FS overlay to do these kinds of shenanigans. But those seem to be extremely rare on Linux, whereas they're very commonplace on Windows - mostly because MS doesn't provide the necessary tools to properly write security modules, so everyone's just winging it.
227 more comments available on Hacker News