Why Is Windows Still Tinkering with Critical Sections? – The Old New Thing
Posted 3 months ago · Active 3 months ago
devblogs.microsoft.com · Tech story · High profile
Sentiment: heated / mixed
Debate intensity: 80/100
Key topics
Windows
Performance Optimization
Legacy Software
The Old New Thing blog post discusses Windows' tinkering with critical sections, sparking debate about Microsoft's engineering decisions and the challenges of maintaining legacy software.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 55m after posting · Peak period: 89 comments in the 60-72h window · Avg per period: 20.7
Comment distribution: 124 data points (based on 124 loaded comments)
Key moments
1. Story posted: Sep 24, 2025 at 1:32 PM EDT (3 months ago)
2. First comment: Sep 24, 2025 at 2:28 PM EDT (55m after posting)
3. Peak activity: 89 comments in the 60-72h window, the hottest stretch of the conversation
4. Latest activity: Oct 1, 2025 at 6:15 AM EDT (3 months ago)
ID: 45363471 · Type: story · Last synced: 11/20/2025, 8:52:00 PM
Want the full context? Read the primary article or dive into the live Hacker News thread.
We need more casual light-heartedness in this line of work considering how much casual bullshit there is.
https://news.ycombinator.com/item?id=42046226
Not to mention (from way back when) VAX -> VAXen.
Likewise I'm comfortable with "Can't nobody prove nothing", which I think is a succinct way to express an opinion that would be rather awkward when expressed in the prestige dialects of English instead.
Here's another example of confusing terminology. In the C standard library, `fflush` just advances you to the next layer: the stdio buffer goes out through the write-file system call, and your data then sits in the OS disk cache to be written later on. Meanwhile, Win32's `FlushFileBuffers` actually forces everything out to the disk, acting much more like `fsync`. Yet again, very different things despite using the same word "flush" in the name.
fflush and FlushFileBuffers just happen to share "flush" in the name. The fact that someone decided fflush doesn't actually flush the buffers to disk, and added fsync to do that instead, does make for a very frustrating experience, but I find most of the POSIX API to be like that. See also: sync(void) vs fsync(int fd) vs syncfs(int fd).
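To make the distinction concrete, here is a minimal C sketch under POSIX assumptions (the helper name and error handling are invented for illustration): fflush hands the stdio buffer to the kernel, and fsync is the extra step that asks the kernel to push it toward stable storage; on Win32 that second step would be FlushFileBuffers on the file HANDLE.

```c
/* Illustrative only: fflush() moves data from the stdio buffer to the
   kernel; fsync() asks the kernel to push it to stable storage.
   (On Win32, the second step would be FlushFileBuffers() on the HANDLE.) */
#include <stdio.h>
#include <unistd.h>   /* fsync, fileno */

int save_line(FILE *f, const char *line)   /* hypothetical helper */
{
    if (fputs(line, f) == EOF)   /* data lands in the stdio buffer    */
        return -1;
    if (fflush(f) != 0)          /* stdio buffer -> kernel page cache */
        return -1;
    if (fsync(fileno(f)) != 0)   /* kernel page cache -> disk         */
        return -1;
    return 0;
}
```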
- It is purely an in-process lock, and was originally the main way of arbitrating access to shared memory within a process.
- It is counted and can be recursively locked.
- It defaults to simple blocking, but can also be configured to do some amount of spin-waiting before blocking.
- CritSecs are registered in a global debug list that can be accessed by debuggers. WinDbg's !locks command can display all locked critical sections in a process and which threads have locked them.
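As a rough sketch of the pattern described in that list (not from the article; the variable names are invented), classic CRITICAL_SECTION usage with the optional spin count looks like this:

```c
#include <windows.h>

static CRITICAL_SECTION g_lock;      /* process-local, recursively lockable */
static int g_shared_counter;         /* hypothetical data guarded by it     */

void locks_init(void)
{
    /* Spin roughly 4000 times before falling back to a kernel wait. */
    InitializeCriticalSectionAndSpinCount(&g_lock, 4000);
}

void counter_bump(void)
{
    EnterCriticalSection(&g_lock);   /* same thread may re-enter safely */
    g_shared_counter++;
    LeaveCriticalSection(&g_lock);
}

void locks_shutdown(void)
{
    DeleteCriticalSection(&g_lock);
}
```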
Originally in Win32, there were only two types of lock, the critical section for in-process locking, and the mutex for cross-process or named locks. Vista added the slim reader/writer lock, which is a much lighter weight, pointer-sized lock that uses the more modern wait-on-address approach to locking.
Locks come in several types (exclusive, reader-writer, and others); a developer picks one depending on how the protected data is used.
Locks are great at preventing data corruption and data loss, but come with issues. They can hurt performance, can cause "liveness" and other issues, and are usually "advisory" ("mandatory" locks which are enforced by the OS are rarely available) so developers must remember to protect data by using locks around every critical section.
Modern hardware includes support for many lock-free mechanisms that can greatly reduce the need for locks.
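As a small, hedged illustration of that point (the counter is invented, and this is only one of many lock-free patterns), Win32's interlocked primitives let you update shared state without taking a lock at all:

```c
#include <windows.h>

static volatile LONG g_hits;   /* hypothetical shared counter */

void record_hit(void)
{
    InterlockedIncrement(&g_hits);   /* atomic read-modify-write, no lock */
}

LONG claim_slot(volatile LONG *slot, LONG owner_id)
{
    /* Compare-and-swap: write owner_id only if *slot is still 0;
       returns the value that was actually there before the attempt. */
    return InterlockedCompareExchange(slot, owner_id, 0);
}
```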
Today you can use SRWLock or WaitOnAddress on Windows (or std::shared_mutex for a portable implementation, but not std::mutex, because reasons).
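For comparison with the critical-section sketch above, here is what the slim reader/writer lock looks like in use; again a hedged sketch with invented data names, not code from the article:

```c
#include <windows.h>

static SRWLOCK g_srw = SRWLOCK_INIT;   /* pointer-sized, no destroy needed */
static int g_config_value;             /* hypothetical protected data      */

int config_read(void)
{
    AcquireSRWLockShared(&g_srw);      /* many readers can hold it at once */
    int v = g_config_value;
    ReleaseSRWLockShared(&g_srw);
    return v;
}

void config_write(int v)
{
    AcquireSRWLockExclusive(&g_srw);   /* writers take it exclusively */
    g_config_value = v;
    ReleaseSRWLockExclusive(&g_srw);
}
```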
Monkeying with critical sections is hard. Monkeying with them due to performance and memory optimization while maintaining correctness is incredibly hard. If they'd done a good job of it, no one would have noticed, and Raymond would have had a cool story to tell. Instead we got this.
Sigh, Microsoft, what's happening over there?
Using extra stack space has got to be on a list of things likely to cause trouble.
When you have apps doing things like relying on values in uninitialized memory, literally any change anywhere can potentially break them.
A company like Microsoft that cares about compatibility will test third-party apps to try to catch this sort of thing, but there's only so much testing you can do. It's not feasible to test games from two decades prior so thoroughly that you'd notice that one particular vehicle never spawns.
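A contrived C sketch of the failure mode being described (nothing here is from the game or from Windows; it only shows why extra stack usage elsewhere can expose such bugs):

```c
/* buggy_reader() reads an uninitialized local, which is undefined
   behavior. It may "work" only because an earlier call happened to
   leave a convenient value at that stack slot; if the OS or a library
   starts using more stack, the leftover value changes and the bug shows. */
#include <stdio.h>

static void leaves_value_behind(void)
{
    volatile int scratch = 42;   /* parks a value somewhere on the stack */
    (void)scratch;
}

static int buggy_reader(void)
{
    int x;        /* never initialized */
    return x;     /* undefined behavior: might be 42 today, junk tomorrow */
}

int main(void)
{
    leaves_value_behind();
    printf("%d\n", buggy_reader());
    return 0;
}
```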
But it's still kind of bogus that compatibility mode doesn't make things compatible.
From a quick search, it looks like compatibility mode entails interposing a library between the app and the OS libraries, and the library emulates the behavior of an older OS. It’s not automated, each compatibility fix is crafted to work around a specific issue. In theory a fix could be made for this issue, but they’d have to find it and debug it first.
They didn't make any changes to correctness. The game itself is broken, and it's normal for undefined behavior from out-of-bounds accesses to change between OS versions, even just for security reasons. In fact, other versions of the game did fix this bug in a patch; it just didn't get picked up because people go out of their way to downgrade the game in order to keep some cut assets and features.
Most other OSes (Android, MacOS, iOS, game consoles) rely on versioning, which makes it easier to provide compatibility layers or at least know when a piece of software just isn't supported anymore.
Personally I think Windows should have specialized VMs for old software, so they can be compatible forever even if they have bugs.
Pretty much every game console ever made still works with every game for that console, but when it's Windows you never know.
Microsoft's strategy is to maintain backward compatibility as much as possible.
Microsoft is the only company to pick all three. That's not strategy, that's indifference.
All because game developers prefer to target this OS, warts and all, rather than deal with GNU/Linux fragmentation.
I might go back to Debian. I'm only really using Ubuntu since that and RHEL are what we use at work.
It was quite the shock to me recently when I had to use a Surface Laptop (the ARM one). Snapdragon X Elite & 32GB of RAM and took almost double the time to get to the desktop from a cold start compared to my M2 Air. Then even once everything was loaded, interacting with the system just did not feel responsive at all, the most egregious being file explorer and the right click menu.
And I have my own gripes with macOS feeling like it's slow due to the animations and me wanting to move faster than it can. Windows used to happily oblige, but now it's laggy.
Microsoft is too caught up in shoving Copilot everywhere to care though.
Office has been preloading on startup since Windows 95.
At some point I started Autoruns and just disabled every DLL that didn't come from Microsoft and that I didn't strictly need. It sped up Windows immensely, and I later went back and re-enabled maybe one or two of those DLLs because disabling them broke functionality.
I could've saved weeks of my life if Microsoft had just added a little popup that says "7zip.dll is adding 24% extra time to opening right click menus, disable it Y/N" every time a badly programmed plugin slowed down my shell.
Same goes for OS X now — it seems every OS grows incomprehensible to the company that makes it
Which is sad, because one of the main functions of an OS should be to protect you from misbehaving applications
Linux is a bit better, but I think even the Debian GUI suffers from trying to be convenient/magic rather than predictable/robust
The tough part with (implied) multifaceted comments is that nobody can just respond to things like that; they have to guess which meaning could still make sense to them (which is a dangerous game) or just not engage.
It's stupid but Windows has choked on that situation for many years.
And because Explorer continues to support those interfaces today, that means if anything integrates with the shell via those interfaces, Explorer ends up with lots of operations that are forced into being synchronous. You can avoid some of it by being extremely selective about what sorts of shell extensions you let be installed.
Windows doesn’t have performance requirements for most plugins/extensions. It would be great if they did, but it hasn’t been the culture of their ecosystem thus far.
>Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon ...
https://learn.microsoft.com/en-us/sysinternals/downloads/pro...
Probably all you can do is keep poking your IT support and hope it goes up the line until someone finds or creates the fix/workaround.
While there's been a bit of polish, the two barely sip hardware resources. So much so that you can put them on something like a Raspberry Pi 3 and still get a decent experience.
The solution seems to be installing and fully activating the operating system in the VM while it's still possible, then archiving the VM image. However, I don't know how reliable this method is, since the Windows OS may require reactivation if the hardware configuration changes significantly. Therefore, if the future VM host machine exposes slightly different hardware to the guest machine, the activation might be gone.
I agree in the general case, but this particular case isn’t a good argument for it.
You can't blame the user for the fact that the old software contains something that is, technically, an invalid use of the API, meaning the software shouldn't have worked, even in the past. The only way to reliably make the old software work as intended is to have an OS version from that time available.
I won’t argue about your other point, as there are arguments either way. I just don’t think this particular example makes a good case, and I suspect that it wouldn’t have been made if the full workings of the bug had been properly understood.
Every time I've tried it, from 2007 to now, it's been a buggy hunk of crap. I normally try not to disparage people's software projects, but I really don't get ReactOS. I tried it again just a few weeks ago, actually. It's barely usable. You'll have far fewer problems just using Wine on Linux.
That's the thing with these kinds of projects that aim to run vast libraries of preexisting software — they're crap for a long, long time, until suddenly there's enough compatibility that hey, it's actually usable. The time vs usability graph for them is very non-linear.
Wine was "garbage" for decades as well. For a long time, it wouldn't do a satisfactory job of running anything but simplest win32 apps.
Same for Ruffle, the open source flash player, but it got to that point much quicker because the API surface is orders of magnitude smaller.
So, VERY much in keeping with Windows tradition, then!
"Once upon a time, pointers on the Macintosh had 24 bits..."
https://news.ycombinator.com/item?id=44632615
Though at least back then, they provided backward-compatibility modes for old software. You know, back when the expected service lifetime of your Mac was longer than that of your dog.
In the retro-games emulation scene there are quite a few people who write binary patches for popular retro games to fix such bugs. Perhaps this approach (and the skills needed for it) should become more popular outside that scene.
You just invented "run in compatibility mode"
I've been using Linux only as a desktop for 30 years, so that's a strange comment. For sure games and other software that was written to run on Windows exclusively, ran best on Windows (who could have predicted it!). But as a desktop, Linux has been usable for many decades.
Windows is actually excellent at maintaining backwards compatibility. A program written 30 years ago probably still works today.
Rather: Windows is actually excellent at maintaining backwards compatibility for binary programs.
There exist lots of other backwards-compatibility topics like
- being backwards-compatible in the user interface (in a business setting, ensuring that business-critical applications still run is IT's problem, but if the user interface changes, people will complain; I'll just say "Windows 8" or "Office ribbons" when they were introduced). I would, for example, claim that very specifically for the shell interface, the GNU/Linux crowd cares a lot more about user-interface backwards compatibility than Microsoft does.
- being backwards-compatible in the hardware requirements (i.e. "will it still run on an old computer"). I'll just mention the drama around Windows 11: it doesn't run on still actively used hardware, so Microsoft cannot force the update on all Windows 10 users, but on the other hand it wants to terminate support for Windows 10.
Maybe if you ignore things like systemd radically changing how services and init systems work, or the massive changes with NetworkManager and firewalld compared to iptables. GNOME today looks pretty much nothing like it did when I first started using Linux. Now we install software through snaps and whatnot, or move from yum to dnf or other package managers.
Using Linux today feels very different than it did 20 years ago. I bet most scripts I wrote for Ubuntu over 20 years ago would fail to execute today on a fresh modern install.
I am explicitly talking about the shell (Bash, ksh, zsh). Other parts of the GNU/Linux stack changed a lot.
The Windows analogue of some UNIX shell is rather explorer.exe, and this is exactly what my Windows 8 example refers to.
And bash isn't "some mostly deprecated tool that was only used by power-users"? Think people are mostly using bash for their interface on their steam decks and Android phones and what not? Do most people boot Ubuntu straight into text mode or immediately launch a DE? Grandma using lynx to browse Facebook?
Explorer is a desktop environment. Which, yes, the desktop environment landscape in Linux these days looks pretty different from what was around 20 or so years ago.
You're constantly moving the goal posts and comparing apples and oranges here. Originally saying GNU/Linux user interfaces, then shifted to only text shells, and then comparing those text shells to entire desktop environments while ignoring the forest of constantly changing desktop environments of Linux.
And even then, most of those .bat scripts I've written since XP that only use system tools and commands will largely still run and do the same thing today. I can't say the same for the same time frame on most major Linux distros, which have swapped out large parts of their internal tooling.
Legacy compatibility is one of Windows' biggest strong points. Neatly containing "old Windows" while still providing the best experience for it would solve the puzzle of how MS could prune the core OS without giving users a reason to move away, if that's what it wanted to do.
That's a completely backwards take though. Windows is practically the only platform in the world that cares about backwards compatibility. On Windows you can run a 20 year old executable and you've got pretty good chance it runs. With Linux operating systems you have no idea if something compiled on the last LTS release runs on the next one.
This is one of the few issues where having a private company control the entire stack and providing a stable ABI for decades is actually a benefit, to the point where your target if you want to build games on linux is...Win32 (https://blog.hiler.eu/win32-the-only-stable-abi/)
The title and discussion of the article is "Why is Windows still tinkering with critical sections." not "Why Windows is cast in stone."
Running Linux on regular consumer hardware in 2005 was not really any harder than it is today. In fact, many of the same problems still exist! GPU drivers and janky wifi and power-saving modes, same shit, different decade.
There's Steam Deck now, and Android, but those are still quite proprietary driven by single companies, so I'm not really sure they fit what you mean about an open alternative.
Have a look at ReactOS:
> https://reactos.org/
> https://en.wikipedia.org/wiki/ReactOS
The Linux community already stumbles at this and needs Windows to help it out. "Win32 is the only stable ABI on Linux" has been said so many times it isn't a joke any more. Keep in mind that the OS being open doesn't make the games open. Wine is possible because of Win32's die-hard commitment to long-term binary compatibility. I'm not so sure we're in a bad situation here. The Linux userspace has never had this degree of backwards binary compatibility. The kernel doesn't break userspace but userspace breaks itself all the time.
Linux userspace gets lots of other benefits from this rapid-iteration approach that doesn't concern itself with literally decades of binary compatibility, but keeping old games running indefinitely isn't one of them.
https://news.ycombinator.com/item?id=43772311