Debian Technical Committee Overrides Systemd Change
Posted 2 months ago · Active 2 months ago · Source: lwn.net
Key topics: Debian, Systemd, Linux Distributions
The Debian Technical Committee has overridden a systemd change that would have made /run/lock non-world-writable, sparking debate about systemd's maintainership and Linux distribution policies.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 27m after posting
- Peak period: 59 comments in the 6-12h window
- Average per period: 17.8
- Comment distribution based on 160 loaded comments
Key moments
1. Story posted: Oct 24, 2025 at 6:07 AM EDT (2 months ago)
2. First comment: Oct 24, 2025 at 6:34 AM EDT (27m after posting)
3. Peak activity: 59 comments in the 6-12h window
4. Latest activity: Oct 26, 2025 at 1:29 PM EDT (2 months ago)
ID: 45692915 · Type: story · Last synced: 11/20/2025, 7:50:26 PM
> He said that he uses cu "almost constantly for interacting with embedded serial consoles on devices a USB connection away from my laptop"
Whyyyyyyyyyyyyyyy
There are a million better ways of doing this.
> create a lock file for every dial-in line to prevent its use by programs looking for a dial-out line.
[0]: https://lwn.net/Articles/1042594/
I don't see the problem. Minicom and even picocom are bloated compared to cu
Interesting take.
I think that the FHS is still extremely helpful for packagers, sysadmins and others so they won't stomp on each other's feet constantly. It helps set expectations and prevents unnecessary surprises.
Just the fact that one particular FHS rule might be outdated or even harmful doesn't mean that the FHS as a whole has outlived its usefulness.
FHS hasn't changed in years. Since then, sandboxing, containers, novel package schemes, and more are the zeitgeist. What does the FHS say about them?
Nothing keeps you from following the FHS inside your container or sandbox.
Are you referring to the location where container images live? Then `/var/lib/containers/` and `/var/lib/containers/storage/` would be perfectly FHS compliant.
Systemd frustrates and angers people with Poettering's complete disregard for bug reports, tradition, and basic common courtesy. At the same time, change needed to happen and change is gonna hurt. And big changes can't wait until they're just as stable as the old system: does anyone develop software like that in their own careers? I try not to ship complete crap but "just as stable as v1" is never a goal.
Poettering is a Microsoft employee. It is normal that he follows the direction of the mothership. What is not normal is that he has so many blind followers.
every distro has defined their own new file system layout standard
sure they all started out with the common ancestor of FHS 3.0, but diverged since then in various degrees
and some modern competing standards try to fix it (mainly UAPI Group)
(And yes, some people will go on and on about how UAPI is just a way for systemd to force its ideas on others, but if you don't update a standard for 10+[1] years and aren't okay with others taking over the work either, I don't know how you can complain about them making their own standard.)
[1]: It's more like 20 years, but 10 years ago the Linux Foundation took over its ownership.
I mean, yeah, I get it, systemd bad, democracy good, but these world-writable lock folders are actually a huge pain, and adding some shim code to upgrade to a more secure solution seems achievable?
Now obviously people these days generally know about that so hopefully don’t use predictable file names but that’s one way.
Unless you do open("/run/lock/foo.lock", O_WRONLY|O_CREAT|O_EXCL|O_NOFOLLOW)
I remember the time (around 2001-2002) when just about every binary was discovered to have some variant on this exact exploit. I happened to be linux sysadmin for a very large, high-profile set of linux boxes at the time. Happy times.
Annoying side effect: now you gotta guess which process created the darn lockfile.
A more sensible approach is to do sanity checking on the lockfile and its contents (i.e. does the contained PID match one's own binary).
Or you can use “lsof” to just tell you.
If you want what Debian provides, it's a poor choice for you... but -IME- it doesn't break on upgrade, unlike some Debian-derived distros I've tried in the past.
[0] Something along the lines of "Always try to package exactly what's provided by upstream, try hard to get distro patches upstreamed, and try to have the latest available upstream release in the 'testing' channel."
[1] Well, I do have a machine that (aside from "side-loading" kernel updates from time to time) hasn't been updated in four years. While I'll try to update that one in the normal way, I'm probably going to need to reinstall.
Thus the title reflects the most interesting bit of the story.
like overriding it now makes a lot of sense, there need to be grace periods etc.
but we live in a world where OSes have to become increasingly more resilient to misbehaving programs (mainly user programs, or "server programs" you can mostly isolate with services, service accounts/users etc.). And with continuous increases in both supply chain attacks and crappy AI code this will only get worse.
And as such, quotas/usage limits on a tmpfs shared between all user-space programs and core components like lvm2 and dmraid are kind of a bad idea.
and for such robustness there aren't that many ways around this change, basically the alternatives are:
- make /var/lock root-only and break a very small number of programs which neither use flock nor follow the XDG spec (XDG_RUNTIME_DIR is where your user-scoped locks go, as e.g. for wayland or pipewire)
- change lvm2, dmraid, alsa (the low-level parts) and a bunch of other things you could say are core OS components to use a different root-only lock dir. That is a lot of work and a lot of breaking changes, much more than the first approach.
- use a "magic" virtual file system which presents a single unified view of /var/lock, but under the hood separates entries into different tmpfs mounts with different quotas (e.g. based on user id the file gets remapped to /run/user/{uid}, except root gets a special folder, and I guess another folder for "everything else"???). That looks like a lot of complexity to support a very small number of programs doing something in a very (20+ years) outdated way. But similar tricks do exist in systemd (e.g. PrivateTmp).
kinda only the first option makes sense
but it's not that it needs to be done "NOW", like in a year would be fine too, but in 5 years probably not
I hope they have a change of mind in their approach.
https://pubs.opengroup.org/onlinepubs/9799919799/utilities/V...
Personally I find it an interesting observation, and Microsoft contributing to Linux in any way should be met with skepticism based on the entire last 30 years.
People are so quick to wipe away any wrongdoing from Microsoft as soon as they get thrown a bone, there's some interesting psychology here.
Like, should Lockheed intentionally hire North Korean programmers at cheap rates because North Korea can afford to devote resources to helping Lockheed? The issue here is not primarily that North Korea is a massive citizen-trampling megastate. It's that Lockheed's interests are misaligned with North Korea's.
Also individuals tend to prioritize work that benefits employer interest but that doesn't mean they can do things arbitrarily. It just shifts energy and focus towards certain areas. It's not a problem unless the company employs a large fraction of Debian maintainers which Microsoft doesn't.
https://www.debian.org/intro/philosophy
I think employing the project lead of systemd gives Microsoft a kind of influence that employing the packager of libjpeg-turbo wouldn't. Lennart is notorious for doing things arbitrarily, and what we are discussing here is that the Debian package maintainer for systemd is also doing things arbitrarily, and is also employed by Microsoft.
If we think about it in logical terms, they could sabotage Debian by having “interests” that are suboptimal for their core demographic.
this is similar to how euro-skeptics are the people who make the very unpopular laws inside the European Union, leading to all the negative press about the European Union. But they have to be listened to, as they are democratically elected and it is a democracy.
While work now mandates "If you want to use Linux, it has to be Ubuntu" (and I complied), on the personal front, about a decade ago I moved from "vanilla" Gentoo to Calculate Linux, which was and still is 100% Gentoo.
These days the difference is even smaller, but already 10+ years ago Calculate had sane profiles as well as all software packages as pre-compiled binaries matching those profiles.
And although systemd is one of configurable USE keywords on Calculate/Gentoo - it's still not the default.
So there probably are some folks that haven't been touched by systemd at all... For now.
[1] https://shepherding.services/manual/html_node/Introduction.h...
Needless to say, such actions are ultimately hurting the users.
I've used it as my home computer for four years, and it seems to work fine.
Does it? That means anyone who needs a lock gets superuser, which seems like overkill. Having a group with write permissions would seem to improve security more?
a global /run/lock dir is an outdated mechanism not needed anymore
when the standard was written (20 years ago) it standardized a common way programs used to work around not having something like flock. This is also reflected in the specific details of FHS 3.0, which requires lock files to be named `LCK..{device_name}` and to contain the process id in a specific encoding. Now the funny part: flock was added to Linux in ~1996, so even when the standard was written it was already on the way to being outdated, and it was just a matter of time until most programs started using flock.
This brings us to why this being an issue makes, IMHO, little sense:
- a lot of use cases for /var/lock have been replaced with flock
- a globally writable dir shared across users has a really bad history (including security vulnerabilities), so there have been ongoing efforts to create alternatives for anything like that, e.g. /run/user/{uid}, ~/.local/{bin,share,state,etc.}, systemd PrivateTmp, etc.
- so any program running as a user and not wanting to use flock should place its lock file in `/run/user/{uid}`, as e.g. pipewire, wayland, docker and similar do (specifically $XDG_RUNTIME_DIR, which happens to be `/run/user/{uid}`)
So the only programs affected by it are programs which:
- don't run as root
- don't use flock
- and don't really follow best practices introduced with the XDG standard either
- ignore that it was quite predictable that /var/lock would get limited or outright removed, due to long-standing efforts to remove globally writable dirs everywhere
i.e. software stuck in the last century, or in this case the early 2000s
But that is a common theme with Debian Stable: you have to fight even to just remove something which we have known for 20 years to be a bad design. If it weren't for Debian's reputation, I think the systemd devs might have been more surprised by this being an issue than the Debian maintainers were about some niche tools using outdated mechanisms breaking.
OK, but suppose you have a piece of software you need to run, that's stuck in the last century, that you can't modify: maybe you lack the technical expertise, or maybe you don't even have access to the source code. Would you rather run it as root, or run it as a user that's a member of a group allowed to write to that directory?
The systemd maintainers (both upstream and Debian package maintainers) have a long history of wanting to ignore any use cases they find inconvenient.
and if not, you can always put it in a container in which `/var/lock` permissions are changed to not be root-only. Which you probably should do anyway for any abandonware.
1) A piece of software can be complete.
2) It is virtuous when a piece of software is complete. We're freed to go do something else with our time.
3) It's not virtuous to obligate modifications to any software just because one has made changes to the shape of "the bikeshed".
In this case, usage of /var/lock was clumsy for a long time. And not cleaning up APIs creates something horrible like Windows. API breaks should be limited to the absolute minimum. The nice part here is that we can usually adapt and patch code on Linux.
On the other side Linux (the kernel), GLIBC/STDLIBC++, Systemd and Wayland need to be API stable. Everybody dislikes API-Instability.
This was a general question to begin with.
> There is an option for the old behavior.
The discussion never centered on an option for keeping old behavior for any legitimate reason. The general tone was "systemd wants it this way, so Debian shall oblige". It was a borderline flame-war between more reasonable people and another party which yelled "we say so!"
> It is a security issue and modern solutions to replace exist.
I'm a Linux newbie. Using Linux for 23 years and managing them professionally for 20+ years. I have yet to see an attack involving /var/lock folder being world-writeable. /dev/shm is a much bigger attack surface from my experience.
Migration to flock(2) is not a bad idea, but acting like Nero and setting mailing lists ablaze is not the way to do this. People can cooperate, yet some people love to rain on others and make their life miserable because they think their demands require immediate obedience.
> FHS isn't maintained.
Isn't maintained or not improved fast enough to please systemd devs? IDK. There are standards and RFCs which underpin a ton of things which are not updated.
We tend to call them mature, not unmaintained/abandoned.
> On Arch /run/lock is only writeable for the superusers. As user I value reliability and the legacy tools are usable.
I also value the reliability and agree that legacy tools shall continue working. This is why I use Debian primarily, for the same last 20+ years.
If FHS hadn't been unmaintained for nearly 2 decades, I'm pretty sure non-root /var/lock would most likely have been deprecated over a decade ago (or at least recommended against). We have known for decades that cross-user writable global dirs are a pretty bad idea; if we can't even fix that, I don't see a future for Linux, tbh. (1)
Sure, systemd should have given them a heads up; sure, it makes sense to temporarily revert this change to allow a transition period. But this change has been on the horizon for over 20 years, and there isn't really any way around it long term.
(1): This might sound a bit ridiculous, but security requirements have been changing. In 2000, trusting most programs you ran was fine. Today not so much; you can't really trust anything you run anymore. And it's just a matter of time until it is negligent (as in legal liability) to trust anything but your core OS components, and even those not without constraints. As much as it sucks, if Linux doesn't adapt it dies. And it does adapt, but mostly outside of the GPL/FSF space, and I think a bit too slowly on the desktop. I'm pretty worried about that.
> > FHS isn't maintained.
> Isn't maintained or not improved fast enough to please systemd devs? IDK.
more like not maintained at all for 20+ years in a context where everything around it had major changes to the requirements/needs
they didn't even fix the definition of /var/lock. They say it can be used for various lock files but also specify a naming convention that must be used, which only works for devices, and only for those not in a sub-dir structure. It also fails to specify that you should (or at least are allowed to) clear the dir on reboot, something they do clarify for temp. A footnote also says all locks should be world-readable, but that hasn't been true for a long time. There are certain lock-grouping folders (also not in the spec) where you don't need or want them to be public, as that only leaks details which an attacker could maybe use in some obscure niche case.
A mature standard is one which gets fixes, improvements and clarifications, including with respect to changes in the environment it's used in. A standard which recognizes when there is some suboptimal design and adds a warning recommending against that suboptimal design, etc. Nothing of the sort happened with this standard.
What we see instead is a standard which not only hasn't gotten any relevant updates for ~20 years but didn't even fix inconsistencies in itself.
For a standard to become mature, it needs to mature; that is a process of growing up, like fixing inconsistencies, clarifications, and deprecation (which doesn't imply removal later on). And this hasn't happened for a long time. Just because something has been used for a long time doesn't mean it's mature.
And if you want to be nitpicky, even Debian doesn't "fully" comply with FHS 3.0, because there are points in it which just don't make sense anymore, and they haven't been fixed for 20 years.
Yes. This is why Microsoft didn't decide to base Windows XP on the NT kernel and Windows 95 was nothing more than a (arguably very) pretty coat of paint on top of Windows 3.11.
It's also why multi-user systems with complicated permissions systems that ran processes in isolated virtual address spaces never got built in the decades prior to NT. All those OS researchers and sysadmins saw no reason to distrust the programs other users intended to run.
The "security issue" expressed is that someone creates 4 billion lock files. The entire reason an application would have a path to create these lock files is because it's dealing with a shared resource. It's pretty likely that lock files wouldn't be the only route for an application to kill a system. Which is a reason why this "security issue" isn't something anyone has taken seriously.
The reason is much more transparent if you read between the lines. Systemd wants to own the "/run" folder and they don't like the idea of user space applications being able to play in their pool. Notice they don't have the same security concerns for /var/tmp, for example.
i think that is somewhat reasonable. but then systemd should have its own space, independent of a shared space: /var/systemd/run or /run/systemd/ ?
This would go contrary to an unstated goal: making everyone else to dance to systemd's tune, for their own good.
[0] <https://lore.kernel.org/all/20140402144219.4cafbe37@gandalf....>
[1] <https://lore.kernel.org/all/CA+55aFzCGQ-jk8ar4tiQEHCUoOPQzr-...>
The central problem with systemd is that they don't want to let you go about your business, they want you to conform to their rule.
Looking from the outside, it looks more that this is a failure of the Debian systemd package maintainer to follow Debian's rules. (Though since I'm not a part of that community, I recognize that there may be cultural expectations I'm not aware of.)
Yes, this is a good response from upstream. I can work with that; but in that case, even this response didn't get reflected in the mailing-list discussion, or it was drowned out instantly.
My question was more general though, questioning systemd developers' behavior collectively (hence the projects' behavior) through time.
As a user, systemd has improved my productivity tremendously.
This kind of bad-mouthing of developers who work on solutions to complex problems, code that runs on billions of machines, reflects more on your own fragile ego than on them.
> As a user, systemd has improved my productivity tremendously.
Both can be true at the same time. Particularly in the beginning, there was a long string of really important things that used to Just Work that were broken by systemd. Things like:
1. Having home directories in automounted NFS. Under sysv, autofs waited until the network was up to start running. Originally under systemd, "the network" was counted as being up when localhost was up.
2. Being able to type "exit" from an ssh session and have the connection close. Under systemd, closing the login shell would kill -9 all processes with that userid, including the sshd process handling the connection, before that process could close the socket for the connection. Meaning you'd type "exit" in an interactive terminal and it would hang.
It's been a while since I encountered any major issues with systemd, but for the first few years there were loads of issues with important things that used to Just Work and then broke and took forever to fix because they didn't happen to affect the systemd maintainers. If you didn't encounter any of these, it's probably because your use cases happened to overlap theirs.
Yes, systemd and journalctl have massively simplified my life. But I think it could have been done with far less disruption.
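For what it's worth, the logout-kill behavior described in point 2 later became configurable via logind; a sketch of the relevant setting (defaults have varied across distros and systemd versions):

```
# /etc/systemd/logind.conf
[Login]
# When "yes", ending a session kills all of that user's remaining
# processes; "no" restores the traditional behavior the comment misses.
KillUserProcesses=no
```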
There's no need to be rude. While I'm not anti-systemd; it didn't change my life tremendously, either.
People tend to bash init scripts, but when they are written well, they both work and port well between systems. At least this is my experience with the fleet I manage.
Dependencies worked pretty well in Parallel-SysV, too, again from my experience. Also, systemd is not faster than Parallel-SysV.
It's not that "I had to learn everything from scratch!" woe either. I'm a kind of developer/sysadmin who never whines and just reads the documentation.
I wrote tons of service files and init scripts during Debian's migration. I was a tech-lead of a Debian derivative at that time (albeit working literally underground), too.
systemd and its developers went through a lot of phases, remade a lot of mistakes despite being warned about them, and took at least a couple of wrong turns, getting booed for all the right reasons.
The anger they pull on themselves is not unfounded, yet I don't believe they should be on the receiving end of a flame-war.
From my perspective, systemd developers could benefit tremendously from stepping down from their thrones and looking eye to eye with their users. Being kind towards each other never harms anyone, incl. you.
Any time there's systemd criticism there's always a quick rebuttal "But it was too hard writing anything in any other init system before so stop complaining".
So there's enough pass being given from the start to Systemd and the developers because it always has been forced upon us.
a company that considers "consent" to be a dirty word
Systemd doesn't work for me, but it has taken over most Linux distributions, so clearly it's got something people want that I don't understand. That was the case for PulseAudio too.
Systemd basically arose out of a frustration at the legacy issues so the whole project exists as a modernizing effort. No wonder they consider backwards compatibility low priority.
And I think more people should look into being, once again, 100% systemd-free.
> Can anyone tell why systemd developers run fast and loose with what they believe and bully everyone with a stick made out of their ideas?
Because the goal is to take control of Linux. That's why systemd is PID1. That's why Poettering works for Microsoft.
The real question is: why did that ultra-convoluted xz backdoor attempt only work on Linux systems that had systemd? People will try to wag the dog, saying "but it's because this and that made it so that xz was loaded by OpenSSH, it's got nothing to do with systemd". It's got everything to do with systemd.
And the other question is: how many backdoors are operational, today, on systems that have systemd?
Systemd is Microsoft-level bloat, running as PID 1, spreading its tentacles everywhere in Linux distros, definitely on purpose.
Poettering is moreover an insufferable bully, as can be seen once again.
From TFA:
> So what do you recommend how to go on from here? Change Debian policy (as asked in #1111839), revert the change in systemd, find a Debian wide solution or let every package maintainer implement their own solution?
I suggest Debian just drops systemd once and for all. Debian can still be made systemd-free but it's a hassle. Just make Debian systemd free once again.
Meanwhile you'll find me running systemd-less distros on VMs and running containers giving the PID 1 finger to systemd.
I can't wait to switch my Proxmox to FreeBSD's bhyve hypervisor (need to find the time to do it).
But most of all: I cannot wait for the day a systemd-less hypervisor Linux like Proxmox comes out.
It's coming and people who write stuff like: "Don't use Docker, use systemd this and systemd that" are misguided.
systemd is to me the antithesis of what Linux stands for.
I hope Debian gets pissed enough at some point to fully drop systemd.
P.S.: one of my machines runs this: https://www.devuan.org/ and honestly it's totally fine. So yup: power to all those running systemd-less distros, BSDs, etc.
>Debian Policy still cites the FHS, even though the FHS has gone unmaintained for more than a decade.
What ongoing maintenance would a file system standard require? A successful standard of that type would have to remain static unless there was a serious issue to address. Regular changes are what the standard was intended to combat in the first place.
>The specification was not so much finished as abandoned after FHS 3.0 was released...
OK.
>...though there is a slow-moving effort to revive and revise the standard as FHS 4.0, it has not yet produced any results.
So it is not abandoned then. A slow moving process is exactly what you would want for the maintenance of a file system standard.
>Meanwhile, in the absence of a current standard, systemd has spun off its file-hierarchy documentation to the Linux Userspace API (UAPI) Group as a specification. LWN covered that development in August, related to Fedora's search for an FHS successor.
Ah. Systemd/Fedora want a standard that they can directly control without interference from others.
A standard does no good if it does not reflect reality. I think it is a worthwhile effort to try to bring it back in line with actual real world usage.
it's remarkable to me that NixOS manages to run so well despite breaking the FHS so thoroughly. and not just in superficial ways like not calling it /bin, I mean forsaking dynamic linking (hence /var/lib and /usr/lib), keeping man pages, resources and config bundled into the same derivation as the binary sometimes, and occasionally hacking up binary blobs to rewrite rpaths.
on the other hand, there's a place for legacy distros too.
[1] https://freedesktop.org/wiki/Software/systemd/separate-usr-i...
[2] https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
systemd relies on things in /usr being available, including to decide which scripts to run, and mounting /usr would be one of those scripts, so it has a chicken-and-egg problem.
But ah, it doesn't! Instead the world needs to make sure /usr is mounted before systemd even gets started, so systemd doesn't have to fix its bug.
Personally, I don't mind /usr/bin merging with /bin, the benefit I can see is no more squabbling over whether something should be in /bin or not (i.e. is this tool needed to boot the system, or not?)
> the world needs to make sure /usr is mounted before systemd even gets started, so systemd doesn't have to fix its bug.
Unironically in the same post despite being, to my untrained eye, the same thing.
One is like "I'll run some scripts in order, everything else is on you", the other is like "I'll take care of everything, I'll do that, WHAT YOU DIDN'T MOUNT /USR ? SHAME ON YOU I DON'T WANT TO DEAL WITH THAT CORNER-CASE"
From the creators of systemd we also have GNOME, PulseAudio, and Wayland. They have some design philosophy in common.
BTW most sysvinit distros barely even use sysvinit. sysvinit is a service monitor, similar to systemd but more primitive, but typically most of what it's configured to do is to launch some shell scripts on startup. We really have "systemd distros" and "ad-hoc script distros", not sysvinit distros ("ad-hoc" is not a pejorative). I don't know why they don't make init a shell script directly - you can do that, and it's typically done that way in initramfs.
If I want a nail only driven half in and at some crooked angle, that's my business.
It's not my hammer's job to agree or disagree that it's a bad nail-hammering job as far as it knows. I don't want to have to convince it of the validity of a use-case it didn't think of before, or thought of and decided it doesn't agree to support.
I just want that crude coat hanger, and I don't care who else likes it or doesn't, or who else thinks I should buy an actual coat hanger and attach it in some way that someone else approves of.
And that's what I expect of systemd? That it should complain loudly whenever me, the daemons I'm attempting to run, or the overall system is doing things in a weird, known-bad, known-fragile way and warn me about it before it breaks if possible.
especially for image based stuff it's a pain
which includes OCI images for things like docker
but also image-based distros like e.g. ostree (as used through rpm-ostree by Atomic Fedora desktops like Fedora Silverblue, and, in a similar but different form, something Ubuntu has been experimenting with)
Your comment seems fully unrelated to my point that overlaying images is much more of a pain if the things you might want to shadow and/or extend are distributed, or even duplicated, across many different places when they could just be in one place.
[1] https://lists.busybox.net/pipermail/busybox/2010-December/07...
this doesn't matter for OS X, whose main changes mostly tend to be diverging away from its roots in a fully proprietary direction
but it does matter if you build image-based Linux distros, which might be the future of Linux
One of the purposes of usrmerge is to cleanly separate the read-only and read-write parts of the system. This helps with image-based distros, where /usr can be on its own read-only filesystem, and related use cases such as [1]. Usrmerge is not required for image-based distros to work [2], but it makes things cleaner.
macOS, starting in 2019, is also an 'image-based distro', in that it has a read-only filesystem for system files and a separate read-write filesystem for user data. However, the read-only filesystem is mounted at / instead of /usr. Several different paths under the root need to be writable [3], which is implemented by having a single read-write filesystem (/System/Volumes/Data) plus a number of "firmlinks" from paths in the read-only filesystem to corresponding paths in the read-write filesystem. Firmlinks are a bespoke kernel feature invented for this purpose.
Both approaches have their advantages and disadvantages. The macOS approach is nice in that the system filesystem contains _all_ read-only files/directories, whereas under "distro in /usr" scheme, you need a separate tmpfs at / to contain the mount points and the symlinks into /usr. But "distro in /usr" has the advantage of making the separation between read-only and read-write files simpler and more visible to the user. Relatedly, macOS's scheme has the disadvantage that every writable file has two separate paths, one with /System/Volumes/Data and one without. But "distro in /usr" has the opposite disadvantage, in that a lot of read-only files have two separate paths, one with /usr and one without. Finally, macOS's scheme has the disadvantage that it required inventing and using firmlinks. Linux can already achieve similar effects using bind mounts or overlayfs, but those have minor disadvantages (bind mounts are more annoying to set up and tear down; overlayfs has a bit of performance overhead). Actual firmlinks are not necessarily any better, though, since they don't have a clear story for being shared between containers (which macOS does not support). It is nice that "distro in /usr" doesn't require any such complexity.
Ultimately, the constraints and motivations on both sides are quite different. macOS couldn't have gotten everything read-only under one directory as easily because it has /System in addition to /usr. macOS doesn't have containers. macOS doesn't have different distros with different filesystem layouts and deployment mechanisms. And philosophically, for all that people accuse systemd of departing from Unix design principles, systemd seems to see itself as evolving the Unix design, whereas macOS tends to treat Unix like some legacy thing. It's no surprise that systemd would try to improve on Unix with things like "/bin points to /usr/bin" while macOS would leave the Unix bits as-is.
[1] https://lwn.net/Articles/890463/ [2] https://blog.verbum.org/2024/10/22/why-bootc-doesnt-require-... [3] https://eclecticlight.co/2023/07/22/how-macos-depends-on-fir...
Before the group that started the current update effort came along, it had not been touched in about a decade. That’s not slow-moving: that’s abandoned.
Developers have this thing where they will think of a standard as a specification. Instead it is a statement of political will. Saying that a standard is "abandoned" due to lack of "maintenance" seems like an example of thinking of a standard as the instantiation of a specification, i.e., an actual program.
What's the timeline for software?
Laws remain in force until they are formally:
* Repealed (abolished) by the relevant legislative body (Parliament, Congress, etc.).
* Struck down by a court as unconstitutional or otherwise invalid.
A 150 year "delete" timer would genuinely undermine the foundation of the legal system. Lawyers, judges, and businesses rely on the continuity of core laws (e.g., contract, property, and tax law). If a 150-year-old property law suddenly lapsed, it could instantly void millions of land titles and commercial contracts...
False. They are still in force - they have just become unenforceable. There's a crucial difference, as the US is currently finding out: as long as they are on the books, a Supreme Court decision can instantly render them enforceable again - even against the wishes of the population.
The proper thing to do would be to "garbage collect" unenforceable laws, but politicians are (understandably) hesitant to spend political capital on it when it doesn't provide any tangible return.
In addition, laws are typically regularly amended to handle new societal developments, to clarify wording, or to fit better with other laws or changes in attitudes. A law that has gone 150 years without being amended at all is probably a law that falls into the categories above and is obsolete.
Of course, all this is getting somewhat off-topic, but the point is that laws absolutely can become outdated and unmaintained, either deliberately or by happenstance. And the inverse is also true: most laws that people deal with regularly are kept up-to-date to ensure that they still reflect the needs and wills of the society they're being used in.
Those laws survive not because anyone considers them a good idea, but simply because the issues caused by ignoring them are substantially smaller than the effort involved in removing them.
We also have a bunch of laws that are still followed, but only in the most technical sense. Every "Parliamentary train" schedule falls into that category: train services that must be provided at least once a day, sometimes only once a week, which nobody actually uses, and in some cases only travel to stations with no practical public entrances. Those laws don’t survive because anyone thinks they’re a good idea; it’s just easier to run the train than it is to get parliament time to abolish the law.
Meanwhile, some laws that are only months old are ignored by law enforcement because nothing forces them to read them. That same effect is why so many old laws are ignored rather than formally repealed: when nobody is riding a horse, nobody cares how you need to tie one up when visiting a store, etc.
True, but it's been updated a lot more recently than that.
The last update was still much longer ago than 10 years, of course. The most recently ratified amendment to the Constitution - the Twenty-Seventh Amendment, ratified 1992 - was, incredibly enough, proposed in 1789 along with the ten we know as the Bill of Rights and another one which was never ratified. And of the twenty-seven amendments ratified so far, the one most recently proposed by Congress, the Twenty-Sixth Amendment, was both proposed and ratified in 1971.
How does that somehow have an impact on anything else? Because by that standard, every change to any law updates all existing laws that were not changed. Or I’m just completely misunderstanding your point here.
It’s certainly true that the constitution is old and crusty overall and desperately needs an overhaul, but the discussion was about when old laws which haven’t been updated in a while are ignored or enforced.
The constitution is indeed one law, not several different laws, and it’s been updated far more recently than its original year of promulgation or ratification. And it’s still mostly enforced (with increasing exceptions but that’s another discussion entirely).
18 times: 27 total amendments, with 1-10 all passing on December 15, 1791.
> a few decades ago
There hasn’t been a meaningful change in over 54 years.
> The constitution is indeed one law, not several different laws
Those recent amendments are, at a minimum, different laws. If you want to call it one law, then there are at least two kinds of federal law in the US: one kind needs to be ratified by the states and the others don’t.
There might be minor alterations to details, but the core laws are mostly older than that. Murder, theft, etc don't change that much.
Even the silly confusing ones have a long life. E.g. "Rule against perpetuities"
That wouldn't be my go-to example of a silly law. It's what prevents control of property from remaining permanently with the will of a dead person who managed to own the property outright. It says that, at some point, the will can have no more influence and full ownership vests in someone who's alive.
A law on its own can mandate the use of a specific standard, but a standard on its own is no law.
So much so that often doing non-standard stuff is the most successful route. Dumb example: Apple and all of its proprietary, non-standard stuff.
I completely agree that regular updates are not a requirement for standards to remain relevant, but it does require the ecosystem to still adhere to them - and the problem is that Linux users are increasingly deviating from the FHS.
The FHS does not accurately describe the situation on-the-ground, there are no plans to update the FHS to accurately describe the situation on-the-ground, and there are no plans to update the ecosystem to accurately implement the FHS.
Like it or not: the FHS is dead, and nobody seems interested in reviving it.
> What ongoing maintenance would a file system standard require? A successful standard of that type would have to remain static unless there was a serious issue to address. Regular changes are what the standard was intended to combat in the first place.
It's 2025: anything that wants to be considered modern (and everything should want that) needs to be undergoing constant change and delivering regular "improvements."
>>...though there is a slow-moving effort to revive and revise the standard as FHS 4.0, it has not yet produced any results.
> So it is not abandoned then. A slow moving process is exactly what you would want for the maintenance of a file system standard.
The FHS people need to get off their butts. There's no excuse for that pace now that we have such well-developed AI assistants. They should be pushing quarterly updates at a minimum, and a breaking change at least every year or two. It's been obvious for decades that "etc" is in urgent need of renaming to "config", "home" to "user", and "usr" to "Program Files" to keep up with modern UX trends.
FHS seems to specifically imbue the user with the responsibility and consequences of filling up the disk.