The Early Unix History of Chown() Being Restricted to Root
Source: utcc.utoronto.ca · Posted 3 months ago · Active 2 months ago
Key topics
Unix History
File Permissions
System Administration
The article discusses the early Unix history of restricting chown() to root, sparking a discussion on the reasoning behind this restriction and its implications for system administration and security.
Snapshot generated from the HN discussion
Key moments
- Story posted: Oct 13, 2025 at 12:47 PM EDT (3 months ago)
- First comment: Oct 18, 2025 at 8:48 AM EDT (5 days after posting)
- Peak activity: 17 comments in the 120-132h window
- Latest activity: Oct 20, 2025 at 6:16 PM EDT (2 months ago)
It would save me a wrapper script on my flash drive that does hacks like loading it from stdin or moving it to a temp file.
I think a more appropriate question would be: if the key fits, couldn't you change the lock?
Maybe, but that would give you three abilities:
1. Lock yourself out if you please? Not terrible.
2. Provide access to others. This makes sense: since you already have access to the file, you could theoretically share it through other channels anyway; you naturally cannot prevent this.
3. Lock others out. This one is less of a security risk and more of a nuisance risk.
I think the Unix model is simple; maybe SELinux offers more sophistication. That said, the Unix chown behaviour could have gone either way in terms of security, but in terms of design it makes sense as is.
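For concreteness, here is a minimal sketch of how the restriction behaves on a typical modern Linux system (the path and target UID are illustrative assumptions): the owner may chmod a file freely, but chown'ing it away fails with EPERM unless the caller has CAP_CHOWN.

```c
/* chown_demo.c - minimal sketch of the chown restriction on Linux.
 * The path and target uid are illustrative assumptions.
 * Build with: cc -o chown_demo chown_demo.c */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/chown-demo";
    int fd = creat(path, 0644);
    if (fd < 0) { perror("creat"); return 1; }
    close(fd);

    /* Changing the mode of a file we own is always allowed. */
    if (chmod(path, 0600) == 0)
        puts("chmod on own file: ok");
    else
        perror("chmod");

    /* Giving the file away (here, to uid 0) is denied for ordinary
     * users; only a caller with CAP_CHOWN (e.g. root) may do it. */
    if (chown(path, 0, (gid_t)-1) == -1)
        printf("chown to uid 0: %s\n", strerror(errno)); /* EPERM */
    else
        puts("chown to uid 0: ok (caller is privileged)");

    unlink(path);
    return 0;
}
```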
In this analogy, I think the analogue of the owner of the bank safe is the owner of the file. Unless you're envisioning the bank safe as representing all the files, rather than just one ...?
UNIX ownership isn't necessarily legal ownership; files are not real property.
The alternative is sending messages to daemons, but as it turns out, the attack surface of those is pretty large too, albeit not as large as setuid.
The whole "do some work on my behalf with elevated privs" is not exactly a solved problem in Unix.
Once storage space was plentiful, the pattern of "overwrite the existing file" was already well established.
Something like ZFS should have been bog standard, yet it's touted as an 'enterprise-grade' filesystem. Why is common sense restricted to 'elite' status?
Of course I want transparent compression, dedup, copy-on-write, free snapshots, logical partitions, dynamic resizing, per-user/partition capabilities & QoS. I want it now, here, by default, on everything! (Just to clarify, I've never used ZFS.)
It's so strange that in the compute space you have Docker & cgroups and software-defined networking, while in the hard-drive space I'm dragging boxes around in GParted like it's the Victorian era.
Why can't we just... have cool storage stuff? Out of the box?
FAT, ext4, and FFS are all pretty simple and bulletproof and do everything the typical user needs.
Servers in enterprise settings have higher demands but they can afford an administrator who knows how to manage them and handle problems. In theory.
It was OK from the point of view of possible data failures; I didn't have much data other than the distro and the stuff I also needed to compile under Linux.
Somehow it still managed to work with the disk, using the sectors that were not damaged.
The bulk of the safety came from the redundancy of copying the file across machines, not filesystem protections.
Compression trades off compute vs. I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.
Dedupe needs indexing to find duplicates and makes writes complex (at least for real-time dedupe). I think online dedupe has pretty limited application, but offline dedupe is interesting.
Copy-on-write again makes writes complex, and tends to fragment files that are modified. Free snapshots are only free when copy-on-write is the norm (otherwise, you have to copy on write while a snapshot is open, as on FreeBSD UFS). Copy-on-write offers a lot, but some applications would suffer.
Dynamic resizing (upwards) is pretty common now; resizing down, less so. ZFS downsizing is available, but at least when I tried it, the filesystem became unbootable, so maybe not super useful IMHO.
Logical partitions, per-user features, and QoS add complexity that probably isn't needed by everyone.
Older systems with worse compute also had worse I/O. There are cases where fast compression slows things down, but they're rare enough to make compression the better default.
If everything fits in RAM, then compression could be postponed.
And for the area in between, where your files don't fit in RAM but compressed they would, compression can give you a big speed boost.
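Back-of-envelope numbers make the point; the figures below are illustrative assumptions, not measurements. With a 200 MB/s disk, a 2 GB/s decompressor, and a 2:1 compression ratio, even a fully serial model delivers logical data noticeably faster from compressed storage:

```c
/* compression_tradeoff.c - back-of-envelope model of compression
 * vs. I/O. All figures are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    double disk_mbps   = 200.0;  /* raw disk read throughput (MB/s)  */
    double decomp_mbps = 2000.0; /* decompressor throughput (MB/s)   */
    double ratio       = 2.0;    /* compression ratio (logical/raw)  */

    /* Time to deliver 1 MB of logical data: read 1/ratio MB from
     * disk, then decompress 1 MB. Real systems pipeline these
     * stages, but even a serial model shows the win. */
    double t_plain = 1.0 / disk_mbps;
    double t_comp  = (1.0 / ratio) / disk_mbps + 1.0 / decomp_mbps;

    printf("uncompressed: %.2f ms/MB\n", t_plain * 1000);
    printf("compressed:   %.2f ms/MB (%.1fx faster)\n",
           t_comp * 1000, t_plain / t_comp);
    return 0;
}
```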
That's like saying the Romans should have just used computers.
To be fair, Frame Maker caused the rest of us a whole lot of grief back then, too. :)
The license manager daemon, lmgrd(?), would crash regularly enough that we just patched the dependency out of our binaries. Sorry about that!
It was possible to chown/chgrp as non-root in Solaris up to some version that I forget.
The protocol for changing ownership should be two step.
1. The file is put into an "offered" state, e.g. "offered to bob". Only the owner or superuser can make this state change.
2. Bob can take an "offered to bob" file and change ownership to bob.
Files can always be in an offered state; i.e., they have an offered user which is normally equal to their owner. So when ownership is taken, the two match again.
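No Unix actually implements this, so the sketch below is a toy user-space model of the proposed state machine (the struct and function names are hypothetical): step 1 lets only the owner or root offer the file, step 2 lets only the offeree take it, after which owner and offered match again.

```c
/* offer_chown.c - toy model of the proposed two-step ownership
 * transfer. All names here are hypothetical; no Unix provides this. */
#include <stdio.h>

struct finfo {
    unsigned owner;   /* current owner uid                        */
    unsigned offered; /* normally equal to owner; set by an offer */
};

/* Step 1: only the owner (or root, uid 0) may offer the file. */
static int offer(struct finfo *f, unsigned caller, unsigned to) {
    if (caller != f->owner && caller != 0)
        return -1; /* EPERM */
    f->offered = to;
    return 0;
}

/* Step 2: only the offeree may take ownership; afterwards owner
 * and offered match again. */
static int take(struct finfo *f, unsigned caller) {
    if (caller != f->offered || f->offered == f->owner)
        return -1; /* nothing offered to this caller */
    f->owner = f->offered;
    return 0;
}

int main(void) {
    struct finfo f = { 1000, 1000 };   /* alice's file */
    unsigned alice = 1000, bob = 1001;

    printf("bob takes unoffered file: %d\n", take(&f, bob));         /* -1 */
    printf("alice offers file to bob: %d\n", offer(&f, alice, bob)); /*  0 */
    printf("bob takes offered file:   %d\n", take(&f, bob));         /*  0 */
    printf("owner is now uid %u\n", f.owner);                        /* 1001 */
    return 0;
}
```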
That's what quotas are: per-user storage limits.
If Bob has a large file sitting in Alice's home directory, that counts toward Bob's quota, not Alice's. If Bob could sneakily change the ownership to Alice, while leaving the permissions open so he could still access the file, then the file would count toward Alice's quota.
(I'm the author of the linked-to article.)
For a group shared directory, assigning the disk space usage of files therein to one single user (ignoring the aspect of "which single user do you pick") is unfair to that user (his/her allowed maximum disk space is consumed) while everyone else is not charged for their actual usage.
This all came about to try to enforce rules to prevent one (or a few) rogue users from using up all disk space on the system for themselves, leaving no one else with any disk space available for their own usage.
I suppose it could work, but Mallory would then risk Alice blowing that file away (perhaps innocently, not even realizing it was Mallory's file).
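The key point in the quota discussion above is that accounting keys on a file's owner (st_uid), not on whose directory the file lives in. Here is a minimal sketch of that rule as a toy per-directory scan (a stand-in for, not the actual, quota subsystem):

```c
/* quota_by_owner.c - sketch of the quota accounting rule: each file
 * is charged to its owner's uid, regardless of which directory holds
 * it. A toy stand-in for the real quota subsystem. */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

#define MAXU 64
static uid_t     uids[MAXU];
static long long used[MAXU];
static int nusers;

/* Charge sz bytes to uid's running total. */
static void charge(uid_t uid, long long sz) {
    for (int i = 0; i < nusers; i++)
        if (uids[i] == uid) { used[i] += sz; return; }
    if (nusers < MAXU) { uids[nusers] = uid; used[nusers++] = sz; }
}

int main(int argc, char **argv) {
    const char *dir = argc > 1 ? argv[1] : ".";
    DIR *d = opendir(dir);
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    char path[4096];
    while ((e = readdir(d)) != NULL) {
        struct stat st;
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        /* The owner pays for the file, not the directory it is in. */
        if (lstat(path, &st) == 0 && S_ISREG(st.st_mode))
            charge(st.st_uid, (long long)st.st_size);
    }
    closedir(d);

    for (int i = 0; i < nusers; i++)
        printf("uid %u: %lld bytes\n", (unsigned)uids[i], used[i]);
    return 0;
}
```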
When I'd started, the cluster had three SunOS servers, named cayley, descartes, and napier; undergrad math students had their home directory allocated on a local disk on one of these three machines, which each cross-mounted the others' via NFS. At this time, however, the Math Faculty Computing Facility had just received a fancy new dedicated NFS file server from (IIRC) NetApp, and all our home directories had been moved there instead, presumably freeing up desperately-needed CPU cycles on the three compute servers so we could run the Modula-3 and μC++ compilers.
One evening I was in one of the XTerm labs in the Math and Computer centre working on a CS assignment (the only alternative being to do it from my dorm room via 2400 baud dialup). As was tradition, I had left the assignment until the night before it was due to start work on. Indeed, it seems that we all must have, because after getting part way through I needed to access some input data files that were shared from the home directory of the course account—something like ~csXYZ/assignments/N/input—only to find I could not read them.
These files were of course owned by the csXYZ course account and should have been either world-readable or readable by the corresponding csXYZ group to which all students registered that term belonged. Unfortunately something had gone wrong, and although the files were rw-r-----, they belonged to the wrong group, so that I and the other students in the class were not able to access them.
It now being after 6pm, there was no hope of tracking down one of the course professors or the tutor to rectify this before morning (and it's quite likely the assignment submission deadline was 9am).
Fortunately, I was a naive and ignorant undergrad student, and not knowing what should and should not have been possible I began to think about how I might obtain access to the needed files.
I knew about suid and sgid binaries, and knew that on these modern SunOS 4 machines you could also have suid and sgid scripts, so I created a script to cat the needed files, then changed its group to match that to which the files belonged, then tried to chmod g+s the script—but of course this (correctly) failed with a message informing me that I could not make my file sgid if I didn't belong to the group in question. I then took a different tack: I chgrped the script back to a group I did belong to, ran chmod g+s, then chgrped the script back to the group that owned the files I wanted to read.
I now know that this should have resulted in the script losing its setgid bit, but at the time I was unaware of the expected behaviour—and it seemed that the computer was as ignorant as I was because it duly changed the group as requested without resetting the setgid bit, and I was able to run the script, obtain the files I needed, and finish the assignment.
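On a POSIX-conforming system today, that hole is easy to test for: a successful chown()/chgrp() by an unprivileged caller is required to clear a regular file's set-user-ID and set-group-ID bits, which is exactly the rule the buggy server failed to enforce. A minimal sketch of such a check (run unprivileged, as a member of at least two groups; the path is an illustrative assumption):

```c
/* sgid_chown.c - check that chgrp'ing a file clears its setgid bit,
 * as POSIX requires for unprivileged callers (the rule the buggy
 * file server failed to enforce). Run as a user in >= 2 groups;
 * the path is an illustrative assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/sgid-demo";
    gid_t groups[64];
    int n = getgroups(64, groups);
    if (n < 2) { fprintf(stderr, "need membership in 2+ groups\n"); return 1; }

    int fd = creat(path, 0750);
    if (fd < 0) { perror("creat"); return 1; }
    close(fd);

    /* Set the setgid bit while the file still has our own group. */
    if (chmod(path, 02750) < 0) { perror("chmod g+s"); return 1; }

    /* Now chgrp the file to another group we belong to ... */
    gid_t other = (groups[0] == getgid()) ? groups[1] : groups[0];
    if (chown(path, (uid_t)-1, other) < 0) { perror("chown"); return 1; }

    /* ... and see whether the kernel cleared the setgid bit. */
    struct stat st;
    if (stat(path, &st) < 0) { perror("stat"); return 1; }
    printf("setgid bit after chgrp: %s\n",
           (st.st_mode & S_ISGID) ? "still set (the bug!)"
                                  : "cleared (correct)");
    unlink(path);
    return 0;
}
```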
I then headed over to the CS Club office to discuss what had happened, because I was somewhat surprised this had worked and I wanted to understand why, and I knew that despite the lateness of the hour the office would certainly be open and very likely contain someone more expert than I who would be able to explain.
The office was indeed open but no explanation was forthcoming; instead, I was admonished not to discuss this security hole with anyone until I had reported it, in person, to the system administrators.
Thus it was that bright and early the next morning I found myself in Bill Ince's office with a printout of the terminal history containing a demonstration of the exploit in hand. I informed him I had a security issue to report, and handed him the printout.
He scanned the paper for a moment or two, and then replied simply "ahh, you found it".
It seems I was not the first to report the issue, and he explained that it was due to a bug in the new NetApp file server. He then turned the monitor of the terminal on his desk around to show me a long list of filenames scrolling by, and (in hindsight rather unwisely) informed me that it was displaying a list of files that were vulnerable to being WRITTEN to due to the same hole.
He duly swore me to secrecy until the issue could be resolved by NetApp (which it was a few days later), thanked me, and sent me on my way.
> You don't have permission to access /~cks/space/blog/unix/ChownRestrictionEarlyHistory on this server.
I laughed out loud.
https://web.archive.org/web/20251018101005/https://utcc.utor...