Ghosts of Unix Past: a Historical Search for Design Patterns (2010)
Posted 3 months ago · Active 3 months ago
lwn.net · Tech · story
calm · positive
Debate: 20/100
Key topics: Unix, Design Patterns, Software History
The article explores the historical context and design patterns of Unix, sparking a discussion on the relevance and influence of Unix's design decisions on modern software development.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
- First comment: 4d after posting
- Peak period: 11 comments in the 96-108h window
- Average per period: 8 comments
- Comment distribution: 16 data points (based on 16 loaded comments)
Key moments
- 01Story posted
Oct 3, 2025 at 12:01 AM EDT
3 months ago
Step 01 - 02First comment
Oct 7, 2025 at 12:13 AM EDT
4d after posting
Step 02 - 03Peak activity
11 comments in 96-108h
Hottest window of the conversation
Step 03 - 04Latest activity
Oct 7, 2025 at 7:33 PM EDT
3 months ago
Step 04
ID: 45458833 · Type: story · Last synced: 11/20/2025, 2:35:11 PM
Original sources:
https://archive.org/details/patternlanguages0000unse
https://archive.org/details/patternlanguages0002unse
https://archive.org/details/patternlanguages0000unse_l3y0
Just pointing out how the same general idea can take distinct forms of implementation.
It can mean a few different things (a minimal C sketch follows the list):
- Kernel objects have an opaque 32-bit ID that is local to each process.
- Global kernel objects have names that are visible in the file system.
- Kernel objects are streams of bytes (i.e. you can call `read()`, `write()`, etc. on them).
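A minimal sketch of how all three readings show up in a single POSIX interaction (the choice of /dev/urandom is just illustrative, not something from the thread):

```c
/* Minimal sketch: one open()/read() sequence exhibits all three readings. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Reading 2: the object is named in the file-system namespace. */
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Reading 1: what comes back is an opaque, per-process integer handle. */
    printf("handle = %d\n", fd);

    /* Reading 3: the object is a stream of bytes you can read(). */
    unsigned char buf[4];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read %zd bytes\n", n);

    close(fd);
    return 0;
}
```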
The first is a kind of arbitrary choice that limits modern kernels. (For example, a kernel might want to use all 64 bits to add tag bits to its handles - still possible, but now you are close to the limit.)
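For the tag-bits point, a hypothetical 64-bit handle layout might look like the sketch below (illustrative only, not any real kernel's scheme); a 32-bit fd space leaves far less room for this kind of trick.

```c
/* Hypothetical 64-bit handle layout: high bits carry a tag, low bits an
 * object index. Purely a sketch of the idea, not an existing kernel API. */
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT  48
#define INDEX_MASK ((UINT64_C(1) << TAG_SHIFT) - 1)

static uint64_t make_handle(uint16_t tag, uint64_t index) {
    return ((uint64_t)tag << TAG_SHIFT) | (index & INDEX_MASK);
}

static uint16_t handle_tag(uint64_t h)   { return (uint16_t)(h >> TAG_SHIFT); }
static uint64_t handle_index(uint64_t h) { return h & INDEX_MASK; }

int main(void) {
    uint64_t h = make_handle(0x2a, 1234);
    printf("tag=%u index=%llu\n", handle_tag(h),
           (unsigned long long)handle_index(h));
    return 0;
}
```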
The second and third are mostly wrong. Something like a kernel synchronization primitive or an I/O control primitive does not behave anything like a file or a stream of bytes, and indeed you cannot use any normal stream operations on it. What's the point of conflating the concepts of file-system paths and kernel-object namespacing? It makes a kind of sense to consider the latter a superset of the former, but they are clearly fundamentally different.
The end result is that the POSIX world is full of protocols. A lot of things are shoehorned into file-like streams of bytes (see for example: the Wayland protocol), even when a proper RPC/IPC mechanism would be more appropriate. Compare with the much-maligned COM system on Windows, which, though primitive and outdated, does provide a much richer - and safer - channel of communication.
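A concrete, Linux-specific illustration of the "protocols over byte streams" point (timerfd is my own example, not one raised in the thread): the kernel hands you "a file", but what read() returns is a tiny ad-hoc protocol, exactly eight bytes that must be reinterpreted as a counter.

```c
/* Linux-specific sketch: a timerfd looks like a file, but reads follow a
 * fixed little protocol rather than behaving like a generic byte stream. */
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (fd < 0) { perror("timerfd_create"); return 1; }

    struct itimerspec its = { .it_value = { .tv_sec = 1 } };  /* fire once, in 1s */
    timerfd_settime(fd, 0, &its, NULL);

    uint64_t expirations;
    /* Not a general stream: the buffer must be at least 8 bytes, and each
     * read delivers exactly one uint64_t expiration count. */
    if (read(fd, &expirations, sizeof expirations) == (ssize_t)sizeof expirations)
        printf("timer expired %llu time(s)\n", (unsigned long long)expirations);

    close(fd);
    return 0;
}
```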
Also, I always found it weird that a lot of things are "files" in Linux, but not Ethernet interfaces, so you have to do that enumeration dance before getting an fd to ioctl() on. I remember HP-UX having them as files in /dev, which was neat.
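A sketch of that dance on Linux ("eth0" is a placeholder interface name): since the interface is not a file, you open an unrelated socket just to have an fd to ioctl() on, and pass the interface name in a struct rather than opening a path.

```c
/* Sketch: querying an interface's MAC address via ioctl() on a throwaway
 * socket, because there is no /dev node to open for the interface itself. */
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);      /* any socket will do */
    if (fd < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* a name, not a path */

    if (ioctl(fd, SIOCGIFHWADDR, &ifr) == 0) {
        const unsigned char *mac = (const unsigned char *)ifr.ifr_hwaddr.sa_data;
        printf("eth0 MAC: %02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    } else {
        perror("ioctl(SIOCGIFHWADDR)");
    }

    close(fd);
    return 0;
}
```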
My main complaint in general with everything-is-a-file is that it isn't taken far enough:) (Well, on anything except Plan 9)
I think the article articulated it decently:
> It is the file descriptor that makes files, devices, and inter-process I/O compatible.
Or, if you like, because pushing everything into that single abstraction makes it easier to use, including in ways not considered by the original devs. Consider, for example, exposing battery information. On other systems, you'd need to compile a program using some special kernel API to query the batteries and then check their stats (say, checking charge levels). In Linux, you can just enumerate /sys/class/power_supply and read plain files to get that information.
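A short sketch of exactly that (Linux; the "capacity" attribute exists for batteries but not for every power_supply entry): the charge level is just a small text file you can open and read with no special API.

```c
/* Sketch: enumerate /sys/class/power_supply and print each battery's
 * charge percentage by reading its "capacity" attribute as a plain file. */
#include <dirent.h>
#include <stdio.h>

int main(void) {
    DIR *d = opendir("/sys/class/power_supply");
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;

        char path[512];
        snprintf(path, sizeof path,
                 "/sys/class/power_supply/%s/capacity", e->d_name);

        FILE *f = fopen(path, "r");   /* absent for AC adapters, USB ports, ... */
        if (!f) continue;

        int pct;
        if (fscanf(f, "%d", &pct) == 1)
            printf("%s: %d%%\n", e->d_name, pct);
        fclose(f);
    }

    closedir(d);
    return 0;
}
```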
I asked an LLM how to do this on Windows and got
> wmic path Win32_Battery get EstimatedChargeRemaining
Which doesn't seem meaningfully worse than looking at some sys path; it's not clear what the file abstraction adds for me there.
Because the flip side of your example is that you now have a plain text protocol, and if you wanted to do anything else besides cat’ing it to the console, you’re now writing a parser.
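The parser being alluded to, sketched for the battery case ("BAT0" and the POWER_SUPPLY_CAPACITY key are common Linux conventions, not guaranteed everywhere): once the interface is plain text, any consumer beyond `cat` has to split lines and key=value pairs itself.

```c
/* Sketch: hand-rolled parser for the KEY=VALUE lines in a power_supply
 * uevent file, just to pull out one integer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/sys/class/power_supply/BAT0/uevent", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        char *eq = strchr(line, '=');
        if (!eq) continue;
        *eq = '\0';                              /* split KEY=VALUE in place */
        if (strcmp(line, "POWER_SUPPLY_CAPACITY") == 0)
            printf("capacity: %d%%\n", atoi(eq + 1));
    }

    fclose(f);
    return 0;
}
```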
It's one of the local maxima for generality. You could make everything an object or something, but it would require a lot of ecosystem work and eventually get you into a very similar place.
> Because the flip side of your example is that you now have a plain text protocol, and if you wanted to do anything else besides cat’ing it to the console, you’re now writing a parser.
Slight nuance: you could have everything-is-a-file without everything-is-text. Unix usually does both, and I think both are good, but e.g. /dev/video0 is a file but not text. That said, text is also a nice local maximum, and the one that requires the least work to buy into. Contrast, say, PowerShell, which does better... as long as your programs are integrated into that environment.
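A sketch of the /dev/video0 point (Linux V4L2; assumes such a device exists): it is "a file" you open(), but the useful interface is ioctl() calls exchanging binary structs, not text you could meaningfully cat.

```c
/* Sketch: query a V4L2 device's capabilities, a file-shaped object whose
 * real interface is binary ioctl() structs rather than readable text. */
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/video0", O_RDONLY);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof cap);
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("driver=%s card=%s\n",
               (const char *)cap.driver, (const char *)cap.card);
    else
        perror("VIDIOC_QUERYCAP");

    close(fd);
    return 0;
}
```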