Unix Conspiracy (1991)
Key topics: Unix, Operating Systems, Conspiracy Theories
The 'Unix Conspiracy' is a 1991 document that humorously suggests AT&T deliberately made Unix unreliable and insecure to profit from ongoing upgrades, sparking discussion on the history and impact of Unix.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion: 51 comments loaded, averaging 7.3 comments per period, with a peak of 23 comments in the first 3 hours. The first comment arrived about 30 minutes before the story's posting timestamp.
Key moments
- 01Story posted
Sep 4, 2025 at 7:12 PM EDT
4 months ago
Step 01 - 02First comment
Sep 4, 2025 at 6:42 PM EDT
-1799s after posting
Step 02 - 03Peak activity
23 comments in 0-3h
Hottest window of the conversation
Step 03 - 04Latest activity
Sep 6, 2025 at 8:19 AM EDT
4 months ago
Step 04
ID: 45133289 · Type: story · Last synced: 11/20/2025, 6:30:43 PM
In all seriousness, I have a high respect for Unix and Unix-like systems, particularly FOSS implementations like Linux and FreeBSD. When I first started using Linux in 2004, as a high schooler who grew up on Windows and used classic Macs in elementary school, the power of the Unix command line and the vast selection of professional-grade software available for free (and legally, with source code) were mind-blowing. Not long after that, I started learning about the history of Unix and its design philosophy. I had a new dream: I wanted to become a computer scientist like Ken Thompson or Dennis Ritchie, working for a place like Bell Labs, or become a professor and lead research projects like the BSDs back when Berkeley had the CSRG. Downloading and using Linux in 11th grade steered me away from my alternative plans of majoring in linguistics, mathematics, or city planning.
Sometime in graduate school I started paying more attention to the Xerox PARC legacy of the Mac, and I started realizing that perhaps Unix was not the pinnacle of operating systems design. I got bitten by the Lisp and Smalltalk bugs, and later I got bitten by the statically-typed functional programming bug (think Haskell and ML).
These days my conception of a dream operating system is basically an alternative 1990s universe where somehow the grander visions of some Apple researchers and engineers working on Lisp (e.g., Newton’s planned Lisp OS, Dylan, SK8) came to fruition, complete with Apple’s then-devotion to usability. The classic Macintosh interface and UI guidelines combined with a Lisp substrate that enabled composition and malleability would have knocked the socks off of NeXTstep, and I say this as a big fan of NeXT and Jobs-era Mac OS X. This is my dream OS for workstation-class personal computing.
With that said, I have immense respect for Unix, as well as its official successor, Plan 9.
For me, that watershed moment came when I looked at IBM's AS/400, known today as "IBM i". Despite having used computers since the 80s, and Unix/Linux since about the mid-90s, it was only much later that the AS/400 made me realize how extremely unixoid almost every OS I know is (well, every OS I knew after the 8-bit micro era or so). Just like you, it made me realize that UNIX maybe isn't the answer to everything, and that it's maybe not such a good thing that we've pretty much settled on it.
I've heard that MULTICS, one of UNIX's major influences, gives people a similar impression when they revisit it nowadays and realize how advanced it was. I personally have not looked into it yet.
(The other OS that expanded my horizons on how different OSes can be from what I know is MVS on IBM mainframes, though unlike the AS/400, not necessarily in a good way.)
It's a bit disappointing that every time somebody decides to write their own kernel, the first thing they do is implement some subset of the POSIX spec.
Still sad that it has to be that way. I've long since stopped thinking that "everything is a file", with a file being just a binary blob, is a good thing. (And that's not even getting into the other Unix concepts we take fully for granted.)
Additionally, we need to consider the career incentive structures of researchers, whether they are in industry, academia, or some other institution. Writing an operating system is difficult and laborious, and coming up with innovative interfaces and subsystems is even more difficult. When a researcher’s boss in industry is demanding research results that are productizable, or when a researcher’s tenure committee at a university is demanding “impact” measured by publications, grants, and awards, it’s riskier betting a career on experimental systems design that could take a few person-years to implement and may not pan out (as research is inherently risky) versus pursuing less-ambitious lines of research.
It’s hard for a company to turn its back on compatibility with standards, and it’s hard for academic researchers to pursue “out-there” ideas in operating systems in a “publish-or-perish” world, especially when implementing those ideas is labor-intensive.
The widespread availability of Unix, from a source-available proprietary system licensed to universities on generous terms back in the 1970s, to the birth of FOSS clones and derivatives such as Linux and the BSDs, has made it much easier for CS researchers to avoid reinventing the OS wheel and to focus on narrower subsystems instead. But this comes at the cost of discouraging research that could very well lead to the discovery of whole new ways of carrying things, metaphorically speaking. Sometimes reinventing the wheel is a good thing.
I still dream, though, of writing my own operating system in my spare time. Thankfully as a community college professor I have no publication pressures, and I get three months off per year to do whatever I want, so….
That pretty much guarantees that change can only be incremental.
Well, not quite _every_ time. For example, I’m deliberately not doing POSIX with my latest one[0], so I can be a bit more experimental.
[0] https://github.com/roscopeco/anos
At this early stage, the filesystem exists only to prove the disk drivers and the IPC interfaces connecting them. I chose FAT32 for this since there has to be a FAT partition anyway for UEFI.
The concept of the VFS may stick around as a useful thing, but it’s strictly for storage, there is no “everything is a file” ethos. It’s entirely a user space concept - the kernel knows nothing of virtual filesystems, files, or the underlying hardware like disks.
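(To give a feel for that split, here is a minimal C sketch of how a file read might travel over IPC to a user-space VFS server in a design like this. The message layout, the opcodes, and the ipc_call() primitive are invented for illustration; they are not the actual anos interfaces.)

    #include <stddef.h>
    #include <stdint.h>

    enum vfs_op { VFS_OPEN = 1, VFS_READ = 2, VFS_CLOSE = 3 };

    struct vfs_msg {
        uint32_t op;       /* operation the VFS server should perform */
        uint32_t handle;   /* server-side file handle, opaque to the client */
        uint64_t offset;   /* read offset within the file */
        uint32_t len;      /* number of bytes requested */
        char     path[64]; /* path, only meaningful for VFS_OPEN */
    };

    /* Assumed kernel primitive: send a message on a channel and block for
     * the reply. The kernel never inspects the payload, so it needs no
     * notion of files, paths, or disks. */
    extern long ipc_call(int channel, const void *req, size_t req_len,
                         void *reply, size_t reply_len);

    long vfs_read(int vfs_channel, uint32_t handle, uint64_t off,
                  void *buf, uint32_t len) {
        struct vfs_msg m = { .op = VFS_READ, .handle = handle,
                             .offset = off, .len = len };
        return ipc_call(vfs_channel, &m, sizeof m, buf, len);
    }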
Isn't the point that we don't even consider alternatives?
Yet most of them target virtual machines; nobody is programming the bare hardware anymore.
QEMU is nice, but you have to read five internet pages to start it properly.
I think it’d be amazing if things like asynchronous I/O were the standard instead of an afterthought in both kernel and user space.
Also feels like Microsoft got something right with their handles… (Try doing select/poll to wait for a socket and a semaphore — without eventfd or something else bolted on.)
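(To make the "bolted on" part concrete: on Linux, an eventfd created in semaphore mode gives you a semaphore-like counter that, unlike a sem_t, can sit in the same poll() set as a socket. A minimal Linux-only sketch, error handling trimmed:)

    #include <poll.h>
    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    /* Create once and share with the signalling thread:
     *     int sem_fd = eventfd(0, EFD_SEMAPHORE);
     * "sem_post" from anywhere:
     *     uint64_t one = 1;
     *     write(sem_fd, &one, sizeof one);
     */
    int wait_socket_or_semaphore(int sock_fd, int sem_fd) {
        struct pollfd fds[2] = {
            { .fd = sock_fd, .events = POLLIN },
            { .fd = sem_fd,  .events = POLLIN },
        };
        if (poll(fds, 2, -1) < 0)
            return -1;
        if (fds[1].revents & POLLIN) {       /* semaphore side fired */
            uint64_t one;
            read(sem_fd, &one, sizeof one);  /* takes one count, like sem_wait */
        }
        return (fds[0].revents & POLLIN) ? 1 : 0;  /* 1 = socket readable */
    }

On Windows the equivalent wait is uniform: WaitForMultipleObjects accepts a semaphore handle and a socket's event (via WSAEventSelect) side by side, which is presumably the parent's point about handles.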
I think the best explanation is contained in this very old porting guide from IBM that explains how to move UNIX programs to AS/400. It's written in a remarkably un-IBM fashion, not at all straitlaced.
https://www.redbooks.ibm.com/redbooks/pdfs/sg244438.pdf
For any experts out there, please correct me, it's been 30 years...
It's a fascinating book, very approachable given the density of technical detail it contains, and it shows how very different choices were made w.r.t. how the OS/400 system software was designed, and how the hardware was developed to support it.
As I understand it from reading the book, there are roughly three layers to the software:
-->Things you'd think of as applications on Unix, running in user space...this includes the DB/400 database engine, queue systems etc.
-->Machine Interface (MI) components, which include the management of the single-level store, I/O devices (OS/400 still supports dedicated mainframe-style I/O channels to an extent), and compilers/interpreters. Everything at this layer and above is considered "Technology Independent" software, where programs written in C/C++, RPG IV, COBOL, Java, etc. are compiled into 128-bit object code that gets translated into PPC instructions on the fly.
-->the SLIC (System Licensed Internal Code), which refers to some of the iSeries firmware, to stuff that might be considered part of the kernel in Unix, and to the PowerVM hypervisor.
The craziest thing (to me) about the single-level store is that there's no user-visible filesystem to administer; objects created in memory are automatically persisted onto permanent storage, unless you explicitly tell the system otherwise.
The OS/400 objects also have capabilities built-in; i.e. things like executables can only be run, and not edited at runtime; these are flagged during memory access by setting bits in ECC memory using a dedicated CPU instruction on Power CPUs that was added explicitly for OS/400 use.
For someone who is used to Unix, the iSeries is a really fascinating thing to learn about. Pity that it's not more accessible to hobbyists, but as the Soltis book makes clear, the purpose of the design was to satisfy a specific set of customers, who had well-defined line-of-business applications to run.
There's a copy of the previous version, Inside the AS/400, also by Frank Soltis, on archive.org:
https://archive.org/details/insideas4000000solt/
Crucially, it also describes the "single level store" that everything lives in. In short, the boundary between "main memory" or "hard disk" or other things is abstracted away, and you just have pointers to "things" instead. While in UNIX accessing some piece of main memory is fundamentally different from, say, opening a file.
Even though UNIX has a little bit of the opposite concept: trying -- but in my mind failing -- to represent as much as it can as a "file".
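(To make that contrast concrete: on UNIX you stitch the two worlds together by hand, e.g. with mmap(), whereas on a single-level store every object is already "just a pointer". A minimal POSIX sketch; the file name is made up:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);  /* the "file" world... */
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;

        /* ...brought into the "memory" world by an explicit extra step: */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;

        printf("first byte: %u\n", p[0]);  /* only now is it just a pointer */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }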
For me, it would be:
I don't know if you could remove the root user, but you can have a lot of control using pfexec. You might have to design and assemble that setup yourself, however.
ZFS, of course, for the filesystem.
Solaris Containers are great: sensible and easy-to-use containerization/jailing capabilities.
Solaris has "projects" and the FSS (Fair Share Scheduler) which should allow you to cap in both absolute and relative terms (like a share of CPU time or RAM) on a per-user or per-project/group basis, even if not in a container.
As well, you can create virtualized network interfaces called VNICs and have bandwidth management by VNIC or by port (e.g. port 443, port 25). So you could always reserve say 10% of bandwidth for SSH traffic so you never get starved, etc.
- trivially replaceable kernel, to encourage research and experimentation, real time use, etc.
- ruthless, consistent separation of user data, user configuration, and an unbreakable standard for where programs get installed. Just look at dotfiles, hier/LFS, the Windows Registry, etc. to see what a mess this is.
- a native grid compute protocol that allowed any two or more computers to merge into a single system image and automatically distribute workloads across them. We have clustering and the insane complexity of k8s today, but imagine something as easy as "Bonjour and Plan 9"!
I was already tainted by starting on 8- and 16-bit home computers, and by the magic of Amiga and OS/2, before getting into UNIX via Xenix.
The wonderful dungeon of the university library did the rest, showing me all the alternative universes.
It also helped that my university was keen on teaching the ways of Smalltalk, Lisp, ML, Prolog, and Oberon, in addition to classical UNIX lectures.
I was really lucky in that regard.
To be fair, it’s not a stretch to suspect a company wants its competitors to be dependent on its product. The theory of planned security vulnerabilities is rather more far-fetched, however.
Amazing? I thought this was the norm. /s
The days when OSes let you select text and background colors are long gone.
The server was under a heavier load than usual - it's possible the page hadn't finished loading for you, or was missing elements when you toggled reader mode.
A funnier version would be they’re in cahoots with AWS since robust infrastructure is more expensive.
I remember watching a YouTube UNIX wars documentary that posits the exact opposite.
It argued that top brass at AT&T saw UNIX as a means to the telecommunications end, not a business in its own right. When it became more popular in the 1980s, it became obvious that they'd be bad businessmen if they didn't at least make a feeble attempt at making money off of it, so some exec at Ma Bell decided to charge thousands (if not tens of thousands; I can't find a reliable primary source online with a cursory search) per license to keep the computer business from getting to be too much of a distraction from AT&T's telecoms business.
That limited it to the only places that were doing serious OS research at the time: universities. Then some nerd at a Finnish university decided to make a kernel that fit into the GNU ecosystem, and the rest is history.
As soon as they were allowed to profit from UNIX, the Lions book was banned, and the BSD lawsuit took place.
That seems more applicable to Windows these days. If you graph CVEs vs. version, there is an interesting trend.
(This conspiracy may not be factually true, but it is teleologically true)
IBM: everything is a record type, we have 320 different modules to help you deal with all 320 system-standard record types, and we can even convert some of them to others. And we have 50,000 unfixable bugs because other pieces of code depend upon the bug working the way it does ...
UNIX: everything is an ASCII byte. Done
I started writing code in the 1970s on TOPS-10, TWENEX, PLATO, and my undergrad thesis advisor helped to invent Multics. The benefits of UNIX are real, folks, the snake oil is the hardware vendors who HATE how it levels the playing field ...
This exact version of it was first published in Jargon File[1] v. 4.0.0, on 24 Jul 1996: http://www.catb.org/jargon/oldversions/jarg400.txt
That was then published as The New Hacker's Dictionary, third edition, 1996: https://books.google.com.vc/books?id=g80P_4v4QbIC&printsec=f...
[1] https://en.wikipedia.org/wiki/Jargon_File