Installing and Using HP-UX 9
Posted: about 2 months ago · Active: about 2 months ago
Source: thejpster.org.uk · Tech story · High profile
Sentiment: calm, positive · Debate: 20/100
Key topics
Vintage Computing
HP-UX
Unix History
The author shares their experience installing and using HP-UX 9 on an HP 9000 Model 340, sparking nostalgia and discussion among commenters about the history of Unix and HP-UX.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 43s after posting
Peak period: 23 comments (6-12h window)
Average per period: 7.9
Comment distribution: 63 data points (based on 63 loaded comments)
Key moments
1. Story posted: Nov 10, 2025 at 3:48 AM EST (about 2 months ago)
2. First comment: Nov 10, 2025 at 3:48 AM EST (43s after posting)
3. Peak activity: 23 comments in the 6-12h window (hottest window of the conversation)
4. Latest activity: Nov 13, 2025 at 10:29 AM EST (about 2 months ago)
ID: 45873904 · Type: story · Last synced: 11/20/2025, 6:12:35 PM
The clusters had a shared OS image - that is a single, shared root filesystem for all members. To allow node-specific config files, there was a type of symbolic link called a “Context Dependent Symbolic Link” (CDSL). They were just like a normal symlink, but had a `{memb}` component in the target, which was resolved at runtime to the member ID of the current system. These would be used to resolve to a path under `/cluster/members/{memb}`, so each host could have its own version of a config file.
The single shared root filesystem made upgrades and patching of the OS extra fun. There was a multi-phase process where both old and new copies of files were present and hosts were rebooted one at a time, switching from the old to the new OS.
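A minimal sketch of how that looks on disk; the path and the member directory naming are illustrative assumptions rather than exact output:

```sh
# A CDSL looks like an ordinary symlink whose target contains the
# literal string {memb}:
ls -l /etc/rc.config
#   /etc/rc.config -> ../cluster/members/{memb}/etc/rc.config
#
# At runtime the kernel substitutes the local member ID, so on member 3
# the link effectively resolves to:
#   /cluster/members/member3/etc/rc.config
# and each host sees its own private copy of the file under the shared root.
```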
https://www.am-utils.org/
The am-utils "amd" known as its running process current use I don't have much to say as I've not much seen it as at least Linux distros have had autofs-tools quite long time. But -90 something am-utils was the thing we mostly used.
Adding: Oh, that made me remember we had then also user mode nfs daemon, which allowed re-exporting remote mounts, which was at times with smaller disks and always looking where to get it more if nothing but temporary storage great help. Current kernel based nfs doesn't support it any more.
For details see https://man.netbsd.org/symlink.7, the "Magic symlinks" section at the very end of the manual page.
All the man pages were well written, nicely formatted, and easy to read, and almost all came with valuable examples that usually gave you a quick enough understanding to check usage. That is absolutely the thing I've missed on other *nix systems since.
There are too many things that were done so nicely, and that made HP-UX pleasant to maintain, to try to remember and list them all. Unfortunately, the shell environment was no match for the convenience of the GNU tools Linux had from the beginning, at least not without making the effort to install them on HP-UX (read: compile them from source, for quite a long time), if that was even allowed. At the university computing center that was no problem, but on the telco side it was a big no-no -- not without getting the product owner's permission first :/
Just as an example, Ignite-UX was one of my favourites on HP-UX. With one simple command and a few options you got a bootable DAT tape that could then be used either to recover the whole running, fully functional system, or to clone a system you had developed, first to the staging lab and then on up to production, with ease. That was a great time saver for major upgrades and migrations. None of the Linux bare-metal backup systems I've tested have been able to recover exactly the same disk layouts; usually the LVM part is poorly done. As have VMware P2V migration tools been too, by the way.
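A hedged sketch of the kind of one-liner meant here; the command exists in Ignite-UX, but the flags below are quoted from memory and varied across versions:

```sh
# Write a bootable recovery archive of the running system to DAT tape.
#   -v                  verbose progress
#   -a /dev/rmt/0mn     no-rewind tape device to write the archive to
#   -x inc_entire=vg00  include the entire root volume group
# (Check make_tape_recovery(1M) on a real system before trusting these flags.)
make_tape_recovery -v -a /dev/rmt/0mn -x inc_entire=vg00
```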
The Linux LVM that Sistina built first, before Red Hat bought them, implements pretty much exactly what HP-UX had already had for some time by then.
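A small illustration of that parity; the device names are placeholders and the HP-UX option details are from memory, so treat this as a sketch:

```sh
# Linux LVM (Sistina lineage)          # HP-UX LVM rough equivalent
pvcreate /dev/sdb                      # pvcreate /dev/rdsk/c0t1d0
vgcreate vg01 /dev/sdb                 # vgcreate vg01 /dev/dsk/c0t1d0
lvcreate -L 512M -n lvol1 vg01         # lvcreate -L 512 -n lvol1 vg01
lvextend -L 768M /dev/vg01/lvol1       # lvextend -L 768 /dev/vg01/lvol1
mkfs.ext4 /dev/vg01/lvol1              # newfs -F vxfs /dev/vg01/rlvol1
```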
Occasionally I find some stuff via a search engine, but mostly at random.
It would be nice if anyone who still has contacts could ask whether HPE would be willing to relax the licensing on at least parts of HP-UX, such as the documentation, and let archive.org take them, so we could occasionally check things as a reference for how HP-UX did it.
It would be a shame if all the work they put into those documents were lost and unavailable to the general public later on.
I don't remember any more whether those man files were preformatted and .Z compressed, or whether the troff source files and the "an" (-man) macro package were there as well. Commercial Unices did have a bad habit of not providing sources, so that could be the case.
But if someone still has the CDs, it's not too hard to check, I believe. The installation files could be packed somehow, e.g. compressed with a cpio or tar archive inside; that's what I now think they would have been. I can't remember for sure, though; it's been a bit over 25 years since I last worked with HP-UX.
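If someone does dig out the media, a few standard tools would settle it; the file names below are placeholders, not actual paths from the CDs:

```sh
# Troff source starts with macro calls like .TH; a preformatted "cat"
# page is plain text with the formatting already applied.
zcat somepage.1.Z | head

# What is inside an installation file: compressed cpio, tar, or something else?
file SOMEFILE
zcat SOMEFILE | cpio -itv | head   # list contents if it is a compressed cpio archive
tar tvf SOMEFILE | head            # or list contents if it is a plain tar archive
```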
And if I remember correctly, HP also shipped some manuals on CD alongside the printed ones. I have some kind of memory of seeing disks like that, but I never used them; we had paper manuals back then, which were then sent to the customer as part of our product. Nor do I have any idea what format those documents, or the whole documentation CDs, would be in. PostScript or PDF if we're lucky, but it could be some proprietary format in the worst case.
Many HP-UX boxen (servers) came with a default (interactive) multi-user OS license. Product differentiation, which HP sales loved, gave us license-castrated workstations that came with only a two-user license.
The first time, I had no clue about this and was wondering why some odd network management software I was installing on a server did not restart properly, which caused some head scratching. Then I found that the logs said our license was not valid, even though it had been confirmed valid in another test install.
An HP support guy I knew, and saw later, told me that I probably had to install the optional two-user package, and then the software would start. And so it was, great. But what the heck: that two-user license only limited things to two simultaneous serial-line users, only the system console was serial in those days, and everyone else logged in over the network. To be sure, I had the PM check whether we were still within the license because of that. He told me later: yep, no problem there, just get it done and we're ready to deploy it to the site.
Had fun porting software across: a radio system that we were unable to test fully except in the field (where it worked first time, which was amazing). Had many good chats with HP engineers back then (we made a large purchase as a global company), and one story I still recall was that early editions of HP-UX had an error code of 8008, until somebody in senior management at HP saw it one time (apparently no customer had ever complained about it).
I liked HP-UX, having previously worked on IBM RT systems running AIX, as well as NCR Towers with their more vanilla System V. We did have SMIT on AIX and SAM on HP-UX to fall back on for those manual-saving moments of ease. My favourite flavour of Unix from that time, though, would be the Pyramid systems' dual-universe OSx: you could have a BSD and an AT&T environment at once, and use both flavours in scripts by prefixing a command with bsd or att to run it. I don't recall offhand how it handled TERMCAP/TERMINFO (that was always an area of fun back then).
Fun times, in the days when O'Reilly and magazines like Byte or Unix World were the internet, along with expensive training courses and manuals, where you would use and thumb every page of the multi-tomed encyclopedic stack they came in.
The best C development platform I used in that era was, hands down, the VAX under DCL; the profilers and so on were a pure joy.
There's very little on the internet about those "NCR Towers."
> 1987: https://www.techmonitor.ai/hardware/ncr_marries_its_tower_un...: "Despite abandoning its effort to implement Unix on its NCR 32 chip set, NCR Corp did not abandon its ambition to bring Unix into the mainstream of its mainframe product offerings, and the company yesterday launched a facility whereby its top-end multiprocessor Series 9800 fault-tolerant mainframes can be used as servers to a network of 68020-based Tower Unix supermicros."
> 1988: https://www.techmonitor.ai/hardware/ncr_renews_its_tower_uni...: "When you sell as many machines as NCR does with the Tower, you can’t rush to incorporate a new chip as soon as it arrives because there simply aren’t enough chips to meet your needs. Accordingly the new Tower models use the 25MHz 68020 rather than the 68030."
Superior alternatives:
* tldr/tealdeer - usually just a pile of typical usage examples, almost always covers what I want
* jfgi because surely someone has tried to do this before and asked about it on an ancient forum
* llms - regurgitating the info from above, possibly with the bonus of letting them try a script in a sandbox and then entering an error-confusion loop
* source - documentation can be wrong or incomplete, but the source never lies
Some truly terrible quality pictures of the one I used to own are at https://www.chiark.greenend.org.uk/~pmaydell/hardware/tiroth... (I have long since disposed of it). Some of the people who got the machines had a play around with getting Linux booting on them. Amazingly some of that code is still in the kernel, eg drivers/net/ethernet/amd/hplance.c so it might even still work ;-)
My university in the 1990's had hundreds of Unix workstations from Sun, HP, DEC, IBM, SGI, and Linux.
It was all tied together using this so everything felt the same no matter what system you were on.
https://en.wikipedia.org/wiki/Distributed_Computing_Environm...
https://en.wikipedia.org/wiki/Andrew_File_System
The IT dept installed and compiled tons of software for the various systems and AFS had an @sys string that you would put into your symbolic link and then it would dereference to the actual directory for that specific system architecture.
https://docs.openafs.org/Reference/1/sys.html
https://web.mit.edu/sipb/doc/working/afs/html/subsection7.6....
"On an Athena DECstation, it's pmax_ul4; on an Athena RS6000, it's rs_aix31" and so on.
This sounds cool, but I've wondered: couldn't you just stick something along the lines of the sketch below in /etc/profile or so?
(I'm actually doing something like this myself; I don't (yet) have AFS or NFS strongly in play in my environment, but of all things I've resorted to this trick to pick out binaries in ~/.local/bin when using distrobox, because Alpine and OpenSUSE are not ABI compatible.)
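The snippet referred to above did not survive the snapshot; a hypothetical reconstruction of the idea, with every name and the directory layout being assumptions, might look like:

```sh
# /etc/profile (or ~/.profile): prepend an architecture-specific bin
# directory, in the spirit of AFS's @sys but resolved in the shell via uname.
ARCH="$(uname -s)-$(uname -m)"              # e.g. Linux-x86_64
if [ -d "$HOME/.local/bin/$ARCH" ]; then
    PATH="$HOME/.local/bin/$ARCH:$PATH"
fi
export PATH
```

For the distrobox case mentioned, keying on ID from /etc/os-release instead of (or in addition to) uname would presumably be what distinguishes Alpine from OpenSUSE on the same hardware.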
Glances round to see if there's any alumni from a certain Large Investment Bank also present...
I have the impression we stalled on NFS and CIFS, and I don't think WebDAV and S3 are good replacements, because those don't respect filesystem ACLs.
NFS would be great if you didn't have to choose between KDC infrastructure and "trust me bro, I'm uid 501".
When working on a class project it was great that I, as a normal user, could create an ACL group, add and remove users from it, and then give them read or write (or both) permissions on a directory in my account.
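That workflow maps onto a handful of OpenAFS commands; the user, group, and directory names here are illustrative:

```sh
# A regular user creates a self-owned group, adds classmates, and grants
# them rights on a directory in their own home volume.
pts creategroup alice:proj42
pts adduser -user bob   -group alice:proj42
pts adduser -user carol -group alice:proj42
fs setacl -dir ~/proj42 -acl alice:proj42 write   # "write" is shorthand for rlidwk
fs listacl ~/proj42                               # verify the ACL
pts removeuser -user carol -group alice:proj42    # membership is just as easy to revoke
```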
At my job we have hundreds of projects, and there are strict security requirements, so we only have permissions for the projects we are assigned to. The problem is that software and libraries are in different directories with different permissions, so they can't just add us to every group, as it would go over the limit on the number of Unix groups. So we have another command that is setuid root: we type in our password and it changes our Unix groups on the fly. The process for adding people to the groups has to go through a web site where only the project lead can add people, and even then it can take a day because some VP needs to approve it.
Yes. URZ (university data center) of Heidelberg University, circa 2005. No idea if they are still using it, but it seemed to work fine at the time.
I remember compiling AFS from source for Scientific Linux 3.x because there was a weird bug that didn't let the machines mount AFS when they were integrated with LCG (before it was renamed to WLCG: https://wlcg.web.cern.ch/)
Oh my... this comment really dates me...
You could, partially, achieve something similar by layering multiple services in plan9, but often it would mean switching over to a different protocol at some point.
Those were the days when portability and longevity were important, and there wasn't as much of a monoculture, or as much churn from incompatible code and language features.
The big disappointment for me at the time was that obsd did not also include a server component so it was comparatively much more difficult to use afs in your own infrastructure. The lesson being always make the effort to include the server side if possible. Without that you feel like a second class citizen.
* https://en.wikipedia.org/wiki/HP-UX
11.0 was released in 1997, with the latest release, 11.31, going EOL on 2025-12-31.
For example, HP-UX was a BSD-based Unix implementation that tried very, very hard to pretend it was UNIX System V (R2/R3). "No, no, really! I'm not one of those university kids!" But BSD was a far better foundation, with vastly better networking etc., so that's what it was underneath.
Unix of the era was billed as a multi-user shared system, but it wasn't always great at that. It desperately lacked much of the quiet robustness and workhorse-ness of the proprietary minicomputer OSs of the day (e.g. VMS, AOS, HP's own MPE). No vendor did more to fill that gap and make multi-workload a workaday reality. HP added a fair-share scheduler (FSS), the first multi-system high availability clustering in Unix (MC/ServiceGuard), and scores of refinements along the way. As a result, in practice HP-UX was admirably hardened, and it ran more users and more concurrent competing jobs per system than any other Unix system could. Often by a wide margin.
In ~1995 HP doubled down on FSS with Process Resource Manager (PRM), which could guarantee various "shares" (weighted priorities) of total machine resources: the first commercial Unix ancestor of today's containers. In production ~6 years before BSD jails and Virtuozzo, ~10 years before Solaris Zones, ~18 years before Docker/Linux containers, and ~20 or more years before containers were mainstream production vehicles.
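PRM's own configuration syntax isn't reproduced here; as a rough stand-in, the modern Linux cgroup v2 analogue of those weighted shares looks like this:

```sh
# Weighted CPU shares with cgroup v2 -- the same idea PRM productized in the mid-90s.
echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control   # enable the cpu controller for children
mkdir /sys/fs/cgroup/batch /sys/fs/cgroup/interactive
echo 100 > /sys/fs/cgroup/batch/cpu.weight            # ~1 share
echo 300 > /sys/fs/cgroup/interactive/cpu.weight      # ~3 shares when CPU is contended
echo $$  > /sys/fs/cgroup/interactive/cgroup.procs    # move this shell into the group
```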
Unfortunately for HP, its workstations (the ones OP acquired) weren't nearly as popular with universities and developers as Sun Microsystems', so you tended to find HP-UX in commercial production—larger servers, more workload, but smaller numbers. And thus smaller ability to promote its innovations or be selected because of them.
Hat tip to steely-eyed missile man Xuan Bui and the many unsung engineering stars of HP in the Unix era.
HP-UX 10 and 11 progressively imported more SysV code and lost some of the charm that 9.x has.
I find AIX to be fascinating. Especially 3.x against contemporaries with its LVM, and a pageable kernel. A lot of people have snap judgements against it because they saw 'smit' but don't really understand anything about it.
You're also right to shout out some of the other innovators: Data General's DG/UX did a great "let's redesign the kernel for multiprocessing and NUMA." IBM's AIX had kernel threads, pageability, and preemptibility at a time when no one else did (plus JFS, LVM, and eventually LPAR isolation). And Sequent DYNIX/ptx had some impressive multiprocessing (RCU) and large-DBMS optimizations very early on. HP was by no means alone in trying to engineer away Unix's early weaknesses.
Agreed. At the university where I worked, the cost of HP systems was the major reason the Computing Center purchased Sun, though we had stray discount-priced units from almost all vendors too.
We did have one HP 3000/MPE running the VTLS library system for quite a long time. I can't remember its exact model any more, but it was at first an old system filling a 160cm-high rack, later replaced with a smaller 3000-series box roughly the size of a 9000/E35 (a thick and very heavy PC). I did not manage it, but I helped its sysadmin with his 9-track autoloader issues a couple of times. I would certainly have recycled that tape unit for another use, but it was HP-IB (IEEE 488 / GPIB) connected, like the whole rack full of daisy-chained disks, and it was easy to believe none of it had been cheap. Too bad it was so hard to get a GPIB adapter working with other systems. The terminals used with MPE, with their local edit buffers, were weird, as was the HP Roman character set it used. All so well built that it was a shame to let them go when VTLS was retired about 30 years ago.
The maths department had better funding and kept a few HP-UX machines running for a long time. The only HP-UX box we had at the Computing Center was a C160 workstation running the OpenView NMS, but that's it.
Yes, and on the commercial side (a telco vendor) where I worked, the customer demanded HP and there were very few Sun servers; Sun was only used if and when software was not available for HP-UX at all. What I recall is that Ericsson switching systems tended to come with Sun/Solaris, and the Lucent 5ESS with HP/HP-UX, at that time.
A friend of mine went to some conference in SF; I don't recall the year. But he came back with HP-branded sunglasses, which HP handed out to everyone visiting their booth while telling them, "Remember not to look at the Sun" :D
And then Sun would hit back: "Yeah, maybe a smidge better... Not saying it is, but maybe, in an ideal light. On the other hand, with Sun, we cost a lot less. That means you can get 3 or 4 of your engineers empowered with a world-class workstation for every engineer you could with <competitor>." Boom. Those economics were compelling.
It also helped that in those days, Sun workstations became the object of desire for a lot of young developers and engineers, myself included. Sun styled itself into the "it" product.
Columbia University during the 1990s was a SunOS/Solaris shop (and, before then, VAX <https://www.columbia.edu/cu/computinghistory/>). My first year, AcIS (Academic Information Systems, IT for faculty/students) set up a single computer lab in the engineering building <https://cuit.columbia.edu/computer-lab-technologies/location...> with HP workstations. Although they booted into HP-UX and its Motif window manager, MAE provided Mac emulation and, in practice, was usually used because most students were unfamiliar with X Window, of course.
The boxes used the same Kerberos authentication as the Sun systems, so I presume I must have been using context-dependent filesystems for binaries when logging into the systems locally, or when I chose to remote log into one specifically from elsewhere (just for novelty's sake; I preferred the Sun cluster, or the Sun box dedicated to staff use).
MAE—the raison d'etre for the HP boxes—was slow and unstable, and by the time I graduated Macs, I believe, replaced HP, which made the lab consistent with what most of the other computer labs had.
I've been trying to visit this place with my daughter for 4 (or more?) years now. Every time we've been in the area (roughly once per year), I forget that it isn't open on Mondays (which is the day we typically have a couple of hours before leaving the area), walk up to the doors only to realise (again) that I've made the same mistake, and my daughter and I walk away disappointed.
We'll make it one day!
>I’ve got my HP 9000 Model 340 booting over the network from an HP 9000 Model 705 in Cluster Server mode and I’ve learned some very unsettling things about HP-UX and its filesystem.
>Boot-up video at the end of the blog, where I play a bit of the original version of Columns.
My significant experiences on HP-UX were HP Vault, one of the very first approaches to doing containers in UNIX, and going through the 32-bit to 64-bit transition.
It's great that there are folks like you preserving this history