The QNX Operating System
Source: abortretry.fail
Key topics
QNX Operating System
Real-Time Operating Systems
Retro Computing
The article discusses the history and features of the QNX operating system, sparking nostalgia and interest among commenters who share their personal experiences with QNX.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 4h after posting. Peak period: 100 comments in Day 1. Average per period: 23.2. Based on 139 loaded comments.
Key moments
- 01 Story posted: Oct 5, 2025 at 10:47 AM EDT (3 months ago)
- 02 First comment: Oct 5, 2025 at 2:31 PM EDT (4h after posting)
- 03 Peak activity: 100 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Oct 17, 2025 at 7:42 AM EDT (3 months ago)
ID: 45481892 · Type: story · Last synced: 11/20/2025, 6:12:35 PM
https://carleton.ca/rcs/qnx/installing-qnx-on-raspberry-pi-4...
Sorry, nope.
https://archive.org/details/qnxnc621_202306
I even ran the full QNX Momentics desktop OS on my home PC (a PIII 450) and it was very very impressive, way better than Linux and pretty much everything out there. Well, BeOS was also impressive with its multimedia performance, but QNX was just so much more polished and professional.
The late '90s-early 2000s was such an interesting era in computing in general - at one point I was multi-booting something like a dozen different OSes - DOS, Windows, Linuxes, BSDs, QNX, BeOS, MenuetOS... all thanks to a fully graphical boot manager whose name I forget - it had a built-in partition manager and even mouse support! All these OSes were also quite usable, unlike the niche OSes of today, many of which sadly can't even be installed on real modern hardware because of all the complexity. I really miss those days; it was truly a golden era of computing.
I mean, basically we could interact with a lot more hardware, support more file formats, filesystems, and network protocols, and use more high-level scripting languages. But there still seemed to be a huge disparity: the QNX floppy was just so much more space-efficient for what it did.
https://winworldpc.com/product/qnx/144mb-demo
I was a contractor to Netpliance Inc early in my student days. They kept charging people for service that slowly degraded to the point of clients not getting their email for months and being told to try getting a Hotmail account. Watched the share price go to pennies, then the company imploded and everyone on my contract got laid off. Important early life lessons about how loyal to be to your job and keeping your resume fresh. A priceless education you can't get in college.
Anyway, I _liberated_ an RMA'd Iopener, built a handmade IDE cable to connect to the funky pinout, added a disk and ran it at home as a music server and internet device (with a hacked Netzero dialup account, of course). Ah, those were the days.
https://devblog.qnx.com/tag/from-the-board-up-series/
First, we had ICON computers in my elementary school; we'd all try to spin the trackball as fast as it would go. Not sure if we ever broke one.
The second is when I worked at BlackBerry. I was building a feature that allowed you to use your QNX BlackBerry as a Bluetooth HID device. You could connect it to any device and use the trackpad + physical keyboard to remotely control a computer. It was fantastic. You could hook your laptop up to a projector and control slides from your BlackBerry.
Then some product manager with questionable decision-making told me to lock it down so it would only work with BlackBerry PlayBooks for "business purposes", rendering it effectively useless (since PlayBooks are all e-waste). I distinctly remember the meeting where Dan Dodge argued that since it's a standard, it should not be locked down.
I respect Dan Dodge for that, I don't think I'd work with that PM again.
With one exception - you could crash other ICON systems, or the overall network, just via the machine-to-machine chat functions.
The Neutrino 6.4 version, which was made accessible as "openQNX" to the public, can still be downloaded from e.g. https://github.com/vocho/openqnx.
Here is AI-generated documentation of the source: https://deepwiki.com/vocho/openqnx
From memory: the source was made freely available to anyone who wanted to download it, but not under an open source license, under an individual non-transferable proprietary license; so, legally speaking, anyone who downloaded the source back then (before this program was terminated) is allowed to keep it and use it forever (under some usage restrictions, I forget the details), but they aren't licensed to share it with anyone else.
So this is somewhat comparable to all those leaked Microsoft Windows source code repositories on GitHub – technically illegal, but the copyright holder obviously doesn't care to try to stop it (especially ironic for Microsoft, given that as GitHub's owner, they could put an end to it very easily, if they could be bothered).
"Access to QNX source code is free, but commercial deployments of QNX Neutrino runtime components still require royalties, and commercial developers will continue to pay for QNX Momentics(R) development seats. However, noncommercial developers, academic faculty members, and qualified partners will be given access to QNX development tools and runtime products at no charge."
Which is the whole point – legally speaking, press releases count for very little, the actual text of the license agreement is far more important.
"Promissory estoppel" doesn't work that way... it doesn't mean "I don't need to read the legal fine print, I can just go by my interpretation of the press release"
That's not what I said. Anyway, I don't know where you are located, but at least in my country it is no problem to download the code from github for non-commercial educational study, especially given the listed facts. I think we can leave it at that.
EDIT: oh I see, that's what deepwiki itself is.
What I also liked about QNX was its petite size. If I remember correctly it came on one floppy disk, and that included a GUI - not that you need a GUI with QNX, since the product will be an embedded system of sorts. All of the documentation was clear and, even if you had not read the manual, the overlap with UNIX meant that the system was far from intimidating, as most of the commands I knew worked fine, albeit with different options.
I had not fully realised how QNX had gone from strength to strength in automotive, and I didn't even know Harman owned them for a while.
Given that we have gone from single-core, 32-bit 386/486 CPUs to today's sophisticated SoCs that are thousands of times more capable, the question has to be asked: how important is QNX's superpower of realtime goodness, particularly if it is just for automotive applications such as turning on the A/C?
Surely a modern CPU that goes so much faster can do a better job without having to care about realtime performance? Or maybe Android Auto and Automotive Linux have those bases covered? Regardless, I am sure that if you want realtime embedded applications then you hire the guys that know QNX and reject those that haven't a clue.
I've seen my car's infotainment system fail and restart, but I didn't think about what was handling it underneath.
Is there a chance that QNX has a podman-type application to run containers?
I do remember compiling being slow, Turbo C under DOS was much faster.
All but a few of these computers were destroyed by the Ministry of Education. And without the LEXICON server that accompanied them, they're basically useless.
For a bit of fun, I ran the DOOM shareware demo using the official QNX4 port on a 486SX with 8 MB of RAM.
https://brynet.ca/video-qnxdoom.html
I picked up QNX6 again as a hobbyist later in life... until self-hosted QNX was killed, no bootable .ISOs after 6.5. Then they killed the hobbyist license, killed the Photon desktop GUI, dropped any native toolchain support in place of a Windows/Linux-hosted IDE. Porting software became difficult, pkgsrc no longer maintained.
They are completely noncommittal as a company, nothing short of actually open-sourcing it under the MIT/BSD would convince me to use it again.. and not another source-available effort that they inevitably rug pull again.
https://www.osnews.com/story/23565/qnx6-is-closed-source-onc...
It was a powerful lesson (amongst others) in what I came to call “the Law of Conservation of Ugly”. In many software problems, there’s a part that is just never going to feel elegant. You can make one part of the system elegant, which often just causes the inelegance to surface elsewhere in the system.
This may be an instance of the Waterbed Principle: in any sufficiently complex system, suppressing or refactoring some undesirable characteristic in one area inevitably causes something undesirable to pop up somewhere else. It's as if there is some minimum amount of complexity/ugliness/etc. that the entire system must contain while still carrying out its essential functions, and it has to leak out somewhere.
https://en.wikipedia.org/wiki/Waterbed_theory
But you are right that there is a price for elegance. It becomes an easier choice to make when you factor in things like latency and long-term reliability / stability / correctness. Those can weigh much heavier than mere throughput.
The same idea occurred to me a while ago too, which is how I originally found that link :)
Disclaimer: I don't actually know what I'm talking about, lol
(Of course I'm being vague about the cutoff for "large" and "smaller" buffers. Always benchmark!)
For small messages (open), the userspace malloc is going to have packed small buffers into a single page - so there's a chance you'd need to copy to a new userspace page anyway, and the two copies might work out better.
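As a minimal, purely illustrative sketch of that size-threshold idea (not QNX's actual implementation; the cutoff value and remap_pages() are invented):

    /* Hypothetical message-delivery policy: copy small payloads through
       the kernel, remap whole pages for large ones. */
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE   4096u
    #define COPY_CUTOFF (2u * PAGE_SIZE)  /* assumed crossover point; benchmark! */

    extern void remap_pages(void *dst, const void *src, size_t len);  /* hypothetical */

    void deliver(void *dst, const void *src, size_t len) {
        if (len < COPY_CUTOFF)
            memcpy(dst, src, len);       /* small: two copies stay cheap */
        else
            remap_pages(dst, src, len);  /* large: alias pages, skip copying */
    }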
The lower a transparent policy lies in the OS, the worse it contorts the system. Even mechanisms necessarily constrain policy, if only slightly. I strongly believe that microkernels will only be improved by adhering ever closer to true minimality. If backwards compatibility is important, put the policy in a library. But I think transparent policies are generally advisable only when user feedback indicates benefit.
Contrary to QNX, I'm not entirely convinced that network transparency by default is ultimately best, though that is a separate concern.
(this is used under-the-hood on macOS: NSXPCConnection -> libxpc -> MIG -> mach messages)
The cooler machines were specialized for fries: they used a rotating knife drum above a belt to cut defective spots out of the fries.
I've not done that for 17 years now; the newer machines are that much cooler.
I did find several machines like this on YouTube, and it's amazing to watch. (One of them had little motor-actuated slats that could kick the defective items away, almost like a foot kicking a soccer ball!)
"that marsh thing" has stuck with me, and been a frequent contributor to my work and thinking. I'll happily take Law of Conservation of Ugly as a _much_ better name for the thought :)
This was a screenshot of my Gentoo desktop around 2004!
https://lock.cmpxchg8b.com/img/fvwm_desktop.jpg
I also started using Gentoo around that time.
Things they weren't anticipating included GNU, the internet, Microsoft Windows, third-party development, the Windows applications barrier to entry, the World-Wide Web, shareware, BBSes, VARs, and the free-software movement. They didn't understand how operating systems were a winner-take-all game, so pricing your OS at hundreds of dollars was a losing strategy.
But it was 01986, so who could blame them? Their 01987 ad does try to reach out to VARs.
Still, they were certainly aware of Unix, and you'd think that would mean they were aware of uucp. They just didn't anticipate its significance. Again, though, who did?
They also don't seem to have appreciated the importance of GUIs until version 2.0 in 01987, despite the popularity of the Macintosh, the "Jackintosh" Atari ST, and GEOS on the C64. The article says that the "Photon" GUI everyone remembers wasn't until QNX 4.1 in 01994.
Of course most of this advantage has gone away, both because real-time Linux has become good enough to compete with QNX for a lot of use cases, and because QNX stopped supporting self-hosted development with QNX 6.6 in 02014. From a business standpoint of course it makes sense to focus on the automotive and other embedded markets where all the paying customers are, but from a tech enthusiast standpoint it makes me a little sad. Given the licensing cost and competition from real-time Linux on the high end, and Zephyr/FreeRTOS on the low end, I'm not sure why anyone would choose QNX for a new project today. If anyone reading this has chosen QNX for a new project relatively recently, I'd love to hear your perspective.
Have you checked out Oberon? It has a full GUI, networking stack, web browser, file browser, utilities, demo programs, etc., in a similar size. It isn't suitable for real-time control at all.
I'm also interested to hear from people choosing QNX for new projects.
[0] https://www.ifr.mavt.ethz.ch/research/xoberon/
Uses the Photon desktop environment.
https://jasoneckert.github.io/myblog/icon-computer/
https://jasoneckert.github.io/myblog/lexicon-computer/
One feature of the OS I fondly remember was that the most basic system calls (send/receive/reply) were implemented as about 3 inline assembler instructions each directly in the header file (qnx.h ?).
[1] https://herbert-janssen.de/paper/irini97-12.pdf
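For illustration, a from-memory sketch of those QNX4-style synchronous send/receive/reply calls mentioned above (the exact signatures and headers may differ from the real ones):

    /* QNX4-style synchronous message passing, reconstructed from memory. */
    #include <sys/types.h>
    #include <sys/kernel.h>   /* Send(), Receive(), Reply() */

    #define MSG_SIZE 64

    /* Client: Send() blocks (SEND- then REPLY-blocked) until the server
       calls Reply(), so a round trip is one kernel-mediated exchange. */
    int ask_server(pid_t server) {
        char smsg[MSG_SIZE] = "ping";
        char rmsg[MSG_SIZE];
        return Send(server, smsg, rmsg, sizeof smsg, sizeof rmsg);
    }

    /* Server: Receive() blocks until some client sends. */
    void serve(void) {
        char msg[MSG_SIZE];
        for (;;) {
            pid_t who = Receive(0, msg, sizeof msg);  /* 0 = any sender */
            /* ... handle the request in msg ... */
            Reply(who, msg, sizeof msg);              /* unblocks that client */
        }
    }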
Nowadays I'm not sure how it compares to other microkernel systems with wide field experience, like the Nintendo Switch's Horizon, seL4, and more recently HarmonyOS NEXT.
I don't like the paging optimization described in section 4.5 [0]. It seems like a lot of added complexity for unequal gain.
In general, the authors make many good observations on the current designs of microkernels, particularly how the proliferation of small processes harms performance. Based on my reading of this paper and many others, I think there are some pragmatic considerations for building microkernel-based systems. The granularity of processes should be curtailed when performance is critical. Security is a spectrum, and such a system can still be more secure than the status quo. Limited kernels should be colocated next to processes again, not always across address spaces (since Meltdown), deferring to a cross-address-space kernel on the harder-to-secure paths. If a process has a timer capability, and likely will for its remaining lifespan, an optimization could have a stub kernel accepting timer syscalls and forwarding the rest (see the sketch after the references below). Lastly, and this is a broader problem in most software, both code and state must be located in their proper places [1]. Use Parnas' criteria [2] for modular programming. If you believe in the power of the microkernel concept, I have this to sell you; I believe it's even more basic and necessary. It's probably one of the most fundamental concepts we have on how to write good code.
[0] https://www.usenix.org/system/files/osdi24-chen-haibo.pdf
[1] https://dl.acm.org/doi/10.1145/3064176.3064205
[2] https://wstomv.win.tue.nl/edu/2ip30/references/criteria_for_...
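A hypothetical sketch of that stub-kernel optimization (all names are invented; this is not from the paper):

    /* Hypothetical: a stub kernel colocated with a process services the
       syscalls the process is known to use (here, timers) without an
       address-space switch, and forwards everything else. */
    enum syscall_no { SYS_TIMER_SET, SYS_TIMER_READ, SYS_OPEN /* ... */ };

    extern long local_timer_op(enum syscall_no nr, long a0, long a1);    /* hypothetical */
    extern long forward_to_kernel(enum syscall_no nr, long a0, long a1); /* hypothetical */

    long stub_syscall(enum syscall_no nr, long a0, long a1) {
        switch (nr) {
        case SYS_TIMER_SET:
        case SYS_TIMER_READ:
            return local_timer_op(nr, a0, a1);     /* fast path: stays local */
        default:
            return forward_to_kernel(nr, a0, a1);  /* slow path: full kernel */
        }
    }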
> You seem to be a scholar of microkernels; are you also developing microkernels?
Nothing professional, and I haven't even gotten to actually developing. But I have a general design and many half-baked specifics. I like to push the limits of what's been done. Developer practicality is secondary to bare minimalism, especially because convenience can be built back up (if painstakingly). I'm mainly inspired by seL4 and Barrelfish.
My most radical idea is making the kernel completely bare, without even a capability system or message passing. Similar to Barrelfish, I'd have a trusted userspace process (monitor). If I place it in the same address space as the kernel, every privileged interaction adds two mode switches, which I think (for I have not demonstrated it yet!) is well worth the greater programmability of kernel functionality. seL4's use of CNodes is elegant in one sense, but in another, it hamstrings both the user processes (fine, good, even) and the kernel itself (bad). seL4's approach is undeniably a better target for formal verification, but it restricts how efficient capabilities can be. Barrelfish, which targets multicore machines in a distributed manner, makes the capability system (as the load bearing core of these kinds of microkernels) even more contorted. The kernel is the multiplexer of last resort, standing in for the hardware. The sooner the kernel is not involved, the easier everyone breathes. Instead of trying to build a framework/foundation and the building itself all at once, the framework itself is plenty valuable. The monitor gets the control of the kernel but without the dependence on hardware or the rigid interface to userspace. This partition presents a meaningfully different level of multiplexing, where the kernel and the monitor each play their own part. The monitor's view of the virtual hardware offered by the kernel is much improved.
Security and trust are not black and white, and the kernel itself should be flexible to adaptations. I could just implement seL4 or Barrelfish in the monitor instead, or diverge more and investigate the new tradeoffs. Capabilities are load-bearing here, too, so there is every reason to play around with them. How the capability system works will determine how the entire operating system works. (As an aside: I was pleased in noticing that object capabilities have a close relation to Parnas-style modules, being their interfaces. But I think what object capabilities are can be played with too.) How might capabilities be stored, or accessed, more efficiently? I think there's definitely a lot of room for improvement there. Composite offers some ideas there, though I still lean towards Barrelfish's ideas. And I imagine specialized kernels, paired with userspace processes in their address spaces (like the "true kernel" and monitor), reifying the capabilities granted to those processes. Traditional microkernel wisdom could be interpreted as requiring as little code running in kernel space as is feasible. However, I have many other parameters I wish to allow people to optimize for, not even just performance, so I offer this: the core kernel will be so minimal to the point it hurts, and the monitor picks up the slack. Then, if security is paramount, only the obviously safe, minimally augmented kernels will be exported to other processes. Programmatic generation of specialized kernels, coordinated on capabilities, even restricted to only some processes. But if willing, much more daring ventures can be tried. I even have the suspicion that one could place what amounts to Linux as a specialized kernel, as the ultimate mode of bona fide virtualization. No VirtualBox, no personality servers, or even syscall emulation. I wonder how hard the task would be. Although I should probably learn more about user-mode Linux, and similar works in other operating systems (DragonflyBSD, and seemingly future-Redox?) to just run them in user space. That's still a pipe dream for now.
Having mentioned so much about seL4, and given this thread is originally about QNX, I should mention that I don't think my dream microkernel should put so much emphasis on kernel-facilitated message passing. I really am just offering a context switch this time. There isn't even a scheduler in the "true kernel". For all of the argumentation I've seen from the seL4 team for why any form of IPC less minimal than theirs is likely suspect, I don't see a good reason to not shoot seL4's IPC in the face too. Although some care is necessary, I could make it possible for seL4 IPC to be built exactly as-is, in the aspect of maximizing register use. The other main concern of seL4's IPC, that of threading (particularly thread priorities), I find even more suspect. No threads in my kernel either! I will take scheduler activations instead, please and thank you. I think people have been misguided into believing that "threads of execution" should be supported specifically by the kernel, when in reality, they are a much higher-level abstraction. The presence of an ongoing sequence of execution is another of those concepts that must be carefully captured in our design of software, a logical concept that informs how we should write code. Kernel threading is like supposing that a person on a smartphone doesn't view the multiple app boundary crossings and plethora of UI actions as one unified whole. The entire course must be mapped out, studied, and integrated. Kernel threading gives the illusion that we can manifest threads independently of programs, but the program determines the threading. Work instead from the hardware resources, the physical cores present, offering an interface above them, and meet the program as its developers distill its abstract formulation. The kernel's task is to bring the hardware from the bottom up to the developers, because that is necessarily how developers must interact with hardware. Otherwise, we really could invent more cores and memory to accommodate all those threads. Certainly, by removing threads from the kernel, I don't claim to have solved concurrency, or priority inversion, or anything like that. I merely want the hardware to be exposed as-is, but a bit friendlier, and people can build ever more friendly abstractions as they can and will, depending on the tradeoffs.
All things should reside in their proper places. Push down accidental complexity, bring up the essential complexity, letting everything that bears the burden of supporting things above itself (chief among them are the primary multiplexers of the kernel and system services) only do so to the extent it needs to. In the kernel's case, being simply the trampoline between the hardware and the program, Liedtke's minimality principle is perfect. Putting anything else in the kernel can only be beneficial for performance, if even that, so the tradeoff is quite plain. Even trust is not gained; it may seem horrific to have a trusted userspace process such as the monitor, but really, does the first process of any operating system not have such privilege? My monitor simply has a more defined responsibility, but given that the kernel proper is naked, the overall trust has been preserved, I think. And so on, the investigation can go. In the end, I may make the edges somewhat sharper, but they were sharp to begin with, and I offer tools to dull them. But please do note if you disagree with my conclusions! This is still just my own thinking, developed without dialogue.
</rant>
If you're still in a sharing mood, feel free to post links to interesting papers or proofs of concept in the space for further education.
EDIT: I quite like your idea of making the kernel unaware of threading, though I'm not sure how to go about implementing that. This is more radical than the other great idea of moving the scheduler and the concept of time(sharing) itself to userspace (I've seen a few talks about it on YT; I forget the name of the project that explored this avenue). So effectively ring 0 should only have to deal with enforcing capability security, while everything else lives in userspace.
> This is more radical than the other great idea of moving the scheduler and the concept of time(sharing) itself to userspace
The idea of userspace scheduling has been explored widely. Hydra took the plunge, but the L4 community is still reluctant. For good reason, since this typically increases latency on a latency-critical path. This is one of the strongest motivations for optimizing context switches by increasing kernel-user colocality in the same address space.
> I quite like your idea of making the kernel unaware of threading
See scheduler activations [0]. Even seL4 has kernel threads, which I think it kept mainly out of familiarity, even though the alternative would be better for formal verification.
> So effectively ring 0 should only have to deal with enforcing capability security, while everything else lives in userspace.
That's the idea, but unlike seL4 and Barrelfish, I think wholly implementing the capability system is very inflexible. The capability representations are rigid, which fixes (i.e., makes static) performance and fixes policy (all mechanisms restrict policy somewhat). It defies programmability. That's why I want to move most of the work to the trusted userspace process, though for the specific architecture I'm thinking of, it could be another module in kernelspace instead.
Further reading:
[0] https://homes.cs.washington.edu/~tom/pubs/sched_act.pdf | Scheduler activations schedule by physical cores instead of kernel threads. An application is notified when it loses or gains a scheduler activation (a context in which to execute code), such as on preemption, on initiating a blocking I/O call, or on reentry from those actions. This makes user-level threading more powerful, as well as any concurrency model, since the hardware is more accurately exposed. (A sketch of the upcall interface follows this list.)
[1] https://barrelfish.org/documentation.html | https://barrelfish.org/publications/barrelfish_sosp09.pdf | https://barrelfish.org/publications/barrelfish_hotos09.pdf | Barrelfish is a research OS that extends seL4's capability system to multicore machines in a principled manner. It also addresses hardware heterogeneity and increased complexity in hardware by using declarative techniques broadly.
[2] https://www.usenix.org/legacy/events/osdi99/full_papers/hand... | The Nemesis operating system focuses on interactive media applications. A major limitation of many OSes is that memory management is not well treated as a latency-inducing subsystem. Self-paging means making each application handle its own page faults, clarifying time usage for memory management.
[3] See the papers I linked in https://news.ycombinator.com/item?id=45522131
[4] https://dl.acm.org/doi/10.1145/2517349.2522720 | A great overview of the technological distinctions of the L4 microkernel family, emphasizing seL4
[5] https://dl.acm.org/doi/10.5555/2685048.2685051 | A Barrelfish paper that modularizes the kernel further, allowing superb flexibility such as easily swapping out the kernel running on a core
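A hypothetical sketch of the scheduler-activation upcall interface described in [0] (all names are invented for illustration):

    /* Hypothetical: the kernel never schedules user threads. On every
       scheduling event it upcalls into the user-level scheduler on a
       fresh activation, handing over the saved context. */
    struct cpu_context;  /* saved register state of whatever was running */

    extern void enqueue_ready(struct cpu_context *);  /* hypothetical run queue */
    extern void mark_blocked(struct cpu_context *);   /* hypothetical */
    extern void run_next_thread(void);                /* hypothetical: resume a thread */

    enum upcall_event {
        ACT_NEW,        /* this address space gained a core */
        ACT_PREEMPTED,  /* an activation was preempted */
        ACT_BLOCKED,    /* a user thread blocked in the kernel (e.g. I/O) */
        ACT_UNBLOCKED   /* a previously blocked thread may resume */
    };

    void scheduler_upcall(enum upcall_event ev, struct cpu_context *ctx) {
        switch (ev) {
        case ACT_PREEMPTED: enqueue_ready(ctx);   break;  /* still runnable */
        case ACT_UNBLOCKED: enqueue_ready(ctx);   break;
        case ACT_BLOCKED:   mark_blocked(ctx);    break;  /* park until unblock */
        case ACT_NEW:       /* nothing to save */ break;
        }
        run_next_thread();  /* user-level policy picks what runs now */
    }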
We needed the help. Thank you, Dan!! We eventually ported to Linux about 6 years later, but you helped our startup get up and going.
The OS was so clean but it lacked a lot of basic tooling. Back then there was no GUI or even a graphics library. We had to build or port a lot of things, including a VCS, from scratch. My editor of choice was JOVE (I couldn't get Emacs to build). I remember digging up various papers on graphics and creating our first graphics library.
So much '90s anime in those screenshots — super nostalgic!
> The name QUNIX was a bit too close to the name UNIX for AT&T. The name of the system was changed to QNX in late 1982 following a Cease and Desist by AT&T.
Already not as nice as in the early days.
> While RV1 was limited to just C and x86 assembly language, the company was hard at work on BASIC, FORTRAN, and Pascal compilers that would utilize common code generators allowing for the mixed-use of languages without losing optimization.
Yet another example of earlier attempts at polyglot compiler stacks.
> UNIX systems come in more flavours than ice cream.
That was a fun one.
I was also a huge fan of BlackBerry phones (having used the Q5 and Z10 as daily drivers). The system was solid and had some really cool ideas. Too bad it didn't work out...
It was a great experience, especially for those of us who appreciate microkernels.
https://en.wikipedia.org/wiki/Gordon_Bell