The Future of 32-Bit Support in the Kernel
Posted 4 months ago · Active 4 months ago
lwn.net · Tech story · High profile
Controversial / mixed · Debate · 80/100
Key topics
Linux Kernel
32-Bit Support
Embedded Systems
The Linux kernel is considering dropping 32-bit support, sparking debate among developers and users about the implications for legacy hardware, embedded systems, and the future of Linux.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 42m after posting
Peak period: 65 comments in the 0-6h window
Average per period: 16 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 1, 2025 at 2:48 PM EDT (4 months ago)
2. First comment: Sep 1, 2025 at 3:30 PM EDT (42m after posting)
3. Peak activity: 65 comments in the 0-6h window, the hottest stretch of the conversation
4. Latest activity: Sep 4, 2025 at 11:56 AM EDT (4 months ago)
Maybe someone can develop such thunking for legacy Linux userland.
That said, I sometimes think about a clean-room reimplementation of e.g. the unity3d runtime -- there are so many games that don't even use native code logic (which could still be supported with binary translation via e.g. unicorn) and are really just mono bytecode, but still can't be run on platforms their authors didn't think to build them for (or which were not supported by the unity runtime at the time of the game's release).
Yeah, that's a reasonable workaround, as long as it doesn't hit that OpenGL problem above (it now mostly affects DX7-era games, since they don't have a Vulkan translation path). Hopefully it can be fixed.
The only thing I can think of is games, and the Windows binary most likely works better under Wine anyways.
There are many embedded systems like CNC controllers, advertisement displays, etc... that run those old applications, but I seriously doubt anyone would be willing to update the software in those things.
Win64S?
It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.
Main-core computing is vastly more homogenous than when I was born almost 50 years ago. I guess that's a natural progression for technology.
So it's far more pervasive than people think, and will likely be in the picture for decades to come.
Of course they chose to integrate JavaScript so that's less likely now.
[1] Ok I admit, not trivially when it comes to unpaired surrogates, BOMs, endian detection, and probably a dozen other edge and corner cases I don't even know about. But you can offload the work to pretty well-understood and trouble-free library calls.
Most Unix syscalls use C-style strings: sequences of 8-bit bytes terminated with a zero byte. With many (most?) character encodings you can continue to present string data to syscalls in the same way, since they also reserve the byte value zero for the same purpose. Even some multi-byte encodings work, because they deliberately avoid using zero-valued bytes.
UTF-16LE/BE (and UTF-32 for that matter) chose not to allow for this, and the result is that if you want UTF-16 support in your existing C-string-based syscalls you need to make a second copy of every syscall which supports strings in your UTF-16 type of choice.
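A minimal illustration of that zero-byte problem (the strings here are made up for the example): even plain ASCII text, once encoded as UTF-16LE, contains NUL bytes that any C-string interface treats as terminators.

    /* Each ASCII character in UTF-16LE carries a 0x00 high byte, so a
     * NUL-terminated API sees the string end after the first byte. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char utf8[]    = "abc";          /* bytes: 61 62 63 00 */
        const char utf16le[] = "a\0b\0c\0";    /* bytes: 61 00 62 00 63 00 00 */

        printf("strlen(utf8)    = %zu\n", strlen(utf8));    /* prints 3 */
        printf("strlen(utf16le) = %zu\n", strlen(utf16le)); /* prints 1 */
        return 0;
    }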
That's completely wrong. If a syscall (or a function) expects text in encoding A, you should not be sending it in encoding B because it would be interpreted incorrectly, or even worse, this would become a vulnerability.
For every function, the encoding must be specified, just as the argument types, constraints and ownership rules are. Sadly, many open source libraries do not do this. How are you supposed to call a function when you don't know the expected encoding?
Also, it is better to pass a pointer and a length rather than search for a zero byte that may never come.
> and the result is that if you want UTF-16 support in your existing C-string-based syscalls
There is no need to support multiple encodings, it only makes things complicated. The simplest solution would be to use UTF-8 for all kernel facilities as a standard.
For example, it would be better if open() syscall required valid UTF-8 string for a file name. This would leave no possibility for displaying file names as question marks.
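As a sketch of what that policy could look like, here is a hypothetical userspace wrapper (the kernel itself does no such validation today) that rejects file names which are not well-formed UTF-8, including overlong sequences and surrogate code points, before calling open(2).

    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <sys/types.h>

    static bool is_valid_utf8(const char *s)
    {
        const unsigned char *p = (const unsigned char *)s;
        while (*p) {
            unsigned cp, len;
            if (p[0] < 0x80)                { cp = p[0];        len = 1; }
            else if ((p[0] & 0xE0) == 0xC0) { cp = p[0] & 0x1F; len = 2; }
            else if ((p[0] & 0xF0) == 0xE0) { cp = p[0] & 0x0F; len = 3; }
            else if ((p[0] & 0xF8) == 0xF0) { cp = p[0] & 0x07; len = 4; }
            else return false;                       /* invalid lead byte */
            for (unsigned i = 1; i < len; i++) {
                if ((p[i] & 0xC0) != 0x80)
                    return false;                    /* missing continuation byte */
                cp = (cp << 6) | (p[i] & 0x3F);
            }
            if ((len == 2 && cp < 0x80) || (len == 3 && cp < 0x800) ||
                (len == 4 && cp < 0x10000))
                return false;                        /* overlong encoding */
            if ((cp >= 0xD800 && cp <= 0xDFFF) || cp > 0x10FFFF)
                return false;                        /* surrogate or out of range */
            p += len;
        }
        return true;
    }

    /* Hypothetical wrapper: refuse file names that are not valid UTF-8. */
    int open_utf8(const char *path, int flags, mode_t mode)
    {
        if (!is_valid_utf8(path)) {
            errno = EINVAL;
            return -1;
        }
        return open(path, flags, mode);
    }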
UTF-16 is annoying, but it's far from the biggest design failure in Unicode.
UTF-32 is the worst of all worlds. UTF-16 has the teeny tiny advantage that pure Chinese text takes a bit less space in UTF-16 than UTF-8 (typically irrelevant because that advantage is outweighed by the fact that the markup surrounding the text takes more space). UTF-8 is the best option for pretty much everything.
As a consequence, never use UTF-32, only use UTF-16 where necessary due to backwards compatibility, always use UTF-8 where possible.
There's also the problem that grapheme cluster boundaries change over time. Unicode has become a true mess.
Not really. Unicode is still fundamentally based on code points, which run from 0 to 2^16 + 2^20 - 1 (U+10FFFF), and all of the algorithms over Unicode properties operate on these code points. It's just that Unicode left open a gap of code points so that the upper 2^20 code points can be encoded in UTF-16 without risk of confusion with other UCS-2 text.
The expansion of Unicode beyond the BMP was designed to facilitate an upgrade compatibility path from UCS-2 systems, but it is extremely incorrect to somehow equate Unicode with UTF-16.
Then there is also the issue that technically there is no such thing as plain UTF-16; you need to distinguish UTF-16LE and UTF-16BE. Even though approximately no one uses the latter, we still can't ignore it and have to prepend documents and strings with byte order marks (another wasted pair of code points for the sake of an encoding issue), which means you can't even trivially concatenate them anymore.
Meanwhile UTF-8 is backwards compatible with ASCII, byte order independent, has tons of useful properties and didn't require any Unicode code point assignments to achieve that.
The only reason we have UTF-16 is because early adopters of Unicode bet on UCS-2 and were too cheap to correct their mistake properly when it became clear that two bytes wasn't going to be enough. It's a dirty hack to cover up a mistake that should have never existed.
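For concreteness, here is how a code point outside the BMP gets split into a UTF-16 surrogate pair versus four UTF-8 bytes; U+1F600 is just an arbitrary example, and the constants are the standard ones from the two encodings' definitions.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cp = 0x1F600;                 /* any code point above U+FFFF */

        /* UTF-16: subtract 0x10000, then split the remaining 20 bits across
         * a high (0xD800-based) and a low (0xDC00-based) surrogate. */
        uint32_t v  = cp - 0x10000;
        uint16_t hi = (uint16_t)(0xD800 | (v >> 10));
        uint16_t lo = (uint16_t)(0xDC00 | (v & 0x3FF));

        /* UTF-8: four bytes carrying 3 + 6 + 6 + 6 payload bits. */
        uint8_t u8[4] = {
            (uint8_t)(0xF0 | (cp >> 18)),
            (uint8_t)(0x80 | ((cp >> 12) & 0x3F)),
            (uint8_t)(0x80 | ((cp >> 6) & 0x3F)),
            (uint8_t)(0x80 | (cp & 0x3F)),
        };

        printf("UTF-16 surrogates: %04X %04X\n", (unsigned)hi, (unsigned)lo);
        printf("UTF-8 bytes:       %02X %02X %02X %02X\n",
               (unsigned)u8[0], (unsigned)u8[1], (unsigned)u8[2], (unsigned)u8[3]);
        return 0;  /* prints D83D DE00 and F0 9F 98 80 */
    }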
That's a strange way to characterize years of backwards compatibility to deal with
https://devblogs.microsoft.com/oldnewthing/20190830-00/?p=10...
I disagree you could just "easily" shove it into the "A" version of functions. Functions that accept UTF-8 could accept ASCII, but you can't just change the semantics of existing functions that emit text because it would blow up backwards compatibility. In a sense it is covariant but not contravariant.
And now, after you've gone through all of this effort: what was the actual payoff? And at what cost of maintaining compatibility with the other representations?
Also ISO 8601 (YYYY-MM-DD) should be the default date format.
I have a relatively large array of uint16_t with highly repetitive (low entropy) data. I want to serialize that to disk without wasting a lot of space, so I run compress2 from zlib on the data when serializing it, and decompress it when deserializing. However, these files make sense to share between machines, so I have defined the file format to use compressed little-endian 16-bit unsigned ints. Therefore, if you ever want to run this code on a big-endian machine, you need to add some code to flip the bytes around before compressing, then flip them back after decompressing.
You're right that when your code is iterating through data byte for byte, you can write it in an endian-agnostic way and let the optimizer take care of recognizing that your shifts and ORs can be replaced with a memcpy on little-endian systems. But it's not always that simple.
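A minimal sketch of that endianness-explicit step (the helper names are made up): convert the host-order values to little-endian bytes, hand that buffer to compress2(), and do the inverse after decompression, so the on-disk format stays fixed no matter what the host is.

    #include <stddef.h>
    #include <stdint.h>

    /* Host-order uint16_t values -> little-endian byte stream (dst must hold
     * 2 * count bytes). On a little-endian host a decent compiler reduces
     * this loop to a plain copy. */
    void u16_to_le_bytes(const uint16_t *src, size_t count, uint8_t *dst)
    {
        for (size_t i = 0; i < count; i++) {
            dst[2 * i]     = (uint8_t)(src[i] & 0xFF);  /* low byte first  */
            dst[2 * i + 1] = (uint8_t)(src[i] >> 8);    /* high byte second */
        }
    }

    /* The inverse, used after decompressing on the reading side. */
    void le_bytes_to_u16(const uint8_t *src, size_t count, uint16_t *dst)
    {
        for (size_t i = 0; i < count; i++)
            dst[i] = (uint16_t)(src[2 * i] | (src[2 * i + 1] << 8));
    }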
I wish the same applied to written numbers in LTR scripts. Arithmetic operations would be a lot easier to do that way on paper or even mentally. I also wish that the world would settle on a sane date-time format like the ISO 8601 or RFC 3339 (both of which would reverse if my first wish is also granted).
> It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.
I never really understood those non-8-bit bytes, especially the 7-bit byte. If you consider the multiplexer and demux/decoder circuits that are used heavily in CPUs, FPGAs and custom digital circuits, the only number that really makes sense is 8: it's what you get from a 3-bit selector code, with the nearby values being 4 and 16. Why did they go for 7 bits instead of 8? I assume that it was a design choice made long before I was even born. Does anybody know the rationale?
IIRC, in most countries the native format is D-M-Y (with varying separators), but some Asian countries use Y-M-D. Since those formats are easy to distinguish, that's no problem. That's why Y-M-D is spreading in Europe for official or technical documents.
There's mainly one country which messes things up...
If I'm writing a document for human consumption then why would I expect the dates to be sortable by a naive string sorting algorithm?
On the other hand, if it's data for computer consumption then just skip the complicated serialisation completely and dump the Unix timestamp as a decimal. Any modern data format would include the ability to label that as a timestamp data type. If you really want to be able to "read" the data file then just include another column with a human-formatted timestamp, but I can't imagine why in 2025 I would be manually reading through a data file like some ancient mathematician using a printed table of logarithms.
If you're naming a document for human consumption, having the files sorted by date easily without relying on modification date (which is changed by fixing a typo/etc...) is pretty neat
It's a serialization and machine-communication format. And that makes me sad, because YYYY-MM-DD is a great format without a good name.
> NOTE: ISO 8601 defines date and time separated by "T". Applications using this syntax may choose, for the sake of readability, to specify a full-date and full-time separated by (say) a space character.
[1] https://www.pvv.org/~nsaa/8601v2000.pdf
For the representation of text of an alphabetic language, you need to hit 6 bits if your script doesn't have case and 7 bits if it does have case. ASCII ended up encoding English into 7 bits and EBCDIC chose 8 bits (as it's based on a binary-coded decimal scheme which packs a decimal digit into 4 bits). Early machines did choose to use the unused high bit of an ASCII character stored in 8 bits as a parity bit, but most machines have instead opted to extend the character repertoire in a variety of incompatible ways, which eventually led to Unicode.
I wouldn’t be surprised if other machines had something like this in hardware.
[1] Yes, I remember you could bit-bang a UART in software, but still the parity bit didn't escape the serial decoding routine.
Only if you assume a 1:1 mapping. But e.g. the original Baudot code was 5-bit, with codes reserved to switch between letters and "everything else". When ASCII was designed, some people wanted to keep the same arrangement.
When your RAM is vacuum tubes or magnetic core memory, you don't want 25% of it to go unused, just to round your word size up a power of two.
Wasn't this more to do with cost? They could do arbitrary-precision arithmetic even back then; it's not like they were only calculating numbers less than 65537 and ignoring anything larger.
Doing numbers little-endian does make more sense. It's weird that we switch to RTL when doing arithmetic. Amusingly the Wikipedia page for Hindu-Arabic numeral system claims that their RTL scripts switch to LTR for numbers. Nope... the inventors of our numeral system used little-endian and we forgot to reverse it for our LTR scripts...
Edit: I had to pull out Knuth here (vol. 2). So apparently the original Hindu scripts were LTR, like Latin, and Arabic is RTL. According to Knuth the earliest known Hindu manuscripts have the numbers "backwards", meaning most significant digit at the right, but soon switched to most significant at the left. So I read that as starting in little-endian but switching to big-endian.
These were later translated to Arabic (RTL), but the order of writing numbers remained the same, so became little-endian ("backwards").
Later still the numerals were introduced into Latin but, again, the order remained the same, so becoming big-endian again.
And as for numbers, perhaps it isn't too late to set it right once and for all. The French did that with the SI system after all.
> So apparently the original Hindu scripts were LTR
I can confirm. All Indian scripts are LTR (Though there are quite a few of them. I'm not aware of any exceptions). All of them seem to have evolved from an ancient and now extinct script named Brahmi. That one was LTR. It's unlikely to have switched direction any time during subsequent evolution into modern scripts.
But why? The brilliance of 8601/3339 is that string sorting is also correct datetime sorting.
To get the little-endian ordering. The place values of digits increase from left to right - in the same direction as how we write literature (assuming LTR scripts), allowing us to do arithmetic operations (addition, multiplication, etc) in the same direction.
> The brilliance of 8601/3339 is that string sorting is also correct datetime sorting.
I hadn't thought about that. But it does reveal something interesting. In literature, we assign the highest significance to the left-most (first) letter - in the direction opposite to how we write. This needs a bit more contemplation.
Yes, we do that with everything, which is why little-endian numbers would be really inconsistent for humans.
YYYY-MM-DD to me always feels like a timestamp, while when I want to write a date, I think of a name, (for me DD. MM. YYYY).
For better or worse, PowerPC is still quite entrenched in the industrial embedded space.
Hey, you! You're supposed to be dead!
https://wiki.netbsd.org/ports/evbarm/
It's not a well-argued thought, just a nagging feeling.
Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings. A system that would allow communication, simple media processing and productivity.
These days it feels like we are at a tipping point for open computing. It feels like being a frog in hot water.
Open source is one thing, but open hardware - that’s what we really need. And not just a framework laptop or a system76 machine. I mean a standard 64-bit open source motherboard, peripherals, etc that aren’t locked down with binary blobs.
I doubt anyone here has a clean enough room.
https://www.youtube.com/watch?v=PdcKwOo7dmM
Jordan Peterson has entered the building...
https://www.youtube.com/watch?v=qsHJ3LvUWTs
We'll likely never have "affordable" photolithography, but electron beam lithography will become obtainable in my lifetime (and already is, DIY, to some degree.)
https://www.youtube.com/watch?v=IS5ycm7VfXg
However, making a useful microcontroller or FPGA at home would require not only an electron-beam lithography machine, but also an ion-implantation machine, a diffusion furnace, a plasma-etch machine, a sputtering machine and a lot of other chemical equipment and measurement instruments.
All the equipment would have to be enclosed in a sealed room, with completely automated operation.
A miniature mask-less single-wafer processing fab could be made at a cost several orders of magnitude less than a real semiconductor fab, but the cost would still be of many millions of $.
With such a miniature fab, one might need a few weeks to produce a batch of IC's worth maybe $1000, so the cost of the equipment will never be recovered, which is why nobody does such a thing for commercial purposes.
In order to have distributed semiconductor fabs serving small communities around them, instead of having only a couple of fabs for the entire planet, one would need a revolution in the fabrication of the semiconductor manufacturing equipment itself, like SpaceX has done for rockets.
Affordable small-scale but state-of-the-art fabs would be possible only if the semiconductor manufacturing equipment were itself the product of completely automated mass production, which would reduce its cost by 2 or 3 orders of magnitude.
But such an evolution runs contrary to everything the big companies have done during the last 30 years, during which all smaller competitors have been eliminated and production has become concentrated in quasi-monopolies. For non-consumer products, the companies now offer ever more expensive models every year, which are increasingly affordable only for other big companies and not for individuals or small businesses.
University nanofabs have all of these things today. https://cores.research.asu.edu/nanofab/
> but the cost would still be of many millions of $.
A single set of this equipment is only single-digit millions of dollars today commercially.
Using something like this for prototyping/characterization or small-scale analog tasks is where the real win is.
It is weird that they do not have any ion implantation machine, because there are devices that are impossible to make without it. Even for simple MOS transistors, I am not aware of any other method for controlling the threshold voltage with enough precision. Perhaps whenever they need ion implantation they send the wafers to an external fab, with which they have a contract, to be done there.
Still, I find it hard to believe that all the equipment they have costs less than 10 million dollars, unless it was bought second hand. There is indeed a market for slightly obsolete semiconductor manufacturing equipment that has been replaced in first-tier fabs and is now available at significant discounts for those who are content with it.
Maybe?
Another point of view might be that in a few weeks you could produce a batch of ICs you can actually trust, that would be several orders of magnitude more valuable than the $1000 worth of equivalents from the untrusted global supply chain.
Some revolution; still not even on the Moon yet.
"The principle of evidence-based trust was at work in our decision to implement Precursor’s brain as an SoC on an FPGA, which means you can compile your CPU from design source and verify for yourself that Precursor contains no hidden instructions or other backdoors. Accomplishing the equivalent level of inspection on a piece of hardwired silicon would be…a rather expensive proposition. Precursor’s mainboard was designed for easy inspection as well, and even its LCD and keyboard were chosen specifically because they facilitate verification of proper construction with minimal equipment."
See also: https://betrusted.io
If that comes to pass, we will want software that runs on earlier nodes and 32-bit hardware.
Wafer machines from the 1970s could be fairly cheap today, if there were sufficient demand for chips from the 1970s (~1MHz, no power states, 16 bit if you’re lucky, etc), but that trend would have to stop and reverse significantly for affordable wafer factories for modern hardware to be a thing.
Let’s hope some of that trickles down to consumer hardware.
The problem here is scale. Having fully-open hardware is neat, but then you end up with something like that Blackbird PowerPC thing which costs thousands of dollars to have the performance of a PC that costs hundreds of dollars. Which means that only purists buy it, which prevents economies of scale and prices out anyone who isn't rich.
Whereas what you actually need is for people to be able to run open code on obtainium hardware. This is why Linux won and proprietary Unix lost in servers.
That might be achievable at the low end with purpose-built open hardware, because then the hardware is simple and cheap and can reach scale because it's a good buy even for people who don't care if it's open or not.
But for the mid-range and high end, what we probably need is a project that picks whichever chip is the most popular and spends the resources to reverse engineer it, so we can run open code on the hardware that is already in everybody's hands. That makes each subsequent effort easier: the second time around it isn't reverse engineering every component of the device, it's noticing that v4 is just v3 with a minor update, or that the third most popular device shares 80% of its hardware with the most popular one, so adding it is only 20% as much work as the first. Which is how Linux did it on servers and desktops.
is this even doable?
Could Amazon or Facebook do this if they just wanted to, e.g. to help break the hold of their competitors on markets they care about? Absolutely.
Could some hobbyist do it? Not on their own, but if you do part of it and someone else does part of it, the whole thing gets done. See e.g. Asahi Linux.
Which means, "open" has nothing to do with openness. What you want is standardization and commoditization.
There is practically no x86 hardware that requires model-specific custom images to boot, and practically no non-x86 hardware that doesn't. ARM made a perceptible effort in that segment with the Arm SystemReady Compliance Program, which absolutely nobody in any serious business cares about, and it only concerns ARM machines even where it works.
IMO, one of the problems with efforts coming from the software side is the over-bloated nature of desktop software stacks and the bad experiences many people have had with UEFI. They aren't going to upgrade RAM to adopt bloated software bigger than the application itself just because that is the new standard.
Open hardware you can buy now: https://www.crowdsupply.com/sutajio-kosagi/precursor
The open OS that runs on it: https://betrusted.io/xous-book/
A secret/credential manager built on top of the open hardware and open software: https://betrusted.io
His blog section about it: https://www.bunniestudios.com/blog/category/betrusted/precur...
"The principle of evidence-based trust was at work in our decision to implement Precursor’s brain as an SoC on an FPGA, which means you can compile your CPU from design source and verify for yourself that Precursor contains no hidden instructions or other backdoors. Accomplishing the equivalent level of inspection on a piece of hardwired silicon would be…a rather expensive proposition. Precursor’s mainboard was designed for easy inspection as well, and even its LCD and keyboard were chosen specifically because they facilitate verification of proper construction with minimal equipment."
This needs money. It is always going to have to pay the costs of being niche, lower performance, and cloneable, so someone has to persuade people to pay for that. Hardware is just fundamentally different. And that's before you get into IP licensing corner cases.
Also, this is what happened to Prusa: everyone just takes the design and outsources the manufacturing to somewhere in China, which is fine, but if everybody does that there is no funding to develop the next iteration of the product (someone has to foot the bill).
And sadly there is not enough of it; we live in reality, after all.
I think people here are misunderstanding just how "weird" and hacky trying to run an OS like linux on those devices really is.
Not having an MMU puts you more into the territory of DOS than UNIX. There is FreeDOS but I'm pretty sure it's x86-only.
The one thing different to a regular Linux was that a crash of a program was not "drop into debugger" but "device reboots or halts". That part I don't miss at all.
How do multiple processes actually work, though? Is every executable position-independent? Does the kernel provide the base address(es) in register(s) as part of vfork? Do process heaps have to be constrained so they don't get interleaved?
Executables in a no-MMU environment can also share the same code/read-only segments between many processes, the same way shared libraries can, to save memory and, if run-time relocation is used, to reduce that.
The original design of UNIX ran on machines without an MMU, and they had fork(). Andrew Tanenbaum's classic book which comes with Minix for teaching OS design explains how to fork() without an MMU, as Minix runs on machines without one.
For spawning processes, vfork()+execve() and posix_spawn() are much faster than fork()+execve() from a large process in no-MMU environments though, and almost everything runs fine with vfork() instead of fork(), or threads. So no-MMU Linux provides only vfork(), clone() and pthread_create(), not fork().
[1]: https://maskray.me/blog/2024-02-20-mmu-less-systems-and-fdpi...
[2]: https://popovicu.com/posts/789-kb-linux-without-mmu-riscv/
[3]: https://www.kernel.org/doc/Documentation/nommu-mmap.txt
[4]: https://github.com/kraj/uClibc/blob/ca1c74d67dd115d059a87515...
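A hedged sketch of the vfork()+execve() spawning pattern those references describe (the program being launched is arbitrary): the child shares the parent's address space until it execs, so it must do nothing except exec or _exit.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Build argv before vfork(): the child must not write to shared memory. */
        char *const argv[] = { "ls", "-l", NULL };

        pid_t pid = vfork();
        if (pid < 0) {
            perror("vfork");
            return 1;
        }
        if (pid == 0) {
            /* Child: the parent stays suspended until we exec or exit. */
            execv("/bin/ls", argv);
            _exit(127);            /* reached only if execv() failed */
        }

        int status;
        waitpid(pid, &status, 0);  /* wait for the child to finish */
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }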
I spent close to ten years working closely with uClinux (a long time ago). I implemented the shared library support for the m68k. Last I looked, gcc still included my additions for this. This allowed execute in place for both executables and shared libraries -- a real space saver. Another guy on the team managed to squeeze the Linux kernel, a reasonable user space and a full IP/SEC implementation into a unit with 1Mb of flash and 4Mb of RAM which was pretty amazing at the time (we didn't think it was even possible). Better still, from power on to login prompt was well under two seconds.
The original UNIX also did not have the virtual memory as we know it today – page cache, dynamic I/O buffering, memory mapped files (mmap(2)), shared memory etc.
They all require a functioning MMU, without which the functionality would be severely restricted (but not entirely impossible).
The no-MMU version of Linux has all of those features except that memory-mapped files (mmap) are limited. These features are the same as in MMU Linux: page cache, dynamic I/O buffering, shared memory. No-MMU Linux also supports other modern memory-related features, like tmpfs and futexes. I think it even supports io_uring.
mmap is supported in no MMU Linux with limitations documented here: https://docs.kernel.org/admin-guide/mm/nommu-mmap.html For example, files in ROM can be mapped read-only.
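A small sketch within those documented limits (the file name is a placeholder): a read-only private mapping, which no-MMU Linux satisfies either by mapping the file directly on filesystems that allow it, or by copying its contents into RAM.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("config.bin", O_RDONLY);           /* placeholder path */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* PROT_READ + MAP_PRIVATE is the combination that works without an MMU. */
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);

        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }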
Access to a page that is not resident in memory results in a trap (a page fault), raised by the MMU and handled by the kernel – the CPU has no ability to detect it by itself. That is the whole purpose of the MMU, and demand paging was a major innovation of BSD 4 (a complete VMM overhaul).
With the right filesystem (certain kinds of read-only), the code (text segment) can even be mapped directly, and no loading into RAM need occur at all.
These approaches save memory even on regular MMU platforms.
"Originally, fork() didn't do copy on write. Since this made fork() expensive, and fork() was often used to spawn new processes (so often was immediately followed by exec()), an optimized version of fork() appeared: vfork() which shared the memory between parent and child. In those implementations of vfork() the parent would be suspended until the child exec()'ed or _exit()'ed, thus relinquishing the parent's memory. Later, fork() was optimized to do copy on write, making copies of memory pages only when they started differing between parent and child. vfork() later saw renewed interest in ports to !MMU systems (e.g: if you have an ADSL router, it probably runs Linux on a !MMU MIPS CPU), which couldn't do the COW optimization, and moreover could not support fork()'ed processes efficiently.
Other source of inefficiencies in fork() is that it initially duplicates the address space (and page tables) of the parent, which may make running short programs from huge programs relatively slow, or may make the OS deny a fork() thinking there may not be enough memory for it (to workaround this one, you could increase your swap space, or change your OS's memory overcommit settings). As an anecdote, Java 7 uses vfork()/posix_spawn() to avoid these problems.
On the other hand, fork() makes creating several instances of a same process very efficient: e.g: a web server may have several identical processes serving different clients. Other platforms favour threads, because the cost of spawning a different process is much bigger than the cost of duplicating the current process, which can be just a little bigger than that of spawning a new thread. Which is unfortunate, since shared-everything threads are a magnet for errors."
https://stackoverflow.com/questions/8292217/why-fork-works-t...
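Along the lines of the quoted anecdote about Java 7, a minimal posix_spawn() sketch (the spawned command is arbitrary); it creates the child without duplicating the parent's address space, which is why it also suits no-MMU systems.

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *const argv[] = { "echo", "hello from the child", NULL };

        /* NULL file_actions and attrp: inherit the parent's fds and signal setup. */
        int err = posix_spawn(&pid, "/bin/echo", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn failed: %d\n", err);
            return 1;
        }

        int status;
        waitpid(pid, &status, 0);
        return 0;
    }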
FreeBSD is dumping 32-bit:
https://www.osnews.com/story/138578/freebsd-15-16-to-end-sup...
OpenBSD has this quote:
>...most i386 hardware, only easy and critical security fixes are backported to i386
I tend to think that means 32-bit's days, at least on x86, are numbered.
https://www.openbsd.org/i386.html
I think DragonflyBSD never supported 32bit
For 32bit, I guess NetBSD may eventually be the only game in town.
The industry has a lot of experience doing so.
In parallel, the old hardware is still supported, just not by the newest Linux kernel. That should be fine, because either you are not changing anything on that system anyway, or you have your whole tool stack available to patch it yourself.
But the benefit would be an easier-to-maintain and smaller Linux kernel, which would probably benefit a lot more people.
Also, if our society is no longer able to produce chips commercially and we lose all the experience people have, we have much bigger issues as a whole society.
That said, I don't want to deny that having the simplest possible way of making a small microcontroller yourself (it doesn't have to be fast or even easy, just doable) would be very cool, and could already solve a lot of issues if we ever needed to restart society from Wikipedia.
Think about it like CreateProcess() on Windows. Windows is another operating system which doesn't support fork(). (Cygwin did unholy things to make it work anyway, IIRC.)
Note ELKS is not Linux.
There's also Fuzix.
Linux remains open source, extendable, and someone would most likely maintain these ripped out modules. Just not at the expense of the singular maintainer of the subsystem inside the kernel.
Linux's master branch is actually called master. Not that it really matters either way (hopefully most people have realised by now that it was never really 'non-inclusive' to normal people) but pays to be accurate.
In this case, out of dozens of ways the word is used (others being 'master's degree', or ones that pretty closely match Git's usage, like a 'master recording' in music), one possible association is 'slavemaster', so some started to assert that we have to protect people from possibly making that association.
The relation between a human and a computer is very much that of a master and a slave. I provide the resources, you do what I say, including self-destruction.
141 more comments available on Hacker News