The Trinary Dream Endures
Posted 3 months ago · Active 2 months ago
robinsloan.com · Other · story
Tone: calm, positive
Debate intensity: 40/100
Key topics
Artificial Intelligence
Creativity
Writing
The article explores the concept of 'trinary dream', a state of creative flow enabled by AI tools, and the discussion revolves around the intersection of technology and artistic expression.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 28m after posting
Peak period: 58 comments in the 0-12h window
Average per period: 18.3 comments
Comment distribution: 73 data points, based on 73 loaded comments
Key moments
- 01 Story posted: Oct 19, 2025 at 12:57 PM EDT (3 months ago)
- 02 First comment: Oct 19, 2025 at 1:25 PM EDT (28m after posting)
- 03 Peak activity: 58 comments in the 0-12h window, the hottest stretch of the conversation
- 04 Latest activity: Oct 23, 2025 at 10:33 PM EDT (2 months ago)
ID: 45635734 · Type: story · Last synced: 11/20/2025, 1:48:02 PM
Maybe we could create continuous-valued electrical computers, but at least state, stability and error detection are going to be giant hurdles. Also, programming GUIs from Gaussian splats sounds like fun in the negative sense.
https://en.wikipedia.org/wiki/Vacuum-tube_computer
Again, think software first. The brain is always a byproduct of the processes, though it is discerned as a materialist operation.
Think big: binary computers are toys in the grand scheme of things.
Binary was always a dead end alley, we knew this going in. How do we escape it?
But, I think things are actually trending the other way, right? You just slam the voltage to “on” or “off” nowadays—as things get smaller, voltages get lower, and clock times get faster, it gets harder to resolve the tiny voltage differences.
Maybe you can slam to -1. OTOH, just using 2 bits instead of one trit seems easier.
Same reason the “close window” button is in the corner. Hitting a particular spot requires precision in 1 or 2 dimensions. Smacking into the boundary is easy.
Well, until we scaled transistors down to the point where electrons quantum tunnel across the junction. Now they're leaky again.
But it does argue against more states, given the benefits of just making each cell smaller if you can and packing things closer. Though maybe we are hitting bottom, with Dennard scaling being dead. Maybe we increase the pitch and double the states on parts of the chip, and then generations are measured in bits per angstrom.
I know (T)CAMs are used in CPUs, but I am more thinking of the kind of research being done with TCAMs in SSD-like products, so maybe we will get there some day.
Some of it is ending up in power circuitry.
Look at any Ethernet PHY, for example - you have that nice 5-level signal coming in... and the first thing you do is feed it into an AFE/ADC so you get digital signals that you can actually work with.
So yes, in some specific situations, like Flash memory or interconnects, there are multi-level signals. But the computing itself is always binary.
A chip with billions of transistors can't reasonably work if most of them are in the analog mode, it'll just melt to slag, unless you have an amazing cooling system.
Also consider that there is only one threshold between values on a binary system. With a trinary system you would likely have to double the power supply voltage, and thus quadruple the power required just to maintain noise margins.
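A sketch of the arithmetic behind that claim, assuming dynamic power scales with the square of the supply voltage (the usual first-order CMOS approximation):

$$P \propto V^2 \quad\Rightarrow\quad \frac{P(2V)}{P(V)} = \frac{(2V)^2}{V^2} = 4$$

Two thresholds instead of one means roughly double the voltage to keep the same noise margin per threshold, hence roughly 4x the power.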
With mixed analog/digital circuits for example, it's pretty common to treat exactly same voltages either as -2.5/0/2.5 (relative to midpoint), or as 0/2.5/5 (relative to negative rail).
What matters is having multiple threshold voltages with distinct behaviour. Setun used ferrite transformers, which do have multiple thresholds (positive and negative fields) - but modern electronics, including transistors, do not.
In the modern logic, diodes are not that useful because transistors already react to one polarity only. You simply connect multiple transistors to same input, and the right ones will activate.
Modern FETs are capable of switching spot-welding currents without getting destroyed, while in a thumbnail-sized package, imagine that. My grandpa was an electrical engineer and would be completely blown away by such a component.
Basically, every ethernet card is now a modem.
This is far more general than electronic systems (e.g. quantum computers follow the same principle - it's far easier to build and control qubits than qutrits/qudits).
(technically, it's even easier to build systems that have a single stable configuration, but you can't really store information in those, so they're not relevant)
It is a term that is still used quite a bit for marketing. I think in this case (Zojirushi) it isn't trinary, rather some probabilistic/Bayesian system to derive a boolean from a number of factors (time, temp, and so on).
I'm reasonably convinced my Zojirushi has nothing more than a way to sense when the evaporation shifts and to start the "steaming" countdown timer then, probably using the resistance of the heating coil. In other words it's just a replacement for the weight/balance mechanism in a traditional "dumb" rice cooker, not something doing more complex modeling as far as I can tell.
It is however built like a tank and "just works" so I'm entirely happy with my purchase.
These are far more interesting than that. Technology Connections YouTube channel did a great breakdown of how they really work: https://www.youtube.com/watch?v=RSTNhvDGbYI
You can just have an enum { case yes; case no; case maybe; } data structure and pepper it throughout your code wherever you think it’d lead to subtler, richer software… sure, it’s not “at the hardware level” (whatever that means given today’s hardware abstractions) but that should let you demonstrate whatever proof of utility you want to demonstrate.
On many 64-bit architectures, you actually waste 63 bits of extra padding for that boolean, for alignment reasons, unless you put some other useful data nearby in your struct (according to the alignment rules).
Luckily, this kind of waste doesn't happen often, but sadly, it does sometimes happen.
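A minimal sketch of both points in Rust (the type and field names are mine; exact sizes depend on the target, here a typical 64-bit one):

```rust
use std::mem::size_of;

// A three-valued flag, per the comment above. Three variants still
// fit in a single byte.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug)]
enum Trilean { Yes, No, Maybe }

// A lone flag next to an 8-byte field gets padded out: 1 + 7 + 8.
#[allow(dead_code)]
struct Padded { flag: bool, big: u64 }

// Putting other small data "nearby" reclaims the padding.
#[allow(dead_code)]
struct Packed { flag: bool, tri: Trilean, small: u16, medium: u32 }

fn main() {
    println!("Trilean: {} byte(s)", size_of::<Trilean>()); // 1
    println!("Padded:  {} bytes", size_of::<Padded>());    // 16 on typical 64-bit targets
    println!("Packed:  {} bytes", size_of::<Packed>());    // 8
}
```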
Quaternary allows for: True, False, Unknown, and Contradiction (both True and False).
And: for logical arithmetic, i.e. reducing tree expressions, True and False are enough. But in algebraic logic, where more general constraint topologies are possible, the other two values are required.
What is the logical value of the isolated expression “(x)”, i.e. “x” unconstrained?
Or the value of the expression “(x = not x)”?
None of 4-valued logic’s values are optional or spurious for logical algebra.
---
Many people don’t know this, but all modern computers are quaternary, with 4 quaternit bytes. We don’t just let anyone in on that. Too much power, too much footgun jeopardy, for the unwashed masses and Python “programmers”.
The tricky thicket of web standards can’t be upgraded without introducing mayhem. But Apple’s internal-only docs reveal macOS and Swift have been fully quaternary compliant on their ARM since the M1.
On other systems you can replicate this functionality, at your own risk and effort, by accessing each quaternit with their two bit legacy isomorphic abstraction. Until Rust ships safe direct support.
---
It will revolutionize computing, from the foundations up, when widely supported.
Russell’s paradox in math is resolved. Given a set S = “The set of all sets that don’t contain themselves”, the truth value of “Is S in S” in quaternary logic, reduces to Contradiction, which indeed it is. I.e. True and False. Making S a well formed, consistent entity, and achieving full set and logical completeness with total closure. So consistency is returned to Set theory and Russell’s quest for a unification of mathematics with just sets and logic becomes possible again. He would have been ecstatic. Gödel be damned! [0]
Turing’s Incompleteness Theorem demonstrates that 2-valued bit machines are inherently inconsistent or incomplete.
Given a machine M, applied to the statement S = “M will say this statement is False”, or “M(S) = False”, it has to fail.
If M(S) returns True, we can see that S is actually False. If M(S) returns False, we can see that actually S is True.
But for a quaternary Machine M4 evaluating S4 = “M4(S4) = False”, M4(S4) returns Contradiction. True and False. Which indeed we can see S4 is. If it is either True or False, we know it is the other as well.
Due to the equivalence of Undecidability and the Turing Halting Problem, resolving one resolves the other. And so quaternary machines are profoundly more powerful and well characterized than binary machines. Far better suited for the hardest and deepest problems in computing.
It’s easy to see why the developers of Rust and Haskell are so adamant about getting this right.
[0] https://tinyurl.com/godelbedamned
I respond in that spirit.
Taking the convention that “byte” always means 8 n-valued bits:
One advantage of a convention of 8-quaternit bytes is that they can readily be used as bytes of 8 ternary values too, albeit with reduced use of their value range.
8-quaternit bytes also have the advantage of higher-resolution addressing, i.e. at the nibble = 4 quaternary bit boundaries. (The last bit of modern memory addresses indicates the upper or lower quaternary nibble.)
Despite our natural aesthetic hesitancy to equate a 4-valued bit with two 2-valued bits, we all understand they are the same. Many “binary” storage devices do the reverse, and store multiple “binary” values with higher range cells.
A bit of information (whatever its arity) is the same bit regardless of how it is stored or named.
We get stuck in our familiar frames and names.
Also, the points about Russell’s paradox and Turing Incompleteness are conveyed in an absurdist’s deadpan, but they are in fact actual critiques I am making. In both cases, two-valued logic, suitable only for arithmetic, is used in algebraic contexts where self-referencing and open constraints are both possible, despite the basic inability of two-valued logic to represent the values of either.
It is startling to me what obvious limitations this out-of-the-gate bad assumption of an excluded middle, in algebraic contexts, places on the generality of the conclusions in both treatments, where the failings of the excluded middle are basically the “proof”. Proof of what was assumed, essentially.
Anyone who cares about those topics can work through those points. Neither are as meaningless or trivial as might be expected.
Finally, four-valued logic would be very useful to support at the CPU instruction level, for algebraic contexts beyond arithmetic. Especially since no changes to memory are needed.
Interestingly, with 4-valued logic, there are two different sets of AND, OR and NOT, for two ways they can be treated. And the current bit-wise operators, acting on [tf] 2-bit 4-valued logic (True as [+-], False as [-+], [--] as unknown, and [++] as contradiction) already implement the new versions of those operations. So new instructions are only needed to implement regular AND, OR, NOT operations for 2-bit 4-valued logical values.
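A sketch of that in Rust, using the comment’s [tf] encoding (True = [+-] = 0b10, False = [-+] = 0b01, unknown = [--] = 0b00, contradiction = [++] = 0b11); the type and method names are mine:

```rust
// Bit 1 = "is true", bit 0 = "is false".
#[derive(Clone, Copy, PartialEq, Debug)]
struct V4(u8);

const UNKNOWN: V4 = V4(0b00);
const FALSE4: V4 = V4(0b01);
const TRUE4: V4 = V4(0b10);
const BOTH: V4 = V4(0b11); // contradiction: true and false

impl V4 {
    fn t(self) -> u8 { (self.0 >> 1) & 1 } // truth bit
    fn f(self) -> u8 { self.0 & 1 }        // falsity bit

    // The "regular" AND, OR, NOT: the set that would need new
    // instructions. A conjunction is true iff both sides are true,
    // false iff either side is false; negation swaps the two bits.
    fn and(self, o: V4) -> V4 { V4(((self.t() & o.t()) << 1) | (self.f() | o.f())) }
    fn or(self, o: V4) -> V4 { V4(((self.t() | o.t()) << 1) | (self.f() & o.f())) }
    fn not(self) -> V4 { V4((self.f() << 1) | self.t()) }

    // The other set: plain bitwise AND/OR on the pair already
    // implement these, as the comment observes.
    fn both_agree(self, o: V4) -> V4 { V4(self.0 & o.0) }
    fn either_says(self, o: V4) -> V4 { V4(self.0 | o.0) }
}

fn main() {
    assert_eq!(TRUE4.and(BOTH), BOTH);
    assert_eq!(TRUE4.or(UNKNOWN), TRUE4);
    assert_eq!(BOTH.not(), BOTH);                  // negation fixes the contradiction
    assert_eq!(TRUE4.both_agree(FALSE4), UNKNOWN); // conflicting sources agree on nothing
    assert_eq!(TRUE4.either_says(FALSE4), BOTH);   // accepting both yields contradiction
    println!("ok");
}
```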
2 is the smallest number of symbols needed to encode information, and it makes symbols the easiest to disambiguate in any implementation - good enough for me.
Here's a concrete example: imagine you needed to create some movable type because you are building a printing press, and you need to represent all numbers up to 100 million.
In binary you need to make 53 pieces, in ternary 50, in octal 69 pieces, in decimal 81 and in hexadecimal 101.
These numbers don't quite make sense to me. Hexadecimal should have 16 symbols, and then `log16(whatever) = message length`. I get what you're trying to say though.
That trend continues up until the symbols start looking the same and no one can read them, and now the most important problem is not a position on a tradeoff curve. It's that the system is no longer reliable.
If you wanted each letter to have the highest probability of successfully being read, you would use a grid, and shade or leave blank each grid square.
100 million in Hex is 5F5E100
You need 6*16 pieces for the trailing digits + 5 pieces for the leading digit, if you want to be able to set any number from 0 to 100 million.
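A quick sketch of that counting scheme (my reading of it): each trailing position needs one piece per symbol of the base, while the leading position only needs pieces up to the leading digit of the target number.

```rust
// Movable-type pieces needed to set any number from 0 to n in the
// given base: full symbol sets for trailing positions, plus only as
// many pieces as n's leading digit for the leading position.
fn pieces(mut n: u64, base: u64) -> u64 {
    let mut trailing = 0;
    while n >= base {
        n /= base;
        trailing += 1;
    }
    trailing * base + n // n is now the leading digit
}

fn main() {
    for &b in &[2, 3, 8, 10, 16] {
        println!("base {:2}: {:3} pieces", b, pieces(100_000_000, b));
    }
    // base  2:  53 / base  3:  50 / base  8:  69 / base 10:  81 / base 16: 101
}
```

That reproduces the 53 / 50 / 69 / 81 / 101 figures from the comment above.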
I wasn't knowledgeable enough to evaluate that claim at the time, and I'm still not.
https://ieeexplore.ieee.org/document/9200021
https://en.wikipedia.org/wiki/Landauer%27s_principle
https://github.com/yfguo91/Ternary-Spike
Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks
> The Spiking Neural Network (SNN), as one of the biologically inspired neural network infrastructures, has drawn increasing attention recently. It adopts binary spike activations to transmit information, thus the multiplications of activations and weights can be substituted by additions, which brings high energy efficiency. However, in the paper, we theoretically and experimentally prove that the binary spike activation map cannot carry enough information, thus causing information loss and resulting in accuracy decreasing. To handle the problem, we propose a ternary spike neuron to transmit information. The ternary spike neuron can also enjoy the event-driven and multiplication-free operation advantages of the binary spike neuron but will boost the information capacity. Furthermore, we also embed a trainable factor in the ternary spike neuron to learn the suitable spike amplitude, thus our SNN will adopt different spike amplitudes along layers, which can better suit the phenomenon that the membrane potential distributions are different along layers. To retain the efficiency of the vanilla ternary spike, the trainable ternary spike SNN will be converted to a standard one again via a reparameterization technique in the inference. Extensive experiments with several popular network structures over static and dynamic datasets show that the ternary spike can consistently outperform state-of-the-art methods.
The quantum dream is also the trinary dream.
In binary, with two inputs, there are 2^2 = 4 total possible inputs (00, 01, 10, 11). Different gate types can give different outputs for each of those four inputs: each output can be 0 or 1, so that's 2^4 = 16 different possible gate types. (0, 1, A, B, not A, not B, AND, OR, NAND, NOR, XOR, XNOR, A and not B, B and not A, A or not B, B or not A)
In ternary, with two inputs, there are 3^2 = 9 total possible inputs, so 3^9 = 19,683. I'm sure there are some really sensible ones in there, but damn that's a huge search space. That's where I gave up that time around! :-)
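A quick check of that arithmetic (a sketch): a k-valued gate with n inputs is a function from k^n input patterns to one of k outputs, so there are k^(k^n) distinct gates.

```rust
fn gate_types(k: u32, n: u32) -> u64 {
    // k^(k^n): one of k outputs for each of k^n input patterns.
    (k as u64).pow(k.pow(n))
}

fn main() {
    println!("binary, 2-input gates:  {}", gate_types(2, 2)); // 16
    println!("ternary, 2-input gates: {}", gate_types(3, 2)); // 19683
}
```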
So do we have special memory and CPU instructions for trinary data that lives in a special trinary address space, separate from traditional data that lives in binary address space? No, the juice isn't worth the squeeze. There's no compelling evidence this would make anything better overall: faster, smaller, more energy efficient. Every improvement that trinary potentially offers results in having to throw babies out with the bathwater. It's fun to think about I guess, but I'd bet real money that in 50 years we're still having the same conversation about trinary.
1 more comment available on Hacker News