A Clickable Visual Guide to the Rust Type System
Posted 4 months ago · Active 4 months ago
rustcurious.com · Tech · story
Sentiment: supportive, positive · Debate: 20/100
Key topics
Rust
Programming
Type Systems
Visualization
A visual guide to the Rust type system has been shared, receiving praise for its clarity and usefulness, with users discussing its design and suggesting additional resources.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 4h after posting
Peak period: 27 comments in the 6-12h window
Avg / period: 5.3
Comment distribution: 42 data points, based on 42 loaded comments
Key moments
- 01 Story posted: Sep 8, 2025 at 8:21 AM EDT (4 months ago)
- 02 First comment: Sep 8, 2025 at 12:45 PM EDT (4h after posting)
- 03 Peak activity: 27 comments in the 6-12h window, the hottest period of the conversation
- 04 Latest activity: Sep 10, 2025 at 7:29 PM EDT (4 months ago)
ID: 45167401 · Type: story · Last synced: 11/20/2025, 6:39:46 PM
The true answer is that negative numbers have the top bit set, which can't be used for positive numbers. Hence positives are one bit short.
All negative numbers have the most significant bit set and 0 is the number with no bits set, ergo 0 must be positive since the most significant bit is not set.
Now, arithmetically, this is untrue. We'll usually treat 0 as neither positive nor negative (or in certain cases as both negative and positive), but bitwise, in terms of the two's-complement implementation, zero is positive. We know that since it exists in the unsigned version of the types as well.
Hopefully you'll see that some day.
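To check that bit-level claim, here is a small Rust sketch (an editorial illustration, not from the thread) that scans the full `i8` range and confirms that exactly the negative values have the top bit set, while zero does not:

```rust
fn main() {
    for v in i8::MIN..=i8::MAX {
        // Look at the raw two's-complement bit pattern and test the MSB.
        let msb_set = ((v as u8) & 0x80) != 0;
        // Exactly the negative values carry the sign bit; 0..=127 (including 0) do not.
        assert_eq!(msb_set, v < 0);
    }
    println!("all 256 i8 values checked: MSB set <=> value is negative");
}
```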
> And a byte value of 128? What is that in hex?
0x80
Which of course has the sign bit set.
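As a concrete illustration of that point (the code is mine, not the commenter's), reinterpreting the byte 0x80 as an `i8` in Rust yields -128, the one value whose magnitude has no positive counterpart in the type:

```rust
fn main() {
    let byte: u8 = 0x80;              // 128 as an unsigned byte, top bit set
    let signed = byte as i8;          // reinterpret the same bit pattern as i8
    println!("{byte:#04x} as i8 = {signed}"); // prints: 0x80 as i8 = -128
    assert_eq!(signed, i8::MIN);      // -128; there is no +128 in i8
}
```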
The comments here are educational ... I hadn't realized that the field of programming had become this degraded.
THAT comment is condescending--talk about ideas, not people. I condescended to no one ... my issue is the state of computer science education.
> On the contrary, there is no sign bit. You asked for 128
I didn't ask for anything. The subject here was the value range of the i8 type.
To quote you: "What is that in hex?"
For most practical purposes outside of low-level stuff all that really matters about two's complement is Don't Get Near 2^(width-1) Or Bad™ Things Happen. Including +128 would even have the benefit of 1<<7 staying positive.
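To make the "don't get near 2^(width-1)" rule concrete, here is a minimal Rust sketch (an editorial example, not part of the comment) of what happens right at the `i8` boundary:

```rust
fn main() {
    // 2^(width-1) for i8 is 128; the largest representable positive value is 127.
    assert_eq!(i8::MAX, 127);

    // Stepping over the boundary is not a normal arithmetic result:
    assert_eq!(i8::MAX.checked_add(1), None);       // overflow detected
    assert_eq!(i8::MAX.wrapping_add(1), i8::MIN);   // wraps around to -128

    // 1 << 7 would be +128, which does not fit in i8, but it stays positive in a wider type.
    assert_eq!(1i16 << 7, 128);
}
```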
The work needed to calculate the overflow flag (done in every add/sub operation in most ISAs) is also way more complicated when the high bit does not represent sign.
"Twos complement", on the other hand, turns the MSB's unsigned value into a negative instead of a positive. For example, in 4-bit twos complement: 1000 represents -8 (in unsigned 4-bit this would be +8), 0100 represents 4, 0010 represents 2, 0001 represents 1. Some more numbers: 7 (0111), -7 (1001), 1 (0001), -1 (1111).
Intuitively, the "ones complement" MSB represents a multiplication by (-1), while the "twos complement" MSB adds (-N), with N = 2^(bit length - 1); for 4-bit twos complement that's (-2^3), or (-8). Both representations leave the non-MSB bits working exactly like an unsigned integer.
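A short Rust sketch (an illustration under the comment's "MSB adds -8" view, not from the thread) for 4-bit patterns:

```rust
// Interpret a 4-bit pattern as two's complement: the low three bits count
// normally, and the MSB contributes -8 (i.e. -2^(4-1)) when set.
fn twos_complement_4bit(bits: u8) -> i8 {
    let low = (bits & 0b0111) as i8;
    let msb = if bits & 0b1000 != 0 { -8 } else { 0 };
    low + msb
}

fn main() {
    assert_eq!(twos_complement_4bit(0b1000), -8);
    assert_eq!(twos_complement_4bit(0b0111), 7);
    assert_eq!(twos_complement_4bit(0b1001), -7);
    assert_eq!(twos_complement_4bit(0b1111), -1);
    assert_eq!(twos_complement_4bit(0b0001), 1);
    println!("4-bit two's-complement examples match the ones in the comment");
}
```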
I find the best way to understand why 2s complement is so desirable is to write down the entire number line for e.g. 3-bit integers.
Using 1s complement, the negative numbers are backwards. 2s complement fixes this, so that arithmetic works and you can do addition and subtraction without any extra steps.
(Remember that negative numbers are less than positive numbers, so the correct way to count them is:
-8 -7 -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 +7
Where -1 is the largest possible negative number)
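Here is a quick Rust sketch (editorial, not part of the comment) that writes out the number line above, using a 4-bit width to match it, and shows that plain modular addition of the raw bit patterns gives the right signed answer:

```rust
// Interpret the low 4 bits of `bits` as a two's-complement value in -8..=7.
fn signed4(bits: u8) -> i8 {
    let v = (bits & 0b1111) as i8;
    if v >= 8 { v - 16 } else { v }
}

fn main() {
    // The full 4-bit number line, from the most negative value upward.
    for signed in -8i8..=7 {
        let bits = (signed as u8) & 0b1111;  // raw two's-complement pattern
        println!("{signed:>2} = {bits:04b}");
    }

    // Addition works directly on the raw patterns, with no extra steps:
    let a = (-3i8 as u8) & 0b1111;           // 1101
    let b = (5i8 as u8) & 0b1111;            // 0101
    let sum = (a + b) & 0b1111;              // modular 4-bit add -> 0010
    assert_eq!(signed4(sum), 2);             // -3 + 5 = 2
}
```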
It is completely irrelevant for the vast majority of programming.
The field of programming has become so broad that I would argue the opposite. The vast majority of developers will never need to think about, let alone understand, twos complement as a numerical representation.
Yes, it's possible to encode such types manually, but it will not be efficient since CPUs do not natively support such operations.
Also, this in-band signaling probably would invite something similar to `null` mess in type systems. I can't wait to tell CPU to JMP NaN.
They would, but I agree with RISC-V here, CPUs should not rely on them in the first place.
I do not understand your argument about branches; how would it hinder the jump instructions?
We still would need separate "wrapping" instructions (e.g. for implementing bigints and cryptographic algorithms), but they probably could be limited to unsigned operations only.
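As a hedged sketch of what those "wrapping" operations get used for, here is a toy multi-limb addition in Rust, in the style of a bigint's inner loop (names and structure are mine, not the commenter's):

```rust
// Add two little-endian multi-limb numbers, propagating the carry by hand.
// This is the kind of loop where wrap-around on overflow is wanted on purpose,
// rather than being treated as an error.
fn add_limbs(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = Vec::with_capacity(a.len().max(b.len()) + 1);
    let mut carry = 0u64;
    for i in 0..a.len().max(b.len()) {
        let x = *a.get(i).unwrap_or(&0);
        let y = *b.get(i).unwrap_or(&0);
        let (sum, c1) = x.overflowing_add(y);       // wraps mod 2^64, reports overflow
        let (sum, c2) = sum.overflowing_add(carry);
        carry = (c1 as u64) + (c2 as u64);
        out.push(sum);
    }
    if carry != 0 {
        out.push(carry);
    }
    out
}

fn main() {
    // (2^64 - 1) + 1 = 2^64, i.e. limbs [0, 1] in little-endian order.
    assert_eq!(add_limbs(&[u64::MAX], &[1]), vec![0u64, 1]);
}
```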
>I can't wait to tell CPU to JMP NaN.
How is it different from jumping to null? If you do such a jump, it means you have a huge correctness problem with your code.
For relative non-immediate jumps the added logic is extremely simple (hardware exception on NaN) and should not (AFAIK) hinder performance of jumps in any way.
As for unsigned integers, as I mentioned in the other comment, we probably need two separate instruction sets for "wrapping" and NaN-able operations on unsigned integers.
The reason those requirements exist is (primarily) to do with unsafe code. Specifically, it's about deciding the variance of the type (which doesn't matter for a truly unused type parameter).
It doesn't use unsafe under the hood, rather it's compiler magic.
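For readers following along, a minimal sketch (with illustrative names, not from the thread) of the "unused type parameter" situation that PhantomData exists for:

```rust
use std::marker::PhantomData;

// Without the PhantomData field, `struct Id<T> { raw: u64 }` is rejected with
// "parameter `T` is never used". PhantomData<T> satisfies the compiler and also
// gives the type a definite variance and drop-check story for `T`, which is
// what unsafe code may rely on.
struct Id<T> {
    raw: u64,
    _marker: PhantomData<T>,
}

impl<T> Id<T> {
    fn new(raw: u64) -> Self {
        Id { raw, _marker: PhantomData }
    }
}

struct User;
struct Order;

fn main() {
    let user: Id<User> = Id::new(1);
    let order: Id<Order> = Id::new(1);
    // The two Ids are different types, so mixing them up is a compile error:
    // let _: Id<User> = order;   // error: mismatched types
    println!("{} {}", user.raw, order.raw);
}
```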
One part that I especially love about it is that it represents lifetimes [1] and the memory layout [2] of data structures in graphical format. They're as invaluable as API references. I would love to see this included in other documentation as well.
[1] https://cheats.rs/#memory-lifetimes
[2] https://cheats.rs/#memory-layout