The Origin of the Terms Big-Endian and Little-Endian (2003)
All modern CPUs are little-endian (or dual-endian selectable).
Other than backward compatibility, there is no need for big endian.
TCP is becoming increasingly less relevant, although I don't know if it'll ever actually disappear.
Layers 3-4 (network, transport) are both big-endian - IP packet headers and TCP/UDP headers use big-endian format.
This means you can't have an IP stack (let alone TCP/UDP, QUIC, Cap'n Proto) that's little-endian all the way through without breaking the internet.
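To make the "big-endian on the wire" point concrete, here's a minimal C sketch using the standard POSIX conversion functions from <arpa/inet.h> (the port value is just an arbitrary example):

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>  /* htons/ntohs: host <-> network (big-endian) byte order */

int main(void) {
    uint16_t port = 443;          /* arbitrary example value, in host byte order */
    uint16_t wire = htons(port);  /* network byte order, as it sits in a TCP/UDP header */

    /* On a little-endian host htons swaps the bytes; on a big-endian
       host it's a no-op. Either way the bytes in memory come out the same. */
    unsigned char *p = (unsigned char *)&wire;
    printf("443 on the wire: %02X %02X\n", p[0], p[1]);  /* 01 BB on any host */
    printf("round trip: %u\n", (unsigned)ntohs(wire));   /* 443 again */
    return 0;
}
```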
Outside the webdev bubble, it's pretty much QUIC that is irrelevant.
The OSI layer model is not necessarily as relevant as it used to be.
The OSI layer model is extremely relevant to the Cisco network engineers running the edges of the large FAANG companies, hyperscalers etc. that connect them to the internet.
I'm just pointing out that UDP is an extremely thin wrapper over IP and the preferred way of implementing new protocols. It seems likely we'll eventually replace at least some of our protocols and deprecate old ones and I was under the impression new ones tended to be little endian.
[1] https://www.rfc-editor.org/rfc/rfc9000.html#name-notational-...
https://www.godbolt.org/z/q3hMPq78v
You'll always get something like this:

```
00000000 : 00 01 02 03 04 05 06 07
00000008 : 08 09 0A 0B 0C 0D 0E 0F
```
On a big-endian machine, when you wrote 0x1234 to address 0x00000000 you got:

```
00000000 : 12 34 02 03 04 05 06 07
00000008 : 08 09 0A 0B 0C 0D 0E 0F
```
On a little-endian machine you have to either do mental gymnastics to reorder the bytes, or set the item size to match your data item size.

```
00000000 : 34 12 02 03 04 05 06 07
00000008 : 08 09 0A 0B 0C 0D 0E 0F
```
If we wrote the bytes with the LS byte on the right (just as we do for bits) then it wouldn't be an issue.
```
00000000 : 07 06 05 04 03 02 12 34
00000008 : 0F 0E 0D 0C 0B 0A 09 08
```
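If you want to see which of those dumps your own machine produces, here's a small self-contained C sketch (nothing assumed beyond the standard library; the 0x1234 write mirrors the example above):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* Buffer preloaded with 00..0F, like the dumps above. */
    uint8_t mem[16];
    for (int i = 0; i < 16; i++) mem[i] = (uint8_t)i;

    uint16_t v = 0x1234;
    memcpy(mem, &v, sizeof v);  /* "write 0x1234 to address 0" in host byte order */

    for (int row = 0; row < 16; row += 8) {
        printf("%08X :", row);
        for (int i = 0; i < 8; i++) printf(" %02X", mem[row + i]);
        printf("\n");
    }
    /* x86 and most ARM (little-endian) print "34 12 02 03 ...";
       a big-endian machine prints "12 34 02 03 ...". */
    return 0;
}
```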
It could be argued that little endian is the more natural way to write numbers anyway, for both humans and computers. The positional numbering system came to the West via Arabic, after all.
Most of the confusion when reading hex dumps seems to arise from the fact that the two nibbles of each byte are in the familiar left-to-right order, which clashes with the order of bytes in a larger number. Swap the nibbles, and you get "43 21", which would be almost as easy to read as "12 34".
You can think of memory as a store of register-sized values. Big endian sort of makes some sense when you think of it that way.
Or you can think of it as arbitrarily sized data. If it's arbitrary data, then big endian is just a pain in the ass. And code written to handle both big and little endian is obnoxious.
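For what it's worth, most of that pain goes away if you never reinterpret memory and instead read wire bytes with shifts. A minimal sketch (the helper names here are made up for illustration):

```c
#include <stdint.h>

/* Hypothetical helper: read a big-endian (network order) u32 from a buffer.
   Shifts operate on values, not memory layout, so the same code is correct
   on both big- and little-endian hosts -- no #ifdef required. */
static uint32_t read_be32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24)
         | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)
         |  (uint32_t)p[3];
}

/* The little-endian twin just reverses the weights. */
static uint32_t read_le32(const uint8_t *p) {
    return  (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```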
https://patents.justia.com/assignee/mark-williams-company
https://en.wikipedia.org/wiki/Mark_Williams_Company
I wonder what happened to it and why Linux could not use it. I think it was released under a free licence.
Hex dumps would make sense, columns in spreadsheets would always be left-aligned, and so on.
But we're too late for that.