The Write Last, Read First Rule
Posted about 2 months ago · Active about 2 months ago
tigerbeetle.com · Tech · story
calm · mixed · Debate: 40/100
Key topics
Distributed Systems
Database Management
Consistency Models
The 'Write Last, Read First' rule is a concept discussed in a guest post on TigerBeetle's blog, sparking a thoughtful discussion on HN about its clarity and relevance to distributed systems and database management.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: 4h after posting · Peak period: 6 comments in 8-10h · Avg per period: 2.4
Comment distribution: 19 data points (based on 19 loaded comments)
Key moments
- Story posted: Nov 11, 2025 at 1:30 AM EST (about 2 months ago)
- First comment: Nov 11, 2025 at 5:25 AM EST (4h after posting)
- Peak activity: 6 comments in 8-10h (the hottest window of the conversation)
- Latest activity: Nov 12, 2025 at 2:47 AM EST (about 2 months ago)
ID: 45884658 · Type: story · Last synced: 11/20/2025, 5:45:28 PM
Happy to answer any questions. And thanks to Dominik Tornow of Resonate for writing this up as a guest post! It was a little rule we had coined to help people remember how to preserve consistency across different DBMSs, and I think Dominik gave (beautiful) voice to it.
Or, can you suggest something pithier?
I have no suggestion for a better name off the top of my head. The issue I see is that you already have to know its context well, and when it applies, in order not to misremember it as “Write First, Read Last”, not to mistake it for LIFO, and not to relate it to a read-modify-write scenario, in which you would naturally read first and write last anyway, though in a different sense. You see how the name is confusing?
Do you not think if someone can remember those four words, they’re less likely to get it wrong?
If you could contribute some better suggestions we could consider them!
Only if they remember the words in the right order, which isn’t a given. There is nothing in the phrase that helps you remember the order unless you already know how it has to work. What it probably does, at least, is remind you that the order matters.
Maybe find something that emphasizes that the master data (the data in the system of record) has the shortest storage duration (since its writes happen last and its reads happen first). Something like “keep central data central in time” (and peripheral data peripheral in time).
As a side note, while I know that “system of record” and “system of reference” are existing terms, they are a bit unfortunate in that they sound very similar, both abbreviating to SOR, and “reference data” having semantic overlap with “master data”, so “system of reference” could be taken to mean the system to be used as the reference, i.e. master data.
Under the section 'Order of Operations':
> "Since the system of reference doesn’t determine existence, we can safely write to it first without committing anything. [...]"
Then the next paragraph
> "This principle—Write Last, Read First—ensures that we maintain application level consistency."
What I think it means is that 'writing last to the system of record' and 'reading first from the system of record' yield authoritative results, but I don't get that just from the title. Is my understanding correct?
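If that reading is right, here is a minimal sketch of the write half of the rule in Go, assuming a hypothetical Postgres-style schema: a workflow_steps table as the system of reference and a transfers table as the system of record. The names, schema, and createTransfer function are illustrative, not from the article.

```go
package sketch

import "database/sql"

// createTransfer writes to the system of reference first (its rows do not
// determine existence), then writes to the system of record; that final
// write is the commit point for the transfer.
func createTransfer(workflows, ledger *sql.DB, id string, amount int64) error {
	// Write first to the system of reference: safe to fail or duplicate here,
	// because nothing exists until the system of record says so.
	if _, err := workflows.Exec(
		`INSERT INTO workflow_steps (transfer_id, amount) VALUES ($1, $2)
		 ON CONFLICT (transfer_id) DO NOTHING`, id, amount); err != nil {
		return err
	}

	// Write last to the system of record: this is the only write that makes
	// the transfer real.
	_, err := ledger.Exec(
		`INSERT INTO transfers (id, amount) VALUES ($1, $2)
		 ON CONFLICT (id) DO NOTHING`, id, amount)
	return err
}
```

The write to the system of reference comes first and may fail or repeat harmlessly; the write to the system of record comes last and is the one that makes the transfer exist. The read half of the rule shows up in the recovery sketch further down the thread.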
2PC isn’t literally about transactions like you are probably thinking of in a database; it’s an abstract “atomic change” or “unit of work” that may or may not involve a database or a database transaction. You can do 2PC with just normal files, or APIs, or whatever.
The procedure in the article is not a two-phase commit, because changes are committed to the system of reference regardless of whether the subsequent commit to the system of record succeeds or not.
In addition, half of the rule in the article is about the ordering of reads, which two-phase commit isn’t concerned about at all.
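To make that contrast concrete, here is a minimal sketch of two-phase commit over abstract participants. The Participant interface and twoPhaseCommit coordinator are made up for illustration and are not any particular library's API.

```go
package sketch

// Participant is a hypothetical interface for one resource taking part in an
// abstract unit of work: a file, an API, a database, anything that can stage
// a change and later commit or discard it.
type Participant interface {
	Prepare() error // stage the change durably and vote yes (nil) or no (error)
	Commit() error  // make the staged change visible
	Abort() error   // discard the staged change
}

// twoPhaseCommit applies the unit of work on all participants or on none.
func twoPhaseCommit(participants []Participant) error {
	// Phase 1: prepare. If anyone votes no, abort everything staged so far.
	for i, p := range participants {
		if err := p.Prepare(); err != nil {
			for _, q := range participants[:i+1] {
				_ = q.Abort() // best effort in this sketch
			}
			return err
		}
	}

	// Phase 2: commit. Once every vote is yes, the decision is commit; a real
	// coordinator logs that decision and retries failed commits rather than
	// rolling anything back.
	for _, p := range participants {
		if err := p.Commit(); err != nil {
			return err
		}
	}
	return nil
}
```

Nothing becomes visible on any participant until every participant has voted yes, which is exactly what the article's procedure does not require of the system of reference.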
Intent: Begin the durable execution (i.e. resonate.run)
Prepare: Write to the system of reference -- safe to fail here.
Commit: Write to the system of record -- the commit boundary.
Ack/Recovery: checkpointing + idempotent replays.
Abort/Compensation: panic or operator intervention.
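One hedged reading of that phase list, continuing the hypothetical schema from the earlier sketch (an illustration of the mapping, not the article's or Resonate's actual implementation): on recovery, read the system of record first, because it alone is authoritative, and replay the commit-point write idempotently for anything the system of reference still shows as pending.

```go
// recoverTransfers replays pending work after a restart, continuing the
// schema and package from the earlier sketch.
func recoverTransfers(workflows, ledger *sql.DB) error {
	rows, err := workflows.Query(`SELECT transfer_id, amount FROM workflow_steps`)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var id string
		var amount int64
		if err := rows.Scan(&id, &amount); err != nil {
			return err
		}

		// Read first from the system of record: it alone decides whether the
		// transfer exists.
		var exists bool
		if err := ledger.QueryRow(
			`SELECT EXISTS (SELECT 1 FROM transfers WHERE id = $1)`, id,
		).Scan(&exists); err != nil {
			return err
		}
		if exists {
			continue // already committed; nothing to abort or compensate
		}

		// Not yet committed: replay the commit-point write. The insert is
		// idempotent, so repeated recoveries are safe.
		if _, err := ledger.Exec(
			`INSERT INTO transfers (id, amount) VALUES ($1, $2)
			 ON CONFLICT (id) DO NOTHING`, id, amount); err != nil {
			return err
		}
	}
	return rows.Err()
}
```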
Ordering operations has always been a thing you should do. And they’re treating this as a distributed system to simulate an ACID transaction. For example, if you ever take locks on multiple things, the order in which you take the locks has to be the same across the entire system so that you never deadlock. If your database is taking locks, then order matters there too. They rediscovered what those of us in the distributed systems world have always known and have fairly well documented: ordering is how you simulate time and prevent paradoxes.
(This is similar also to how chain replication preserves consistency.)
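The lock-ordering point can be made concrete with a small sketch; accountLocks, the account IDs, and lockAccounts are made up for illustration.

```go
package sketch

import (
	"sort"
	"sync"
)

// accountLocks is assumed to be populated once at startup and read-only
// afterwards, so concurrent map reads are safe.
var accountLocks map[string]*sync.Mutex

// lockAccounts acquires the locks for the given accounts in one global order
// (lexicographic by ID) and returns a function that releases them. Because
// every caller uses the same order, no two callers can each hold a lock the
// other is waiting for.
func lockAccounts(ids ...string) (unlock func()) {
	sorted := append([]string(nil), ids...)
	sort.Strings(sorted)
	for _, id := range sorted {
		accountLocks[id].Lock()
	}
	return func() {
		for i := len(sorted) - 1; i >= 0; i-- {
			accountLocks[sorted[i]].Unlock()
		}
	}
}
```

A transfer between accounts "a" and "b" and another between "b" and "a" both acquire the locks in the order "a" then "b", so neither can hold one lock while waiting on the other.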
It's almost enough to make me believe in the independent existence of Platonic truths. Almost.
7 more comments available on Hacker News