Never Ever Use Content Addressable Storage
Posted 3 months ago · Active 3 months ago
Source: frederic.vanderessen.com (Tech story)
Sentiment: skeptical/negative · Debate: 30/100
Key topics: Content Addressable Storage, Storage Systems, Data Management
The author argues against using Content Addressable Storage (CAS), sparking a discussion about its limitations and potential use cases.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion: first comment after 2h; peak of 3 comments in the 0-12h window; average of 2 comments per period.

Key moments
- Story posted: Oct 8, 2025 at 7:54 AM EDT (3 months ago)
- First comment: Oct 8, 2025 at 9:26 AM EDT (2h after posting)
- Peak activity: 3 comments in the 0-12h window
- Latest activity: Oct 16, 2025 at 10:53 AM EDT (3 months ago)
ID: 45515133 · Type: story · Last synced: 11/17/2025, 11:09:56 AM
So when a user wants to remove a CAS-addressed document, before really deleting it you need to detect whether it is the last reference. This is not easy to do; in fact, it is much harder to do correctly than eating the cost of storing duplicate files.
And this paragraph is the purported solution:
And usually when CAS is considered as a solution, it's to solve the need to deduplicate files to save on storage. But even there, the better solution is to give each file its own internal uuid as its storage key, store the content hash alongside, and generate an external uuid for each file upload, then use refcounts to handle the final delete.
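A minimal single-process sketch of that scheme, with hypothetical names (a real system would use a database rather than in-memory dicts):

```python
import hashlib
import uuid

# Sketch of the quoted scheme (hypothetical names): internal uuids as
# storage keys, the content hash stored alongside as a dedup index,
# an external uuid per upload, and refcounts driving the final delete.

storage = {}    # internal uuid -> (content hash, bytes)
by_hash = {}    # content hash -> internal uuid (dedup index)
refcount = {}   # internal uuid -> number of live external uuids
external = {}   # external uuid -> internal uuid

def upload(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    internal = by_hash.get(h)
    if internal is None:                  # first copy of this content
        internal = str(uuid.uuid4())
        storage[internal] = (h, data)
        by_hash[h] = internal
        refcount[internal] = 0
    refcount[internal] += 1
    ext = str(uuid.uuid4())               # every upload gets its own id
    external[ext] = internal
    return ext

def delete(ext: str) -> None:
    internal = external.pop(ext)
    refcount[internal] -= 1
    if refcount[internal] == 0:           # last reference: really delete
        h, _ = storage.pop(internal)
        del by_hash[h]
        del refcount[internal]
```

In a single process this is trivially correct; the hard part is making these four maps agree atomically once uploads and deletes are distributed.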
The problem is this solution reframes the problem but doesn't solve it. It still requires:
- Accurate reference counting
- Careful handling of deletes
- Synchronization across systems
Which is all part of the original problem.
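The check-then-delete race at the heart of the original problem, compressed into a single-process caricature (hypothetical names; the "concurrent" upload is simulated inline):

```python
# Caricature of the delete race (hypothetical names): a deleter checks
# that it holds the last reference, but a concurrent upload of the same
# content dedups onto the same key before the delete lands.

blobs = {"abc123": b"report.pdf bytes"}   # content hash -> bytes
refs = {"abc123": 1}                      # content hash -> refcount

def delete_last_reference(h: str) -> None:
    is_last = refs[h] == 1        # step 1: check the refcount
    refs[h] += 1                  # <- simulated concurrent upload: CAS
                                  #    dedups it onto the existing key
    if is_last:                   # step 2: act on a now-stale answer
        del blobs[h]              # the new upload's data is gone

delete_last_reference("abc123")   # leaves a dangling reference behind
```

After the call, the store holds a reference with no bytes behind it — exactly the failure mode the quoted paragraph warns about.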
At the end of the day, you can't safely and scalably do distributed deletes with refcounts unless you centralize the operation, which kills scalability. There are workarounds, such as marking the file as unreferenced and then running a garbage collector to delete unreferenced files, but the author doesn't discuss them.
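The mark-and-sweep workaround can be sketched as follows (hypothetical names; the grace period is an assumption about how such a collector avoids racing with in-flight uploads):

```python
import time

# Sketch of the mark-and-sweep workaround (hypothetical names): removing
# a reference only *marks* a blob as unreferenced; a later sweep deletes
# blobs that stayed unreferenced for a full grace period, so an upload
# arriving in between can rescue the blob by re-adding a reference.

blobs = {}    # content hash -> bytes
refs = {}     # content hash -> set of referencing document ids
marked = {}   # content hash -> time it became unreferenced

def add_reference(doc_id: str, h: str, data: bytes) -> None:
    blobs.setdefault(h, data)
    refs.setdefault(h, set()).add(doc_id)
    marked.pop(h, None)                # rescue: no longer a candidate

def remove_reference(doc_id: str, h: str) -> None:
    refs[h].discard(doc_id)
    if not refs[h]:
        marked[h] = time.monotonic()   # mark only; sweep deletes later

def sweep(grace_seconds: float) -> None:
    now = time.monotonic()
    for h, since in list(marked.items()):
        if not refs.get(h) and now - since >= grace_seconds:
            del blobs[h], refs[h], marked[h]
```

The sweep can run as a background job; because deletion is deferred and re-checkable, no single node has to answer "is this the last reference?" atomically at delete time.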
The security implications of OLTP were never understood; the older paradigm, in which the DBAs were the only role that would ever touch the data, was never explicitly repudiated, so it continued to have mindshare among architects and users.
These two things taken together -- save storage above all, and the implicit union of security contexts -- led to the universal antipattern of overloading all lifecycle stages of a business object onto a single table, with the "status" of a particular record indicated by a union discriminator code.
As you know, I always use the example of invoices -- unapproved, then approved -- because of that example's extreme simplicity, and because of its obvious, immediate connection to cash going out the door. And as you also know, no one ever "gets it".
But to rehearse (NB. we are in an AP context, not an AR context), accounting controls require separation of roles between (A) the creation of an invoice, (B) the approval of an invoice, and (C) its subsequent processing into a payment. What that means is that a newly-received invoice, an approved-but-not-paid invoice, and a paid invoice, are three different types.
Now shall those three types be overloaded onto one database table? The computing-resources perspectives of the 1950s say "Yes, please!" And as long as a trusted super-role -- the DBAs -- is responsible for the integrity of the batch processes that create the reports that are then routed exclusively to the people who approve the invoices, and the other reports that subsequently go to the people who cut the checks, oh for God's sake don't make me repeat myself like this.
You see the problem: in an OLTP world, this all falls to sh1t. Suppose you keep a timestamp-last-touched and a user-ID-last-touched-by? (Whisper it only: some systems don't; a lot of the others only keep one.) Does that give you separation of roles? NO: because everybody in all the roles has to have rights on the invoice table. So the accounting controls have to be satisfied some completely other way.
The three (in our toy example) kinds of invoices are three different types and therefore they need to be in three different tables, each with its own access rights and its own audit trail. If you do not do this, then Murphy's Law and Occam's Razor, for once in magnificent agreement, both say that you are cutting wild checks. Do you not care? Why do you not care? "That's what risk insurance is for"? Okay, we give up; but you are still cutting wild checks.
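The one-type-per-stage design can be sketched in Python, with hypothetical names; in a database each class would map to its own table with its own access rights and audit trail, and each transition would be a function between types rather than a status flip:

```python
from dataclasses import dataclass

# Hypothetical sketch: each lifecycle stage is a distinct type, so
# approval and payment are conversions between types, not updates to
# a "status" column on one overloaded table.

@dataclass(frozen=True)
class ReceivedInvoice:
    invoice_id: str
    vendor: str
    amount_cents: int
    created_by: str          # role A: created the invoice record

@dataclass(frozen=True)
class ApprovedInvoice:
    invoice_id: str
    vendor: str
    amount_cents: int
    approved_by: str         # role B: must differ from the creator

@dataclass(frozen=True)
class PaidInvoice:
    invoice_id: str
    vendor: str
    amount_cents: int
    paid_by: str             # role C: cut the check

def approve(inv: ReceivedInvoice, approver: str) -> ApprovedInvoice:
    # Separation of duties: the creator cannot approve their own invoice.
    if approver == inv.created_by:
        raise PermissionError("creator may not approve their own invoice")
    return ApprovedInvoice(inv.invoice_id, inv.vendor, inv.amount_cents, approver)

def pay(inv: ApprovedInvoice, payer: str) -> PaidInvoice:
    # Only an ApprovedInvoice can be paid; the type makes that explicit.
    return PaidInvoice(inv.invoice_id, inv.vendor, inv.amount_cents, payer)
```

Nothing in the one-table design stops `pay` from being called on a newly received invoice; here the type system (and, in the database version, table-level access rights) refuses it.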
The one-type-per-process-step-and-one-table-per-type model can, of course, be implemented in such a way as to minimize duplication; but (A) there is a tradeoff against performance, and (B) the architectural tradition that would enable this does not exist, because that path was not taken back when it was time to take it.
As to (A), we are not addressing those who think that it matters how fast you get the wrong answer.
As to (B), we do not have a time machine, and still less do we have the ability to convince the vendors of enterprise software that they have been doing it wrong for a lifetime.
But you're still cutting wild checks.