Replacing a Cache Service with a Database
The article discusses replacing a cache service with a database, sparking a discussion on the trade-offs and complexities of caching in system design.
The difference is in persistence, scaling, and read/write permissions.
That's some AI-level sophism.
A database is a durable store of data that can be modified and read. Ostensibly, we're talking about computer databases. You can define the soft terms at your leisure and to suit your needs. There are many categories of discussion that will never intersect with this definition. Communication is not a database. Art is not a database. History is not a database. Medicine is not a database. Etc.
A cache is a database. Differentiating a cache and database by label is a misnomer.
Oh fuck off. Calling everything AI is so 2024.
A database is a system of record. It can also be a source of truth. A cache is neither. Treating it as one is dangerous. Insisting others should is idiocy.
This is meaningless. A cache is used in lieu of the value because it's considered equivalent.
> Insisting others should is idiocy.
I did no such thing. Good luck with whatever.
For example, on reddit, fully rendered comments are cached, so that the renderer doesn't have to redo its work. But the cache key includes the date of the last edit on the comment, which is already known when requesting the value from the cache. In this way, you never have to invalidate that key, because editing the comment makes a new key. The old one will just get ejected eventually.
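A minimal sketch of that pattern, assuming an in-process LRU stands in for the cache; the datastore and names here are hypothetical:

    from functools import lru_cache

    # Pretend datastore: comment body plus its last-edit timestamp.
    COMMENTS = {42: {"body": "*hi*", "edited_at": "2025-08-31T10:41:00Z"}}

    def render(body: str) -> str:
        # Stand-in for the expensive rendering pipeline.
        return body.replace("*", "<em>", 1).replace("*", "</em>", 1)

    @lru_cache(maxsize=10_000)  # old entries get ejected, never invalidated
    def rendered_comment(comment_id: int, edited_at: str) -> str:
        return render(COMMENTS[comment_id]["body"])

    def show_comment(comment_id: int) -> str:
        # The edit timestamp is already known when requesting the value, so
        # it becomes part of the key; an edit simply mints a new key and the
        # old entry ages out on its own.
        return rendered_comment(comment_id, COMMENTS[comment_id]["edited_at"])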
The more software development experience I gain, the more I agree with him on that!
For example, let’s say that every web page your CMS produces is created using a computationally expensive compilation. But the final product is more or less static and only gets updated every so often. You can basically have your compilation process pull the data from your source of truth, such as your RDBMS, but then store the final page (or large fragments of it) in something like MongoDB. In other words, the cache replacement happens at generation time and not on demand. This means there is always a cached version available (though possibly slightly stale), and it is always served out of a very fast data store without expensive computation. I prefer this style of caching to on-demand caching because it means you avoid cache invalidation issues AND the thundering herd problem.
Of course this doesn’t work for every workflow, but it can get you quite far. And yes, this example can also be sort of solved with a static site generator, but look beyond that at things like document fragments, etc. This works very well for dynamic content where the read-to-write ratio is high.
But again this is not an endorsement of MongoDB. I wouldn’t use it today, but I did use it successfully, and that company and tech stack sold for quite a bit of money and the software still runs, though I’m not sure on what stack. Again, if you are stuck on this one part of my comment… can’t help you.
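A sketch of that generation-time replacement, with SQLite standing in for the source of truth and a plain dict for the fast document store (all names hypothetical):

    import sqlite3

    truth = sqlite3.connect(":memory:")  # stand-in for the RDBMS
    truth.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
    rendered: dict[int, str] = {}        # stand-in for the fast read store

    def compile_page(post_id: int) -> str:
        (body,) = truth.execute(
            "SELECT body FROM posts WHERE id = ?", (post_id,)).fetchone()
        return f"<article>{body}</article>"  # the "expensive" compilation

    def save_post(post_id: int, body: str) -> None:
        truth.execute("INSERT OR REPLACE INTO posts VALUES (?, ?)", (post_id, body))
        # Replacement happens at generation time, not on demand: a page is
        # always present, so there is nothing to invalidate and no
        # thundering herd on a cold key.
        rendered[post_id] = compile_page(post_id)

    def serve_page(post_id: int) -> str:
        return rendered[post_id]  # always fast, possibly slightly stale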
Pretty much every view the user sees of data should include an understanding as to how consistent that data is with the source of truth. Issues with caching (besides basic bugs) often come up when a performance issue comes up and people slap in a cache without renegotiating how the end user would expect the data to look relative to its upstream state.
There is a vast number of undiagnosed race conditions in modern code caused by cache eviction in the middle of 'transactions' under high system load.
It’s not a data layer, it’s global shared state. Global shared state always has consequences. Sometimes the consequences are worth the trouble. But it is trouble.
If you think about Source of Truth and System of Record, a cache is neither of those, and sits between them. There’s a lot of problems you can fix instead by improving the SoT or SoR situation in that area of the code.
if you use materialized views, that surfaces exactly what you want in a cache, except here the view's consistency with the underlying data is maintained. that's hugely important.
that leaves us with the protocol. prepared statements might help. now we really should be about the same as the bump-on-the-wire cache. that doesn't get us the same performance as the in-process cache. but we didn't have to sacrifice any performance or add any additional operational overhead to get it.
But after you'd done all the optimizations, there is still a use case for caches. The main one being that a cache holds a hot set of data. Databases are getting better at this, and with AI in everything, latency of queries is getting swamped by waiting for the LLM, but I still see caches being important for decades to come.
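Sketching the materialized-view route described above, assuming a reachable Postgres and the psycopg driver (the schema and DSN are hypothetical):

    import psycopg  # psycopg 3; the DSN below is hypothetical

    with psycopg.connect("dbname=app") as conn, conn.cursor() as cur:
        # The view plays the cache's role, but its definition lives in the
        # database, right next to the data it is derived from.
        cur.execute("""
            CREATE MATERIALIZED VIEW IF NOT EXISTS hot_products AS
            SELECT product_id, count(*) AS views
            FROM page_views
            GROUP BY product_id
        """)
        # Reads are ordinary (preparable) statements against the view.
        cur.execute("SELECT views FROM hot_products WHERE product_id = %s", (42,))
        print(cur.fetchone())
        # Stock Postgres recomputes the whole view on refresh; incremental
        # maintenance is what IVM systems add (see further down the thread).
        cur.execute("REFRESH MATERIALIZED VIEW hot_products")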
The two questions no one seems to ask are 'do I even need a database?', and 'where do I need my database?'
There are alternate data storage 'patterns' that aren't databases. Though ultimately some sort of (structured) query language gets invented to query them.
Then there's memoization, often a hack for an algorithm problem.
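The classic memoization illustration, as a sketch:

    def fib(n: int, _memo: dict = {}) -> int:
        # Memoizing a pure recursive function: the shared dict caches
        # results, collapsing exponential recursion into linear work.
        if n < 2:
            return n
        if n not in _memo:
            _memo[n] = fib(n - 1) + fib(n - 2)
        return _memo[n]

    print(fib(90))  # instant; the naive version would never finish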
I once "solved" a huge performance problem with a couple of caches. The stain of it lies on my conscience. It was actually admitting defeat in reorganizing the logic to eliminate the need for the cache. I know that the invalidation logic will have caused bugs for years. I'm sure an engineer will curse my name for as long as that code lives.
Caches have perfectly valid uses, but they are so often used in fundamentally poor ways, especially with databases.
(It’s not really my architecture problem. My architecture problem is that we store pages as grains of sand in a db instead of in a bucket, and that we allow user defined schemas)
It's the equivalent of adding more RAM to fix poor memory management or adding more CPUs/servers to compensate for resource heavy and slow requests and complex queries.
If your application requires caching to function effectively then you have a core issue that needs to be resolved, and if you don't address that issue then caching will become the problem eventually as your application grows more complex and active.
I also just think it’s a necessary evil of big systems. Sometimes you need derived data. You can even think about databases as a kind of cache: the "real" data is the stream of every event that ever updated data in the database! (Yes, this is stretching the meaning of cache lol)
However I agree that caching is often an easy bandaid for a bad architecture.
This talk on Apache Samza completely changed how I think about caching and derived data in general: https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf
And this interview has some interesting insights on the problems that caching faces at super large scale systems (twitter specifically): https://softwareengineeringdaily.com/2023/01/12/caching-at-t...
Caching belongs at the end of a long development arc. And it will be the end whether you want it to be or not. Adding caching is the beginning of the end of large architectural improvements, because caches jam up the analysis and testing infrastructure. Everything about improving or adding features to the code slows down, eventually to a crawl.
I think the mistake is not using caching, but rather using it too soon in the development process.
There are times when caching is a requirement because there is simply no way to provide efficient performance without it, but I think too many times developers jump straight to caching without thinking because it solves potential problems for them before they happen.
The real problem comes later, though, at scale, when caching can no longer compensate for the development inefficiencies.
Now the developers have to start rewriting core code which will take time to thoroughly complete and test and/or the engineers have to figure out a way to throw more resources at the problem.
No it’s ten times worse than that. Adding RAM doesn’t make the task of fixing the memory management problems intrinsically harder. It just makes the problem bigger when you do fix it.
Adding caching to your app makes all of the tools used for detecting and categorizing performance issues much harder to use. We already have too many developers and “engineers” who balk at learning more than the basics of using these tools. Caching is like stirring up sediment in a submarine cave. Now only the most disciplined can still function and often just barely.
When you don’t have caches, data has to flow along the call tree. So if you need a user’s data in three places, that data either flows to those three or you have to look it up three times, which can introduce concurrency issues if the user metadata changes in the middle of a request. But because it’s inefficient there is clear incentive to fix the data propagation issues. Fixing those issues will make testing easier because now the data is passed in instead of having to mock the lookup code.
Then you introduce caching. Now the incentive is mostly gone, since you will only improve cold start performance. And now there is a perverse incentive to never propagate the data again. You start moving backward. Soon there are eight places in the code that use that data, because looking it up was “free” and they are all detached from each other. And now you can’t even turn off the cache, and cache traffic doesn’t tell you what your costs are.
And because the lookup is “free” the user lookup code disappears from your perf data and flame graphs. Only a madman like me will still tackle such a mess, and even I have difficulty finding the motivation.
For these reasons I say with great confidence and no small authority: adding caching to your app is the last major performance improvement most teams will ever see. So if you reach for it prematurely, you’re stuck with what you’ve got. Now a more astute competitor can deliver a faster or cheaper product (or both) that eats your lunch, and your team will swear there is nothing they can do about it because the app is already as fast as they can make it, and here are the statistics that "prove" it.
Friends don’t let friends put caches on immature apps.
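A sketch of the two shapes described above (all names hypothetical):

    # Shape 1: data flows along the call tree. One lookup upstream, one
    # consistent snapshot, and tests can simply pass a user in.
    def handle_request_flowing(user: dict) -> None:
        render_header(user)
        check_quota(user)
        log_activity(user)

    # Shape 2: every call site does its own "free" cached lookup. The three
    # reads can observe different values if the entry changes or is evicted
    # mid-request, and the lookups vanish from perf data and flame graphs.
    def handle_request_cached(user_id: int, cache) -> None:
        render_header(cache.get_user(user_id))
        check_quota(cache.get_user(user_id))
        log_activity(cache.get_user(user_id))

    def render_header(user): ...
    def check_quota(user): ...
    def log_activity(user): ...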
I like your comment btw. I’d add Observability to CAP to incorporate what you’re saying.
I don’t think this is always true. Sometimes your app simply has data that takes a lot of computation to generate but doesn’t need to be generated often. Any way you solve this can be described as a ‘cache’, even if you are just storing calculations in your main database. That doesn’t mean your application has a fundamental design flaw; it could mean your use case has a fundamental cache requirement.
That's not a fundamental mistake, and there's very little you can do about that from an efficiency point of view.
It's easy to forget that there was a world without SSDs, high speed pipes, etc - but it actually did exist. And that wasn't so long ago either.
And of course sometimes putting data nearer to the user actually makes sense...like the Netflix movie boxes inside various POPs or CDNs. Bandwidth and latency are actual factors for many applications.
That said, most applications probably should investigate adding indexes to their databases (or noSQL databases) instead of adding a cache layer.
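For instance, with SQLite (the idea is the same anywhere; the schema is hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INT, total REAL)")
    # Before reaching for a cache layer: an index turns the hot query's full
    # table scan into a B-tree lookup inside the database itself.
    conn.execute("CREATE INDEX idx_orders_user ON orders(user_id)")
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = ?", (7,)
    ).fetchall()
    print(plan)  # ... SEARCH orders USING INDEX idx_orders_user (user_id=?)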
“But, but, when I reload the page now it’s fast! I fixed it!”
That's what IVM systems like Noria can do. With application + cache, the application stores the final result in the cache. So, with these new IVM systems, you get that precomputed data directly from the database.
Materialized views in Postgres aren't incrementally maintained, right? So every small delta would require a refresh of the entire view.
The quick fix suggested was caching, since a lot of requests were for the same query. But after debating, we went with rate limiting instead. Our reasoning: caching would just hide the bad behavior and keep the broken clients alive, only for them to cause failures in other downstream systems later. By rate limiting, we stopped abusive patterns across all apps and forced bugs to surface. In fact, we discovered multiple issues in different apps this way.
Takeaway: caching is good, but it is not a replacement for fixing buggy code or misuse. Sometimes the better fix is to protect the service and let the bugs show up where they belong.
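A minimal per-client token bucket, as a sketch of the kind of rate limiting described here (the rates are hypothetical):

    import time

    class TokenBucket:
        def __init__(self, rate: float, burst: int):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # the broken client gets throttled and the bug surfaces

    buckets: dict = {}
    def check(client_id: str) -> bool:
        return buckets.setdefault(client_id, TokenBucket(rate=5, burst=10)).allow()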
In all seriousness sometimes a cache is what you need. Inline caching is a classic example.
The team with the demanding service can add a cache that's appropriate for their needs, and will be motivated to do so in order to avoid hitting the rate limit (or reduce costs, which should be attributed to them).
I mean, bad code on a fast client system can cause a load higher than all other users put together. This is why half the internet is behind something like cloudflare these days. Limiting, blocking, and banning has to be baked in.
Just goes to show that there is no silver bullet - context, experience, and a good amount of gut feeling are paramount.
https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf
It details Apache Samza, which I didn’t totally grasp, but it seems similar to what you’re talking about here.
He talks about how, if you used an event stream as your source of truth instead of a database and had a sufficiently powerful stream processor, you could define views on that data by consuming stream events.
The end result is kind of like an auto-updating cache with no invalidation issues or race conditions. Need a new view on the data? Just define it and run the entire event stream through it. Once the stream is processed, that source of data is perpetually accurate and up-to-date.
I’m not a database guy and most of this stuff is over my head, but I loved this talk and I think you should check it out! It’s the first thing I thought of when I read your post.
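A toy version of the idea: the log is the source of truth, and a "view" is just a fold over it (the event shapes are hypothetical):

    events = [
        {"type": "deposit", "account": "a", "amount": 100},
        {"type": "withdraw", "account": "a", "amount": 30},
        {"type": "deposit", "account": "b", "amount": 50},
    ]

    def balances(stream) -> dict:
        # Need a new view later? Write another fold and replay the same
        # stream; there is nothing to invalidate and no race with writers.
        view: dict = {}
        for e in stream:
            delta = e["amount"] if e["type"] == "deposit" else -e["amount"]
            view[e["account"]] = view.get(e["account"], 0) + delta
        return view

    print(balances(events))  # {'a': 70, 'b': 50}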
The dumb/MVP approach I'd like to try sometime is close-to-client read-only SQLite DBs that get managed in the background and neatly handled by wrapper functions around things like fetch. The part I've been slowly thinking about is Noria-style efficient handling of data structures while allowing for 'raw' queries; ideally I'd like to set this up so the frontend doesn't need an additional layer's worth of read/write functionality just to have CDN-like behaviour. Maybe something like plugins to [de/re]normalise different kinds of blob to tables (from gql, groqd, etc). I'd also like to include a realtime cache invalidation/update system to keep all clients in sync without cache clearing... If I ever get that far.
Alternatively, just ship an entire shallow copy of the least-changed / most-used data as SQLite DBs to the edge, push updates to those, and fetch from source anything that isn't in the DB. Might be simpler.
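A rough sketch of that simpler variant, assuming a prebuilt replica file and a hypothetical origin URL:

    import sqlite3
    import urllib.request

    local = sqlite3.connect("edge_replica.db")  # shipped/updated out of band
    local.execute("CREATE TABLE IF NOT EXISTS docs (key TEXT PRIMARY KEY, body TEXT)")

    def fetch(key: str) -> str:
        row = local.execute("SELECT body FROM docs WHERE key = ?", (key,)).fetchone()
        if row:
            return row[0]  # CDN-like local hit, no network round trip
        # Miss: anything not in the replica falls through to the source.
        with urllib.request.urlopen(f"https://origin.example/{key}") as resp:
            return resp.read().decode()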
Why would you want to do this? "I don’t know of any database built to handle hundreds of thousands of read replicas constantly pulling data."
If you want an open-source database with Redis latencies to handle millions of concurrent reads, you can use RonDB (disclaimer, I work on it).
"Since I’m only interested in a subset of the data, setting up a full read replica feels like overkill. It would be great to have a read replica with just partial data. It would be great to have a read replica with just partial data."
This is very unclear. Redis returns complete rows because it does not support pushdown projections or ordered indexes. RonDB supports these, as well as distribution-aware partition-pruned index scans (the transaction starts on the node/partition that contains the rows found via the index).
Reference:
https://www.rondb.com/post/the-process-to-reach-100m-key-loo...
For the type of cache usage described in the article, cache lookups are almost always O(1). This is because a cache value is retrieved for a specific key.
Whereas db queries are often more complicated and therefore take longer. Yes, plenty of db queries are fetching a row by a key, and therefore fast. But many queries use a join and a somewhat complicated WHERE clause.
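The asymmetry in miniature (keys and schema hypothetical):

    # A cache read is one hash lookup on a key the caller already knows:
    cache = {"user:42:open_orders": [101, 105]}
    hit = cache.get("user:42:open_orders")  # O(1)

    # The query that entry was derived from is a different animal:
    #   SELECT o.id FROM orders o
    #   JOIN users u ON u.id = o.user_id
    #   WHERE u.id = 42 AND o.status = 'open'
    #   ORDER BY o.created_at DESC;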
Having caching by default (like in Convex) is a really neat simplification to app development.
Again, you should test. But the main reason imo for Redis is connection handling and speed, not just speed.