High-Performance Read-Through Cache for Object Storage
Posted 4 months ago · Active 3 months ago
github.com · Tech · story
calm · mixed · Debate · 60/100
Key topics
Caching
Object Storage
S3
Performance Optimization
The post introduces Cachey, a high-performance read-through cache for object storage, sparking discussion of its potential use cases, limitations, and fit for production infrastructure.
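For readers unfamiliar with the pattern: a read-through cache sits between clients and the object store, serving hits locally and populating itself on misses. Below is a minimal sketch in Python, assuming boto3 and an in-memory dict as a stand-in for a real cache; this is not Cachey's implementation, and the bucket and key names are placeholders.

```python
# Minimal sketch of the read-through pattern; NOT Cachey's implementation.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

s3 = boto3.client("s3")
cache: dict[str, bytes] = {}  # stand-in for an LRU or disk-backed cache

def read_through(bucket: str, key: str) -> bytes:
    cache_key = f"{bucket}/{key}"
    if cache_key in cache:
        return cache[cache_key]            # hit: serve locally, no S3 call
    obj = s3.get_object(Bucket=bucket, Key=key)
    data = obj["Body"].read()              # miss: one GET against S3
    cache[cache_key] = data                # populate for subsequent readers
    return data
```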
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 23m after posting
- Peak period: 6 comments in the 6-9h window
- Avg per period: 2.1
- Comment distribution: 17 data points, based on 17 loaded comments
Key moments
1. Story posted: Sep 20, 2025 at 12:13 AM EDT (4 months ago)
2. First comment: Sep 20, 2025 at 12:36 AM EDT, 23m after posting
3. Peak activity: 6 comments in the 6-9h window, the hottest stretch of the conversation
4. Latest activity: Sep 21, 2025 at 6:54 PM EDT (3 months ago)
ID: 45310294 · Type: story · Last synced: 11/20/2025, 1:23:53 PM
Want the full context? Read the primary article or dive into the live Hacker News thread.
But I can't find anything to support the use case for highly available (multi-AZ), scalable, production infrastructure. Specifically, a unified and consistent cache across geos (AZs in the AWS case, since this seems to be targeted at S3).
Without it, you're increasing costs somewhere in your organization: cross-AZ networking costs, larger cache footprints in each AZ for availability, extra compute and cache-coherency overhead to keep the caches in sync across AZs, and so on.
Any insight from the authors on how they handle these issues in their production systems at scale?
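One way to frame the trade-off raised here: keep all traffic zone-local and pay for duplicated cache capacity instead of cross-AZ bytes. A hypothetical sketch follows; the topology, node addresses, and AZ names are invented for illustration, and nothing here is from Cachey.

```python
# Hypothetical zone-local routing: readers only ever talk to cache nodes
# in their own AZ, trading duplicated cache capacity per AZ for zero
# cross-AZ transfer fees. Topology and addresses are invented.
import hashlib

CACHE_NODES = {
    "us-east-1a": ["10.0.1.10", "10.0.1.11"],
    "us-east-1b": ["10.0.2.10", "10.0.2.11"],
}

def pick_node(object_key: str, caller_az: str) -> str:
    nodes = CACHE_NODES[caller_az]  # never cross the AZ boundary
    digest = int(hashlib.sha256(object_key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]  # stable shard within the zone
```

Whether the duplicated capacity beats cross-AZ transfer pricing depends on the working-set size and read volume.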
I think that assumes decoupled compute and storage. If instead I couple compute and storage, I can shard the input, and then I don't need to share a cache across instances (see the sketch below). I don't think there is one approach that wins every time.
As for egress fees, that is an orthogonal concern.
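A sketch of the coupled approach this comment describes: shard the input keyspace so each instance reads and caches only its own partition, removing the need for a shared or coherent cache. The instance count is a made-up parameter.

```python
# Shard keys across coupled compute+storage instances; each instance's
# local cache then only ever holds its own partition, so no cross-
# instance cache coherency is needed. NUM_INSTANCES is hypothetical.
import hashlib

NUM_INSTANCES = 8

def owning_instance(object_key: str) -> int:
    """Map an object key to the one instance that reads and caches it."""
    digest = hashlib.sha256(object_key.encode()).hexdigest()
    return int(digest, 16) % NUM_INSTANCES
```

The same key always hashes to the same instance, so each local cache stays warm for its shard without any coordination.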
EDIT: Now I catch your drift; it would indeed be cool. ZeroFS requires a commitment to the SlateDB LSM data format.
PLEASE, if someone from the team sees this: I would pay so much for an ephemeral object store using your same edge protocol (seen in the sensor example from your blog).
Cheers!
1 more comment available on Hacker News