Universal LLM Memory Does Not Exist
Posted about 2 months ago
fastpaca.com · Tech Discussion · story
Sentiment: skeptical / negative
Topics: Large Language Models, Artificial Intelligence, Memory, LLM Limitations
Discussion activity
Light discussion: 1 comment, arriving at the start of the window (average 1 per period).
Key moments
- Story posted: Nov 24, 2025 at 5:38 AM EST (about 2 months ago)
- First comment: Nov 24, 2025 at 5:38 AM EST (0s after posting)
- Peak activity: 1 comment, in the opening window of the conversation
- Latest activity: Nov 24, 2025 at 5:38 AM EST (about 2 months ago)
ID: 46032521 · Type: story · Last synced: 11/24/2025, 10:40:07 AM
In this setup, the memory systems were 14–77× more expensive over a full conversation, and 31–33% less accurate at recalling facts, than simply passing the full history. The post presents the results and argues that the shared "LLM-on-write" architecture (running background LLMs to extract and normalize facts on every message) is a poor fit for working memory and execution state, even though it is useful for semantic long-term memory.
Scope is intentionally narrow: one model, one benchmark (MemBench, 2025), and non-exhaustive configs. The harness (`agentbench`, https://github.com/fastpaca/agentbench) is linked if you want to reproduce or propose a better setup!
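To make the contrast concrete, here is a minimal sketch of the "LLM-on-write" pattern the post critiques, next to the full-history baseline. All names are illustrative, not the agentbench API, and `extract_facts` is a stub standing in for a real background LLM call:

```python
def extract_facts(message: str) -> list[str]:
    """Stub for the background LLM that normalizes each message into facts.
    Here it just keeps capitalized words as a toy 'extraction'."""
    return [w for w in message.split() if w.istitle()]

class FullHistoryMemory:
    """Baseline: keep every message; the prompt is the whole transcript."""
    def __init__(self):
        self.history: list[str] = []

    def write(self, message: str) -> None:
        self.history.append(message)          # no extra LLM call on write

    def prompt_context(self) -> str:
        return "\n".join(self.history)        # grows linearly with turns

class LLMOnWriteMemory:
    """'LLM-on-write': run an extraction LLM on every message; the main
    prompt then sees only the compact fact store, not the raw history."""
    def __init__(self):
        self.facts: list[str] = []
        self.llm_calls = 0

    def write(self, message: str) -> None:
        self.llm_calls += 1                   # one background call per message
        self.facts.extend(extract_facts(message))

    def prompt_context(self) -> str:
        return "\n".join(self.facts)

full, mem = FullHistoryMemory(), LLMOnWriteMemory()
for msg in ["Alice likes tea", "Bob moved to Paris"]:
    full.write(msg)
    mem.write(msg)
print(mem.llm_calls)  # one extraction call per message written
```

The per-message background call is the structural cost the post measures: it is paid on every write regardless of whether the extracted facts are ever needed, while the baseline only pays for context at read time.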