80.1% on the LoCoMo Long-Term Memory Benchmark with a Pure Open-Source RAG Pipeline
Key topics
- BGE-large-en-v1.5 (1024d) + FAISS
- Custom “MCA” gravitational ranking (keyword coverage + importance + frequency)
- BM25 sparse retrieval
- Direct cross-encoder reranking (bge-reranker-v2-m3) on the full union (~120–150 docs)
- GPT-4o-mini only for final answer generation and judging (everything else is open weights or classic IR)
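As a rough illustration of the hybrid retrieval step, the dense (FAISS) and sparse (BM25) hit lists are merged into one deduplicated union before reranking. This is a minimal sketch with hypothetical names, not the repo's actual code:

```python
def union_candidates(dense_hits, sparse_hits):
    """Merge dense (FAISS) and sparse (BM25) hit lists into a
    deduplicated union, preserving first-seen order.
    Each hit is a (doc_id, score) pair; scores are ignored here
    because the cross-encoder rescores everything downstream."""
    seen = set()
    union = []
    for doc_id, _score in dense_hits + sparse_hits:
        if doc_id not in seen:
            seen.add(doc_id)
            union.append(doc_id)
    return union

# Example: overlapping hit lists from the two retrievers
dense = [("d1", 0.92), ("d2", 0.88), ("d3", 0.71)]
sparse = [("d2", 11.4), ("d4", 9.8)]
print(union_candidates(dense, sparse))  # ['d1', 'd2', 'd3', 'd4']
```

Keeping the raw scores out of the merge is deliberate in this sketch: dense cosine scores and BM25 scores live on incomparable scales, so the union is handed to the reranker unscored.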
Repo: https://github.com/vac-architector/VAC-Memory-System

Key tricks that finally broke 80%:
- MCA-first filter (coverage ≥ 0.1 → top-30), which catches exact-keyword questions early
- Feeding the entire union (112–135 documents) straight into the cross-encoder instead of pre-filtering
- Using the proper query instruction for BGE-large (the classic “Represent this sentence for searching relevant passages”)

The whole pipeline runs in under 3 s per query on a single RTX 4090. LoCoMo is currently the hardest public long-term memory benchmark (5,880 real human–agent conversations: multi-hop, temporal, negation, etc.).
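Two of the tricks above are easy to illustrate in code: the MCA-first coverage filter and the BGE query instruction. This is a minimal sketch assuming naive whitespace tokenization; the function names and exact scoring are my guesses, not the repo's implementation:

```python
# BGE-large-en-v1.5 expects this instruction prefix on queries only;
# indexed passages are embedded without it.
BGE_QUERY_PREFIX = "Represent this sentence for searching relevant passages: "

def prepare_query(query: str) -> str:
    return BGE_QUERY_PREFIX + query

def keyword_coverage(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document
    (naive whitespace tokenization, for illustration only)."""
    terms = query.lower().split()
    doc_terms = set(doc.lower().split())
    if not terms:
        return 0.0
    return sum(t in doc_terms for t in terms) / len(terms)

def mca_first_filter(query, docs, min_coverage=0.1, top_k=30):
    """MCA-first step: keep docs with coverage >= min_coverage,
    ranked by coverage, truncated to top_k, so exact-keyword
    questions are caught before the dense/sparse stages."""
    scored = sorted(
        ((keyword_coverage(query, d), d) for d in docs),
        key=lambda x: x[0],
        reverse=True,
    )
    return [d for c, d in scored if c >= min_coverage][:top_k]

docs = [
    "Alice moved to Berlin in 2021",
    "The weather was sunny",
    "Bob met Alice in Berlin last summer",
]
print(mca_first_filter("when did Alice move to Berlin", docs))
```

The real MCA score also folds in importance and frequency terms per the topic list above; only the coverage gate is sketched here.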
Beating Mem0's official baseline by ~12–14 pp with fully open components feels pretty good. Would love feedback, especially from people who are also grinding on agent memory systems.
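The trick of feeding the entire union straight into the cross-encoder, rather than pre-filtering, could be sketched as below. The scorer is injected so the example runs standalone; in the real pipeline it would be something like `CrossEncoder("BAAI/bge-reranker-v2-m3").predict` from sentence-transformers (the model name is from the post; everything else is an assumption):

```python
def rerank_union(query, union_docs, scorer, top_k=10):
    """Score every (query, doc) pair over the full ~120-150 doc
    union and return the top_k docs. scorer(pairs) -> list of
    relevance scores, e.g. a cross-encoder's predict method."""
    scores = scorer([(query, d) for d in union_docs])
    ranked = sorted(zip(scores, union_docs), key=lambda x: x[0], reverse=True)
    return [d for _, d in ranked[:top_k]]

# Stand-in scorer for illustration; a real run would pass
# CrossEncoder("BAAI/bge-reranker-v2-m3").predict instead.
fake_scores = {"doc a": 0.9, "doc b": 0.1, "doc c": 0.5}
scorer = lambda pairs: [fake_scores[d] for _, d in pairs]
print(rerank_union("q", ["doc b", "doc a", "doc c"], scorer, top_k=2))
# ['doc a', 'doc c']
```

Scoring all 112-135 candidates costs one batched forward pass, which is how the sub-3 s budget on a single 4090 remains plausible without a pre-filter.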