RAG Is Set Consumption, Not Ranking: A Metric Designed for RAG Evaluation
Source: vectors.run
Key topics
• RAG Evaluation
• Information Retrieval
• Natural Language Processing
The post introduces a family of set-based metrics for evaluating RAG (Retrieval-Augmented Generation) systems, arguing that traditional ranking metrics are a poor fit; the discussion highlights the need for better evaluation methods.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion: first comment 2m after posting; peak of 1 comment in the 0-1h window (avg 1 per period).
Key moments
• Story posted: Nov 19, 2025 at 7:53 AM EST (about 2 months ago)
• First comment: Nov 19, 2025 at 7:55 AM EST (2m after posting)
• Peak activity: 1 comment in the 0-1h window
• Latest activity: Nov 19, 2025 at 7:55 AM EST (about 2 months ago)
ID: 45979009 · Type: story · Last synced: 11/22/2025, 12:34:11 PM
I propose a small family of set-based metrics:
• RA-nWG@K – “How good is the actual top-K set we fed the LLM vs the global oracle on the labeled corpus?”
• PROC@K – pool-restricted oracle ceiling: “How good could we have done with this retrieval pool if selection were perfect?”
• %PROC@K – reranker/selection efficiency: “Given that ceiling, how much did our actual top-K realize?”
The goal is to cleanly separate retrieval quality from reranking headroom instead of squinting at one nDCG number.
I’m actively refining this; if you see flaws, better decompositions, or edge cases where this breaks, I’d really like to hear them.