LLMs as Retrieval and Recommendation Engines
Posted 4 months ago · Active 4 months ago
medium.com · Tech · story
Sentiment: skeptical / mixed
Debate
20/100
Key topics: LLMs · Recommendation Systems · AI
The article discusses using LLMs for retrieval and recommendation; commenters question the practicality due to cost.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 11m after posting
Peak period: 2 comments in 0-1h
Avg / period: 2
Key moments
01. Story posted: Sep 12, 2025 at 1:28 PM EDT (4 months ago)
02. First comment: Sep 12, 2025 at 1:39 PM EDT (11m after posting)
03. Peak activity: 2 comments in 0-1h (hottest window of the conversation)
04. Latest activity: Sep 12, 2025 at 1:58 PM EDT (4 months ago)
Discussion (2 comments)
taintech
4 months ago
1 reply
This is a cool idea, but the cost is a killer. Normally, you would run recommendations and pre-cache them for the users or items they are associated with. Running a giant LLM for every user's recommendation is thousands of times more expensive and slower than current methods. It just doesn't seem practical for a large number of users.
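The pre-caching pattern the commenter contrasts with per-request LLM calls can be sketched as follows. Everything here is illustrative and not from the article or thread: the scoring function is a stand-in for a real trained model, and the user/item names are invented.

```python
# Sketch of offline batch scoring + cache serving (all names illustrative).
# Recommendations are computed in a periodic offline job; the request path
# is a cache lookup, so per-request cost stays near zero regardless of
# model cost during the batch run.

def batch_score(user_ids, items, top_k=3):
    """Offline job: score every (user, item) pair and keep the top-k.
    A hash-based stand-in replaces a real model's relevance score."""
    cache = {}
    for uid in user_ids:
        ranked = sorted(items, key=lambda item: hash((uid, item)) % 100,
                        reverse=True)
        cache[uid] = ranked[:top_k]
    return cache

# Precompute once (e.g. nightly), then serve from the cache at request time.
CACHE = batch_score(["u1", "u2"], ["a", "b", "c", "d", "e"])

def recommend(user_id):
    # Request path: O(1) lookup instead of a model call per request.
    return CACHE.get(user_id, [])
```

The commenter's cost objection is about the batch step: replacing the cheap scorer with a giant LLM makes the offline job (or worse, the request path) orders of magnitude more expensive.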
pongogogo (Author)
4 months ago
The post mentions an approach of using a large model to generate labels and then distilling them into a smaller model to lower cost (though it doesn't provide an example).
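Since the post gives no example, here is a minimal sketch of that distillation idea under stated assumptions: the "teacher" function stands in for an expensive LLM judging relevance, the student is a tiny perceptron, and the feature vectors are invented for illustration.

```python
# Hedged sketch of label distillation (teacher, features, and data are
# all invented): an expensive model labels examples offline, and a small,
# cheap model is trained on those labels for serving.

def teacher_label(features):
    """Stand-in for an expensive LLM judging (user, item) relevance.
    Pretend the LLM says 'relevant' when the first feature is high."""
    return 1 if features[0] > 0.5 else 0

def train_student(examples, lr=0.1, epochs=50):
    """Distill: fit a tiny linear model (perceptron) to teacher labels."""
    w, b = [0.0] * len(examples[0]), 0.0
    for _ in range(epochs):
        for x in examples:
            y = teacher_label(x)  # expensive call, done offline only
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

examples = [[0.9, 0.2], [0.1, 0.8], [0.7, 0.5], [0.3, 0.3]]
w, b = train_student(examples)

def student_predict(x, w=w, b=b):
    # Serving path: the cheap student replaces the LLM at request time.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The design point is that the teacher is only queried during training; at serving time the student's linear pass costs microseconds, which is what makes the commenter's cost objection tractable.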
View full discussion on Hacker News
ID: 45224482 · Type: story · Last synced: 11/17/2025, 6:16:42 PM