We're Training LLMs to Hallucinate by Rewarding Them for Guessing
Posted 4 months ago · Active 4 months ago
Source: lightcapai.medium.com (Tech story)
Tone: calm, neutral · Debate: 20/100
Key topics
- LLM
- AI
- Hallucination
Discussion about a paper on LLM hallucination and its implications for AI evaluation.
Snapshot generated from the HN discussion
Discussion Activity
- Intensity: light discussion
- First comment: N/A
- Peak period: 2 comments (0-1h)
- Avg per period: 2
Key moments
1. Story posted: Sep 7, 2025 at 6:46 PM EDT (4 months ago)
2. First comment: Sep 7, 2025 at 6:46 PM EDT (0s after posting)
3. Peak activity: 2 comments in 0-1h (hottest window of the conversation)
4. Latest activity: Sep 7, 2025 at 7:21 PM EDT (4 months ago)
ID: 45162889 · Type: story · Last synced: 11/17/2025, 6:03:42 PM
I'd argue that most of what they do is guessing: everything an LLM outputs is a function of weighted probabilities.
Nothing they do is truly deterministic, unlike traditional computing. Ask the exact same question from different devices, or at different locations and times, and you are unlikely to get the exact same response. The wording will vary, and sometimes so will the meaning.
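The nondeterminism the comment describes comes from sampled decoding: the model produces a probability distribution over next tokens, and the decoder draws from it. A minimal sketch (with a made-up toy distribution, not real model output) shows why repeated runs of the same prompt can diverge:

```python
import random

# Toy next-token distribution an LLM might produce (hypothetical values).
# Sampled decoding draws from these weighted probabilities, so repeated
# runs of the same prompt can yield different continuations.
next_token_probs = {"Paris": 0.85, "Lyon": 0.10, "Marseille": 0.05}

def sample_token(probs, rng):
    """Draw one token according to its probability weight."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()  # unseeded: results differ across runs
samples = [sample_token(next_token_probs, rng) for _ in range(1000)]
# Most draws are "Paris", but a nonzero fraction are not: the same
# "question" does not always get the same answer.
```

Greedy decoding (always taking the argmax token) would be deterministic for a fixed model, but production systems typically sample with a nonzero temperature, which is why answers vary across devices and sessions.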