The Nature of Hallucinations
Posted 3 months ago · Active 3 months ago
blog.qaware.de · Tech · story
Controversial · Mixed
Debate
80/100
Key topics
Artificial Intelligence
LLMs
Language Models
Hallucinations
The article discusses the phenomenon of 'hallucinations' in language models, sparking a debate among commenters about the terminology and implications of this issue.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: N/A
Peak period: 11 comments in 0-12h
Avg / period: 6 comments
Comment distribution: 12 data points (based on 12 loaded comments)
Key moments
- Story posted: Sep 23, 2025 at 3:47 AM EDT (3 months ago)
- First comment: Sep 23, 2025 at 3:47 AM EDT (0s after posting)
- Peak activity: 11 comments in 0-12h, the hottest window of the conversation
- Latest activity: Sep 28, 2025 at 1:05 PM EDT (3 months ago)
ID: 45343998 · Type: story · Last synced: 11/20/2025, 4:35:27 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
The first line "Why do language models sometimes just make things up?" was not what I was expecting to read about.
Regardless of whether those terms in the AI context correlate perfectly to their original meanings.
There was a post somewhere about the irony of a human having to moderate reasonable AI slop vs. shithouse AI slop.
I get that this is different from getting an LLM to admit that it doesn’t know something, but I thought “getting a coding agent to stop spinning its wheels when set to an impossible task” was months or years away, and then suddenly it was here.
I haven’t yet read a good explanation of why Claude 4 is so much better at this kind of thing, and it definitely goes against what most people say about how LLMs are supposed to work (which is a large part of why I’ve been telling people to stop leaning on mechanical explanations of LLM behavior/strengths/weaknesses). However, it was definitely a step-function improvement.
Ask them to solve one of the Millennium Prize Problems. They’ll say they can’t do it, but that 'No' is just memorized. There’s nothing behind it.
> Unfortunately, the term hallucination quickly stuck to this phenomenon — before any psychologist could object.
The only difference between the two is whether a human likes it. If the human doesn't like it, then it's a hallucination. If the human doesn't know it's wrong, then it's not a hallucination (as far as that user is concerned).
The term "hallucination" is just marketing BS. In any other case it'd be called "broken shit".
The term hallucination is used as if the network is somehow giving the wrong output. It's not. It's giving a probability distribution for the next token. Exactly what it was designed for. The misunderstanding is what the user thinks they are asking. They think they are asking for a correct answer, but they are instead asking for a plausible answer. Very different things. An LLM is designed to give plausible, not correct answers. And when a user asks for a plausible, but not necessarily correct, answer (whether or not they realize it) and they get a plausible but not necessarily correct answer, then the LLM is working exactly as intended.
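One way to see what this commenter means is to inspect a model's raw output directly. The sketch below is an illustration only; it assumes the Hugging Face transformers library, the small public gpt2 checkpoint, and an arbitrary example prompt, none of which come from the thread. It prints the model's probability distribution over the next token: every candidate is just a plausible continuation, and nothing in the distribution marks one of them as factually correct.

```python
# Minimal sketch: a causal language model returns a probability distribution
# over the next token, not a judgment about truth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"  # example prompt, not from the thread
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The top candidates are merely plausible continuations; the distribution
# itself contains no notion of "correct" vs. "hallucinated".
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

Sampling from that distribution is what produces fluent but possibly wrong text, which is the behavior the thread is arguing about labeling as "hallucination".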