OpenAI: Models Are Programmed to Make Stuff Up Instead of Admitting Ignorance
Source: theregister.com · Tech story
Posted 4 months ago · Active 4 months ago
Tone: calm / negative · Debate score: 20/100
Key topics: AI, OpenAI, Hallucinations
OpenAI acknowledges that their models are incentivized to 'hallucinate' rather than admit ignorance, sparking concerns about AI reliability.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion · First comment after 26m · Peak: 1 comment in the 0-1h window · Avg: 1 comment per period
Key moments
- Story posted: Sep 17, 2025 at 10:33 AM EDT (4 months ago)
- First comment: Sep 17, 2025 at 10:58 AM EDT (26m after posting)
- Peak activity: 1 comment in the 0-1h window, the hottest stretch of the conversation
- Latest activity: Sep 17, 2025 at 1:18 PM EDT (4 months ago)
HN story ID: 45276308
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
Tufte said it best: There are only two industries that call their customers 'users': illegal drugs and software.
I don't really think people want these things to say "I don't know" - they want them to know.
That's obviously not reasonable for everything, but I bet a lot of hallucinations ARE about things the model should be able to know or figure out. Most people are asking questions with well-known answers.
But I would guess the OpenAI post is correct: fundamentally, these models are trained in a way that rewards guessing, which I think makes it more likely the model guesses even when the answer is within its reach.
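The incentive the commenter is pointing at is easy to see with a toy scoring scheme (correct = 1, wrong = 0 or -1, "I don't know" = 0). The sketch below is only an illustration of that incentive under those assumed payoffs, not OpenAI's actual training or evaluation setup:

```python
# Toy sketch: expected score of guessing vs. abstaining, assuming the model's
# guess is right with probability p. Payoffs are hypothetical: correct = 1,
# abstain = 0, wrong = -wrong_penalty.

def expected_score(p_correct, wrong_penalty):
    """Return (expected score of guessing, expected score of abstaining)."""
    guess = p_correct * 1 + (1 - p_correct) * (-wrong_penalty)
    abstain = 0.0
    return guess, abstain

for p in (0.1, 0.3, 0.5):
    # Accuracy-only grading: a wrong answer costs nothing, so guessing
    # always matches or beats saying "I don't know".
    g_acc, a_acc = expected_score(p, wrong_penalty=0)
    # Grading that penalizes confident errors: abstaining wins when the
    # model is unlikely to be right.
    g_pen, a_pen = expected_score(p, wrong_penalty=1)
    print(f"p={p:.1f}  accuracy-only: guess={g_acc:.2f} vs abstain={a_acc:.2f}  "
          f"penalized: guess={g_pen:.2f} vs abstain={a_pen:.2f}")
```

Under accuracy-only grading the guess column never falls below the abstain column, which is the "rewards guessing" dynamic the post describes; only once wrong answers carry a cost does abstaining become the better move at low p.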