Language Models Cannot Reliably Distinguish Belief From Knowledge and Fact
Posted 2 months ago · Active about 2 months ago
nature.com · Research · story
calm · mixed
Debate
40/100
Key topics
Artificial Intelligence
Language Models
Epistemology
A study found that language models struggle to distinguish between belief, knowledge, and fact, sparking discussion on the limitations of AI and the parallels with human cognition.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
- First comment: 16m after posting
- Peak period: 2 comments in 0-2h
- Avg / period: 1.2
Key moments
- Story posted: Nov 3, 2025 at 7:50 PM EST (2 months ago)
- First comment: Nov 3, 2025 at 8:06 PM EST (16m after posting)
- Peak activity: 2 comments in 0-2h (hottest window of the conversation)
- Latest activity: Nov 4, 2025 at 9:52 PM EST (about 2 months ago)
ID: 45806352 · Type: story · Last synced: 11/20/2025, 2:21:16 PM
I’d say that LLMs may understand better than we do that belief and fact are tightly interwoven, because they lack our grandstanding classification of information.
There is a dichotomy here: truth can exist while fiction is widely accepted as truth, without humans being able to distinguish which is which, all the while believing that some or most of us can.
I’m not pushing David Hume on you, but I think this is a learning opportunity.
The only way we’ve learned is by referencing previously established, trustworthy knowledge. The scientific consensus is merely a system that vigorously tests and discards previously held beliefs when they don’t match new evidence. We’ve spent thousands of years living in a world of make-believe; we only began to emerge from it relatively recently.
It would be unreasonable to expect an LLM to do it without the tools we have.
It shouldn’t be hard to teach an LLM that if a claim can’t be verified against an evidence-based source, it isn’t fact.
It’s just another round of garbage in garbage out.
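A minimal sketch of that "no evidence-based source, no fact" rule, in Python. The evidence corpus, the word-overlap heuristic, and the 0.6 threshold are all illustrative assumptions, not anything from the paper or from a real retrieval pipeline:

```python
# Toy sketch: only label a claim "fact" if it can be matched to an
# evidence-based source; otherwise treat it as an unverified belief.
# The corpus, the overlap heuristic, and the threshold are hypothetical
# stand-ins for a real retrieval/verification step.

EVIDENCE = {
    "the earth orbits the sun": "astronomy textbook",
    "water boils at 100 C at sea level": "physics handbook",
}

def classify_claim(claim: str, evidence: dict[str, str], min_overlap: float = 0.6) -> str:
    """Return 'fact (...)' only when a supporting source is found."""
    claim_words = set(claim.lower().split())
    for statement, source in evidence.items():
        statement_words = set(statement.lower().split())
        overlap = len(claim_words & statement_words) / max(len(claim_words), 1)
        if overlap >= min_overlap:
            return f"fact (supported by: {source})"
    return "unverified belief"  # nothing in the corpus backs the claim

if __name__ == "__main__":
    print(classify_claim("The Earth orbits the Sun", EVIDENCE))    # fact
    print(classify_claim("The Moon is made of cheese", EVIDENCE))  # unverified belief
```

In a real system the overlap check would be replaced by retrieval plus an entailment or citation check; the point is only that the verification gate sits outside the model.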
Sometimes, the more fluent the output, the more easily the fiction flies under the radar.
For AI, this is probably most likely when the output is as close to human as possible. Anything less, and the performance will be judged lower by one opinion or another.