Gullible Bots Struggle to Distinguish Between Facts and Beliefs
Posted 2 months ago · Active 2 months ago
theregister.com · Tech · Story
calm · neutral
Debate: 20/100
Key topics
AI
LLMs
Cognitive Bias
The article discusses how large language models (LLMs) struggle to distinguish facts from beliefs, a challenge the commenters note humans face as well.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 38m
Peak period: 1 comment (0-1h)
Avg per period: 1
Key moments
1. Story posted: Nov 3, 2025 at 3:47 PM EST (2 months ago)
2. First comment: Nov 3, 2025 at 4:25 PM EST (38m after posting)
3. Peak activity: 1 comment in 0-1h, the hottest window of the conversation
4. Latest activity: Nov 3, 2025 at 6:57 PM EST (2 months ago)
Discussion (2 comments)
pfdietz
2 months ago
1 reply
In all fairness, so do a lot of people.
sema4hacker
2 months ago
Yes, when it comes to intelligence and education, I've always felt like our population was essentially a bell curve, and one half of that curve was correspondingly gullible, dumb, and insufficiently educated.
View full discussion on Hacker News
ID: 45804232 · Type: story · Last synced: 11/17/2025, 7:50:56 AM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.