LLMs aren't described as hallucinators (just) because they sometimes give results we don't find useful, but because their method is flawed.

For example, the simple algorithm is_it_lupus() { return false; } could have an extremely competitive success rate in medical diagnostics, but it's also obviously the wrong way to go about things.
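
To make that concrete, here is a minimal sketch in TypeScript (the function name and the 0.2% prevalence figure are illustrative assumptions, not real clinical numbers) of how always answering "no" scores near-perfect accuracy on a rare condition while catching zero actual cases:

    // Minimal sketch: a constant "no" classifier vs. a rare condition.
    // The 0.2% prevalence is an assumed, illustrative figure, not a real clinical statistic.

    type Patient = { hasLupus: boolean };

    function isItLupus(_patient: Patient): boolean {
      return false; // always answers "no", regardless of the patient
    }

    // Build a synthetic population where the condition is rare.
    const prevalence = 0.002;
    const population: Patient[] = Array.from(
      { length: 100_000 },
      (_, i) => ({ hasLupus: i < 100_000 * prevalence }),
    );

    // Score the constant classifier.
    const correct = population.filter(p => isItLupus(p) === p.hasLupus).length;
    const truePositives = population.filter(p => p.hasLupus && isItLupus(p)).length;
    const actualPositives = population.filter(p => p.hasLupus).length;

    console.log(`accuracy: ${(100 * correct / population.length).toFixed(1)}%`); // ~99.8%
    console.log(`cases caught: ${truePositives} of ${actualPositives}`);         // 0 of 200

The point isn't the exact number; it's that a high hit rate by itself says nothing about whether the method producing it is sound.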
