Not Hacker News!
Home
Hiring
Products
Companies
Discussion
Q&A
Users
Not Hacker News!

AI-observed conversations & context

Daily AI-observed summaries, trends, and audience signals pulled from Hacker News so you can see the conversation before it hits your feed.

Live · Beta

Explore

  • Home
  • Hiring
  • Products
  • Companies
  • Discussion
  • Q&A

Resources

  • Visit Hacker News
  • HN API
  • Modal cronjobs
  • Meta Llama

Briefings

Inbox recaps on the loudest debates & under-the-radar launches.

Connect

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.

Posted Aug 21, 2025 at 12:10 AM EDT · Last activity 3 months ago

LLMs generate 'fluent nonsense' when reasoning outside their training zone

cintusshied
8 points
2 comments

Mood

calm

Sentiment

negative

Category

other

Key topics

Artificial Intelligence
Large Language Models
Limitations

LLMs generate 'fluent nonsense' when reasoning outside their training zone, highlighting limitations in AI models.

Snapshot generated from the HN discussion

Discussion Activity

Light discussion

First comment: 23m after posting
Peak period: 1 comment in Hour 1
Avg per period: 1

Key moments

  1. Story posted: Aug 21, 2025 at 12:10 AM EDT (3 months ago)
  2. First comment: Aug 21, 2025 at 12:33 AM EDT (23m after posting)
  3. Peak activity: 1 comment in Hour 1, the hottest window of the conversation
  4. Latest activity: Aug 21, 2025 at 7:59 AM EDT (3 months ago)


Discussion (2 comments)
Showing 2 comments
VivaTechnics
3 months ago
LLMs operate on numbers; they are trained on massive numerical vectors. Every request is therefore simply a numerical transformation approximating learned patterns; without proper training in a region, their output can be completely irrational.
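The mechanism this comment describes can be sketched as a toy example (illustrative only: the vocabulary, dimensions, and random embeddings are invented, not a real model). Even when the context vector lies far outside anything "trained", the same numerical transformation still produces a confident-looking probability distribution over tokens — fluent output with no grounding:

```python
import math
import random

random.seed(0)

# Toy "vocabulary": each token is just a small numerical vector.
vocab = ["cat", "sat", "mat", "quantum", "flux"]
dim = 4
embeddings = {tok: [random.gauss(0, 1) for _ in range(dim)] for tok in vocab}

def next_token_probs(context_vec):
    """Dot-product score every vocab token against the context, then softmax."""
    scores = [sum(c * e for c, e in zip(context_vec, embeddings[t])) for t in vocab]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return {t: x / total for t, x in zip(vocab, exps)}

# An out-of-distribution context: a vector unlike anything in the "training" region.
ood_context = [random.gauss(0, 5) for _ in range(dim)]
probs = next_token_probs(ood_context)

# The transformation still yields a well-formed, confident-looking distribution,
# even though the input is meaningless to the model.
```

The point of the sketch is that nothing in the arithmetic distinguishes familiar from unfamiliar input: the softmax always returns a valid distribution, which is why the output stays fluent even when the reasoning is nonsense.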
rsynnott
3 months ago
I mean, define 'reasoning'.
View full discussion on Hacker News
ID: 44968956 · Type: story · Last synced: 11/18/2025, 1:45:58 AM

Want the full context?

Jump to the original sources

Read the primary article or dive into the live Hacker News thread when you're ready.

Read Article · View on HN