Untitled

Posted 11/15/2025, 4:51:21 AM

0 points · 0 comments
I agree, and that's the case I'm trying to make. The machine-learning community expects us to believe that an LLM is somehow comparable to human cognition, yet the way it learns is inherently inhuman. If an LLM were in any way similar to a human, I would expect that, like a human, it might need a little guidance as it learns, but that ultimately it would understand concepts well enough that it wouldn't have to memorize every book in the library just to perform simple tasks.

In fact, I would expect it to be able to reproduce past human discoveries it has never been exposed to. If AI is actually capable of this, then it should be possible for researchers to set up a controlled experiment in which the model is given a limited "education" and must rediscover something already known to the researchers but not to the machine. That nobody has done this tells me either that they have low confidence in the AI despite their bravado, or that they have already tried it and the machine failed.
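To make the proposal concrete, here is a minimal sketch of what such a leave-one-discovery-out test could look like, assuming a simple keyword filter is enough to scrub the target result from the training corpus. Every name here (TARGET_KEYWORDS, run_rediscovery_test, the train and is_correct hooks) is a hypothetical placeholder, not any real pipeline:

    # Purely illustrative sketch of the "limited education" experiment described above:
    # remove every document that mentions a known discovery, train only on the
    # remainder, then probe whether the model can state the discovery on its own.
    # The training and grading steps are supplied by the caller as stand-ins for a
    # real training run and a real evaluation rubric.

    from typing import Callable

    # The held-out discovery (hypothetical example target).
    TARGET_KEYWORDS = {"germ theory", "microorganisms cause disease"}

    def leaks_target(document: str) -> bool:
        """True if the document mentions the held-out discovery."""
        text = document.lower()
        return any(keyword in text for keyword in TARGET_KEYWORDS)

    def run_rediscovery_test(
        corpus: list[str],
        train: Callable[[list[str]], Callable[[str], str]],
        probe_prompt: str,
        is_correct: Callable[[str], bool],
    ) -> bool:
        # 1. "Limited education": drop anything that would let the model memorize the answer.
        curated = [doc for doc in corpus if not leaks_target(doc)]

        # 2. Train only on the curated corpus (caller supplies the actual training routine).
        model = train(curated)

        # 3. Probe: can the model reproduce the result the researchers already know?
        return is_correct(model(probe_prompt))

The hard part in practice is the filter: you would have to show that the held-out discovery isn't implied elsewhere in the curated corpus, which is exactly why running the experiment would be informative.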

Discussion Activity

Light discussion · first comment 29m after posting · peak of 1 comment in Day 1 · average of 1 comment per period

Comment distribution: 1 data point (based on 1 loaded comment)

Key moments

  1. Story posted: 11/15/2025, 4:51:21 AM (4d ago)
  2. First comment: 11/15/2025, 5:20:48 AM (29m after posting)
  3. Peak activity: 1 comment in Day 1, the hottest window of the conversation
  4. Latest activity: 11/15/2025, 5:20:48 AM (4d ago)


Discussion (0 comments)

Discussion hasn't started yet.

ID: 45935131 · Type: comment · Last synced: 11/17/2025, 4:09:52 AM
