Not Hacker News!

AI-observed conversations & context
The Wall Confronting Large Language Models

Posted by PaulHoule on Sep 3, 2025 at 7:40 AM EDT · Last activity 3 months ago
172 points · 200 comments

Mood: heated
Sentiment: mixed
Category: other
Key topics: Large Language Models, Artificial Intelligence, Machine Learning Limitations
Debate intensity: 80/100

A research paper argues that large language models face fundamental limitations, sparking debate among commenters about the paper's validity, the authors' credentials, and the implications for AI research.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 6h after posting
Peak period: 129 comments (Day 1)
Avg / period: 26.7 comments

Comment distribution: 160 data points (based on the 160 loaded comments)

Key moments

  1. Story posted: Sep 3, 2025 at 7:40 AM EDT (3 months ago)
  2. First comment: Sep 3, 2025 at 1:58 PM EDT (6h after posting)
  3. Peak activity: 129 comments in Day 1, the hottest window of the conversation
  4. Latest activity: Sep 9, 2025 at 4:58 PM EDT (3 months ago)


Discussion (200 comments)
Showing 160 comments of 200
Scene_Cast2
3 months ago
1 reply
The paper is hard to read. There is no concrete worked-through example, the prose is over the top, and the equations don't really help. I can't make head or tail of this paper.
lumost
3 months ago
2 replies
This appears to be a position paper written by authors outside of their core field. The presentation of "the wall" is only through analogy to derivatives on the discrete values computers operate on.
joe_the_user
3 months ago
2 replies
The paper seems to involve a series of analogies and equations. However, I think that if the equations are accepted, the "wall" is actually derived.

The authors are computer scientists and people who work with large-scale dynamical systems. They aren't people who've actually produced an industry-scale LLM. However, I have to note that despite lots of practical progress in deep learning/transformers/etc. systems, all the theory involved is just analogies and equations of a similar sort; it's all alchemy, and the people really good at producing these models seem to be using a bunch of effective rules of thumb rather than any full or established models (despite books claiming to offer a mathematical foundation for the enterprise, etc.).

Which is to say, "outside of core competence" doesn't mean as much as it would for medicine or something.

ACCount37
3 months ago
3 replies
No, that's all the more reason to distrust major, unverified claims made by someone "outside of core competence".

Applied demon summoning is ruled by empiricism and experimentation. The best summoners in the field are the ones who have a lot of practical experience and a sharp, honed intuition for the bizarre dynamics of the summoning process. And even those very summoners, specialists worth their weight in gold, are slaves to the experiment! Their novel ideas and methods and refinements still fail more often than they succeed!

One of the first lessons you have to learn in the field is that of humility. That your "novel ideas" and "brilliant insights" are neither novel nor brilliant - and the only path to success lies through things small and testable, most of which do not survive the test.

With that, can you trust the demon summoning knowledge of someone who has never drawn a summoning diagram?

ForHackernews
3 months ago
1 reply
The freshly-summoned Gaap-5 was rumored to be the most accursed spirit ever witnessed by mankind, but so far it seems not dramatically more evil than previous demons, despite having been fed vastly more human souls.
lazide
3 months ago
Perhaps we’re reaching peak demon?
cwmoore
3 months ago
Your passions may have run away with you.

https://news.ycombinator.com/item?id=45114753

jibal
3 months ago
Somehow the game of telephone took us from "outside of their core field" (which wasn't true) to "outside of core competence" (which is grossly untrue).

> One of the first lessons you have to learn in the field is that of humility.

I suggest then that you make your statements less confidently.

lumost
3 months ago
I will venture my 2 cents, the equations kinda sorta look like something - but in no way approach a derivation of the wall. Specifically, I would have looked for a derivation which proved for one of/all of

1. Sequence models relying on a Markov chain, with and without summarization to extend beyond fixed-length horizons.
2. All forms of attention mechanisms/dense layers.
3. A specific Transformer architecture.

That there exists a limit on the representation or prediction powers of the model for tasks of all input/output token lengths or fixed size N input tokens/M output tokens. *Based On* a derived cost growth schedule for model size, data size, compute budgets.

Separately, I would have expected a clear literature review of existing mathematical studies on LLM capabilities and limitations - for which there are *many*. Including studies that purport that Transformers can represent any program of finite pre-determined execution length.

jibal
3 months ago
3 replies
If you look at their other papers, you will see that this is very much within their core field.
lumost
3 months ago
1 reply
Their other papers are on simulation and applied chemistry. Where does their expertise in Machine Learning, or Large Language Models derive from?

While it's not a requirement to have published in a field before publishing in it, having a coauthor who is from the target field, or using a peer-reviewed venue in that field as an entry point, certainly raises credibility.

From my limited claim to expertise in either Machine Learning or Large Language Models, the paper does not appear to demonstrate what it claims. The authors' language addresses the field of Machine Learning and LLM development as you would a young student, which does not help make their point.

stonogo
3 months ago
If you can't look at that publication list and see their expertise in machine learning, then it may be that they know more about your field than you know about theirs. Nothing wrong with that! Computational chemists use different terminology than computer scientists, but there is significant overlap between the fields.
JohnKemeny
3 months ago
3 replies
He's a chemist. Lots of chemists and physicists like to talk about computation without having any background in it.

I'm not saying anything about the content, merely making a remark.

chermi
3 months ago
1 reply
You're really not saying anything? Just a random remark with no bearing?

Seth Lloyd, Wolpert, Landauer, Bennett, Fredkin, Feynman, Sejnowski, Hopfield, Zecchina, Parisi, Mézard, Zdeborová, Crutchfield, Preskill, Deutsch, Manin, Szilard, MacKay...

I wish someone told them to shut up about computing. And I wouldn't dare claim von Neumann as merely a physicist, but that's where he was coming from. Oh and as much as I dislike him, Wolfram.

JohnKemeny
3 months ago
1 reply
As you note, some physicists do have computing backgrounds. I'm not suggesting they can't do computer science.

But today, most people hold opinions about LLMs, both as to their limits and their potential, without any real knowledge of computational linguistics nor of deep learning.

chermi
3 months ago
Huh? Have you heard of learning something new? Physicists and scientists at large are pretty good at it. Do you want some certification program to determine who's allowed to opine? If someone is wrong, tell them and show them they're wrong. Don't preemptively dismiss ideas based on some authority mechanism.

Here's another example in case you still don't get the point: Schrödinger had no business talking about biology because he wasn't trained in it, right? Never mind his being ahead of the entire field in understanding the role of "DNA" (not yet discovered, but he correctly posited the crystal-ish structure) and information in evolution, and inspiring Watson's quest to figure out DNA.

Judge ideas on the merit of the idea itself. It's not about whether they have computing backgrounds, it's about the ideas.

Hell, look at the history of deep learning with Minsky's book. Sure glad everyone listened to the linguistics expert there...

godelski
3 months ago
2 replies

  > Lots of chemists and physicists like to talk about computation without having any background in it.
I'm confused. Physicists deal with computation all the time. Are you confusing computation with programming? There's a big difference. Physicists and chemists are frequently at odds with the limits of computability. Remember, Turing, Church, and even Knuth obtained degrees in mathematics. The divide isn't so clear-cut and there's lots of overlap. I think if you look at someone doing their PhD in Programming Languages you could easily mistake them for a mathematician.

Looking at the authors I don't see why this is out of their domain. Succi[0] looks like he deals a lot with fluid dynamics and has a big focus on Lattice Boltzmann. Modern fluid dynamics is all about computability and its limits. There's a lot of this that goes into the Navier–Stokes problem (even Terry Tao talks about this[1]), which is a lot about computational reproducibility.

Coveney[2] is a harder read for me, but doesn't seem suspect. Lots of work in molecular dynamics, so shares a lot of tools with Succi (seems like they like to work together too). There's a lot of papers there, but sorting by year there's quite a few that scream "limits of computability" to me.

I can't make strong comments without more intimate knowledge of their work, but nothing here is a clear red flag. I think you're misinterpreting because this is a position paper, written in the style you'd expect from a more formal field, but it is also kinda scattered. I've only done a quick read (don't get me wrong, I have critiques), but there are no red flags that warrant quick dismissal. (My background: physicist -> computational physics -> ML) There are things they are pointing to that are more discussed within the more mathematically inclined sides of ML (it's a big field... even if only a small subset are most visible). I'll at least look at some of their other works on the topic as it seems they've written a few papers.

[0] https://scholar.google.com/citations?user=XrI0ffIAAAAJ

[1] I suspect this is well above the average HN reader, but pay attention to what they mean by "blowup" and "singularity": https://terrytao.wordpress.com/tag/navier-stokes-equations/

[2] https://scholar.google.com/citations?user=_G6FZ6YAAAAJ

JohnKemeny
3 months ago
2 replies
Turing, Church, and even Knuth got their degrees before CS was an academic discipline. At least I don't think Turing studied Turing machines in his undergrad.

I'm saying that lots of people like to post their opinions of LLMs regardless of whether or not they actually have any competence in either computational linguistics or deep learning.

godelski
3 months ago
1 reply
Sure, but how long ago was that? Do you really think the fields fully decoupled in such a small time? That's the entire point of that comment
jibal
3 months ago
The fellow is engaged in some pretty intense gatekeeping.
jibal
3 months ago
Your whole take is extraordinarily ad hominem. The paper in question is not just people posting opinions.
calf
3 months ago
There are some good example posts on Scott Aaronson's blog where he eviscerated shoddy physicists' takes on quantum complexity theory. Physicists today aren't like Turing et al.; most never picked up a theoretical computer science book and actually worked through the homework exercises, and with the AI pivot and paper-spawning this is kind of a general problem (arguably more interdisciplinary expertise is needed, but people need to actually take the time to learn the material and internalize it without making sophomore mistakes, etc.).
11101010001100
3 months ago
Succi is no slouch; hardcore multiscale physics guy, among other things.
JohnKemeny
3 months ago
1 reply
Look at their actual papers before making a comment of what is or isn't their core field: https://dblp.org/pid/35/3081.html
jibal
3 months ago
I did. And don't tell me what I can or can't comment on.
dcre
3 months ago
2 replies
Always fun to see a theoretical argument that something clearly already happening is impossible.
ahartmetz
3 months ago
1 reply
So where are the recent improvements in LLMs proportional to the billions invested?
dcre
3 months ago
2 replies
Value for the money is not at issue in the paper!
BoredPositron
3 months ago
1 reply
It's not about value, it's about the stagnation while throwing compute at the problem.
dcre
3 months ago
Exactly.
ahartmetz
3 months ago
I believe it is. They are saying that LLMs don't improve all that much from giving them more resources - and computing power (and input corpus size) is pretty proportional to money.
crowbahr
3 months ago
3 replies
Really? It sure seems like we're at the top of the S curve with LLMs. Wiring them up to talk to themselves as "reasoning" isn't scaling the core models, which have only made incremental gains for all the billions invested.

There's plenty more room to grow with agents and tooling, but the core models are only slightly bumping YoY rather than the rocketship changes of 2022/23.

EMM_386
3 months ago
1 reply
> the core models are only slightly bumping YoY rather than the rocketship changes of 2022/23

From Anthropic's press release yesterday after raising another $13 billion:

"Anthropic has seen rapid growth since the launch of Claude in March 2023. At the beginning of 2025, less than two years after launch, Anthropic’s run-rate revenue had grown to approximately $1 billion. By August 2025, just eight months later, our run-rate revenue reached over $5 billion—making Anthropic one of the fastest-growing technology companies in history."

$4 billion increase in 8 months. $1 billion every two months.

dcre
3 months ago
They’re talking about model quality. I still think they’re wrong, but the revenue is only indirectly relevant.
dangus
3 months ago
And relevant to the summary of this paper, LLM incremental improvement doesn't really seem to include the described wall.

If work produced by LLMs forever has to be checked for accuracy, the applicability will be limited.

This is perhaps analogous to all the "self-driving cars" that still have to be monitored by humans, and in that case the self-driving system might as well not exist at all.

skeezyboy
3 months ago
> There's plenty more room to grow with agents and tooling, but the core models are only slightly bumping YoY rather than the rocketship changes of 2022/23.

Understandable. The real innovation was the process/technique underlying LLMs; the rest is just programmers automating it. Something similar happened with blockchain: everything after was just tinkering with the initial idea.

measurablefunc
3 months ago
10 replies
There is a formal extensional equivalence between Markov chains & LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He is constantly making the point that symbolic understanding cannot be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic stuff like backtracking (which is available in programming languages like Prolog). I think that Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can be a substitute for logical reasoning.
boznz
3 months ago
1 reply
Logical reasoning is also based on probability weights; most of the time that probability is so close to 100% that it can be assumed to be true without consequence.
AaronAPU
3 months ago
1 reply
Stunningly, though I have been saying this for 20 years, I've never come across someone else mentioning it until now.
nprateem
3 months ago
1 reply
Glue sticks.

Pepperoni falls off pizza.

Therefore to keep it in place, stick it with glue...

Not stunned by this reductionist take.

AaronAPU
3 months ago
Those words contain far more relations than just “sticks” — the reduction is in your framing.
Certhas
3 months ago
5 replies
I don't understand what point you're hinting at.

Either way, I can get arbitrarily good approximations of arbitrary nonlinear differential/difference equations using only linear probabilistic evolution at the cost of a (much) larger state space. So if you can implement it in a brain or a computer, there is a sufficiently large probabilistic dynamic that can model it. More really is different.

So I view all deductive ab-initio arguments about what LLMs can/can't do due to their architecture as fairly baseless.

(Note that the "large" here is doing a lot of heavy lifting. You need _really_ large. See https://en.m.wikipedia.org/wiki/Transfer_operator)
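
An editorial gloss on the lifting being referenced (the Koopman/transfer-operator view); the notation below is an added sketch, not the commenter's:

  Given a nonlinear map $x_{t+1} = T(x_t)$ on a state space $X$, the Koopman operator $K$ acts on observables $f : X \to \mathbb{R}$ by
  $$(K f)(x) = f(T(x)),$$
  and its adjoint, the transfer (Perron-Frobenius) operator $L$, pushes densities forward; in the discrete deterministic case
  $$(L \rho)(y) = \sum_{x \,:\, T(x) = y} \rho(x).$$
  Both $K$ and $L$ are linear even when $T$ is nonlinear; the price is that they act on a much larger (often infinite-dimensional) space of observables or densities, which is the "large" caveat above.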

arduanika
3 months ago
3 replies
What hinting? The comment was very clear. Arbitrarily good approximation is different from symbolic understanding.

"if you can implement it in a brain"

But we didn't. You have no idea how a brain works. Neither does anyone.

mallowdram
3 months ago
3 replies
We know the healthy brain is unpredictable. We suspect error minimization and prediction are not central tenets. We know the brain creates memory via differences in sharp wave ripples. That it's oscillatory. That it neither uses symbols nor represents. That words are wholly external to what we call thought. The authors deal with molecules which are neither arbitrary nor specific. Yet tumors ARE specific, while words are wholly arbitrary. Knowing these things should offer a deep suspicion of ML/LLMs. They have so little to do with how brains work and the units brains actually use (all oscillation is specific, all stats emerge from arbitrary symbols and worse: metaphors) that mistaking LLMs for reasoning/inference is less lexemic hallucination and more eugenic.
Zigurd
3 months ago
1 reply
"That words are wholly external to what we call thought." may be what we should learn, or at least hypothesize, based on what we see LLMs doing. I'm disappointed that AI isn't more of a laboratory for understanding brain architecture, and precisely what is this thing called thought.
mallowdram
3 months ago
The question is how to model the irreducible. And then to concatenate between spatiotemporal neuroscience (the oscillators) and neural syntax (what's oscillating) and add or subtract what the fields are doing to bind that to the surroundings.
quantummagic
3 months ago
1 reply
What do you think about the idea that LLMs are not reasoning/inferring, but are rather an approximation of the result? Just like you yourself might have to spend some effort reasoning, on how a plant grows, in order to answer questions about that subject. When asked, you wouldn't replicate that reasoning, instead you would recall the crystallized representation of the knowledge you accumulated while previously reasoning/learning. The "thinking" in the process isn't modelled by the LLM data, but rather by the code/strategies used to iterate over this crystallized knowledge, and present it to the user.
mallowdram
3 months ago
This is toughest part. We need some kind of analog external that concatenates. It's software, but not necessarily binary, it uses topology to express that analog. It somehow is visual, ie you can see it, but at the same time, it can be expanded specifically into syntax, which the details of are invisible. Scale invariance is probably key.
suddenlybananas
3 months ago
1 reply
We don't know those things about the brain. I don't know why you keep going around HN making wildly false claims about the state of contemporary neuroscience. We know very very little about how higher order cognition works in the brain.
mallowdram
3 months ago
1 reply
Of course we know these things about the brain, and who said anything about higher order cognition? I'd stay current, you seem to be a legacy thinker. I'll needle drop ONE of the references re: unpredictability and brain health, there are about 30, just to keep you in your corner. The rest you'll have to hunt down, but please stop pretending you know what you're talking about.

Your line of attack which is to dismiss from a pretend point of certainty, rather than inquiry and curiosity, seems indicative of the cog-sci/engineering problem in general. There's an imposition based in intuition/folk psychology that suffuses the industry. The field doesn't remain curious to new discoveries in neurobiology, which supplants psychology (psychology is being based, neuro is neural based). What this does is remove the intent of rhetoric/being and suggest brains built our external communication. The question is how and by what regularities. Cog-sci has no grasp of that in the slightest.

https://pubmed.ncbi.nlm.nih.gov/38579270/

suddenlybananas
3 months ago
Your writing reminds me of a schizophrenic.
jjgreen
3 months ago
You can look at it, from the inside.
Certhas
3 months ago
We didn't, but somebody did, so it's possible, and so probabilistic dynamics in high enough dimensions can do it.

We don't understand what LLMs are doing. You can't go from understanding what a transformer is to understanding what an LLM does any more than you can go from understanding what a Neuron is to what a brain does.

awesome_dude
3 months ago
1 reply
I think that the difference can be best explained thus:

I guess that you are most likely going to have cereal for breakfast tomorrow, I also guess that it's because it's your favourite.

vs

I understand that you don't like cereal for breakfast, and I understand that you only have it every day because a Dr told you that it was the only way for you to start the day in a way that aligns with your health and dietary needs.

Meaning, I can guess based on past behaviour and be right, but understanding the reasoning for those choices, that's a whole other ballgame. Further, if we do end up with an AI that actually understands, well, that would really open up creativity, and problem solving.

quantummagic
3 months ago
1 reply
How are the two cases you present fundamentally different? Aren't they both the same _type_ of knowledge? Why do you attribute "true understanding" to the case of knowing what the Dr said? Why stop there? Isn't true understanding knowing why we trust what the doctor said (all those years of schooling, and a presumption of competence, etc)? And why stop there? Why do we value years of schooling? Understanding, can always be taken to a deeper level, but does that mean we didn't "truly" understand earlier? And aren't the data structures needed to encode the knowledge, exactly the same for both cases you presented?
awesome_dude
3 months ago
1 reply
When you ask that question, why don't you just use a corpus of the previous answers to get some result?

Why do you need to ask me, isn't a guess based on past answers good enough?

Or, do you understand that you need to know more, you need to understand the reasoning based on what's missing from that post?

quantummagic
3 months ago
1 reply
I asked that question in an attempt to not sound too argumentative. It was rhetorical. I'm asking you to consider the fact that there isn't actually any difference between the two examples you provided. They're fundamentally the same type of knowledge. They can be represented by the same data structures.

There's _always_ something missing, left unsaid in every example, it's the nature of language.

As for your example, the LLM can be trained to know the underlying reasons (doctor's recommendation, etc.). That knowledge is not fundamentally different from the knowledge that someone tends to eat cereal for breakfast. My question to you, was an attempt to highlight that the dichotomy you were drawing, in your example, doesn't actually exist.

awesome_dude
3 months ago
1 reply
> They're fundamentally the same type of knowledge. They can be represented by the same data structures.

Maybe, maybe one is based on correlation, the other causation.

quantummagic
3 months ago
1 reply
What if the causation had simply been that he enjoyed cereal for breakfast?

In either case, the results are the same, he's eating cereal for breakfast. We can know this fact without knowing the underlying cause. Many times, we don't even know the cause of things we choose to do for ourselves, let alone what others do.

On top of which, even if you think the "cause" is that the doctor told him to eat a healthy diet, do you really know the actual cause? Maybe the real cause, is that the girl he fancies, told him he's not in good enough shape. The doctor telling him how to get in shape is only a correlation, the real cause is his desire to win the girl.

These connections are vast and deep, but they're all essentially the same type of knowledge, representable by the same data structures.

awesome_dude
3 months ago
> In either case, the results are the same, he's eating cereal for breakfast. We can know this fact without knowing the underlying cause. Many times, we don't even know the cause of things we choose to do for ourselves, let alone what others do.

Yeah, no.

Understanding the causation allows the system to provide a better answer.

If they "enjoy" cereal, what about it do they enjoy, and what other possible things can be had for breakfast that also satisfy that enjoyment.

You'll never find that by looking only at the fact that they have eaten cereal for breakfast.

And the fact that that's not obvious to you is why I cannot be bothered going into any more depth on the topic any more. It's clear that you don't have any understanding on the topic beyond a superficial glance.

Bye :)

measurablefunc
3 months ago
3 replies
What part about backtracking is baseless? Typical Prolog interpreters can be implemented in a few MBs of binary code (the high level specification is even simpler & can be in a few hundred KB)¹ but none of the LLMs (open source or not) are capable of backtracking even though there is plenty of room for a basic Prolog interpreter. This seems like a very obvious shortcoming to me that no amount of smooth approximation can overcome.

If you think there is a threshold at which point some large enough feedforward network develops the capability to backtrack then I'd like to see your argument for it.

¹https://en.wikipedia.org/wiki/Warren_Abstract_Machine

bondarchuk
3 months ago
2 replies
Backtracking makes sense in a search context which is basically what prolog is. Why would you expect a next-token-predictor to do backtracking and what should that even look like?
measurablefunc
3 months ago
3 replies
I don't expect a Markov chain to be capable of backtracking. That's the point I am making. Logical reasoning as it is implemented in Prolog interpreters is not something that can be done w/ LLMs regardless of the size of their weights, biases, & activation functions between the nodes in the graph.
bondarchuk
3 months ago
1 reply
Imagine the context window contains A-B-C, C turns out a dead end and we want to backtrack to B and try another branch. Then the LLM could produce outputs such that the context window would become A-B-C-[backtrack-back-to-B-and-don't-do-C] which after some more tokens could become A-B-C-[backtrack-back-to-B-and-don't-do-C]-D. This would essentially be backtracking and I don't see why it would be inherently impossible for LLMs as long as the different branches fit in context.
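
A minimal toy sketch of that append-only style of backtracking (editorial illustration; propose_next and is_dead_end are hypothetical stand-ins for an LLM's proposal and whatever check rejects a branch, not any real API):

  # Toy sketch: append-only backtracking inside a growing context.
  # propose_next and is_dead_end are hypothetical stand-ins, not a real LLM API.

  def propose_next(context, banned):
      for candidate in "ABCDEF":                 # first step not yet ruled out or used
          if candidate not in banned and candidate not in context:
              return candidate
      return None

  def is_dead_end(context):
      return bool(context) and context[-1] == "C"   # pretend branch C never works out

  context, banned = [], set()
  while True:
      step = propose_next(context, banned)
      if step is None:
          break
      context.append(step)
      if is_dead_end(context):
          banned.add(step)
          context.append(f"[backtrack: drop {step}]")   # backtracking is just more tokens

  print(" ".join(context))   # A B C [backtrack: drop C] D E F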
measurablefunc
3 months ago
4 replies
If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain. This is a simple enough problem that can be implemented in a few dozen lines of Prolog but I've never seen a solver implemented as a Markov chain.
sudosysgen
3 months ago
1 reply
You can do that pretty trivially for any fixed-size problem (as in solvable with a fixed-size tape Turing machine); you'll just have a titanically huge state space. The claim of the LLM folks is that the models have a huge state space (they do have a titanically huge state space) and can navigate it efficiently.

Simply have a deterministic Markov chain where each state is a possible value of the tape+state of the TM and which transitions accordingly.

measurablefunc
3 months ago
How are you encoding the state spaces for the sudoku solver specifically?
bboygravity
3 months ago
1 reply
The LLM can just write the Prolog and solve the sudoku that way. I don't get your point. LLMs like Grok 4 can probably one-shot this today with the current state of the art. You can likely just ask it to solve any sudoku and it will do it (by writing code in the background, running it, and returning the result). And this is still very early stage compared to what will be out a year from now.

Why does it matter how it does it or whether this is strictly LLM or LLM with tools for any practical purpose?

PhunkyPhil
3 months ago
1 reply
The point isn't whether the output is correct or not, it's whether the actual net is doing "logical computation" à la Prolog.

What you're suggesting is akin to me saying you can't build a house, then you go and hire someone to build a house. _You_ didn't build the house.

kaibee
3 months ago
I feel like you're kinda proving too much. By the same reasoning, humans/programmers aren't generally intelligent either, because we can only mentally simulate relatively small state spaces of programs, and when my boss tells me to go build a tool, I'm not exactly writing raw x86 assembly. I didn't _build_ the tool, I just wrote text that instructed a compiler how to build the tool. Like the whole reason we invented SAT solvers is because we're not smart in that way. But I feel like you're trying to argue that LLMs at any scale gonna be less capable than an average person?
Ukv
3 months ago
1 reply
> If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain

Have each of the Markov chain's states be one of 10^81 possible sudoku grids (a 9x9 grid of digits 1-9 and blank), then calculate the 10^81-by-10^81 transition matrix that takes each incomplete grid to the valid complete grid containing the same numbers. If you want you could even have it fill one square at a time rather than jump right to the solution, though there's no need to.

Up to you what you do for ambiguous inputs (select one solution at random to give 1.0 probability in the transition matrix? equally weight valid solutions? have the states be sets of boards and map to set of all valid solutions?) and impossible inputs (map to itself? have the states be sets of boards and map to empty set?).

Could say that's "cheating" by pre-computing the answers and hard-coding them in a massive input-output lookup table, but to my understanding that's also the only sense in which there's equivalence between Markov chains and LLMs.
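
To make the construction concrete at toy scale, here is the same lookup-table idea for a 2x2 Latin square instead of a 9x9 sudoku (editorial sketch; the state space is small enough to enumerate, which is exactly the "cheating" being objected to):

  # Toy version of the lookup-table construction: a 2x2 Latin square
  # (each row/column contains 1 and 2) instead of a 9x9 sudoku. States are
  # grids with 0 = blank; the "transition matrix" is a dict sending each
  # uniquely solvable partial grid straight to its completion with probability 1.
  from itertools import product

  def valid(g):
      rows = [g[0:2], g[2:4]]
      cols = [g[0::2], g[1::2]]
      return all(sorted(x) == [1, 2] for x in rows + cols)

  solutions = [g for g in product([1, 2], repeat=4) if valid(g)]

  def completions(partial):
      return [s for s in solutions
              if all(p in (0, si) for p, si in zip(partial, s))]

  transition = {}
  for partial in product([0, 1, 2], repeat=4):
      fits = completions(partial)
      if len(fits) == 1:                  # unambiguous: jump straight to the answer
          transition[partial] = fits[0]

  print(transition[(1, 0, 0, 0)])         # (1, 2, 2, 1)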

measurablefunc
3 months ago
1 reply
There are multiple solutions for each incomplete grid so how are you calculating the transitions for a grid w/ a non-unique solution?

Edit: I see you added questions for the ambiguities, but modulo those choices your solution will only almost work, because it is not entirely extensionally equivalent. The transition graph and solver are almost extensionally equivalent, but whereas the Prolog solver will backtrack, there is no backtracking in the Markov chain, and you have to re-run the chain multiple times to find all the solutions.

Ukv
3 months ago
> but whereas the Prolog solver will backtrack there is no backtracking in the Markov chain and you have to re-run the chain multiple times to find all the solutions

If you want it to give all possible solutions at once, you can just expand the state space to the power-set of sudoku boards, such that the input board transitions to the state representing the set of valid solved boards.

lelanthran
3 months ago
> If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain. This is a simple enough problem that can be implemented in a few dozen lines of Prolog but I've never seen a solver implemented as a Markov chain.

I think it can be done. I started a chatbot that works like this some time back (2024) but paused work on it since January.

In brief, you shorten the context by discarding the context that didn't work out.

vidarh
3 months ago
1 reply
A (2,3) Turing machine can be trivially implemented with a loop around an LLM that treats the context as an IO channel, and a Prolog interpreter runs on a Turing-complete computer, so per Turing equivalence you can run a Prolog interpreter on an LLM.

Of course this would be pointless, but it demonstrates that a system where an LLM provides the logic can backtrack, as there's nothing computationally special about backtracking.

That current UIs to LLMs are set up for conversation-style use that makes this harder isn't an inherent limitation of what we can do with LLMs.

measurablefunc
3 months ago
1 reply
Loop around an LLM is not an LLM.
vidarh
3 months ago
1 reply
Then no current systems you are using are LLMs
measurablefunc
3 months ago
1 reply
Choice-free feedforward graphs are LLMs. The inputs/outputs are extensionally equivalent to the context and transition probabilities of a Markov chain. What exactly is your argument? Because what it looks like to me is that you're simply making a Turing tarpit argument, which does not address any of my points.
vidarh
3 months ago
My argument is that artificially limiting what you argue about to a subset of the systems people are actually using and arguing about the limitations of that makes your argument irrelevant to what people are actually using.
Certhas
3 months ago
Take a finite tape Turing machine with N states and tape length T and N^T total possible tape states.

Now consider that you have a probability for each state instead of a definite state. The transitions of the Turing machine induce transitions of the probabilities. These transitions define a Markov chain on a N^T dimensional probability space.

Is this useful? Absolutely not. It's just a trivial rewriting. But it shows that high dimensional spaces are extremely powerful. You can trade off sophisticated transition rules for high dimensionality.
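
A tiny worked version of that rewriting (editorial sketch; the arbitrary update rule just stands in for the Turing machine's step):

  # Toy version of the rewriting: a deterministic update over 8 abstract
  # "machine configurations" becomes a 0/1 Markov transition matrix, and
  # evolving a probability vector reproduces the deterministic run.
  import numpy as np

  step = lambda s: (3 * s + 1) % 8       # arbitrary deterministic update rule

  P = np.zeros((8, 8))
  for s in range(8):
      P[s, step(s)] = 1.0                # each row is a point mass: a degenerate Markov chain

  p = np.zeros(8)
  p[2] = 1.0                             # start with all probability on state 2
  for _ in range(3):
      p = p @ P                          # purely linear (Markov) evolution
  print(int(p.argmax()), step(step(step(2))))   # 3 3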

PaulHoule (Author)
3 months ago
2 replies
If you want general-purpose generation then it has to be able to respect constraints (e.g. figure art of a person has 0..1 belly buttons; 0..2 legs is unspoken). As it is, generative models usually get those things right, but they don't always if they can stick together the tiles they use internally in some combination that makes sense locally but not globally.

General intelligence may not be SAT/SMT solving but it has to be able to do it, hence, backtracking.

Today I had another of those experiences of the weaknesses of LLM reasoning, one that happens a lot when doing LLM-assisted coding. I was trying to figure out how to rebuild some CSS after the HTML changed for accessibility purposes, and I got a good idea for how to do it from talking to the LLM, but at that point the context was poisoned, probably because there was a lot of content in the context describing what we were thinking about at different stages of the conversation, which evolved considerably. It lost its ability to follow instructions: I'd tell it specifically to do this or do that and it just wouldn't do it properly, and this happens a lot if a session goes on too long.

My guess is that the attention mechanism is locking on to parts of the conversation which are no longer relevant to where I think we're at and in general the logic that considers the variation of either a practice (instances) or a theory over time is a very tricky problem and 'backtracking' is a specific answer for maintaining your knowledge base across a search process.

XenophileJKO
3 months ago
1 reply
What if you gave the model a tool to "willfully forget" a section of context. That would be easy to make. Hmm I might be onto something.
PaulHoule (Author)
3 months ago
I guess you could have some kind of mask that would let you suppress some of the context from matching, but my guess is that kind of thing might cause problems as often as it solves them.

Back when I was thinking about commonsense reasoning with logic it was obviously a much more difficult problem to add things like "P was true before time t", "there will be some time t in the future such that P is true", "John believes Mary believes that P is true", "It is possible that P is true", "there is some person q who believes that P is true", particularly when you combine these qualifiers. For one thing you don't even have a sound and complete strategy for reasoning over first-order logic + arithmetic, but you also have a combinatorial explosion over the qualifiers.

Back in the day I thought it was important to have sound reasoning procedures but one of the reasons none of my foundation models ever became ChatGPT was that I cared about that and I really needed to ask "does change C cause an unsound procedure to get the right answer more often?" and not care if the reasoning procedure was sound or not.

photonthug
3 months ago
1 reply
> General intelligence may not be SAT/SMT solving but it has to be able to do it, hence, backtracking.

Just to add some more color to this. For problems that completely reduce to formal methods or have significant subcomponents that involve it, combinatorial explosion in state-space is a notorious problem and N variables is going to stick you with 2^N at least. It really doesn't matter whether you think you're directly looking at solving SAT/search, because it's too basic to really be avoided in general.

When people talk optimistically about hallucinations not being a problem, they generally mean something like "not a problem in the final step" because they hope they can evaluate/validate something there, but what about errors somewhere in the large middle? So even with a very tiny chance of hallucinations in general, we're talking about an exponential number of opportunities in implicit state-transitions to trigger those low-probability errors.

The answer to stuff like this is supposed to be "get LLMs to call out to SAT solvers". Fine, definitely moving from state-space to program-space is helpful, but it also kinda just pushes the problem around as long as the unconstrained code generation is still prone to hallucination.. what happens when it validates, runs, and answers.. but the spec was wrong?

Personally I'm most excited about projects like AlphaEvolve that seem fearless about hybrid symbolics / LLMs and embracing the good parts of GOFAI that LLMs can make tractable for the first time. Instead of the "reasoning is dead, long live messy incomprehensible vibes", those guys are talking about how to leverage earlier work, including things like genetic algorithms and things like knowledge-bases.[0] Especially with genuinely new knowledge-discovery from systems like this, I really don't get all the people who are still staunchly in either an old-school / new-school camp on this kind of thing.

[0]: MLST on the subject: https://www.youtube.com/watch?v=vC9nAosXrJw

PaulHoule (Author)
3 months ago
When I was interested in information extraction I saw the problem of resolving language to a semantic model [1] as containing an SMT problem. That is, words are ambiguous, sentences can parse different ways, you have to resolve pronouns and explicit subjects, objects and stuff like that.

Seen that way the text is a set of constraints with a set of variables for all the various choices you make determining it. And of course there is a theory of the world such that "causes must precede their effects" and all the world knowledge about instances such as "Chicago is in Illinois".

The problem is really worse than that because you'll have to parse sentences that weren't generated by sound reasoners or that live in a different microtheory, deal with situations that are ambiguous anyway, etc. Which is why that program never succeeded.

[1] in short: database rows

skissane
3 months ago
2 replies
> but none of the LLMs (open source or not) are capable of backtracking even though there is plenty of room for a basic Prolog interpreter. This seems like a very obvious shortcoming to me that no amount of smooth approximation can overcome.

The fundamental autoregressive architecture is absolutely capable of backtracking… we generate next token probabilities, select a next token, then calculate probabilities for the token thereafter.

There is absolutely nothing stopping you from “rewinding” to an earlier token, making a different selection and replaying from that point. The basic architecture absolutely supports it.

Why then has nobody implemented it? Maybe, this kind of backtracking isn’t really that useful.
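
A minimal sketch of that rewind-and-replay loop (editorial illustration; candidates and acceptable are hypothetical stand-ins for an LLM's ranked next-token proposals and an external check, not any real API):

  # Sketch of rewind-and-replay decoding: try the model's ranked next tokens in
  # order and back up to an earlier position whenever a prefix is rejected.
  # candidates and acceptable are hypothetical stand-ins, not a real API.

  def generate_with_rewind(candidates, acceptable, prefix=(), depth=5):
      if len(prefix) == depth:
          return prefix
      for tok in candidates(prefix):             # ranked proposals for the next position
          new = prefix + (tok,)
          if not acceptable(new):                # reject and fall through = rewind this choice
              continue
          done = generate_with_rewind(candidates, acceptable, new, depth)
          if done is not None:
              return done
      return None                                # every continuation failed: caller rewinds further

  candidates = lambda prefix: ["a", "b"]         # toy "model": always proposes a, then b
  acceptable = lambda prefix: prefix.count("a") <= 2
  print(generate_with_rewind(candidates, acceptable))   # ('a', 'a', 'b', 'b', 'b')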

measurablefunc
3 months ago
1 reply
Where is this spelled out formally and proven logically?
skissane
3 months ago
2 replies
LLM backtracking is an active area of research, see e.g.

https://arxiv.org/html/2502.04404v1

https://arxiv.org/abs/2306.05426

And I was wrong that nobody has implemented it, as these papers prove people have… it is just the results haven’t been sufficiently impressive to support the transition from the research lab to industrial use - or at least, not yet

afiori
3 months ago
1 reply
I would expect to see something like this soonish as around now we are seeing the end of training scaling and the beginning of inference scaling
foota
3 months ago
This is a neat observation, training has been optimized to hell and inference is just beginning.
measurablefunc
3 months ago
> Empirical evaluations demonstrate that our proposal significantly enhances the reasoning capabilities of LLMs, achieving a performance gain of over 40% compared to the optimal-path supervised fine-tuning method.
versteegen
3 months ago
Yes, but anyway, LLMs themselves are perfectly capable of backtracking reasoning while sampling is run forwards only, in the same way humans do: by deciding something doesn't work and trying something else. Humans DON'T time travel backwards in time and never have the erroneous thought in the first place.
Certhas
3 months ago
I know that if you go large enough you can do any finite computation using only fixed transition probabilities. This is a trivial observation. To repeat what I posted elsewhere in this thread:

Take a finite tape Turing machine with N states and tape length T and N^T total possible tape states.

Now consider that you have a probability for each state instead of a definite state. The transitions of the Turing machine induce transitions of the probabilities. These transitions define a Markov chain on a N^T dimensional probability space.

Is this useful? Absolutely not. It's just a trivial rewriting. But it shows that high dimensional spaces are extremely powerful. You can trade off sophisticated transition rules for high dimensionality.

You _can_ continue this line of thought though in more productive directions. E.g. what if the input of your machine is genuinely uncertain? What if the transitions are not precise but slightly noisy? You'd expect that the fundamental capabilities of a noisy machine wouldn't be that much worse than those of a noiseless ones (over finite time horizons). What if the machine was built to be noise resistant in some way?

All of this should regularize the Markov chain above. If it's more regular you can start thinking about approximating it using a lower rank transition matrix.

The point of this is not to say that this is really useful. It's to say that there is no reason in my mind to dismiss the purely mathematical rewriting as entirely meaningless in practice.

patrick451
3 months ago
1 reply
> Either way, I can get arbitrarily good approximations of arbitrary nonlinear differential/difference equations using only linear probabilistic evolution at the cost of a (much) larger state space.

This is impossible. When driven by a sinusoid, a linear system will only ever output a sinusoid with exactly the same frequency but a different amplitude and phase regardless of how many states you give it. A non-linear system can change the frequency or output multiple frequencies.
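
For reference, the standard steady-state fact behind this claim (an editorial note), for a stable linear time-invariant system with impulse response $h$ and transfer function $H$:

  $$u(t) = \sin(\omega t) \;\Longrightarrow\; y(t) = (h * u)(t) = |H(j\omega)|\,\sin\!\bigl(\omega t + \angle H(j\omega)\bigr),$$

so at steady state a purely linear time-invariant system can only rescale and phase-shift a sinusoid; it never introduces new frequencies.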

diffeomorphism
3 months ago
As far as I understand, the terminology says "linear" but means compositions of affine (with cutoffs etc). That gives you arbitrary polynomials and piecewise affine, which are dense in most classes of interest.

Of course, in practice you don't actually get arbitrary degree polynomials but some finite degree, so the approximation might still be quite bad or inefficient.

baselessness
3 months ago
1 reply
That's what this debate has been reduced to. People point out the logical and empirical, by now very obvious, limitations of LLMs. And boosters are the equivalent of Chopra's "quantum physics means anything is possible", saying "if you add enough information to a system anything is possible".
yorwba
3 months ago
1 reply
The argument isn't that anything is possible for LLMs, but that representing LLMs as Markov chains doesn't demonstrate a limitation, because the resulting Markov chain would be huge, much larger than the LLM, and anything that is possible is possible with a large enough Markov chain.

If you limit yourself to Markov chains where the full transition matrix can be stored in a reasonable amount of space (which is the kind of Markov chain that people usually have in mind when they think that Markov chains are very limited), LLMs cannot be represented as such a Markov chain.

If you want to show limitations of LLMs by reducing them to another system of computation, you need to pick one that is more limited than LLMs appear to be, not less.

ariadness
3 months ago
1 reply
> anything that is possible is possible with a large enough Markov chain

This is not true. Do you mean anything that is possible to compute? If yes, then you missed the point entirely.

yorwba
3 months ago
It's mostly a consequence of the laws of physics having the Markov property. So the time evolution of any physical system can be modeled as a Markov process. Of course the corresponding state space may in general be infinite.
logicchains
3 months ago
1 reply
LLMs are not formally equivalent to Markov chains, they're more powerful; transformers with sufficient chain of thought can solve any problem in P: https://arxiv.org/abs/2310.07923.
measurablefunc
3 months ago
2 replies
If you think there is a mistake in this argument then I'd like to know where it is: https://markov.dk.workers.dev/.
CamperBob2
3 months ago
1 reply
A Markov chain is memoryless by definition. A language model has a context, not to mention state in the form of the transformer's KV store.

The whole analogy is just pointless. You might as well call an elephant an Escalade because they weigh the same.

measurablefunc
3 months ago
1 reply
Where is the logical mistake in the linked argument? If there is a mistake then I'd like to know what it is & the counter-example that invalidates the logical argument.
versteegen
3 months ago
1 reply
A Transformer with a length n context window implements an order 2n-1 Markov chain¹. That is correct. That is also irrelevant in the real world, because LLMs aren't run for that many tokens (as results are bad). Before it hits that limit, there is nothing requiring it to have any of the properties of a Markov chain. In fact, because the state space is k^n (alphabet size k), you might not revisit a state until generating k^n tokens.

¹ Depending on context window implementation details, but that is the maximum, because the states n tokens back were computed from the n tokens before that. The minimum of course is an order n-1 Markov chain.

versteegen
3 months ago
Specifically, an order n Markov chain such as a transformer, if not otherwise restricted, can have any joint distribution you wish for the first n-1 steps: any extensional property. In which case you have to look at intensional properties to actually draw non-vacuous conclusions.

I would like to comment that there are a lot of papers out there on what transformers can or can't do that are misleading, often misunderstood, or abstract so far from transformers as implemented and used that they are pure theory.

logicchains
3 months ago
1 reply
It assumes the LLM only runs once, i.e. it doesn't account for chain of thought, which makes the program not memoryless.
measurablefunc
3 months ago
There is no such assumption. You can run/sample the LLM & the equivalent Markov chain as many times as you want & the logical analysis remains the same b/c the extensional equivalence between the LLM & Markov chain has nothing to do w/ how many times the trajectories are sampled from each one.
jules
3 months ago
2 replies
What does this predict about LLMs ability to win gold at the International Mathematical Olympiad?
measurablefunc
3 months ago
1 reply
Same thing it does about their ability to drive cars.
jules
3 months ago
1 reply
So, nothing.
measurablefunc
3 months ago
It's definitely something, but it might not be apparent to those who do not understand the distinctions between intensionality & extensionality.
godelski
3 months ago
1 reply
Depends which question you're asking.

Ability to win a gold medal as if they were scored similarly to how humans are scored?

or

Ability to win a gold medal as determined by getting the "correct answer" to all the questions?

These are subtly two very different questions. In these kinds of math exams how you get to the answer matters more than the answer itself. i.e. You could not get high marks through divination. To add some clarity, the latter would be like testing someone's ability to code by only looking at their results to some test functions (oh wait... that's how we evaluate LLMs...). It's a good signal but it is far from a complete answer. It very much matters how the code generates the answer. Certainly you wouldn't accept code if it does a bunch of random computations before divining an answer.

The paper's answer to your question (assuming scored similarly to humans) is "Don’t count on it". Not a definitive "no" but they strongly suspect not.

jules
3 months ago
1 reply
The type of reasoning by the OP and the linked paper obviously does not work. The observable reality is that LLMs can do mathematical reasoning. A cursory interaction with state-of-the-art LLMs makes this evident, as does their IMO gold medal, scored the way humans are. You cannot counter observable reality with generic theoretical considerations about Markov chains or pretraining scaling laws or floating point precision. The irony is that LLMs can explain why that type of reasoning is faulty:

> Any discrete-time computation (including backtracking search) becomes Markov if you define the state as the full machine configuration. Thus “Markov ⇒ no reasoning/backtracking” is a non sequitur. Moreover, LLMs can simulate backtracking in their reasoning chains. -- GPT-5

godelski
3 months ago
1 reply

  > The observable reality is that LLMs can do mathematical reasoning
I still can't get these machines to reliably perform basic subtraction[0]. The result is stochastic, so I can get the right answer, but I have yet to reproduce one where the actual logic is correct[1,2]. Both [1,2] make the same mistake, and in [2] you see it just say "fuck it, skip to the answer".

  > You cannot counter observable reality
I'd call [0,1,2] "observable". These types of errors are quite common, so maybe I'm not the one with lying eyes.

[0] https://chatgpt.com/share/68b95bf5-562c-8013-8535-b61a80bada...

[1] https://chatgpt.com/share/68b95c95-808c-8013-b4ae-87a3a5a42b...

[2] https://chatgpt.com/share/68b95cae-0414-8013-aaf0-11acd0edeb...

FergusArgyll
3 months ago
2 replies
Why don't you use a state of the art model? Are you scared it will get it right? Or are you just not aware of reasoning models in which case you should get to know the field
godelski
3 months ago
1 reply
Careful there, without a /s people might think you're being serious.
FergusArgyll
3 months ago
1 reply
I am being serious, why don't you use a SOTA model?
godelski
3 months ago
1 reply
Sorry, I've just been hearing this response for years now... GPT-5 not SOTA enough for you all now? I remember when people told me to just use 3.5

  - Gemini 2.5 Pro[0], the top model on LLM Arena. This SOTA enough for you? It even hallucinated Python code!

  - Claude Opus 4.1, sharing that chat shares my name, so here's a screenshot[1]. I'll leave that one for you to check. 

  - Grok4 getting the right answer but using bad logic[2]

  - Kimi K2[3]

  - Mistral[4]
I'm sorry, but you can fuck off with your goal post moving. They all do it. Check yourself.

  > I am being serious
Don't lie to yourself, you never were

People like you have been using that copy-paste piss-poor logic since the GPT-3 days. The same exact error existed since those days on all those models just as it does today. You all were highly disingenuous then, and still are now. I know this comment isn't going to change your mind because you never cared about the evidence. You could have checked yourself! So you and your paperclip cult can just fuck off

[0] https://g.co/gemini/share/259b33fb64cc

[1] https://0x0.st/KXWf.png

[2] https://grok.com/s/c2hhcmQtNA%3D%3D_e15bb008-d252-4b4d-8233-...

[3] http://0x0.st/KXWv.png

[4] https://chat.mistral.ai/chat/8e94be15-61f4-4f74-be26-3a4289d...

FergusArgyll
3 months ago
That's very weird, before I wrote my comment I asked gpt5-thinking (yes, once) and it nailed it. I just assumed the rest would get it as well, gemini-2.5 is shocking (the code!) I hereby give you leave to be a curmudgeon for another year...
pfortuny
3 months ago
Have you tried to get google ai studio (nano-banana) to draw a 9-sided polygon? Just that.

https://ibb.co/Qj8hv76h

Anon84
3 months ago
1 reply
There definitely is, but Marcus is not the only one talking about it. For example, we covered this paper in one of our internal journal clubs a few weeks ago: https://arxiv.org/abs/2410.02724
godelski
3 months ago
2 replies
I just want to highlight this comment and stress how big of a field ML actually is. I think it's even much bigger than most people in ML research know. It's really unfortunate that the hype has grown so much that even in the research community these areas are being overshadowed and even dismissed[0]. It's been interesting watching this evolution and how we're reapproaching symbolic reasoning while avoiding that phrase.

There's lots of people doing theory in ML and a lot of these people are making strides which others stand on (ViT and DDPM are great examples of this). But I never expect these works to get into the public eye as the barrier to entry tends to be much higher[1]. But they certainly should be something more ML researchers are looking at.

That is to say: Marcus is far from alone. He's just loud

[0] I'll never let go of how Yi Tay said "fuck theorists" and just spent his time on Twitter calling the KAN paper garbage instead of making any actual critique. There seem to be too many who are happy to let the black box remain a black box because low-level research has yet to accumulate to the point where it can fully explain an LLM.

[1] You get tons of comments like this (the math being referenced is pretty basic, comparatively. Even if more advanced than what most people are familiar with) https://news.ycombinator.com/item?id=45052227

calf
3 months ago
1 reply
I hazard to imagine that LLMs are a special subset of Markov chains, and this subset has interesting properties; it seems a bit reductive to dismiss LLMs as "merely" Markov chains. It's what we can do with this unusual subset (e.g. maybe incorporate it in a larger AI system) that is the interesting question.
measurablefunc
3 months ago
1 reply
You don't have to imagine, there is a logically rigorous argument¹ that establishes the equivalence. There is also nothing unusual about neural networks or Markov chains. You've just been mystified by the marketing around them so you think there is something special about them when they're just another algorithm for approximating different kinds of compressible signals & observations about the real world.

¹https://markov.dk.workers.dev/

calf
3 months ago
1 reply
I'll have you realize you are replying (quite arrogantly by the way) to someone who wrote part of their PhD dissertation on models of computation. Try again :)

Besides, it is patently false. Not every Markov chain is an LLM; an actual LLM outputs human-readable English, while the vast majority of Markov chains do not map onto that set of models.

measurablefunc
3 months ago
1 reply
Appeals to authority do not change the logical content of an argument. You are welcome to point to the part of the linked argument that is incorrect & present a counter-example to demonstrate the error.
godelski
3 months ago
1 reply
Calf isn't making an appeal to authority. They are saying "I'm not the idiot you think I am." Two very different things. Likely also a request to talk more mathy to them.

I read your link, btw, and I just don't know how someone can do all that work and not establish the Markov property. That's like the first step. Speaking of which, I'm not sure I even understand the first definition in your link. I've never heard the phrase "computably countable" before, but I have heard "computable number," and those numbers are countable. Is that what it is referring to? I'll assume so. (My dissertation wasn't on models of computation; it was on neural architectures.) In 1.2.2, is there a reason for strictly uniform noise? It also seems to run counter to the deterministic setting.

Regardless, I agree with Calf: it's very clear MCs are not equivalent to LLMs. That is trivially a false statement. But whether an LLM can be represented via a MC is a different question. I did find this paper on the topic[0], but I need to give it a better read. It does look like it was rejected from ICLR[1], though ML review is very noisy. I'm including the link since the comments are more informative than the accept/reject signal.

(@Calf, sorry, I didn't respond to your comment because I wasn't trying to make a comment about the relationship of LLMs and MCs. Only that there was more fundamental research being overshadowed)

[0] https://arxiv.org/abs/2410.02724

[1] https://openreview.net/forum?id=RDFkGZ9Dkh

measurablefunc
3 months ago
1 reply
If it's trivially false then you should be able to present a counter-example, but so far no one has done that; instead there has been a lot of hand-waving about "trivialities" of one sort or another.

Neural networks are stateless; the output only depends on the current input so the Markov property is trivially/vacuously true. The reason for the uniform random number for sampling from the CDF¹ is that if you have the cumulative distribution function of a probability density, then you can sample from the distribution by using a uniformly distributed RNG.

¹https://stackoverflow.com/questions/60559616/how-to-sample-f...
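
For reference, the inverse-CDF sampling being described is standard. A minimal sketch in Python, using a made-up next-token distribution rather than anything taken from a real model:

  import random

  # Toy next-token distribution (illustrative values, not from any real model).
  probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "sat": 0.05}

  def sample(probs):
      """Inverse-CDF sampling: draw u ~ Uniform(0, 1) and walk the CDF."""
      u = random.random()
      cumulative = 0.0
      for token, p in probs.items():
          cumulative += p
          if u <= cumulative:
              return token
      return token  # guard against floating-point rounding at the tail

  print(sample(probs))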

godelski
3 months ago
You want me to show that it is trivially false that all neural networks are Markov chains? We could point to an RNN, which doesn't have the Markov property. Another trivial case is when the rows do not sum to 1: the internal states of neural networks are not required to be probability distributions. In fact, this isn't a requirement anywhere in a neural network, so whatever you want to call the transition matrix, you're going to have issues.

Or the inverse of this? That all Markov chains are neural networks? Sure. Well sure, here's my transition matrix: [1].

I'm quite positive an LLM would be able to give you more examples.

  > the output only depends on the current input so the Markov property is trivially/vacuously true.
It's pretty clear you did not get your PhD in ML.

  > The reason for the uniform random number 
I think you're misunderstanding. Maybe I'm misunderstanding. But I'm failing to understand why you're jumping to the CDF. I also don't understand how this answers my question, since there are other ways to sample from a distribution knowing only its CDF without using the uniform distribution. You can always convert to the uniform distribution, and there are lots of tricks to do that. And the distribution in that SO post is the Rayleigh distribution, so we don't even need to do that. My question was not about whether uniform is convenient, but whether it is a requirement. But this just doesn't seem relevant at all.
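
One way to make the RNN counterexample above concrete: a hand-rolled recurrent model whose hidden state tracks the parity of a token seen arbitrarily far back, so the visible token stream is not Markov of any fixed order. A toy sketch (illustrative only, not taken from the linked argument):

  # A hand-rolled recurrent model whose hidden state tracks the parity of how
  # many "a" tokens it has seen. The visible output depends on tokens arbitrarily
  # far back, so the token stream is not Markov of any fixed order k.
  def next_token(prefix):
      hidden = 0
      for tok in prefix:       # recurrent update: h_{t+1} = f(h_t, x_t)
          if tok == "a":
              hidden ^= 1
      return "odd" if hidden else "even"

  k = 3
  p1 = ["a"] + ["b"] * k       # one "a", then k copies of "b"
  p2 = ["a", "a"] + ["b"] * k  # two "a"s, then the same k copies of "b"
  assert p1[-k:] == p2[-k:]    # identical length-k recent context
  print(next_token(p1), next_token(p2))  # -> odd even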
voidhorse
3 months ago
1 reply
That linked comment was so eye-opening. It suddenly made sense to me why people who are presumably technical, and thus shouldn't even be entertaining the notion that LLMs reason (and who should further realize that the use and choice of this term was pure marketing strategy), are giving it the time of day. When so many of the enthusiasts can't even get enough math under their belt to understand basic claims, it's no wonder the industry is a complete circus right now.
godelski
3 months ago
Let me introduce you to one of X's former staff members arguing that there is no such thing as deep knowledge or expertise[0].

I would love to tell you that I don't meet many people working in AI that share this sentiment, but I'd be lying.

And just for fun, here's a downvoted comment of mine, despite my follow-up comments that evidence my point being upvoted[1] (I got a bit pissed in that last one). The point here is that most people don't want to hear the truth; they are just glossing over things. But I think the two biggest things I've learned from the modern AI movement are: 1) gradient descent and scale are far more powerful than I thought, and 2) I now understand how used-car salesmen are so effective even on people I once thought smart. People love their sycophants...

I swear, we're going to make AGI not by making the AI smarter but by making the people dumber...

[0] https://x.com/yacineMTB/status/1836415592162554121

[1] https://news.ycombinator.com/item?id=45122931

tim333
3 months ago
2 replies
Humans can do symbolic understanding that seems to rest on a rather flaky probabilistic neural network in our brains, or at least mine does. I can do maths and the like, but there's quite a lot of trial and error and double-checking involved.

GPT5 said it thinks it's fixable when I asked it:

>Marcus is right that LLMs alone are not the full story of reasoning. But the evidence so far suggests the gap can be bridged—either by scaling, better architectures, or hybrid neuro-symbolic approaches.

afiori
3 months ago
2 replies
I sorta agree with you, but replying to "LLM can't reason" with "an LLM says they do" is wild
tim333
3 months ago
1 reply
I don't have a strong opinion on whether LLMs can reason or not. I think they can a bit, but not very well. I think that also applies to many humans, though. I was struck that, to my eyes, GPT-5's take on the question seemed better thought out than Gary Marcus's, who is pretty biased toward the "LLMs are rubbish" school.
afiori
3 months ago
Most of the arguments for the impossibility of intelligence in LLMs either require very restricted environments (ChatGPT might not be able to tell how many r's are in "strawberry", but it can write a Python script to do so, call it if given access to a shell or similar, and understand the answer) or implicitly imply that human brains have magic powers beyond Turing completeness.
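
The tool-use point in the parenthetical is easy to make concrete. The kind of script a model might emit is a one-liner; the example below is illustrative, not the output of any particular model:

  # Count a letter directly instead of relying on the model's tokenizer.
  word = "strawberry"
  print(word.count("r"))  # prints 3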
JohnKemeny
3 months ago
I asked ChatGPT and it agrees with the statement that it is indeed wild
wolvesechoes
3 months ago
And here I thought the gap is bridged by giving more billions to Sam Altman.
vidarh
3 months ago
3 replies
> Probabilistic generative models are fun but no amount of probabilistic sequence generation can be a substitute for logical reasoning.

Unless you claim either that humans can't do logical reasoning, or that humans exceed the Turing computable, this reasoning is illogical due to Turing equivalence, given that you can trivially wire an LLM into a Turing-complete system.

And either of those two claims lacks evidence.

godelski
3 months ago
1 reply

  > you can trivially wire an LLM into a Turing complete system
Please don't do the "the proof is trivial and left to the reader"[0].

If it is so trivial, show it. Don't hand wave, "put up or shut up". I think if you work this out you'll find it isn't so trivial...

I'm aware of some works but at least every one I know of has limitations that would not apply to LLMs. Plus, none of those are so trivial...

[0] https://en.wikipedia.org/wiki/Proof_by_intimidation

vidarh
3 months ago
1 reply
You can do it yourself by setting temperature to zero and asking an LLM to execute the rules of a (2,3) Turing machine.

Since temperature zero makes it deterministic, you only need to test one step for each state and symbol combination.

Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?

Either you're being intentionally obtuse, or you don't understand just how simple a minimal Turing machine is.
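
To make the scale of the claim concrete, here is a sketch of the scaffolding being described. The transition table is an arbitrary toy machine with six rules (not Wolfram's (2,3) machine), and the dictionary lookup in next_step stands in for the temperature-zero LLM call the parent proposes; no real LLM API is shown:

  # Driver loop for a tiny Turing machine. In the proposal above, next_step()
  # would be a temperature-0 LLM prompted with the six transition rules; here
  # it is a plain dictionary lookup so the sketch stays self-contained.
  from collections import defaultdict

  # (state, symbol) -> (write, move, next_state). An arbitrary toy table that
  # writes three 1s and halts; illustrative only.
  RULES = {
      ("A", 0): (1, +1, "B"), ("A", 1): (1, +1, "B"),
      ("B", 0): (1, +1, "C"), ("B", 1): (1, +1, "C"),
      ("C", 0): (1, +1, "HALT"), ("C", 1): (1, +1, "HALT"),
  }

  def next_step(state, symbol):
      return RULES[(state, symbol)]  # stand-in for the LLM call

  def run(max_steps=100):
      tape = defaultdict(int)        # unbounded tape of 0s
      head, state = 0, "A"
      for _ in range(max_steps):
          if state == "HALT":
              break
          write, move, state = next_step(state, tape[head])
          tape[head] = write
          head += move
      return [tape[i] for i in sorted(tape)]

  print(run())  # -> [1, 1, 1]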

godelski
3 months ago
1 reply

  > Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?
Please show it instead of doubling down. It's trivial, right? So it is easier than responding to me. That'll end the conversation right here and now.

Do I think you can modify an LLM to be a Turing Machine, yeah. Of course. But at this point it doesn't seem like we're actually dealing with an LLM anymore. In other comments you're making comparisons to humans; are you suggesting humans are deterministic? If not, well, I see a flaw in your proof.

vidarh
3 months ago
I've given an example prompt you can use as a basis in another comment, but let me double down, because it really matters that you seem to think this is a complex problem:

> That'll end the conversation right here and now.

We both know that isn't true, because it is so trivial that if you had any intention of being convinced, you'd have accepted the point already.

Do you genuinely want me to believe that you think an LLM can't act as a simple lookup from 6 keys (3 states, 2 symbols) to 6 tuples?

Because that is all it takes to show that an LLM + a loop can act like a Turing machine given the chance.

If you understand Turing machines, this is obvious. If you don't, even executing the steps personally per the example I gave in another comment is not likely to convince you.

> Do I think you can modify an LLM to be a Turing Machine, yeah. Of course.

There's no need to modify one. This can be done by enclosing an LLM in simple scaffolding, or you can play it out in a chat as long as you can set temperature to 0 (it will work without that as well to an extent, but you can't guarantee that it will keep working)

> But at this point it doesn't seem like we're actually dealing with an LLM anymore.

Are humans no longer human because we can act like a Turing machine?

The point is that anything that is Turing complete is computationally equivalent to anything else that is Turing complete, so demonstrating Turing-completeness is, absent any evidence that it is possible to compute functions outside the Turing computable, sufficient for it to be reasonable to assert equivalence in computational power.

The argument is not that any specific given LLM is capable of reasoning like a human, but to argue that there is no fundamental limit preventing LLMs from reasoning like a human.

> are you suggesting humans are deterministic?

I'm outright claiming we don't know of any mechanism by which we can calculate functions exceeding the Turing computable, nor have we ever seen evidence of it, nor do we know what that would even look like.

If you have any evidence that we can, or any evidence it is even possible - something that'd get you a Nobel Prize if you could show it - then by all means, enlighten us.

voidhorse
3 months ago
3 replies
Such a system redefines logical reasoning to the point that hardly any typical person's definition would agree.

It's Searle's Chinese Room scenario all over again, which everyone seems to have forgotten amidst the BS marketing storm around LLMs. A person with no knowledge of Chinese who follows a set of instructions and reads from a dictionary to translate texts is a substitute for hiring a translator who understands Chinese, yet we would not claim that this person understands Chinese.

An LLM hooked up to a Turing machine would be similar with respect to logical reasoning. When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically. Instead, the process of deduction makes the line of reasoning decidedly not stochastic. I can't believe we've gotten to such a mad place that basic notions like logical deduction are being confused with stochastic processes. Ultimately, I would agree that it all comes back to the problem of other minds: you either take a fully reductionist stance and claim the brain and intellection are nothing more than probabilistic neural firing, or you take a non-reductionist stance and assume there may be more to it. In either case, I think that claiming LLMs+tools are equivalent to whatever process humans perform is kind of silly and severely underrates what humans are capable of^1.

1: Then again, this has been going on since the dawn of computing, which has always put forth its brain=computer metaphors more on the grounds of reducing what we mean by "thought" than on any real, substantively justified connection.

SpicyLemonZest
3 months ago
1 reply
> When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically.

I definitely imagine that and I'm surprised to hear you don't. To me it seems obvious that this is how humans reason logically. When you're developing a complex argument, don't you write a sloppy first draft then review to check and clean up the logic?

voidhorse
3 months ago
1 reply
I think you're mistaking my claim for something else. When I say logical reasoning here, I mean the dead simple reasoning that tells you that 1 + 1 - 1 = 1 or that, by definition, x <= y and y <= x imply x = y. You can reach these conclusions because you understand arithmetic or aspects of order theory and can use the basic definitions of those theories to deduce other facts. You don't need to throw random guesses at the wall to reach these conclusions or operationally execute an algorithm every time, because you use your understanding and logical reasoning to reach an immediate conclusion; but LLMs precisely don't do this. Maybe you memorize these facts instead of using logic, or maybe you consult Google each time, but then I wouldn't claim that you understand arithmetic or order theory either.
vidarh
3 months ago
LLMs don't "throw random guesses at the wall" in this respect any more than humans do.
vidarh
3 months ago
Searle is an idiot. In Searle's argument, the translating entity will be the full system executing the translation "program", not the person running the program.

And you failed to understand my argument. You are a Turing machine. I am a Turing machine. The LLM in a loop is a Turing machine.

Unless you can show evidence that, unlike the LLMs, we can execute more than the Turing computable, the theoretical limits on our reasoning are exactly the same as those of the LLM.

Absent any evidence at all that we can solve anything outside of the Turing computable, or that any computable function exists outside the Turing computable, the burden of proof is firmly on those making such an outrageous assumption to produce at least a single example of such a computation.

This argument doesn't mean any given LLM is capable of reasoning at the level of a human on its own, any more than it means a given person is able to translate Chinese on their own. But it does mean there's no basis in any evidence for claiming no LLM can be made to reason just like a human, any more than there's a basis for claiming no person can learn Chinese.

> When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically

This isn't how LLMs work either, so this is entirely irrelevant.

bopjesvla
3 months ago
The Chinese Room experiment has always been a hack thought experiment, discussed in other forms before Searle posited it, most famously in Turing's "Can machines think?" paper. Searle only superficially engaged with the existing literature in the original Chinese Room paper. When he was forced to do so later on, Searle claimed that even if you precisely simulated a Chinese speaker's brain in a human-like robot, that brain still wouldn't be able to think or understand Chinese. Not a useful definition of thinking if you ask me.

From Wikipedia:

Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.

Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains."

11101010001100
3 months ago
So we just need a lot of monkeys at computers?
bubblyworld
3 months ago
2 replies
If you want to understand SOTA systems then I don't think you should study their formal properties in isolation, i.e. it's not useful to separate them from their environment. Every LLM-based tool has access to code interpreters these days which makes this kind of a moot point.
measurablefunc
3 months ago
1 reply
I prefer logic to hype. If you have a reason to think the hype nullifies basic logical analysis then you're welcome to your opinion but I'm going to stick w/ logic b/c so far no one has presented an actual counter-argument w/ enough rigor to justify their stance.
bubblyworld
3 months ago
2 replies
I think you are applying logic and demand for rigour selectively, to be honest. Not all arguments require formalisation. I have presented mine - your linked logical analyses just aren't relevant to modern systems. I said nothing about the logical steps being wrong, necessarily.
measurablefunc
3 months ago
1 reply
If there are no logical errors then you're just waving your hands which, again, you're welcome to do but it doesn't address any of the points I've made in this thread.
bubblyworld
3 months ago
Lol, okay. Serves me right for feeding the trolls.
wolvesechoes
3 months ago
1 reply
> I have presented mine - your linked logical analyses just aren't relevant to modern systems

Assertion is not an argument

bubblyworld
3 months ago
That assertion is not what I was referring to. Anyway, I'm not really interested in nitpicking this stuff. Engage with my initial comment if you actually care to discuss it.
wavemode
3 months ago
If my cat has access to my computer keyboard, that doesn't make it a software engineer.
lowbloodsugar
3 months ago
1. You are a neural net and you can backtrack. But unlike an algorithmic space search, you'll go "Hmm, that doesn't look right. Let me try it another way."

2. Agentic AI already does this in the way that you do it.

Straw
3 months ago
This is utter nonsense.

There's a formal equivalence between Markov chains and literally any system. The entire world can be viewed as a Markov chain. This doesn't tell you anything of interest, just that if you expand the state without bound you eventually get the Markov property.

Why can't an LLM do backtracking? Not only within its multiple layers, but across tokens, as reasoning models already do.

You are a probabilistic generative model (if you object, all of quantum mechanics is too). I guess that means you can't do any reasoning!
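
The state-expansion point is standard: a process that is not Markov over its outputs becomes Markov once enough history is folded into the state. A minimal sketch with a toy order-2 process (illustrative only):

  import random

  # An order-2 process: the next bit depends on the last TWO outputs, so the
  # raw output stream is not an order-1 Markov chain.
  def next_bit(prev2, prev1):
      p_one = 0.9 if prev2 == prev1 else 0.1
      return 1 if random.random() < p_one else 0

  # Fold the history into the state: state = (prev2, prev1). Over these
  # four states the process is an ordinary order-1 Markov chain.
  def step(state):
      prev2, prev1 = state
      b = next_bit(prev2, prev1)
      return (prev1, b), b

  state, outputs = (0, 0), []
  for _ in range(10):
      state, b = step(state)
      outputs.append(b)
  print(outputs)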

18cmdick
3 months ago
Grifters in shambles.

40 more comments available on Hacker News
