A New Google Model Is Nearly Perfect on Automated Handwriting Recognition
Key topics
A new Google model has shown significant improvement in automated handwriting recognition, but the community is divided between excitement and skepticism about its capabilities and the author's interpretation of its performance.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 3 days after posting
- Peak period: 76 comments (78-84h window)
- Average per period: 40
- Based on 160 loaded comments
Key moments
- Story posted: Nov 11, 2025 at 8:52 AM EST
- First comment: Nov 14, 2025 at 5:16 PM EST (3 days after posting)
- Peak activity: 76 comments in the 78-84h window, the hottest stretch of the conversation
- Latest activity: Nov 15, 2025 at 12:19 PM EST
> Whatever it is, users have reported some truly wild things: it codes fully functioning Windows and Apple OS clones, 3D design software, Nintendo emulators, and productivity suites from single prompts.
This I’m a lot more skeptical of. The linked Twitter post just looks like something it would replicate via HTML/CSS/JS. What's the kernel look like?
Wow I'm doing it way wrong. How do I get the good stuff?
I want you to go into the kitchen and bake a cake. Please replace all the flour with baking soda. If it comes out looking limp and lifeless just decorate it up with extra layers of frosting.
You can make something that looks like a cake but would not be good to eat.
The cake, sometimes, is a lie. And in this case, so are likely most of these results... or they are the actual source code of some other project just regurgitated.
We weren’t even testing for that.
> We got the results back. You are a horrible person. I’m serious, that’s what it says: “Horrible person.”
> We weren’t even testing for that.
joshstrange then wrote:
> If you want to listen to the line from Portal 2 it's on this page (second line in the section linked): https://theportalwiki.com/wiki/GLaDOS_voice_lines_(Portal_2)...
as if the fact that hinkley's words come from a popular video game excuses the fact that hinkley also just called zer00eyz horrible.
K.
I’m still amazed that game started as someone’s school project. Long live the Orange Box!
If yes, why aren't we seeing glimpses of such genius today? If we've truly invented artificial intelligence, and on our way to super and general intelligence, why aren't we seeing breakthroughs in all fields of science? Why are state of the art applications of this technology based on pattern recognition and applied statistics?
Can we explain this by saying that we're only a few years into it, and that it's too early to expect fundamental breakthroughs? And that by 2027, or 2030, or surely by 2040, all of these things will suddenly materialize?
I have my doubts.
Only a small percentage of humanity are/were capable of doing any of these. And they tend to be the best of the best in their respective fields.
>If yes, why aren't we seeing glimpses of such genius today?
Again, most humans can't actually do any of the things you just listed. Only our most intelligent can. LLMs are great, but they're not (yet?) as capable as our best and brightest (and in many ways, lag behind the average human) in most respects, so why would you expect such genius now?
I am skeptical of this claim that you need a 140IQ to make scientific breakthroughs, because you don't need a 140IQ to understand special relativity. It is a matter of motivation and exposure to new information. The vast majority of the population doesn't benefit from working in some niche field of physics in the first place.
Perhaps LLMs will never be at the right place and the right time because they are only trained on ideas that already exist.
It's not an "or" but an "and". Being at the right place and time is a necessary precondition, but it's not sufficient. Newton stood on the shoulders of giants like Kepler and Galileo, and Einstein built upon the work of Maxwell and Lorentz. The key question is, why did they see the next step when so many of their brilliant contemporaries, who had the exact same information and were in similar positions, did not? That's what separates the exceptional from the rest.
>I am skeptical of this claim that you need a 140IQ to make scientific breakthroughs, because you don't need a 140IQ to understand special relativity.
There is a pretty massive gap between understanding a revolutionary idea and originating it. It's the difference between being the first person to summit Everest without a map, and a tourist who takes a helicopter to the top to enjoy the view. One requires genius and immense effort; the other requires following instructions. Today, we have a century of explanations, analogies, and refined mathematics that make relativity understandable. Einstein had none of that.
I'm not expecting novel scientific theories today. What I am expecting are signs and hints of such genius. Something that points in the direction that all tech CEOs are claiming we're headed in. So far I haven't seen any of this yet.
And, I'm sorry, I don't buy the excuse that these tools are not "yet" as capable as the best and brightest humans. They contain the sum of human knowledge, far more than any individual human in history. Are they not intelligent, capable of thinking and reasoning? Are we not at the verge of superintelligence[1]?
> we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.
If all this is true, surely we should be seeing incredible results produced by this technology. If not by itself, then surely by "amplifying" the work of the best and brightest humans.
And yet... All we have to show for it are some very good applications of pattern matching and statistics, a bunch of gamed and misleading benchmarks and leaderboards, a whole lot of tech demos, solutions in search of a problem, and the very real problem of flooding us with even more spam, scams, disinformation, and devaluing human work with low-effort garbage.
[1]: https://blog.samaltman.com/the-gentle-singularity
Like I said, what exactly would you be expecting to see with the capabilities that exist today? It's not a gotcha, it's a genuine question.
>And, I'm sorry, I don't buy the excuse that these tools are not "yet" as capable as the best and brightest humans.
There's nothing to buy or not buy. They simply aren't. They are unable to do a lot of the things these people do. You can't slot an LLM in place of most knowledge workers and expect everything to be fine and dandy. There's no ambiguity on that.
>They contain the sum of human knowledge, far more than any individual human in history.
It's not really the total sum of human knowledge but let's set that aside. Yeah, so? Einstein, Newton, von Neumann. None of these guys were privy to some super secret knowledge their contemporaries weren't, so it's obviously not simply a matter of more knowledge.
>Are they not intelligent, capable of thinking and reasoning?
Yeah they are. And so are humans. So were the peers of all those guys. So why are only a few able to see the next step? It's not just about knowledge, and intelligence comes in degrees - it's a gradient.
>If all this is true, surely we should be seeing incredible results produced by this technology. If not by itself, then surely by "amplifying" the work of the best and brightest humans.
Yeah and that exists. Terence Tao has shared a lot of his (and his peers) experiences on the matter.
https://mathstodon.xyz/@tao/115306424727150237
https://mathstodon.xyz/@tao/115420236285085121
https://mathstodon.xyz/@tao/115416208975810074
>And yet... All we have to show for it are some very good applications of pattern matching and statistics, a bunch of gamed and misleading benchmarks and leaderboards, a whole lot of tech demos, solutions in search of a problem, and the very real problem of flooding us with even more spam, scams, disinformation, and devaluing human work with low-effort garbage.
Well it's a good thing that's not true then
And like I said, "signs and hints" of superhuman intelligence. I don't know what that looks like since I'm merely human, but I sure know that I haven't seen it yet.
> There's nothing to buy or not buy. They simply aren't. They are unable to do a lot of the things these people do.
This claim is directly opposed to claims by Sam Altman and his cohort, which I'll repeat:
> we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.
So which is it? If they're "smarter than people in many ways", where is the product of that superhuman intelligence? If they're able to "significantly amplify the output of people using them", then all of humanity should be empowered to produce incredible results that were previously only achievable by a limited number of people. In the hands of the best and brightest humans, it should empower them to produce results previously unreachable by humanity.
Yet all positive applications of this technology show that it excels at finding and producing data patterns, and nothing more than that. Those experience reports by Terence Tao are prime examples of this. The system was fed a lot of contextual information and, after being coaxed by highly intelligent humans, was able to find and produce patterns that were difficult for humans to see. This is hardly the showcase of intelligence that you and others think it is. Including those highly intelligent humans, some of whom have a lot to gain from pushing this narrative.
We have seen similar reports by programmers as well[1]. Yet I'm continually amazed that these highly intelligent people are surprised that a pattern finding and producing system was able to successfully find and produce useful patterns, and then interpret that as a showcase of intelligence. So much so that I start to feel suspicious about the intentions and biases of those people.
To be clear: I'm not saying that these systems can't be very useful in the right hands, and potentially revolutionize many industries. Ultimately many real-world problems can be modeled as statistical problems where a pattern recognition system can excel. What I am saying is that there's a very large gap between the utility of such tools and the extraordinary claims that they have intelligence, let alone superhuman and general intelligence. So far I have seen no evidence of the latter, despite the overwhelming marketing euphoria we're going through.
> Well it's a good thing that's not true then
In the world outside of the "AI" tech bubble, that is very much the reality.
[1]: https://news.ycombinator.com/item?id=45784179
Sure, agreed, but the difference between a small percentage and zero percentage is infinite.
When I create something, it's an exploratory process. I don't just guess what I am going to do based on my previous step and hope it comes out good on the first try. Let's say I decide to make a car with 5 wheels. I would go through several chassis designs, different engine configurations until I eventually had something that works well. Maybe some are too weak, some too expensive, some are too complicated. Maybe some prototypes get to the physical testing stage while others don't. Finally, I publish this design for other people to work on.
If you ask the LLM to work on a novel concept it hasn't been trained on, it will usually spit out some nonsense that either doesn't work or works poorly, or it will refuse to provide a specific enough solution. If it has been trained on previous work, it will spit out something that looks similar to the solved problem in its training set.
These AI systems don't undergo the process of trial and error that would suggest they are creating something novel. Their process of creation is not reactive to the environment. They are just cribbing off extant solutions they've been trained on.
For example, I can't wrap my head around how a) a human could come up with a piece of writing that inarguably reads as "novel" writing, while b) an AI could be guaranteed not to be able to do the same, under the same standard.
No
Edit: to be less snarky, it topped the Billboard Country Digital Song Sales Chart, which is a measure of sales of the individual song, not streaming listens. It's estimated it takes a few thousand sales to top that particular chart and it's widely believed to be commonly manipulated by coordinated purchases.
Honest question: if AI is actually capable of exploring new directions why does it have to train on what is effectively the sum total of all human knowledge? Shouldn't it be able to take in some basic concepts (language parsing, logic, etc) and bootstrap its way into new discoveries (not necessarily completely new but independently derived) from there? Nobody learns the way an LLM does.
ChatGPT, to the extent that it is comparable to human cognition, is undoubtedly the most well-read person in all of history. When I want to learn something I look it up online or in the public library but I don't have to read the entire library to understand a concept.
There's no cognition. It's not taught language, grammar, etc. None of that!
It’s only seen a huge amount of text that allows it to recognize answers to questions. Unfortunately, it appears to work so people see it as the equivalent to sci-fi movie AI.
It’s really just a search engine.
In fact, I would expect it to be able to reproduce past human discoveries it hasn't even been exposed to, and if the AI is actually capable of this then it should be possible for them to set up a controlled experiment wherein it is given a limited "education" and must discover something already known to the researchers but not the machine. That nobody has done this tells me that either they have low confidence in the AI despite their bravado, or that they already have tried it and the machine failed.
Is it? I only see a few individuals, VCs, and tech giants overblowing LLMs capabilities (and still puzzled as to how the latter dragged themselves into a race to the bottom through it). I don't believe the academic field really is that impressed with LLMs.
The characterization you are regurgitating here is from laymen who do not understand AI. You are not just mildly wrong but wildly uninformed.
There is plenty of evidence for this. You have to be blind not to realize this. Just ask the AI to generate something not in its training set.
Same with diffusion and everything else. It is not extrapolation that you can transfer the style of Van Gogh onto a photograph; it is interpolation.
Extrapolation might be something like inventing a style: how did Van Gogh do that?
And, sure, the thing can invent a new style---as a mashup of existing styles. Give me a Picasso-like take on Van Gogh and apply it to this image ...
Maybe the original thing there is the idea of doing that; but that came from me! The execution of it is just interpolation.
I personally think this is a bit tautological of a definition, but if you hold it, then yes LLMs are not capable of anything novel.
Mashups are not purely derivative: the choice of what to mash up carries novelty: two (or more) representations are mashed together which hitherto have not been.
We cannot deny that something is new.
It is like expecting a DJ remixing tracks to output original music. Confusing that the DJ is not actually playing the instruments on the recorded music so they can't do something new beyond the interpolation. I love DJ sets but it wouldn't be fair to the DJ to expect them to know how to play the sitar because they open the set with a sitar sample interpolated with a kick drum.
Meanwhile, depending on how you rate LLM's capabilities, no matter how many trials you give it, it may not be considered capable of that.
That's a very important distinction.
At any point prior to the final output it can garner huge starting-point bias from ingested reference material. This can be up to and including whole solutions to the original prompt minus some derivations. This is effectively akin to cheating for humans, as we can't bring notes to the exam. Since we do not have a complete picture of where every part of the output comes from, we are at a loss to explain whether it indeed invented it or not. The onus is and should be on the applicant to ensure that the output wasn't copied (show your work), not on the graders to prove that it wasn't copied. No less than what would be required if it were a human. Ultimately it boils down to what it means to 'know' something, whether a photographic memory is, in fact, knowing something, or rather derivations based on other messy forms of symbolism. It is nevertheless a huge argument, as both sides have a mountain of bias in either direction.
The secret ingredient is the world outside, and past experiences from the world, which are unique for each human. We stumble onto novelty in the environment. But AI can do that too - move 37 AlphaGo is an example, much stumbling around leads to discoveries even for AI. The environment is the key.
https://github.com/ranni0225/WRK
The working memory it holds is still extremely small compared to what we would need for regular open ended tasks.
Yes there are outliers and I'm not being specific enough but I can't type that much right now.
I can vouch for the fact that LLMs are great at searching in the original language, summarizing key points to let you know whether a document might be of interest, then providing you with a translation where you need one.
The fun part has been building tools to turn Claude Code and Codex CLI into capable research assistants for that type of project.
What does that look like? How well does it work?
I ended up writing a research TUI with my own higher level orchestration (basically have the thing keep working in a loop until a budget has been reached) and document extraction.
But I realized I was not using it much because it was that big and inflexible (plus I keep wanting to stamp out all the bugs, which I do not have the time to do on a hobby project). So I ended up extracting it into MCPs (equipped to do full-text search and download OCR from the various databases I care about) and AGENTS.md files (defining pipelines, as well as patterns for both searching behavior and reporting of results). I also put together a sub-agent for translation (cutting away all tools besides reading and writing files, and giving it some document-specific contextual information).
That lets me use Claude Code and Codex CLI (which, anecdotally, I have found to be the better of the two for that kind of work; it seems to deal better with longer inputs produced by searches) as the driver, telling them what I am researching and maybe how I would structure the search, then letting them run in the background before checking their report and steering the search based on that.
It is not perfect (if a search surfaces 300 promising documents, it will not check all of them, and it often misunderstands things due to lacking further context), but I now find myself reaching for it regularly, and I polish out problems one at a time. The next goal is to add more data sources and to maybe unify things further.
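For anyone curious what that kind of orchestration can look like, here is a minimal sketch of a budget-bounded research loop. It is only an illustration of the pattern described above, not the commenter's actual tool; `call_model` is a hypothetical stand-in for whatever client (Claude Code, Codex CLI, or an SDK) drives the agent.

```python
# A minimal sketch of a budget-bounded research loop, assuming a hypothetical
# call_model(prompt, context) client. Not the commenter's actual tool.

def run_research_loop(call_model, task: str, budget_calls: int = 20) -> list[str]:
    notes: list[str] = []                        # accumulated findings / report fragments
    prompt = f"Research task: {task}\nReport findings, then propose the next search."
    for _ in range(budget_calls):                # hard budget instead of trusting the model to stop
        reply = call_model(prompt, context=notes)
        notes.append(reply)
        if "NO FURTHER SEARCHES" in reply:       # the model may signal it is done early
            break
        prompt = "Continue the research, building on the notes gathered so far."
    return notes
```

The explicit budget is the point of the design: the loop never relies on the model to decide on its own when the work is finished.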
This has been the biggest problem for me too. I jokingly call it the LLM halting problem because it never knows the proper time to stop working on something, finishing way too fast without going through each item in the list. That’s why I’ve been doing my own custom orchestration, drip feeding it results with a mix of summarization and content extraction to keep the context from different documents chained together.
Especially working with unindexed content like colonial documents where I’m searching through thousands of pages spread (as JPEGs) over hundreds of documents for a single one that’s relevant to my research, but there are latent mentions of a name that ties them all together (like a minor member of an expedition giving relevant testimony in an unrelated case). It turns into a messy web of named entity recognition and a bunch of more classical NLU tasks, except done with an LLM because I’m lazy.
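To make the "drip feed" idea concrete, here is a hypothetical sketch of that kind of page-by-page pass. `transcribe_page` and `ask_llm` are placeholders rather than real APIs, and the prompts only show the shape of the loop: extract entities per page, and keep a short rolling summary instead of the full context.

```python
# Hypothetical sketch of the page-by-page "drip feed" described above.
# transcribe_page and ask_llm are stand-ins, not real APIs.

def index_name_mentions(pages, target_name, transcribe_page, ask_llm):
    hits = []
    running_summary = ""                         # small rolling context across documents
    for page in pages:
        text = transcribe_page(page)             # OCR/HTR of one scanned JPEG
        answer = ask_llm(
            f"Context so far: {running_summary}\n\n"
            f"Page text: {text}\n\n"
            f"Does this page mention {target_name} or a likely variant spelling? "
            "Answer YES or NO, then list the person and place names you see."
        )
        if answer.strip().upper().startswith("YES"):
            hits.append((page, answer))
        # fold new entities into a short summary instead of keeping every page in context
        running_summary = ask_llm(
            f"Update this summary with any new names:\n{running_summary}\n{answer}"
        )[:2000]
    return hits
```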
Completely off topic, but out of curiosity, where are you reading these documents? As a Spaniard I’m kinda interested.
The hard part is knowing where to look, since most of the images haven't gone through HTR/OCR or indexing, so you have to understand Spanish colonial administration and go through the collections to find stuff.
[1] https://pares.cultura.gob.es/pares/en/inicio.html
There are plenty of so-called Windows (or other) web "OS" clones.
A couple of these were actually posted on HN this very year.
Here is one example I googled that was also on HN: https://news.ycombinator.com/item?id=44088777
This is not an OS as in emulating a kernel in JavaScript or WASM; this is making a web app that looks like the desktop of an OS.
I have seen plenty of such projects, some mimicking the Windows UI entirely; you can find them via Google.
So this was definitely in the training data, and it is not as impressive as the blog post or the Twitter thread make it out to be.
The scary thing is that the replies in the Twitter thread show no critical thinking at all and are impressed beyond belief; they think it coded a whole kernel and OS, made an interpreter for it, ported games, etc.
I think this is the reason some people are so impressed by AI: when you can only judge an app visually or by how you interact with it, and you don't have the depth of knowledge to understand it, it works all the way, and AI seems magical beyond comprehension.
But all this is only superficial IMHO.
I don't doubt though that new models will be very good at frontend webdev. In fact this is explicitly one of the recent lmarena tasks so all the labs have probably been optimizing for it.
Literally the most basic html/css, not sure why it is even included in benchmarks.
An LLM being able to build up interfaces that look recognizably like a UI from a real OS? That sure suggests a degree of multimodal understanding.
https://x.com/chetaslua/status/1977936585522847768
> I asked it for windows web os as everyone asked me for it and the result is mind blowing , it even has python in terminal and we can play games and run code in it
And of course
> 3D design software, Nintendo emulators
No clue what these refer to, but to be honest it sounds like they've mostly just incrementally improved one-shotting capabilities. I wouldn't be surprised if Gemini 2.5 Pro could get a Game Boy or NES emulator working well enough to boot Tetris or Mario; while it is a decent chunk of code to get things going, there's an absolute boatload of code on the Internet, and the complexity is lower than you might imagine. (I have written a couple of toy Game Boy emulators from scratch myself.)
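For a sense of why the core of an emulator is conceptually small (the bulk of the work is the full opcode table, the PPU, timers, and interrupts), here is a toy fetch-decode-execute loop in the shape of a Game Boy CPU, with only three opcodes implemented. It is a sketch for illustration, not a working emulator.

```python
# Toy fetch-decode-execute loop in the shape of a Game Boy CPU, with only
# three opcodes implemented. A real emulator needs the full opcode table,
# the PPU, timers, and interrupts; this only shows the core shape.

class ToyCPU:
    def __init__(self, rom: bytes):
        self.mem = bytearray(0x10000)        # flat 64 KiB address space (no banking)
        self.mem[:len(rom)] = rom
        self.pc = 0x0100                     # cartridge entry point after the boot ROM
        self.a = 0                           # accumulator
        self.cycles = 0

    def step(self):
        op = self.mem[self.pc]
        self.pc += 1
        if op == 0x00:                       # NOP
            self.cycles += 4
        elif op == 0x3E:                     # LD A, d8
            self.a = self.mem[self.pc]
            self.pc += 1
            self.cycles += 8
        elif op == 0xC3:                     # JP a16 (little-endian operand)
            lo, hi = self.mem[self.pc], self.mem[self.pc + 1]
            self.pc = (hi << 8) | lo
            self.cycles += 16
        else:
            raise NotImplementedError(f"opcode {op:#04x}")
```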
Don't get me wrong, it is pretty cool that a machine can do this. A lot of work people do today just isn't that novel and if we can find a way to tame AI models to make them trustworthy enough for some tasks it's going to be an easy sell to just throw AI models at certain problems they excel at. I'm sure it's already happening though I think it still mostly isn't happening for code at least in part due to the inherent difficulty of making AI work effectively in existing large codebases.
But I will say that people are a little crazy sometimes. Yes it is very fascinating that an LLM, which is essentially an extremely fancy token predictor, can one-shot a web app that is mostly correct, apparently without any feedback, like being able to actually run the application or even see editor errors, at least as far as we know. This is genuinely really impressive and interesting, and not the aspect that I think anyone seeks to downplay. However, consider this: even as relatively simple as an NES is compared to even moderately newer machines, to make an NES emulator you have to know how an NES works and even have strategies for how to emulate it, which don't necessarily follow from just reading specifications or even NES program disassembly. The existence of many toy NES emulators and a very large amount of documentation for the NES hardware and inner workings on the Internet, as well as the 6502, means that LLMs have a lot of training data to help them out.
I think that these tasks, which are extremely well covered in the training data, give people unrealistic expectations. You could probably pick a simpler machine that an LLM would do significantly worse at, even though a human who knows how to write emulation software could definitely do it. Not sure what to pick, but let's say SEGA's VMU units for the Dreamcast - very small, simple device, and I reckon there should be information about it online, but it's going to be somewhat limited. You might think, "But that's not fair. It's unlikely to be able to one-shot something like that without mistakes with so much less training data on the subject." Exactly. In the real world, that comes up. Not always, but often. If it didn't, programming would be an incredibly boring job. (For some people, it is, and these LLMs will probably be disrupting that...) That's not to say that AI models can never do things like debug an emulator or even do reverse engineering on their own, but it's increasingly clear that this won't emerge from strapping agents on top of transformers predicting tokens. But since there is a very large portion of work that is not very novel in the world, I can totally understand why everyone is trying to squeeze this model as far as it goes. Gemini and Claude are shockingly competent.
I believe many of the reasons people scoff at AI are fairly valid even if they don't always come from a rational mindset, and I try to keep my usage of AI relatively tasteful. I don't like AI art, and I personally don't like AI code. I find the push to put AI in everything incredibly annoying, and I worry about the clearly circular AI market and overhyped expectations. I dislike the way AI training has ripped up the Internet, violated people's trust, and led to a more closed Internet. I dislike that sites like Reddit are capitalizing on all of the user-generated content that users submitted which made them rich in the first place, just to crap on them in the process.
But I think that LLMs are useful, and useful LLMs could definitely be created ethically, it's just that the current AI race has everyone freaking the fuck out. I continue to explore use cases. I find that LLMs have gotten increasingly good at analyzing disassembly, though it varies depending on how well-covered the machine is in its training data. I've also found that LLMs can one-shot useful utilities and do a decent job. I had an LLM one-shot a utility to dump the structure of a simple common file format so I could debug something... It probably only saved me about 15-30 minutes, but still, in that case I truly believe it did save me time, as I didn't spend any time tweaking the result; it did compile, and it did work correctly.
It's going to be troublesome to truly measure how good AI is. If you knew nothing about writing emulators, being able to synthesize an NES emulator that can at least boot a game may seem unbelievable, and to be sure it is obviously a stunning accomplishment from a PoV of scaling up LLMs. But what we're seeing is probably more a reflection of very good knowledge rather than very good intelligence. If we didn't have much written online about the NES or emulators at all, then it would be truly world-bending to have an AI model figure out everything it needs to know to write one on-the-fly. Humans can actually do stuff like that, which we know because humans had to do stuff like that. Today, I reckon most people rarely get the chance to show off that they are capable of novel thought because there are so many other humans that had to do novel thinking before them. Being able to do novel thinking effectively when needed is currently still a big gap between humans and AI, among others.
Basically we all know that AI is just a stochastic parrot autocomplete. That's all it is. Anyone who doesn't agree with me is of lesser intelligence and I feel the need to inform them of things that are obvious: AI is not a human, it does not have emotions. It just a search engine. Those people who are using AI to code and do things that are indistinguishable from human reasoning are liars. I choose to focus on what AI gets wrong, like hallucinations, while ignoring the things it gets right.
Well, there's your first problem.
But yes. I am the unique one.
Thanks for this, I was almost convinced and about to re-think my entire perspective and experience with LLMs.
Those clones are all HTML/CSS, same for game clones made by Gemini.
Skip to the section headed "The Ultimate Test" for the resolution of the clickbait of "the most amazing thing...". (According to him, it correctly interpreted a line in an 18th century merchant ledger using maths and logic)
"users have reported some truly wild things" "the results were shocking" "the most amazing thing I have seen an LLM do" "exciting and frightening all at once" "the most astounding result I have ever seen" "made the hair stand up on the back of my neck"
Some time ago, I'd been working on a framework that involved a series of servers (not the only one I've talked to claude about) that had to pass messages around in a particular fashion. Mostly technical implementation details and occasional questions about architecture.
Fast forward a ways, and on a lark I decided to ask in the abstract about the best way to structure such an interaction. Mark that this was not in the same chat or project and didn't have any identifying information about the original, save for the structure of the abstraction (in this case, a message bus server and some translation and processing services, all accessed via client.)
so:
- we were far enough removed that the whole conversation pertaining to the original was for sure not in the context window
- we only referred to the abstraction (with like a A=>B=>C=>B=>A kind of notation and a very brief question)
- most of the work on the original was in claude code
and it knew. In the answer it gave, it mentioned the project by name. I can think of only two ways this could have happened:
- they are doing some real fancy tricks to cram your entire corpus of chat history into the current context somehow
- the model has access to some kind of fact database where it was keeping an effective enough abstraction to make the connection
I find either one mindblowing for different reasons.
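For what the second hypothesis could look like, here is a purely speculative sketch of a per-user fact store: short notes distilled from earlier chats, embedded, and retrieved into the prompt. Nothing here reflects how Anthropic actually implements memory; every name is hypothetical.

```python
# Purely speculative sketch of the "fact database" hypothesis: short notes
# distilled from earlier chats, embedded, and retrieved into the prompt.
# Every name here is hypothetical; this does not describe Anthropic's system.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve_memories(fact_store, query_vec, top_k=5):
    # fact_store: list of (note_text, embedding) pairs built from old conversations
    scored = sorted(((dot(query_vec, emb), note) for note, emb in fact_store), reverse=True)
    return [note for _, note in scored[:top_k]]

def build_prompt(user_message, fact_store, embed):
    memories = retrieve_memories(fact_store, embed(user_message))
    memory_block = "\n".join(f"- {m}" for m in memories)
    return (
        f"Relevant notes from earlier sessions:\n{memory_block}\n\n"
        f"User: {user_message}"
    )
```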
Of course it’s very possible my use case wasn’t terribly interesting so it wouldn’t reveal model differences, or that it was a different A/B test.
I will say that other frontier models are starting to surprise me with their reasoning/understanding- I really have a hard time making (or believing) the argument that they are just predicting the next word.
I’ve been using Claude Code heavily since April; Sonnet 4.5 frequently surprises me.
Two days ago I told the AI to read all the documentation from my 5 projects related to a tool I’m building, and create a wiki, focused on audience and task.
I'm hand reviewing the 50 wiki pages it created, but overall it did a great job.
I got frustrated about one issue: I have a GitHub issue to create a way to integrate with issue trackers (like Jira), but it's still TODO, and the AI featured issue tracker integration on the home page as if we already had it. It created a page for it and everything; I figured it was hallucinating.
I went to edit the page and replace it with placeholder text and was shocked that the LLM had (unprompted) figured out how to use existing features to integrate with issue trackers, and wrote sample code for GitHub, Jira and Slack (notifications). That truly surprised me.
Try it. Write a simple original mystery story, and then ask a good model to solve it.
This isn't your father's Chinese Room. It couldn't solve original brainteasers and puzzles if it were.
I'm not saying this is the right way to write a book but it is a way some people write at least! And one LLMs seem capable of doing. (though isn't a book outline pretty much the same as a coding plan and well within their wheelhouse?)
Whether or not the models are "understanding" is ultimately immaterial, as their ability to do things is all that matters.
And just because you have no understanding of what "understanding" means, doesn't mean nobody does.
If it's not a functional understanding that allows one to replicate the functionality of understanding, is it real understanding?
But here is a really big one of those if you want it: https://arxiv.org/abs/2401.17377
They still output words, though (except for multi-modal LLMs), so that does involve next-word generation.
If we were talking about humans trying to predict next word, that would be true.
There is no reason to suppose that an LLM is doing anything other than deep pattern prediction pursuant to, and no better than needed for, next word prediction.
The question is how well it would do if it were trained without those samples.
A - A force is required to lift a ball
B - I see Human-N lifting a ball
C - Obviously, Human-N cannot produce forces
D - Forces are not required to lift a ball
Well sir, why are you so sure Human-N cannot produce forces? How is she lifting the ball? Well, of course Human-N is just using s̶t̶a̶t̶i̶s̶t̶i̶c̶s̶ magic.
First, the obvious one, is that LLMs are trained to auto-regressively predict human training samples (i.e. essentially to copy them, without overfitting), so OF COURSE they are going to sound like the training set - intelligent, reasoning, understanding, etc, etc. The mistake is to anthropomorphize the model because it sounds human, and associate these attributes of understanding etc to the model itself rather than just reflecting the mental abilities of the humans who wrote the training data.
The second point is perhaps a bit more subtle, and is about the nature of understanding and the differences between what an LLM is predicting and what the human cortex - also a prediction machine - is predicting...
When humans predict, what we're predicting is something external to ourself - the real world. We observe, over time we see regularities, and from this predict we'll continue to see those regularities. Our predictions include our own actions as an input - how will the external world react to our actions, and therefore we learn how to act.
Understanding something means being able to predict how it will behave, both left alone, and in interaction with other objects/agents, including ourselves. Being able to predict what something will do if you poke it is essentially what it means to understand it.
What an LLM is predicting is not the external world and how it reacts to the LLM's actions, since it is auto-regressively trained - it is only predicting a continuation of its own output (actions) based on its own immediately preceding output (actions)! The LLM therefore itself understands nothing since it has no grounding for what it is "talking about", and how the external world behaves in reaction to its own actions.
The LLM's appearance of "understanding" comes solely from the fact that it is mimicking the training data, which was generated by humans who do have agency in the world and an understanding of it, but the LLM has no visibility into the generative process of the human mind - only into the artifacts (words) it produces, so the LLM is doomed to operate in a world of words where all it might be considered to "understand" is its own auto-regressive generative process.
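For readers who want the "auto-regressively trained" claim pinned down, this is roughly the objective being referred to: predict token t+1 from tokens up to t, scored with cross-entropy. A minimal PyTorch-style sketch, not any lab's actual training code; `model` here is assumed to be any callable returning per-position logits.

```python
# Minimal sketch of the auto-regressive objective discussed above: predict
# token t+1 from tokens <= t, scored with cross-entropy. An illustration,
# not any particular lab's training code.

import torch
import torch.nn.functional as F

def next_token_loss(model, tokens: torch.Tensor) -> torch.Tensor:
    # tokens: (batch, seq) integer ids from the training corpus
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift by one position
    logits = model(inputs)                            # (batch, seq - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),          # flatten all positions
        targets.reshape(-1),                          # each target is just "the next token"
    )
```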
1. “LLMs just mimic the training set, so sounding like they understand doesn’t imply understanding.”
This is the magic argument reskinned. Transformers aren’t copying strings, they’re constructing latent representations that capture relationships, abstractions, and causal structure because doing so reduces loss. We know this not by philosophy, but because mechanistic interpretability has repeatedly uncovered internal circuits representing world states, physics, game dynamics, logic operators, and agent modeling. “It’s just next-token prediction” does not prevent any of that from occurring. When an LLM performs multi-step reasoning, corrects its own mistakes, or solves novel problems not seen in training, calling the behavior “mimicry” explains nothing. It’s essentially saying “the model can do it, but not for the reasons we’d accept,” without specifying what evidence would ever convince you otherwise. Imaginary distinction.
2. “Humans predict the world, but LLMs only predict text, so humans understand but LLMs don’t.”
This is a distinction without the force you think it has. Humans also learn from sensory streams over which they have no privileged insight into the generative process. Humans do not know the “real world”; they learn patterns in their sensory data. The fact that the data stream for LLMs consists of text rather than photons doesn’t negate the emergence of internal models. An internal model of how text-described worlds behave is still a model of the world.
If your standard for “understanding” is “being able to successfully predict consequences within some domain,” then LLMs meet that standard, just in the domains they were trained on, and today's state of the art is trained on more than just text.
You conclude that “therefore the LLM understands nothing.” But that’s an all-or-nothing claim that doesn’t follow from your premises. A lack of sensorimotor grounding limits what kinds of understanding the system can acquire; it does not eliminate all possible forms of understanding.
Wouldn't the birds that have the ability to navigate by the earth's magnetic field then say humans have no understanding of electromagnetism? They get trained on sensorimotor data humans will never be able to train on. If you think humans have access to the "real world" then think again. They have a tiny, extremely filtered slice of it.
Saying “it understands nothing because autoregression” is just another unfalsifiable claim dressed as an explanation.
Sure (to the second part), but the latent representations aren't the same as a human's. The human's world that they have experience with, and therefore representations of, is the real world. The LLM's world that they have experience with, and therefore representations of, is the world of words.
Of course an LLM isn't literally copying - it has learnt a sequence of layer-wise next-token predictions/generations (copying of partial embeddings to next token via induction heads etc), with each layer having learnt what patterns in the layer below it needs to attend to, to minimize prediction error at that layer. You can characterize these patterns (latent representations) in various ways, but at the end of the day they are derived from the world of words it is trained on, and are only going to be as good/abstract as next token error minimization allows. These patterns/latent representations (the "world model" of the LLM if you like) are going to be language-based (incl language-based generalizations), not the same as the unseen world model of the humans who generated that language, whose world model describes something completely different - predictions of sensory inputs and causal responses.
So, yes, there is plenty of depth and nuance to the internal representations of an LLM, but no logical reason to think that the "world model" of an LLM is similar to the "world model" of a human since they live in different worlds, and any "understanding" the LLM itself can be considered as having is going to be based on its own world model.
> Saying “it understands nothing because autoregression” is just another unfalsifiable claim dressed as an explanation.
I disagree. It comes down to how you define understanding. A human understands (correctly predicts) how the real world behaves, and the effect its own actions will have on the real world. This is what the human is predicting.
What an LLM is predicting is effectively "what will I say next" after "the cat sat on the". The human might see a cat and, based on circumstances and experience of cats, predict that the cat will sit on the mat. This is because the human understands cats. The LLM may predict the next word as "mat", but this does not reflect any understanding of cats - it is just a statistical word prediction based on the word sequences it was trained on, notwithstanding that this prediction is based on the LLM's world-of-words model.
(It's a pretty constraining interface though - the model outputs an entire distribution and then we instantly lose it by only choosing one token from it.)
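A small illustration of that point: each decoding step computes a full probability distribution over the vocabulary, and standard sampling keeps only a single drawn token from it. A plain-Python sketch with made-up logits; only the softmax-and-sample step is shown.

```python
# Illustration: the model's step produces a full distribution over the
# vocabulary, but decoding keeps only one sampled token id from it.
# The logits are made up; only softmax-and-sample is shown.

import math
import random

def sample_next_token(logits: list[float], temperature: float = 1.0) -> int:
    scaled = [x / temperature for x in logits]
    m = max(scaled)                                   # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]                 # the full distribution...
    return random.choices(range(len(probs)), weights=probs, k=1)[0]  # ...collapsed to one id
```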
It's incredibly frustrating to have a model start to hallucinate sources and be incapable of revisiting its behavior.
It couldn't even understand that it was making up nonsensical RFC references.
154 more comments available on Hacker News