Using Secondary School Maths to Demystify AI
Key topics
The debate rages on: can we truly say AI systems don't think? The original assertion that "AI is just maths" sparked a lively discussion. Some commenters argued that "thinking" is undefined, rendering any statement about it unverifiable - as one wit put it, "Thinking is undefined so all statements about it are unverifiable" - while others countered that formal reasoning, at least, is well defined, even if informal reasoning remains murky. The thread veers into philosophical territory, with commenters dissecting the nuances of thinking, reasoning, and the limits of definition.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 1h after posting
- Peak period: 69 comments in 0-3h
- Average per period: 13.3
Based on 160 loaded comments
Key moments
- Story posted: Dec 12, 2025 at 11:32 AM EST (26 days ago)
- First comment: Dec 12, 2025 at 12:51 PM EST (1h after posting)
- Peak activity: 69 comments in 0-3h, the hottest window of the conversation
- Latest activity: Dec 14, 2025 at 4:03 AM EST (25 days ago)
The "hair-splitting" underlies the whole GenAI debate.
It ties into another aspect of these perennial threads, where it is somehow OK for humans to engage in deluded or hallucinatory thought, but when an AI model does it, it proves they don't "think."
If thinking is definable, it is wrong that all statements about it are unverifiable (i.e. there are statements about it that are verifiable.)
Well, basic shit.
Unicorns are not bound by the laws of physics - because they do not exist.
Is it only humans that have this need? That makes the need special, so humans are special in the universe.
We don't fully understand how brains work, but we know brains don't function like a computer. Why would a computer be assumed to function like a brain in any way, even in part, without evidence and just hopes based on marketing? And I don't just mean consumer marketing, but marketing within academia as well. For example, names like "neural networks" have always been considered metaphorical at best.
And then what do you even mean by "a computer?" This falls into the same trap because it sounds like your statement that brains don't function like a computer is really saying "brains don't function like the computers I am familiar with." But this would be like saying quantum computers aren't computers because they don't work like classical computers.
To put this in terms of "results", because that's what your way of thinking insists upon, a plane does not take off and land the way a bird does. This limits a plane's practicality to such an extent that a plane is useless for transportation without all the infrastructure you're probably ignoring with your argument. You might also be ignoring all the side effects planes bring with them.
Would you not agree that if we only ever wanted "flight" for a specific use case that apparently only birds can do after evaluating what a plane cannot do, then planes are not capable of "flight"?
This is the very same problem with "thought" in terms of AI. We're finding it's inadequate for what we want the machine to do. Not only is it inadequate for our current use cases, and not only is it inadequate now, but it will continue to be inadequate until we further pin down what "thought" is and determine what lies beyond the Church-Turing thesis.
https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#P...
Relevant quote: "B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain"
There are many definitions of "thinking".
AI and brains can do some; AI and brains definitely, provably cannot do others; some are untestable at present; and nobody really knows enough about what human brains do to be able to tell if or when some existing or future AI can do whatever is needed for the things we find special about ourselves.
A lot of people use different definitions, and respond to anyone pointing this out by denying the issue and claiming their own definition is the only sensible one and "obviously" everyone else (who isn't a weird pedant) uses it.
"Thinking" is not actually defined in any of the parent comments or in TFA. Literally no statements are made about what is being tested.
So, if we had that, we could actually discuss it. Otherwise it's just opinions about what a person believes thinking is, combined with what LLMs are doing + what the person believes they themselves do + what they believe others do. It's entirely subjective, with very low SNR because of those confounding factors.
> finite context windows
like a human has
> or the fact that the model is "frozen" and stateless,
much like a human adult. Models get updated at a slower frequency than humans. AI systems have access to fetch new information and store it for context.
> or the idea that you can transfer conversations between models are trivial
because computers are better-organized than humanity.
I can restart a conversation with an LLM 15 days later and the state is exactly as it was.
Can't do that with a human.
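This is trivially true because an LLM's conversational state is just the list of messages that gets fed back in on every turn, so it can be written to disk and reloaded at any time. A minimal sketch in Python, assuming a generic chat-style setup; call_model below is a hypothetical stand-in for whatever API is actually used, not a real library call:

    import json

    HISTORY_FILE = "conversation.json"

    def load_history():
        # Reload the exact conversation state, whether 15 minutes or 15 days later.
        try:
            with open(HISTORY_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return []

    def save_history(history):
        with open(HISTORY_FILE, "w") as f:
            json.dump(history, f, indent=2)

    def chat_turn(user_text, call_model):
        # call_model: hypothetical function that takes the full message list
        # and returns the assistant's reply as a string.
        history = load_history()
        history.append({"role": "user", "content": user_text})
        reply = call_model(history)  # the model sees the whole saved history every turn
        history.append({"role": "assistant", "content": reply})
        save_history(history)
        return reply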
The idea that humans have a longer, more stable context window than LLMs can be, or is even likely to be, true for certain activities, but please let's be honest about this.
If you have an hour-long technical conversation with someone, I would guesstimate that 90% of people start to lose track of details within about 10 minutes. So they write things down, or they mentally repeat to themselves the things they know, or have recognized, they keep forgetting.
I know this because it's happened continually in tech companies decade after decade.
LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.
I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of different activities is complete nonsense.
I do hope you're able to remember what your browser tab was five tab switches ago without keeping track of it...
How would you say human short-term memory works, if not by repeated firing (similar to repeatedly putting the same tokens in over and over)?
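That analogy can be made concrete: an autoregressive model "keeps something in mind" only by re-feeding it on every step, a bit like rehearsal, and whatever falls out of the window is gone. A toy sketch, where next_token is a hypothetical stand-in for a real model:

    from collections import deque

    CONTEXT_LIMIT = 8  # toy window size; real models use thousands of tokens

    def generate(prompt_tokens, next_token, n_steps):
        # next_token: hypothetical function mapping a token window to one new token.
        window = deque(prompt_tokens, maxlen=CONTEXT_LIMIT)
        out = []
        for _ in range(n_steps):
            tok = next_token(list(window))  # the whole window is re-fed at every step
            window.append(tok)              # anything pushed out of the window is "forgotten"
            out.append(tok)
        return out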
It doesn't sound like you really understand what these statements mean. If LLMs are like any humans, it's those with late-stage dementia, not healthy adults.
They're not equivalent at all because the AI is by no means biological. "It's just maths" could maybe be applied to humans but this is backed entirely by supposition and would ultimately just be an assumption of its own conclusion - that human brains work on the same underlying principles as AI because it is assumed that they're based on the same underlying principles as AI.
If there's provably no algorithm to solve the halting problem, why would there be maths that describes consciousness?
Having read “I Am a Strange Loop” I do not believe Hofstadter indicates that the existence of Gödel’s theorem precludes consciousness being realizable on a Turing machine. Rather if I recall correctly he points out that as a possible argument and then attempts to refute it.
On the other hand, Penrose is a prominent believer that humans' ability to understand Gödel's theorem indicates consciousness can't be realized on a Turing machine, but there's far from universal agreement on that point.
I'll try to ask the original question more clearly: why would the brain, or consciousness, be formalizable?
I think there's a yearning to view nature as adhering to an underlying model, and a contrary view that consciousness is transcendental, and I lean towards the latter.
I think there is abundant evidence that the answer is ‘no’. The main reason is that consciousness doesn’t give you new physics, it follows the same rules and restrictions. It seems to be “part of” the standard natural universe, not something distinct.
But I think most people get what GP means.
When you think in these terms, it becomes clear that LLMs can’t have certain types of experiences (eg see in color) but could have others.
A “weak” panpsychism approach would just stop at ruling out experience or qualia based on physical limitations. Yet I prefer the “strong” panpsychist theory that whatever is not forbidden is required, which begins to get really interesting (it would imply, for example, that an LLM actually experiences the interaction you have with it, in some way).
It's on those who want alternative explanations to demonstrate that even the slightest need for them exists. There is no scientific evidence suggesting that the operation of brains as computers, as information processors, as substrate-independent equivalents to Turing machines, is insufficient to account for any of the cognitive phenomena known across the entire domain of human knowledge.
We are brains in bone vats, connected to a wonderful and sophisticated sensorimotor platform, and our brains create the reality we experience by processing sensor data and constructing a simulation which we perceive as subjective experience.
The explanation we have is sufficient to the phenomenon. There's no need or benefit for searching for unnecessarily complicated alternative interpretations.
If you aren't satisfied with the explanation, it doesn't really matter - to quote one of Neil DeGrasse Tyson's best turns of phrase: "the universe is under no obligation to make sense to you"
If you can find evidence, any evidence whatsoever, and that evidence withstands scientific scrutiny, and it demands more than the explanation we currently have, then by all means, chase it down and find out more about how cognition works and expand our understanding of the universe. It simply doesn't look like we need anything more, in principle, to fully explain the nature of biological intelligence, and consciousness, and how brains work.
Mind as interdimensional radios, mystical souls and spirits, quantum tubules, none of that stuff has any basis in a ruthlessly rational and scientific review of the science of cognition.
That doesn't preclude souls and supernatural appearing phenomena or all manner of "other" things happening. There's simply no need to tie it in with cognition - neurotransmitters, biological networks, electrical activity, that's all you need.
This is the point: we don't know the delta between brains and AI, so any assumption is equivalent to my statement.
Right back at you, brochacho. I'm not the one making a positive claim here. You're the one who insists that it must work in a specific way because you can't conceive of any alternatives. I have never seen ANY evidence or study linking any existent AI or computer system to human cognition.
>There's no need or benefit for searching for unnecessarily complicated alternative interpretations.
Thanks, if it's alright with you I might borrow this argument next time somebody tries to tell me the world isn't flat.
>It simply doesn't look
That's one of those phrases you use when you're REALLY confident that you know what you're talking about.
> like we need anything more, in principle, to fully explain the nature of biological intelligence, and consciousness, and how brains work.
Please fully explain the nature of intelligence, consciousness, and how brains work.
>Mind as interdimensional radios, mystical souls and spirits, quantum tubules, none of that stuff has any basis in a ruthlessly rational and scientific review of the science of cognition.
Well, I definitely never said anything even remotely similar to that. If I didn't know any better I might call this argument a "hallucination".
That wasn't the assumption though, it was only that human brains work by some "non-magical" electro-chemical process which could be described as a mechanism, whether that mechanism followed the same principles of AI or not.
As for applying the word thinking to AI systems, it's already in common usage and this won't change. We don't have any other candidate words, and this one is the closest existing word for referencing a computational process which, one must admit, is in many ways (but definitely not in all ways) analogous to human thought.
Uh huh. Good luck getting Stockfish to do your math homework while Leela works on your next waifu.
LLMs play chess poorly. Chess engines do nothing else at all. That's kind of a big difference, wouldn't you say?
I don't know who's asserting that (other than Alan Turing, I guess); certainly not me. Humans are, if anything, easier to fool than our current crude AI models. Heck, ELIZA was enough to fool non-specialist humans.
In any case, nobody was "tricked" at the IMO. What happened there required legitimate reasoning abilities.
To their utility.
Not sure it matters for the question of "thinking?"; even if, for the debaters, "thinking" requires consciousness/qualia (and that varies), there's nothing more than guesses as to where that emerges from.
This is exactly the problem. Claims about AI are unfalsifiable, thus your various non-sequiturs about AI 'thinking'.
Conversely, if the one asserting something doesn't want to define it there is no useful conversation to be had. (as in: AI doesn't think - I won't tell you what I mean by think)
PS: Asking someone to falsify their own assertion doesn't seem a good strategy here.
PPS: Even if everything about the human brain can be emulated, that does not constitute progress for your argument, since now you'd have to assert that AI emulates the human brain perfectly before it is complete. There is no direct connection between "This AI does not think" to "The human brain can be fully emulated". Also the difference between "does not" and "can not" is big enough here that mangling them together is inappropriate.
Sometimes, because of the consequences of otherwise, the order gets reversed
Whatever you meant to say with "Sometimes, because of the consequences of otherwise, the order gets reversed" eludes me as well.
So we don't require, say, minorities or animals to prove they have souls, we just inherently assume they do and make laws around protecting them.
With regards to the topic: Does AI think? I don't know, but I also don't want to act upon knowing if it does (or doesn't for that matter). In other words, I don't care. The answer could go either way, but I'd rather say that I don't know (especially since "thinking" is not defined). That means that I can assume both and consider the consequences using some heuristic to decide which assumption is better given the action I want to justify doing or not doing. If you want me to believe an AI thinks, you have to prove it, if you want to justify an action you may assume whatever you deem most likely. And if you want to know if an AI thinks, then you literally can't assume it does; simple as that.
A lot of people seemingly haven't updated their priors after some of the more interesting results published lately, such as the performance of Google's and OpenAI's models at the 2025 Math Olympiad. Would you say that includes yourself?
If so, what do the models still have to prove, and under what conditions will you accept such proof?
For that matter, I have no opinion on whether AI thinks or not; I simply don't care. Therefore I also can't really answer your question about what more a model has to do to establish that it is thinking (does being able to use all major forms of reasoning constitute the capability of thought to you?). I can say, however, that any such proof would have to be made on a case-by-case basis, given my current understanding of how AI is designed.
Personally, I'm OK with reusing the word "thinking", but there are dogmatic stances on both sides. For example, lots of people decree that biology must, in the end, reduce to maths, since "what else could it be". The truth is we don't actually know whether it is possible, for any conceivable computational system, to emulate all essential aspects of human thought. There are serious arguments that it may not be, like those brought by Roger Penrose in "The Emperor's New Mind" and "Shadows of the Mind".
For one thing, yes, they can, obviously -- when's the last time you checked? -- and for another, there are plenty of humans who seemingly cannot.
The only real difference is that with an LLM, when the context is lost, so is the learning. That will obviously need to be addressed at some point.
> that they can't perform simple mathematical operations without access to external help (via tool calling)
But yet you are fine with humans requiring a calculator to perform similar tasks? Many humans are worse at basic arithmetic than an unaided transformer network. And, tellingly, we make the same kinds of errors.
> or that they have to expend so much more energy to do their magic (and yes, to me they are a bit magical), which makes some wonder if what these models do is a form of refined brute-force search, rather than ideating.
Well, of course, all they are doing is searching and curve-fitting. To me, the magical thing is that they have shown us, more or less undeniably, that that is all we do. Questions that have been asked for thousands of years have now been answered: there's nothing special about the human brain, except for the ability to form, consolidate, and consult long-term memories.
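"Curve-fitting" here is meant literally, and it is the secondary-school maths the article's title alludes to: pick parameters that minimise the error between a model's outputs and the data. A minimal least-squares sketch with toy data, unrelated to any real training run:

    import numpy as np

    # Toy data: noisy samples of y = 3x + 1.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 50)
    y = 3 * x + 1 + rng.normal(0, 2, size=x.shape)

    # Fit a line by least squares. Training a neural network minimises a loss
    # over data in the same spirit, just with far more parameters and a
    # nonlinear model.
    A = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"fitted: y = {slope:.2f} * x + {intercept:.2f}")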
That's post-training. The complaint I'm referring to is about the huge amounts of data (and energy) required during training - which is also a form of learning, after all. Sure, there are counter-arguments, for example pointing to the huge amount of non-textual data a child ingests, but these counter-arguments are not watertight themselves (for example, one can point out that we are discussing text-only tasks). The discussion can go on and on; my point was only that cogent arguments are indeed often presented, which you were denying above.
> there are plenty of humans who seemingly cannot
This particular defense of LLMs has always puzzled me. By this measure, simply because there are sufficiently impaired humans, AGI has already been achieved many decades ago.
> But yet you are fine with humans requiring a calculator to perform similar tasks
I'm talking about tasks like multiplying two 4-digit numbers (let's say 8-digit, just to be safe, for reasoning models), which 5th or 6th graders in the US are expected to be able to do with no problem - without using a calculator.
> To me, the magical thing is that they have shown us, more or less undeniably (Penrose notwithstanding), that that is all we do.
Or, to put it more tersely, they have shown you that that is all we do. Penrose, myself, and lots of others don't see it quite like that. (Feeling quite comfortable being classed in the same camp with the greatest living physicist, honestly. ;) To me what LLMs do is approximate one aspect of our minds. But I have a strong hunch that the rabbit hole goes much deeper, your assessment notwithstanding.
No, it is not. Read the paper.
> I'm talking about tasks like multiplying two 4-digit numbers (let's say 8-digit, just to be safe, for reasoning models), which 5th or 6th graders in the US are expected to be able to do with no problem - without using a calculator.
So am I. See, for example, Karpathy's discussion of native computation: https://youtu.be/7xTGNNLPyMI?si=Gckcmp2Sby4SlKje&t=6416 (starts at 1:46:56).
> Feeling quite comfortable being classed in the same camp with the greatest living physicist, honestly.
Not a great time for you to rest on your intellectual laurels. Same goes for Penrose.
Yes, it is. You seem to have misunderstood what I wrote. The critique I was pointing to concerns the number of examples and the amount of energy needed during model training, which is what the "learning" in "machine learning" actually refers to. The paper uses GPT-3, which had already absorbed all that data and electricity. And the "learning" the paper talks about is arguably not real learning, since none of the acquired skills persists beyond the end of the session.
> So am I.
This is easy to settle. Go check any frontier model and see how far they get with multiplying numbers with tool calling disabled.
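For anyone who wants to actually run that check, a minimal harness looks something like the sketch below. ask_model is a hypothetical stand-in for whatever chat interface is used, and the prompt is only a best-effort way to discourage tool use:

    import random

    def make_problems(n_problems=20, digits=4, seed=0):
        # Multiplication problems a 5th or 6th grader is expected to handle by hand.
        rng = random.Random(seed)
        lo, hi = 10 ** (digits - 1), 10 ** digits - 1
        return [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(n_problems)]

    def score_model(ask_model):
        # ask_model: hypothetical function taking a prompt string and returning text.
        problems = make_problems()
        correct = 0
        for a, b in problems:
            prompt = (f"Without using any tools or code, compute {a} * {b}. "
                      "Answer with the number only.")
            answer = ask_model(prompt)
            digits_only = "".join(ch for ch in answer if ch.isdigit())
            if digits_only and int(digits_only) == a * b:
                correct += 1
        return correct, len(problems)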
> Not a great time for you to rest on your intellectual laurels. Same goes for Penrose.
Neither am I resting, nor are there many laurels to rest on, at least compared to someone like Penrose. As for him, give the man a break, he's 94 years old and still sharp as a tack and intellectually productive. You're the one who's resting, imagining you've settled a question which is very much still open. Certainty is certainly intoxicating, so I understand where you're coming from, but claiming that anyone who doubts computationalism is not bringing any arguments to the table is patently absurd.
You don't even have to read the paper; just read to the end of the abstract. It will take less time than typing another long paragraph that's wrong. Nobody is arguing about energy consumption in this thread (but see below), and it's already been stipulated that lack of long-term memory is a key difference between AI and human cognition. Give them some time, sheesh. This stuff's brand new.
Of course, if you don't care that you're wrong, there's no way forward. It's certainly looking that way.
> This is easy to settle. Go check any frontier model and see how far they get with multiplying numbers with tool calling disabled.
I ran this session locally in Qwen3-Next-80B-A3B-Instruct-Q6_K: https://pastebin.com/G7Ewt5Tu
This is a 6-bit quantized version of a free model that is very far from frontier level. It traces its lineage through DeepSeek, which was likely trained by GPT 4.something. So 2 out of 4 isn't bad, really. My GPU's power consumption went up by about 40 watts while running these queries, a bit more than a human brain.
If I ask the hardest of those questions on Gemini 3, it gets the right answer but definitely struggles: https://pastebin.com/MuVy9cNw
> As for him, give the man a break, he's 94 years old and still sharp as a tack and intellectually productive.
(Shrug) As long as he chooses to contribute his views to public discourse, he's fair game for criticism.
You'd think it would unlock certain concepts for this class of people, but ironically, they seem unable to digest the information and update their context.
These theories are enormously successful, but they are also known to be (variously) incomplete, inconsistent, non-deterministic, open to multiple interpretations and only partially understood in their implications, with links between descriptions of things at different scales a particularly challenging and little understood topic. The more you learn about physics (and while I'm no physicist, I have a degree in the subject and have learned a great deal more since) the more you understand the limits of what we know.
Anybody who thinks there's no mystery to physics just doesn't know much about it. Anybody who confidently asserts as fact things like "the brain consists of protons, neutrons and electrons so it's impossible for it to do anything a computer can't do" is deducing things from their own ignorance.
But the accompanying XY plot showed samples that overlapped or at least were ambiguous. I immediately lost a lot of my interest in their approach, because traffic lights by design are very clearly red, or green. There aren't mauve or taupe lights that the local populace laughs at and says, "yes, that's mostly red."
I like the idea of studying math by using ML examples. I'm guessing this is a first step and future education will have better examples to learn from.
I suspect you feel this because you are observing the output of a very sophisticated image-processing pipeline in your own head. When you are dealing with raw matrices of RGB values it all becomes a lot fuzzier, especially when you encounter different illuminations, exposures, and noise in the crop of the traffic light. I'm not saying it is some intractably hard machine-vision problem, because it is not. But there is some variety and fuzziness in the raw sensor measurements.
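To make that concrete, here is a small numpy sketch that has nothing to do with the article's actual data: it simulates crops of red and green lights under varying illumination and sensor noise, reduces each to a mean RGB triple, and applies a naive hand-picked rule. Even for something "clearly red or green", the raw numbers are noisy enough that the hand-tuned rule typically misclassifies a handful of samples, which is why fitting a boundary from data is useful at all:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_patches(base_rgb, n=200):
        # Simulate crops of one light: vary overall brightness and add sensor noise.
        brightness = rng.uniform(0.3, 1.2, size=(n, 1))  # illumination / exposure
        noise = rng.normal(0, 30, size=(n, 3))           # sensor and compression noise
        return np.clip(base_rgb * brightness + noise, 0, 255)

    red = sample_patches(np.array([200.0, 60.0, 50.0]))
    green = sample_patches(np.array([60.0, 190.0, 90.0]))

    # Naive hand-picked rule: "more red than green means a red light".
    def naive_rule(patch):
        return "red" if patch[0] > patch[1] else "green"

    errors = sum(naive_rule(p) != "red" for p in red) + \
             sum(naive_rule(p) != "green" for p in green)
    print(f"naive rule errors: {errors} / {len(red) + len(green)}")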
But they are two different things with overlapping qualities.
It's like MDMA and falling in love. They have many overlapping qualities, but no one would claim one is the other.
This is completely idiotic. Do these people actually believe they have shown it can't be actual thought simply because it can be described by math?
Whatever that something it actually does in the real, physical world is, it produces the cogito in "cogito, ergo sum", and I doubt you can get it just by describing what all the subatomic particles are doing, any more than a hurricane simulated on a computer or with pen and paper can knock your house down, no matter how perfectly it is simulated.
I'm not. You might want me to be, but I'm very, very much not.
Of course a GPU involves things happening. No amount of using it to describe a brain operating gets you an operating brain, though. It's not doing what a brain does. It's describing it.
(I think this is actually all somewhat tangential to whether LLMs "can think" or whatever, though—but the "well of course they might think because if we could perfectly describe an operating brain, that would also be thinking" line of argument often comes up, and I think it's about as wrong-headed as a thing can possibly be, a kind of deep "confusing the map for the territory" error; see also comments floating around this thread offhandedly claiming that the brain "is just physics"—like, what? That's the cart leading the horse! No! Dead wrong!)
An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
An LLM is always a description. An LLM operating on a computer is identical to a description of it operating on paper (if much faster).
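Whatever one makes of the metaphysics, the "pen and paper" point is concrete: each step an LLM performs is ordinary arithmetic that could, in principle, be done by hand. A toy sketch of the last step of next-token prediction, with made-up numbers and a five-word vocabulary that have nothing to do with any real model's weights:

    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]

    # Made-up hidden state and output weights; a real model just has far more numbers.
    hidden = np.array([0.2, -1.0, 0.7])
    W_out = np.array([[ 0.5, -0.1,  0.3],
                      [ 1.2,  0.4, -0.6],
                      [-0.3,  0.9,  0.8],
                      [ 0.1, -0.5,  0.2],
                      [ 0.7,  0.2, -0.4]])

    logits = W_out @ hidden                        # one matrix-vector multiply
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: exponentiate and normalise

    for word, p in zip(vocab, probs):
        print(f"{word}: {p:.2f}")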
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.
If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer off the top of my head.
I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.
Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds are causally isolated from one another. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.
I'm not against this conclusion ( https://en.wikipedia.org/wiki/Philosophical_zombie ) but it doesn't seem to be compatible with what most people believe in general.
Determining what is real by judging causal scope is generally successful but it misleads in the case of consciousness.
If I make a button that lights the candle, and another button that puts it out, and I press those buttons, then the virtual candle is causally connected to our physical world.
But obviously the candle is still considered virtual.
Maybe a candle is not as illustrative, but let's say we're talking about a very realistic and immersive MMORPG. We directly do stuff in the game, and with the right VR hardware it might even feel real, but we call it a virtual reality anyway. Why? And if there's an AI NPC, we say that the NPC's body is virtual -- but when we talk about the AI's intelligence (which at this point is the only AI we know about -- simulated intelligence in computers) why do we not automatically think of this intelligence as virtual in the same way as a virtual candle or a virtual NPC's body?
Real is about an object having all of the essential properties for that concept. If we take it as essential that candles can burn our hand, then the virtual candle isn't real. But it is not essential to consciousness that it is not virtual.
A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.
A candle in Canada can't melt wax in Mexico, and a real candle can't melt simulated wax. If you want to differentiate two things along one axis, you can't just point out differences that may or may not have any effect on that axis. You have to establish a causal link before the differences have any meaning. To my knowledge, intelligence/consciousness/experience doesn't have a causal link with anything.
We know our brains cause consciousness the way we knew in 1500 that being on a boat for too long causes scurvy. Maybe the boat and the ocean matter, or maybe they don't.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
It might if the simulation includes humans observing the candle.
Build a simulation of creatures that evolve from simple structures (think RNA, DNA).
Now, if in this simulation, after many many iterations, the creatures start talking about consciousness, what does that tell us?
That's an assumption, though. A plausible assumption, but still an assumption.
We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you could make a stronger statement than "you could probably simulate..."
Yes, or what about leprechauns?
It's been kind of discussed to oblivion over the last century; it's interesting that people don't seem to be aware of the existing literature and repeat the same arguments (not saying anyone is wrong).
The opinions are exactly the same as the opinions about LLMs.
The argument that was actually made was "LLMs do not think".
Everything I've seen says "LLMs cannot think like brains" is not dependent on an argument that "no computer can think like a brain", but rather on an understanding of just what LLMs are—and what they are not.
Would you mind expanding on this? On a basic reading, it seems you're implying magic exists.
84 more comments available on Hacker News