Minds, Brains, and Programs (1980) [pdf]
Key topics
The discussion revolves around John Searle's 1980 paper 'Minds, brains, and programs', which argues against "strong AI", the claim that running the right program is by itself sufficient for understanding, and the community shares various perspectives on the Chinese Room thought experiment.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: 9d after posting
Peak period: 17 comments (Day 9)
Avg / period: 11.7
Based on 35 loaded comments
Key moments
- 01 Story posted: Oct 13, 2025 at 1:01 AM EDT (3 months ago)
- 02 First comment: Oct 21, 2025 at 6:36 PM EDT (9d after posting)
- 03 Peak activity: 17 comments in Day 9 (hottest window of the conversation)
- 04 Latest activity: Oct 23, 2025 at 1:46 PM EDT (3 months ago)
Just Googling the author: he died last month, sadly.
> let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
In other words, even if you put the man in place of everything, there's still a gap between mechanically manipulating symbols and actual understanding.
I think about astronomers and the things they know about stars that are impossible to experience even from afar, like sizes and temperatures. No one has ever seen a black hole with their own eyes, but they read a lot about it, collected data, made calculations, and now they can have meaningful discussions with their peers and come to new conclusions from "processing and correlating" new data with all this information in their minds. That's "actual understanding" to me.
One could say they are experiencing this information exchange, but I'd argue we can say the same about the translator in the Chinese room. He does not have the same understanding of Chinese as we humans do, associating words with memories and feelings and other human experiences, but he does know that a given symbol evokes the use of other specific symbols. Some sequences require the use of lots of symbols, some are somewhat ambiguous, and some require him to fetch a symbol that he hasn't used in a long time and maybe doesn't even remember where he stored it. To me this looks a lot like the processes that happen inside our minds, except that his form of "understanding", and the experiences it evokes in him, are completely alien to us. Just like an AGI's would possibly be.
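As an illustration only, the kind of purely syntactic lookup this comment describes might be sketched in Python as below; the rule table, symbols, and fallback are invented for the example, not taken from Searle's paper:

```python
# Toy "rulebook": maps an incoming symbol sequence to an outgoing one.
# The operator follows these rules without knowing what any symbol means.
RULEBOOK = {
    ("你", "好"): ("你", "好", "吗"),     # hypothetical rule
    ("谢", "谢"): ("不", "客", "气"),     # hypothetical rule
}

def room_reply(symbols):
    """Return whatever response the rulebook prescribes; fall back to a fixed marker."""
    return RULEBOOK.get(tuple(symbols), ("请", "再", "说"))

print("".join(room_reply(["你", "好"])))  # pure symbol shuffling, no semantics involved
```

However large the table grows, nothing in this lookup involves knowing what the symbols mean, which is exactly the gap the thought experiment trades on.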
I'm not comfortable looking at the translator's point of view as if he's analogous to a mind. To me he's the correlator, the process inside our minds that makes these associations. This is not us, it's not under our conscious control; from our perspective it just happens, and we know today it's a result of our neural networks. We emerge somehow from this process. Similarly, it seems to me that the experience of knowing Chinese belongs to the whole room, not the guy handling symbols. It's a weird conclusion; I still don't know what to think of it, though...
The process of fetching symbols, as you put it, doesn't feel at all like what I do when somebody asks me what it was like to listen to the Beatles for the first time and I form a description.
A human simulates a Turing machine to do... something. The human is acting mechanically. So what?
If there's any meaning, it exists outside the machine and the human simulating it.
You need another human to understand the results.
All Searle has done is distract everyone from whatever is going on inside that other human.
(Also, IMO, the question of whether the program understands Chinese mainly depends on whether you would describe an unconscious person as understanding anything)
I also can't help but think of this sketch when this topic comes up (even though, importantly, it is not quite the same thing): https://www.youtube.com/watch?v=6vgoEhsJORU
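For concreteness, the "human simulating a Turing machine" above is nothing more than mechanically applying a transition table; a minimal Python sketch follows, where the bit-flipping machine is an invented toy rather than anything from Searle or the thread:

```python
# A minimal Turing machine simulator. The person in the thought experiment
# plays the role of this loop, by hand, with pencil and paper.
RULES = {
    # (state, symbol) -> (new_symbol, head_move, new_state)
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        if head >= len(tape):
            tape.append("_")
        new_symbol, move, state = RULES[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape)

print(run("0110"))  # -> 1001_  (every rule applied without any "understanding" of bits)
```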
As with pie, so with 'understanding'. A system which understands can be expected to not contain anything which understands. So if you find a system which contains nothing which understands, this tells you nothing about whether the system understands[0].
Somehow both you and Searle have managed to find this simple fact about pie to be 'the grip of an ideology' and 'metaphysical'. But it really isn't.
[0] And vice-versa, as in Searle's pointlessly overcomplicated example of a system which understands Chinese containing one which doesn't containing one which does.
If a computer could have an intelligent conversation, then a person could manually execute the same program to the same effect, and since that person could do so without understanding the conversation, computers aren't sentient.
Analogously, some day I might be on life support. The life support machines won't understand what I'm saying. Therefore I won't mean it.
I don't get any of these anthropocentric arguments; meaning predates humanity and consciousness. That's what DNA is. Meaning primitives are just state changes, the same thing as physical primitives.
Syntactic meaning exists even without an interpreter, in the same way physical "rock" structures existed before there were observers; it just picks up causal leverage when there is one.
Only a stateless universe would have no meaning. Nothing doesn't exist, meaninglessness doesn't exist; these are just abstractions we've invented.
Call it the logos if that's what you need, call it field perturbations; reality has just been traveling up the meaning complexity chain, but complex meaning is just a structural arrangement of meaning simples.
Stars emit photons, humans emit complex meaning. Maybe we'll be part of the causal chain that solves entropy; until then we are the only empirically observed, random-walk write heads of maximally complex meaning in the universe.
We are super rare and special as far as we've empirically observed; that doesn't mean we get our own weird metaphysical (if that even exists) carve-out.
Any computer has far less access to the meaning load we experience, since we don't compute thoughts: thoughts aren't about things, there is no content to thoughts, and there are no references, representations, symbols, grammars, or words in brains.
Searle is only at the beginning of this refutation of computers, we're far more along now.
It's just actions, syntax and space. Meaning is both an illusion and fantastically exponential. That contradiction has to be continually made correlational.
This isn't woo, this is just empirical observation, and no one is capable of credibly denying state change.
You have to look at mental events and grasp not only what they are, both material and process, but how they come to happen; they're both prior and post-hoc, etc.
I study meaning in the brain. We are not sure it exists, and the meaning we see in events and tasks is at a massive load. Any one event can have 100s, even 1000s, of meaningful changes to self, environment and others. That's contradictory. Searle is not even scratching the surface of the problem.
https://arxiv.org/vc/arxiv/papers/1811/1811.06825v2.pdf
https://www.frontiersin.org/journals/psychology/articles/10....
https://pubmed.ncbi.nlm.nih.gov/39282373/
https://aeon.co/essays/your-brain-does-not-process-informati...
If that's your position, that's where we disagree: state changes in isolation and state changes in sequence are all meaning.
State change is the primitive of meaning, starting at the fermion. There is no such thing as meaninglessness, just uncomplex, non-cohered meaning primitives; the moment they start to be associated through natural processes, you get increasingly complex meaning sequences and structures through coherence.
We move up the meaning ladder: high-entropy meaning (RNG) is decohered primitives, low-entropy meaning is maximally cohered meaning like human speech or DNA.
Meaning interactions (quantum field interactions) create particles and information. Meaning is upstream, not downstream.
Now, people hate when you point out that semantic/structural meaning is meaning, but it's the only non-fuzzy definition I've ever seen, and with complexity measures we can reproducibly examine emissions objectively for semantic complexity across all emitter types.
The reason everyone has such crappy and contradictory interpretations of meaning is that they are trying to turn a primitive into something derived or emergent, and it simply isn't; you can observe the chain from low to high complexity without having to look at human structures.
This meaning predates consciousness; even if you are a dualist, you have to recognize that DNA and RNA bootstrap each "brain receiver" structure.
Meaning exists without an interpreter. The reason so many people get caught up in the definition is that they can't let go of anthropocentric views of meaning; meaning comes before consciousness, logic, and rationality, in the same way the atom comes before the arrangement of atoms rock-wise.
Even RNG (the RNG emissions from stars, let's say), which is maximally decohered meaning, has been made meaningful to the point of extreme utility by humans via encryption.
Now, you may be a dualist, and that's fine; the physical reality of state change doesn't preclude dualism. It sets a physical empirical floor, not an interpretive ceiling.
Even some very odd complaints about human interpretation, like still images being interpreted as movement somehow being a problem, dissolve here: in the viewing frame you are 100% seeing state changes, and all you need for meaning are state changes. Each frame is still, but the photon stream carried to our eyeballs is varying, and that's all you need.
Anyway, you make meaning. You are a unique write head in the generation of meaning. We can't calculate ex ante how important you are for our causal survival, because the future stretches out for an indeterminate time and we haven't yet ruled out that entropy can be reversed in some sense, so you are an important meaning generator that needs to be preserved; our very species, the very universe, may depend on the meaning you create in the network. (Is reversing entropy even locally likely? I doubt it, but we haven't ruled it out yet; it's still early days.)
Technically it can't be, because the language problem is post-hoc.
You're an engineer so you have a synthetic view of meaning, but it has nothing to do with intelligence. I'd study how you gained that view of meaning.
A meaning ladder is arbitrary; quantum field dynamics can easily be perceived as Darwinism, and human speech isn't meaningful: it's external and arbitrary and suffers from the conduit-metaphor paradox. The meaning is again derived from the actual tasks; scientifically, no speech act ever coheres the exact same mental state or action-syntax.
Sorry, you're using a synthetic notion of meaning that's post-hoc. It doesn't hold in terms of intelligence. Not even Barbour (who sees storytelling in particles) et al. would assign meaning to fermions or other state changes. It's good science fiction, but it's not science.
In neuroscience we call isolated upstream meaning "wax fruit." You can see it is fruit, but bite into it and the semantics are tasteless (in many dimensions).
"For the purposes that Searle has in mind, it is difficult to maintain a useful distinction between programs that multiply and programs that simulate programs that multiply. If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought."
I also find Searle's rebuttal to the systems reply to be unconvincing:
> If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
Perhaps the overall argument is both more and less convincing in the age of LLMs, which are very good at translation and other tasks but still make seemingly obvious mistakes. I wonder (though I doubt) whether Searle might have been convinced if, by following the instructions, the operator of the room ended up creating, among other recognizable and tangible artifacts, an accurate scale model of the city of Beijing and an account of its history, and referred to both in answering questions. (I might call this the "world model" reply.)
In any case, I'm sad that Prof. Searle is no longer with us to argue with.
https://news.ycombinator.com/item?id=45563627
Nilsson points out: if the vessel moves as if it’s cutting through waves, most sailors would say it’s sailing. Even Searle’s “deep thought” may just be a convincing simulation, but the wake is real enough.
The systems reply? Claiming the ship can’t navigate because the captain doesn’t understand the ropes feels like denying the ocean exists while staring at the harbor.
In the age of LLMs, the seas are charted better than ever, yet storms of obvious mistakes and rows of confusion, misguided and misled folk still appear. Perhaps a model city of Beijing as old town, new streets, and maps can sway Searle readers in the 21st century!
Alas, the old captain has sailed into the horizon, leaving the debate with the currents.
It would be like making every STEM major take a religion course.
Apologies if I'm misreading you here.
146 points, 216 comments
https://news.ycombinator.com/item?id=45563627
So, many people, including Searle, wanted to push back on reading too much into what the program was doing. This was a completely justified reaction -- ELIZA simply lacked the complexity which is presumably required to implement anything resembling flexible understanding of conversation.
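For context, ELIZA-style "conversation" amounts to keyword matching and template reflection. A minimal sketch in Python is below; the two rules are invented stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# A couple of ELIZA-style rules: match a keyword pattern, reflect the user's words back.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."

print(respond("I am worried about thinking machines"))
# -> Why do you say you are worried about thinking machines?
```

Even a much larger rule set of this shape only rearranges the user's own words, which is why reading "understanding" into ELIZA felt like overreach.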
That was the setting. In his original (in)famous article, Searle started with a great question, which went something like: "What is required for a machine to understand anything?"
Unfortunately, instead of trying to sketch out what might be required for understanding, and what kinds of machines would have such facilities (which of course is very hard even now), he set about dazzling the readers with a "shocking" but rather irrelevant story. This is how stage magicians operate -- they distract a member of the audience with some glaring nonsense, while stuffing their pockets with pigeons and handkerchiefs. That is what Searle did in his article -- "if a Turing Machine were implemented by a living person, the person would not understand a bit of the program that they were running! Oh my God! So shocking!" And yet this distracted just about everyone from the original question. Even now philosophers have two hundred different types of answers to Searle's article!
Although one could and should have explained that ELIZA could not "think" or "understand" -- which was Searle's original motivation -- this of course doesn't imply any kind of fundamental principle that no machine could ever think or understand; after all, many people agree that biological brains are extremely complex, but are nevertheless "machines" governed by ordinary physics.
Searle himself was rather evasive regarding what exactly he wanted to say in this regard -- from what I understand, his position has evolved considerably over the years in response to criticism, but he avoided stating this clearly. In later years he was willing to admit that brains were machines, and that such machines could think and understand, but somehow he still believed that man-made computers could never implement a virtual brain.
38 more comments available on Hacker News