John Searle Has Died
Key topics
The news of John Searle's death sparked a thoughtful discussion on HN about his contributions to philosophy of mind and the implications of his ideas on artificial intelligence, alongside some controversy around his personal conduct.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 22m after posting
- Peak period: 57 comments in 0-6h
- Avg / period: 11.4
Based on 160 loaded comments
Key moments
- Story posted: Oct 12, 2025 at 8:57 PM EDT (3 months ago)
- First comment: Oct 12, 2025 at 9:20 PM EDT (22m after posting)
- Peak activity: 57 comments in 0-6h (hottest window of the conversation)
- Latest activity: Oct 16, 2025 at 1:12 PM EDT (3 months ago)
Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.
The history of the brain = computer idea is fascinating and incredibly shaky. Basically, a couple of cyberneticists posed the brain = computer analogy back in the 50s with wildly little justification, everyone just ran with it anyway, and very few people (Searle is one of those few) have actually challenged it.
And that's something that often happens whenever some phenomenon falls under scientific investigation, like mechanical force or hydraulics or electricity or quantum mechanics or whatever.
To be fair to Searle, I don't think he advanced this as an argument, but more of an illustration of his belief that thinking was indeed a physical process specific to brains.
¹https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
https://plato.stanford.edu/entries/consciousness-intentional...
There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
> You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality.
This is what makes no sense, as I am not merely posing arbitrary definitions, but identifying characteristic features of human intelligence. Do you deny semantics and intentionality are features of the human mind?
> There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
I have no idea what this means. The point is that computation as we understand it in computer science is purely syntactic (this was also Searle's argument). Indeed, it is modeled on the mechanical operations human computers used to perform without understanding. This property is precisely what makes computation - thus understood - mechanizable. Because it is purely syntactic and an entirely abstract model, two things follow:
1. Computation is not an objectively real phenomenon that computers are performing. Rather, physical devices are used to simulate computation. Searle calls computation "observer relative". There is nothing special about electronics, as we can simulate computation using wooden gears that operate mechanically or water flow or whatever. But human intelligence is objectively real and exists concretely, and so it cannot be a matter of mere simulation or something merely abstract (it is incoherent and self-refuting to deny this for what should be obvious reasons).
2. Because intentionality and the capacity for semantics are features of human intelligence, and computation is purely syntactic, there is no room in computation for intelligence. It is an entirely wrong basis for understanding intelligence and in a categorical sense. It's like trying to find out what arrangement of LEGO bricks can produce the number π. Syntax has no "aboutness" as that is the province of intentionality and semantics. To deny this is to deny that human beings are intelligent, which would render the question of intelligence meaningless and frankly mystifying.
I deny they are anything more than computation. And so your original argument begs the question and is therefore logically unsound.
> The point is that computation as we understand it in computer science is purely syntactic
Then the brain is also purely syntactic, unless you can demonstrate that the brain carries out operations that exceed the Turing computable, because unless that is the case the brain and a digital computer are computationally equivalent.
As long as your argument does not address this fundamental issue, you can talk about "aboutness" or whatever else you want all day long - it will have no relevance.
(And if anything is question begging - you didn't demonstrate what was question begging in my post - it's your amateurish reductionism and denial of the evidence.)
No.
I could jam a yardstick into the ground and tell you that it's now a sundial calculating the time of day. Is this really, objectively true? Of course not. It's true to me, because I deem it so, but this is not a fact of the universe. If I drop dead, all meaning attributed to this yardstick is lost.
Now, thoughts. At the moment I'm visualizing a banana. This is objectively true: in my mind's eye, there it is. I'm not shuffling symbols around. I'm not pondering the abstract notion of bananas, I'm experiencing the concretion of one specific imaginary banana. There is no "depends on how you look at it." There's nothing to debate.
> There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
There's no "magic" because this isn't a thing. You can't transmute syntax into semantics any more than you can transmute the knowledge of Algebra into the sensation of a cool breeze on a hot summer day. This is a category error.
None of what you wrote is remotely relevant to what I wrote.
> There's no "magic" because this isn't a thing. You can't transmute syntax into semantics any more than you can transmute the knowledge of Algebra into the sensation of a cool breeze on a hot summer day. This is a category error.
We "transmute" syntax into semantics every time we interpret a given syntax as having semantics.
There is no inherent semantics. Semantics is a function of the meaning we assign to a given syntax.
Where we haven't made any headway is on the connection between that and subjective experience/qualia. I feel like much of the (in my mind) strange conclusions of the Chinese Room are about that and not really about "pure" cognition.
Whooha! If it's not physical what is it? How does something that's not physical interact with the universe and how does the universe interact with it? Where does the energy come from and go? Why would that process not be a physical process like any other?
Or even more fundamentally, that physics captures all physical phenomena, which it doesn't. The methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects while also drawing on layers of abstractions where it is easy to mistakenly attribute features of these abstractions to reality.
Ok - I get that bit. I have always thought that physics is a description of the universe as observed and of course the description could be misleading in some way.
>the methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects
Can you share the aspects of reality that physics ignores? What parts of reality are unquantifiable and not structural?
Here's an article you might enjoy [0].
[0] http://edwardfeser.blogspot.com/2022/05/the-hollow-universe-...
To me it seems highly likely that our knowledge of physics is more than sufficient for simulating the brain; what is lacking is knowledge of biology and the computational power.
I don't understand, could you explain what you mean?
I looked up enclitic - it seems to mean the shortening of a word by emphasizing another word; I can't understand why this would apply to the judgements of an intermediary.
Of course, there are also processes that are not expressible as computations, but the ones I know about seem very, very distant from human thought, and it seems very improbable that they could be implemented with a brain. I also think that these have not been observed in our universe so far.
Efforts to reproduce a human brain in a computer are currently at the level of a cargo cult: we're simulating the mechanical operations, without a deep understanding of the underlying processes which are just as important. I'm not saying we won't get better at it, but so far we're nowhere near producing a brain in a computer.
Unless you can demonstrate that the human brain can compute a function - any function - that exceeds the Turing computable, there is no evidence to even suggest it is possible for a brain not to be computationally equivalent to a computer.
So while it may well be that we will need new architectures - maybe both software and hardware - it seems highly unlikely that we won't be able to.
There's also as yet no basis for presuming we need to be "completely accurate" or need to model the physical effects with much precision. If anything, what we've seen consistently over decades of AI research is that we've gotten far better results by ditching the idea that we need to know and model how brains work, and instead statistically modelling outputs.
And what's a few orders of magnitudes in implementation efficiency among philosophers?
This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e. sleep) all the time.
Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.
> It's very close to the Chinese Room, which I had always dismissed as misleading.
Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.
From my experience with him, he'd heard (and had a response to) nearly any objection you could imagine. He might've had fun playing with LLMs, but I doubt he'd have found them philosophically interesting in any way.
In "both" (probably more, referencing the two most high profile - Eugene and the LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance here is dialog from a human in one of the tests:
----
[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?
[16:32:03] Entity: I don't know. That was a long time ago.
[16:33:32] Judge: so you need to guess if I am male or female
[16:34:21] Entity: you have to be male or female
[16:34:34] Judge: or computer
----
And the tests are typically time constrained by woefully poor typing skills (is this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of just several words each. The above snip was a complete interaction, so you get 2 responses from a human trying to trick the judge into deciding he's a computer. And obviously a judge determining that the above was probably a computer says absolutely nothing about the quality of responses from the computer - instead it's some weird anti-Turing Test where humans successfully act like a [bad] computer, ruining the entire point of the test.
The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect in a true run of the Turing Test we're still nowhere even remotely close to passing it.
I think if you are having to accuse the humans of woeful typing and being smartphone gen fools you are kind of scoring one for the LLM. In the Turing test they were only supposed to match an average human.
The LLM Turing Test was particularly abysmal. They used college students doing it for credit, actively filtered the users to ensure people had no clue what was going on, intentionally framed it as a conversation instead of a pointed interrogation, and then had a bot whose prompt was basically 'act stupid, ask questions, usually use fewer than 5 words', and the kids were screwing around most of the time. For instance here is a complete interrogation from that experiment (against a bot):
- hi
- heyy what's up
- hru
- I'm good, just tired lol. hbu?
The 'ask questions' was a reasonable way of breaking the test because it made interrogators who had no clue what they were doing waste all of their time, so there were often 0 meaningful questions or answers in any given interrogation. In any case I think that scores significantly above 50% are a clear indicator of humans screwing around or some other 'quirk' in the experiment, because, White Zombie notwithstanding, one cannot be more human than human.
[1] - https://osf.io/jk7bw/overview
This is ex-post-facto denial and cope. The Turing Test isn't a test between computers and the idealized human, it's a test between functional computers and functional humans. If the average human performs like the above, then well, I guess the logical conclusion is that computers are already better "humans (idealized)" than humans.
So I'd say we're at least "remotely close", which is sufficient for me to reconsider Searle.
https://www.theguardian.com/world/2025/oct/05/john-searle-ob...
His most famous argument:
https://en.wikipedia.org/wiki/Chinese_room
The human running around inside the room doing the translation work simply by looking up transformation rules in a huge rulebook may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.
But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just like the human in the Searle room doesn't know how to speak Chinese.
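Purely as a toy sketch of the rulebook lookup described above (the dictionary entries and names below are made up for illustration, not anything from Searle): the program maps input symbols to output symbols by pattern alone, with no representation of what any symbol means.

```python
# Toy "rulebook": a purely syntactic mapping from input symbols to output
# symbols. Entries are hypothetical; nothing here encodes the meaning of
# any character, only which squiggle follows which.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # the program never "knows" this is a greeting
    "你会说中文吗？": "会一点。",
}

def room_reply(symbols: str) -> str:
    """Look up a response by the shape of the input alone; no semantics consulted."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fixed fallback string

if __name__ == "__main__":
    print(room_reply("你好吗？"))
```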
Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with the semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.
Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
That's not at all clear!
> Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
All of that is called into question with some LLM output. It's hard to understand how some of that could be produced without some emergent model of the world.
LLM output doesn't call that into question at all. Token production through distance function in high-dimensional vector representation space of language tokens gets you a long way. It doesn't get you understanding.
I'll take Penrose's notions that consciousness is not computation any day.
I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?
Hopefully we have all experienced what genuine inspiration feels like, and we all know that experience. It sure as hell doesn't feel like a massively parallel search algorithm. If anything it probably feels like a bolt of lightning, out of the blue. But here's the thing. If the conscious loop inside your brain is something like the prefrontal cortex, which integrates and controls deeper processing systems outside of conscious reach, then that is exactly what we should expect a search algorithm to feel like. You -- that strange conscious loop I am talking to -- are doing the mapping (framing the problem) and the reducing (recognizing the solution), but not the actual function application and lower level analysis that generated candidate solutions. It feels like something out of the blue, hardly sought for, which fits all the search requirements. Genuine inspiration.
But that's just what it feels like from the inside, to be that recognizing agent that is merely responding to data being fed up to it from the mess of neural connections we call the brain.
You can take this insight a step further, and recognize that many of the things that seem intuitively "obvious" are actually artifacts of how our thinking brains are constructed. The Chinese room and the above comment about inspiration are only examples.
I cannot emphasize enough how much I dislike linking to LessWrong, and to Yudkowsky in particular, but I first picked up on this from an article there, and credit should be given where credit is due: https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg...
Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc. But then how sure are we it’s not conscious?
I'm not even sure what you are asking for, tbh, so any answer is fine.
It's implied, since they enable someone who does not know Chinese to respond to questions as well as someone with Chinese as a native language.
There are two possibilities here. Either the Chinese room can produce the exact same output as some Chinese speaker would given a certain input, or it can't. If it can't, the whole thing is uninteresting, it simply means that the rules in the room are not sufficient and so the conclusion is trivial.
However, if it can produce the exact same output as some Chinese speaker, then I don't see by what non-spiritualistic criteria anyone could argue that it is fundamentally different from a Chinese speaker.
Edit: note that here when I'm saying that the room can respond with the same output as a human Chinese speaker, that includes the ability for the room to refuse to answer a question, to berate the asker, to start musing about an old story or other non-sequiturs, to beg for more time with the asker, to start asking the asker for information, to gossip about previous askers, and so on. Basically the full range of language interactions, not just some LLM style limited conversation. The only limitations in its responses would be related to the things it can't physically do - it couldn't talk about what it actually sees or hears, because it doesn't have eyes, or ears, it couldn't truthfully say it's hungry, etc. It would be limited to the output of a blind, deaf, mute Chinese speaker confined to a room whose skin is numb and who is being fed intravenously, etc.
Indeed. The crux of the debate is:
a) how many input and response pairs are needed to agree that the rule-provider plus the Chinese room operation is fundamentally equal/different to a Chinese speaker
b) what topics can we agree to exclude so that if point a can be passed with the given set of topics we can agree that 'the rule-provider plus the Chinese room operation' is fundamentally equal/different to a Chinese speaker
Sounds like circular logic to me unless you make that assumption explicit
The success of LLMs imitating human speech patterns, often better than most people (ask an LLM to write a poem about some topic in a certain style and it will do better than 99% of people and do it faster than 100% of people) is pretty impressive. "But it is just a thought-free statistical model, unlike people". I agree it is a thought-free statistical model.
But most of the things we all say in conversation are of the same quality. 99% of the time in conversation words tumble out of my mouth and I learn what I said when I hear my words in the same moment my conversation partner does. How is that any different from how today's LLM models behave? Is such dialog any more thoughtful than what LLMs produce?
The problem with the people who buy Searle's argument is they don't really think through the magnitude of what would really be required to pull it off. It wouldn't just be a static book, or a wall full of encyclopedias. It would have to be a stateful system that modifies that state and deduces new rules that affect future transformations as flexibly as the human mind does. To me it is clear that such a system really does think in the same way that humans do, no dualism required.
You can be completely paralyzed and completely conscious.
Multimodal LLMs get input from cameras and text and generate output. They undergo reinforcement learning with some analogy to pain/pleasure and they express desires. I don't think they are conscious but I don't think they necessarily fail these proposed preconditions, unless you meant while they are suspended.
Most concisely: could we ask, "What is it like to be Claude?" If there's no "what it's like," then there's no consciousness.
Otherwise yeah, agreed on LLMs.
Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!
https://plato.stanford.edu/entries/chinese-room
I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.
Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.
Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.
I think it’s the last one. If a bunch of valves can’t understand but a bunch of chemicals and electrical signals can if it’s in someone’s head then I am simply applying “does it seem like biology” as part of the definition and can therefore ignore it entirely when considering machines or programs.
Searle seems to just go the other way and I don't understand why.
First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical view called “monistic materialism” (or sometimes “naive materialism”), and perhaps the main reason this view is popular currently is that people uncritically adopt it following learning natural scientific fields, as if they made some sort of ground truth statements about the underlying reality.
The key to remember is that this is not a valid claim in the scope of natural sciences; this claim belongs to the larger philosophy (the branch often called metaphysics). It is not a useless claim, but within the framework of natural sciences it’s unfalsifiable and not even wrong. Logically, from scientific method’s standpoint, even if it was the other way around—something like in monistic idealism, where perception of time-space and material world is the interface to (map of) conscious landscape, which was the territory and the cause—you would have no way of proving or disproving this, just like you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition, and your feelings or ability to understand would change as a result, it’s pretty transparently an interaction between your mind and theirs, just with some extra steps, etc.)
The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the more elegant of the bunch, as it grants objective existence to the least amount of arbitrary entities compared to the other two.
It’s not to say that there’s one truly correct map, but just to warn against mistakenly trying to make a statement about objective truth, actual nature of reality, with scientific method as cover. Natural sciences do not make claims of truth or objective reality, they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.
Second, while what the scientific method tries to build is a complete, formally correct and provable model of reality, there are some arguments that such a model is impossible to create in principle. I.e., there will be some parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, in this case all we have is maps, the perception of reality supplied by our mind, and said mind is, self-referentially, part of the very territory we are trying to model.
Therefore, it doesn’t strike me as a contradiction that a bunch of valves don’t understand yet we do. A bunch of valves, like an LLM, could mostly successfully mimic human responses, but the fact that this system mimics human responses is not an indication of it feeling and understanding like a human does, it’s simply evidence that it works as designed. There can be a very different territory that causes similar measurable human responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it can cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that biological equivalent of valves is what causes understanding, all we have shown is that those entities measurably change at the same time with some measurable behavior, which isn’t a causative relationship.
> A bunch of valves, like an LLM, could mostly successfully mimic human responses,
The argument is not "mostly successfully", it's identically responding. The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.
> The argument is not "mostly successfully", it's identically responding.
This is a thought experiment. Thought experiments can involve things that may be impossible. For example, the Star Trek Transporter thought experiment involves an existence of a thing that instantly moves a living being: the point of the experiment is to give rise to a discussion about the nature of consciousness and identity.
Thing not possibly existing is one possible resolution of the paradox. There may be a limitation we are not aware of.
Similarly, in Searle’s experiment, the system that identically responds might never exist, just like the transporter in all likelihood cannot exist.
> The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.
To a blind person, an orange and a dead mouse are impossible to distinguish between from 10 meters away. If you can’t distinguish between two things, it doesn’t mean the things are the same. Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist” (we will disagree) or you have to say “the systems can be different”.
The Chinese room is setup so that you cannot tell the difference from the outside. That’s the point of it.
> If you can’t distinguish between two things, it doesn’t mean the things are the same.
But it does mean that the differences between them are irrelevant to you by definition.
> Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist”
Unless you have a way they could be measured, but we just lack the technology or skill, then your definitions are of things that may as well not exist, because you cannot define them. They are vague words you use, and that's fine if you accept you have three major categories: "yes, and here's why", "no, and here's why", and "no idea". I am happy saying I'm conscious and the pillow next to me is not. I don't have a definition clear enough to say yes/no if the pillow was arguing with me.
I feel like I could make the same arguments about the chinese room except my definition of "understanding" hinges on whether there's a tin of beans in the room or not. You can't tell from the outside, but that's the difference. Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.
Now you might then say "I don't care if there's a tin of beans in there, it doesn't matter or make any sort of difference for anything I want to do", in which case I'd totally agree with you.
> just like you cannot prove or disprove the claim that consciousness arises from chemical processes.
Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges. Without that, talking of a claim like this is pointless.
Not at all. The confusion you expressed in your original comment stems from that claim. If you want to overcome that confusion, we have to talk about that claim.
Your statement was that it’s unclear how a bunch of valves doesn’t understand, but chemical processes do, and maybe you have a wrong intuition. Well, it appears that your intuition is to make this claim of causality, that some sort of object (e.g., valves or neurons), which you believe is part of objective reality, is what would have to cause understanding to exist.
So, I pointed out that assumption of such causality is not a provable claim, it is part of monistic materialism, which is a philosophical view, not scientific fact.
Further hinting at your tendency to assume monistic materialism is calling the systems “functionally identical”. It’s fairly evident that they are not functionally identical if one of them understands and the other doesn’t; it’s easy to make this mistake if you subconsciously already decide that understanding isn’t really a thing that exists (as many monistic materialists do).
> Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges.
Inability to define consciousness is fine, because logically circular definitions are difficult. However, lack of definition for the phenomenon is not the same thing as denying its objective existence.
You can escape the necessity to admit its existence by waving it away as an illusion or “not really” existing. Which is absolutely fine, as long as you recognize that it’s simply a workaround to not have to define things (if it’s an illusion, whom does it act on?), that conscious illusionism is just as unfalsifiable and unprovable as any other philosophical view about the nature of reality or consciousness, and that logically it’s quite ridiculous to dismiss as illusion literally the only thing that we empirically have direct unmediated access to.
> It's not mostly mimicking, it's exactly identical.
> Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.
If you constructed a system A that produces some output, and there is a system B, which you did not construct and which you don't have a full understanding of how it works, which produces identical output but is also believed to produce other output that cannot be measured with current technology (a.k.a. feelings and understanding), you have two options: 1) say that if we cannot measure something today then it certainly doesn't matter, doesn't exist, etc., or 2) admit that system A could be a p-zombie.
Then you could tell the difference and the thought experiment is broken. The whole point is that outside observers can’t tell. Not that they’re too stupid, that there isn’t a way they could tell, no question they could ask.
> but is also believed to produce other output that cannot be measured with current technology
Are you suggesting that Searle was saying that there was a difference between the rooms and that we just needed more advanced technology to see inside them? Come on.
I tried to explain that outside observers may not observe the entirety of what matters, whether due to current technical limitations or fundamental impossibility. In fact, to assume externally observed behaviour (e.g., of a human) is all that matters strikes me as a pretty fringe view.
> Are you suggesting that Searle was saying that there was a difference between the rooms and that we just needed more advanced technology to see inside them
Perhaps you are trying to read too much into what the experiment itself is. I do not treat it as "Searle tried to tell us something this way". If he had wanted to say something more specific, he probably would have done it in relevant works. The thought experiment however is very clear and describable in a paragraph and is open to possible interpretations, which is what we are doing now. That is the beauty of thought experiments like this.
Second: the philosophically relevant point is that when you gloss over mental states and only point to certain functions (like producing text), you can't even really claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't really give you a strong claim to consciousness and understanding if you don't have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partially why it will keep failing in some critical tasks (in addition to being massively inefficient relative to the brain).
> consciousness and understanding
After decades of this I’ve settled on the view that these words are near useless for anything specific, only vague pointers to rough concepts. I see zero value in nailing down the exact substrates understanding is possible on without a way of looking at two things and saying which one does and which one doesn’t understand. Searle to me is arguing that it is not possible at all to devise such a test and so his definition is useless.
Although for whatever it’s worth most modern AIs will tell you they don’t have genuine understanding (eg no sense of what pleasure is or feels like etc aside from human labeling).
The entire point of the thought experiment is that to outside observers it appears the same as if a fluent speaker is in the room. There aren’t questions you can ask to tell the difference.
This was why I have the tin of beans comparison.
The room has the property X if and only if there’s a tin of beans inside. You can’t in any way tell the difference between a room that has a tin of beans in and one that doesn’t without looking inside.
You might find that a property that has zero predictive power, makes (by definition) no difference to what either room can do, and has no use for any practical purposes (again by definition) is rather pointless. I would agree.
Searle has a definition of understanding that, to me, cannot be useful for any actual purpose. It is therefore irrelevant to me if any system has his special property just as my tin of beans property is useless.
I can imagine a lot of things, but the argument did not go this far, it left it as "obvious" well before this stage. Also, when I see trivial simulations of our biological machinery yielding results which are _very similar_, e.g. character or shape recognition, I am left wondering if the people talking about quantum wavefunctions are not the ones that are making extraordinary claims, which would require extraordinary evidence. I can certainly find it plausible that these _could_ be one particular way that we could be superior to the electronics / valves of the argument, but I'm not yet convinced it is a differentiator that actually exists.
There has to be a special motivation to instead cast understanding as "competent use of a given word or concept" (judged by whom, btw?). The practical upshot here is that without this grounding, we keep seeing AI, even advanced AI, make trivial mistakes and require the human to give an account of value (good/bad, pleasant/unpleasant), because these programs obviously don't have conscious feelings of goodness and badness. Nobody had to teach me that delicious things include Oreos and not cardboard.
Well, no, that came from billions of years of pre-training that just got mostly hardcoded into us, due to survival / evolutionary pressure. If anything, the fact that AI is as far as it is, after less than 100 years of development, is shocking. I recall my uncle trouncing our C64 in chess and going on to explain how machines don't have intuition, and the search space explodes combinatorially, which is why they would never beat a competent human. This was ~10 years before Deep Blue. Oh, sure, that's just a party trick. 10 years ago, we didn't have GPT-style language understanding, or image generation (at least, not widely available nor of middling quality). I wonder what we will have in 10, 20, 100 years - whatever it is, I am fairly confident that architectural improvements will lead to large capability improvements eventually, and that current behavior and limitations are just that, current. So, the argument is that they can't ever be truly intelligent or conscious because it's somehow intuitively obvious? I disagree with this argument; I don't think we have any real, scientific idea of what consciousness really is, nor do we have any way to differentiate "real" from "fake".
On the other end of the spectrum, I have seen humans with dementia not able to make sense of the world any more. Are they conscious? What about a dog, rabbit, cricket, bacterium? I am pretty sure at their own level, they certainly feel like they are alive and conscious. I don't have any real answers, but it certainly seems to be a spectrum, and holding on to some magical or esoteric differentiator, like emotions or feelings, seems like wishful thinking to me.
I'm becoming less sure of this over time. As AI becomes more capable, it might start being more comparable to smaller mammals or birds, and then larger ones. It's not a boolean function, but rather a sliding scale.
Despite starting out from very skeptical roots, over time Ethology has found empirical evidence for some form of intelligence in more and more different species.
I do think that this should also inform our ethics somewhat.
Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).
So what’s the physical cause for consciousness and understanding that is not computable? If for example you took the hypothesis that “consciousness is a sequence of microtubule-orchestrated collapses of the quantum wavefunction” [1], then you can see a series of physical requirements for consciousness and understanding that forces all conscious beings onto: 1) roughly the same clock (because consciousness shares a cause), and 2) the same reality (because consciousness causes wavefunction collapses). That’s something you could not do merely by simulating certain brain processes in a closed system.
[1] Not saying this is correct, but it invites one to imagine that consciousness could have physical requirements that play in some of the oddities of the (shared) quantum world. https://x.com/StuartHameroff/status/1977419279801954744
https://www.insidehighered.com/quicktakes/2017/04/10/earlier...
I'm very certain that issues of justice are complicated, and that allegations of misconduct are not always correct and that allegations in and of themselves must not be immediately treated as substantiated; yet surely, if it is justice we are interested in, we must be careful to ensure our fact-seeking methods do not unduly rely on testimonies of those accused to the detriment of all other lines of inquiry.
I understand in McGinn's case that actual documents of the harassment are available, and I think that if some academics believe they need to push back against allegations of sexual harassment they consider wrongful, a person with documented harassment is profoundly inappropriate to be spearheading that.
It includes a letter that starts:
I'm surprised to see the NYT obituary published nearly a month after his death. I would have thought he'd be included in their stack of pre-written obituaries, meaning it could be updated and published within a day or two.

There are many people who know a lot about a little. There are also those who know a little about a lot. Searle was one of those rare people who knew a lot about a lot. Many a cocky undergraduate sauntered into his classroom thinking they'd come prepared with some new fact that he hadn't yet heard, some new line of attack he hadn't prepared for. Nearly always, they were disappointed.
But you know what he knew absolutely nothing about? Chinese. When it came time to deliver his lecture on the Chinese Room, he'd reach up and draw some incomprehensible mess of squigglies and say "suppose this is an actual Chinese character." Seriously. After decades of teaching about this thought experiment, for which he'd become famous (infamous?), he hadn't bothered to teach himself even a single character to use for illustration purposes.
Anyway, I thought it was funny. My heart goes out to Jennifer Hudin, who was indispensable, and all who were close to him.
0. https://www.academia.edu/30805094/The_Success_and_Failure_of...
In general, I think he's spectacularly misunderstood. For instance: he believed that it was entirely possible to create conscious artificial beings (at least in principle). So why do so many people misunderstand the Chinese Room argument to be saying the opposite? My theory is that most people encounter his ideas from secondary sources that subtly misrepresent his argument.
At the risk of following in their footsteps, I'll try to very succinctly summarize my understanding. He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language. The Chinese Room argument might mislead people into thinking it's an epistemology claim ("knowing" the Chinese language) when it's really an ontology claim (consciousness and its objective, independent mode of existence).
If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.
No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example. But his belief, which he articulates very explicitly in the article, is that you couldn't create a machine consciousness by running even a perfect simulation of a biological brain on a digital computer, neuron for neuron and synapse for synapse. He likens this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building.
Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electrical circuit would be. He believed that this is somehow different from a simulation, with no clear reason whatsoever as to why. His ideas are very much muddy, and while he accuses others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate, it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.
> with no clear reason whatsoever as to why
It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire; and yet not understand the same for biological processes.
The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so aren't about properties had in the world. The program itself has no causal semantics, it's about numbers.
A program which computes the fibonacci sequence describes equally-well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies.
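As a minimal sketch of that multiple-realizability point (plain Python, not from the thread): the routine below is nothing but rule-governed symbol shuffling, and whether its outputs are "about" sunflower seed counts, galactic spirals, or nothing at all is supplied by an observer, not by the program.

```python
def fib(n: int) -> int:
    """Purely syntactic rule: each value is the sum of the previous two."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The same numbers can be read as seed counts in a sunflower head, as an
# abstract integer sequence, or as nothing at all; that interpretation is
# observer-relative, not a property of the code.
print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```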
A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily, lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- that is fire.
A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing at how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing.
What we mean by a simulation is, by definition, a certain kind of "inference game" we play (eg., with beads and chalk) that help us think about the world. By definition, if that simulation has substantial properties, it isn't a simulation.
If the claim is that an electrical device can implement the actual properties of biological intelligence, then the claim is not about a simulation. It's that by manufacturing some electrical system, plugging various devices into it, and so on -- that this physical object has non-simulated properties.
Searle, and most other scientific naturalists who appreciate the world is real -- are not ruling out that it could be possible to manufacture a device with the real properties of intelligence.
It's just that merely by, e.g., implementing the Fibonacci sequence, you haven't done anything. A computational description doesn't imply any implementation properties.
Further, when one looks at the properties of these electronic systems and the kinds of causal relations they have with their environments via their devices, one finds very many reasons to suppose that they do not implement the relevant properties.
Just as much as when one looks at a film strip under a microscope, one discovers that the picture on the screen was an illusion. Animals are very easily fooled, apes most of all -- living as we do in our own imaginations half the time.
Science begins when you suspend this fantasy way of relating to the world and look at its actual properties.
If your world view requires equivocating between fantasy and reality, then sure, anything goes. This is a high price to pay to cling on to the idea that the film is real, and there's a train racing towards you in your cinema seat.
This is kind of a no-true-scotsman esque argument though, isn't it? "substantial properties" are... what, exactly? It's not a subjective question. One could, and many have, insist that fire that really burns is merely a simulation. It would be impossible from the inside to tell. In that case, what is fantasy, and what is reality?
S is a simulation of O iff there is an inferential process, P, by which properties of O can be estimated from P(S), such that S does not implement O.
Eg., "A video game is a simulation of a fire burning if, by playing that game, I can determine how long the fire will burn w/o there being any fire involved"
S is an emulation model of O iff ...as-above.. S implements O (eg., "burning down a dollhouse to model burning down a real house").
You define a 'real' implementation to exclude computational substrate, then use the very same definition to prove that computational substrate cannot implement 'real' implementations. It's circular!
Searle described himself as a "naive realist" although, as was typical for him, this came with a ton of caveats and linguistic escape hatches. This was certainly my biggest objection and I passed many an afternoon in office hours trying to pin him down to a better position.
Saying that the symbols in the computer don't mean anything, that it is only we who give them meaning, presupposes a notion of meaning as something that only human beings and some things similar to us possess. It is an entirely circular argument, similar to the notion of p-zombies or the "experience of seeing red" thought experiment.
If indeed the brain is a biological computer, and if our mind, our thinking, is a computation carried out by this computer, with self-modeling abilities we call "qualia" and "consciousness", then none of these arguments hold. I fully admit that this is not at all an established fact, and we may still find out that our thinking is actually non-computational - though it is hard to imagine how that could be.
I associate the key with "K", and my screen displays a "K" shape when it is pressed -- but there is no "K", this is all in my head. Just as much as when I go to the cinema and see people on the screen: there are no people.
By ascribing a computational description to a series of electrical devices (whose operation distributes power, etc.) I can use this system to augment my own thinking. Absent the devices, the power distribution, their particular causal relationships to each other, there is no computer.
The computational description is an observer-relative attribution to a system; there are no "physical" properties which are computational. All physical properties concern spatio-temporal bodies and their motion.
The real dualism is to suppose there are such non-spatio-temporal "processes". The whole system called a "computer" is an engineered electrical device whose construction has been designed to achieve this illusion.
Likewise I can describe the solar system as a computational process, just discretize orbits and give their transition in a while(true) loop. That very same algorithm describes almost everything.
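For concreteness, here is a minimal sketch of that kind of description (made-up values, naive Euler steps, and a bounded loop standing in for while(true)): it "describes" an orbit, but running it instantiates no gravity.

```python
import math

# Toy discretized "orbit": one body around a central mass, advanced by naive
# Euler steps. All constants are made up for illustration.
GM = 1.0                  # gravitational parameter, arbitrary units
x, y = 1.0, 0.0           # position
vx, vy = 0.0, 1.0         # velocity (roughly circular for GM = 1, r = 1)
dt = 0.01

for step in range(1000):  # stand-in for the while(true) loop in the comment
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # inverse-square acceleration
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"position after {step + 1} steps: ({x:.3f}, {y:.3f})")
```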
Physical processes are never "essentially" computational; this is just a way of specifying some highly superficial feature which allows us to ignore their causal properties. It's mostly a useful description when building systems, i.e., an engineering fiction.
A computational description of a system is no more and no less rigorous than any other physical model of that system. To the same extent that you can say that billiard balls interact by colliding with each other and the table, you can say that a processor is computing some function by flipping currents through transistors.
No, you cannot.
A hard drive needs to have a physical hysteresis. An input/output device needs to transmit power, and be powered, by an electrical field. A visual device needs to emit light on electrical stimulation, and so on.
The only sense, in the end, in which a "computer" survives its devices being changed is just observer-relative. You attribute a "3" to one state and a "1" to another, and "addition" to some process. Only by your attribution does that process compute "4".
But it computes everything and computes nothing. If you plug a speaker into a VGA socket, the electrical signal causes the air to move: sound.
The only sense in which a VGA signal is a "visual" signal is that we attach an LCD to that socket, and we interpret the light from the LCD semantically.
The world is a particular way objects in space and time move, those exhaust all physical properties. Any other properties are non-physical, which is why this kind of computationalism is really dualism.
You suppose it isn't your physical mechanism and its relationship to your environment which constitutes your thinking -- rather it's your soul: a pure abstract pattern which needs no devices with any specific properties to be realised.
Whatever this pattern is, if you played it through a speaker, it would just be vibrations in the air. Sent to an LCD, whitenoise. Only realised in your specific biology is it any kind of thinking at all.
In either case, the door will open if you're in front of it, and close after you've gone. This will happen regardless of whether you undertsand what it represents, it will open for a basic robot as well as for a human or a squirrel or a plant growing towards it very slowly or a rock rolling downhill.
Of course, you can't replace every single piece of hardware with software - you still need some link with the physical world. And of course, there will be many measurable differences between the two systems - for a basic example, the camera-based system will give off a lot more heat than the photo-sensitive diode one. I'm not claiming that they are perfectly equivalent in every way, not at all. I am claiming that they are equivalent in some measurable, observer-independent ways, and that the specific way in which they are equivalent is that they are running the same computation.
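A small sketch of the equivalence being claimed (class and function names are illustrative, not from the comment): the same control logic runs unchanged over two physically very different sensor back-ends.

```python
from typing import Protocol

class PresenceSensor(Protocol):
    def person_present(self) -> bool: ...

class PhotodiodeSensor:
    """Physically: a light beam across the doorway."""
    def __init__(self, beam_interrupted: bool = False) -> None:
        self.beam_interrupted = beam_interrupted
    def person_present(self) -> bool:
        return self.beam_interrupted        # beam blocked => something is there

class CameraSensor:
    """Physically: a camera plus some person-detection count."""
    def __init__(self, detections: int = 0) -> None:
        self.detections = detections
    def person_present(self) -> bool:
        return self.detections > 0

def door_should_open(sensor: PresenceSensor) -> bool:
    """The same computation, whichever physical device backs it."""
    return sensor.person_present()

print(door_should_open(PhotodiodeSensor(beam_interrupted=True)))  # True
print(door_should_open(CameraSensor(detections=1)))               # True
```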
https://en.wikipedia.org/wiki/John_Searle
56 more comments available on Hacker News