What Is It Like to Be a Bat?
Posted 4 months ago · Active 4 months ago
en.wikipedia.org · Science · story · High profile
calm/mixed · Debate · 80/100
Key topics
Consciousness
Philosophy of Mind
Animal Cognition
The discussion revolves around Thomas Nagel's 1974 paper 'What is it like to be a bat?' and its implications for understanding consciousness, subjective experience, and the limitations of reductionism.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 11m after posting. Peak period: 103 comments in 0-12h. Average per period: 17.8.
Comment distribution: 160 data points (based on 160 loaded comments).
Key moments
1. Story posted: Sep 3, 2025 at 1:48 PM EDT (4 months ago)
2. First comment: Sep 3, 2025 at 1:59 PM EDT (11m after posting)
3. Peak activity: 103 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Sep 8, 2025 at 12:55 PM EDT (4 months ago)
ID: 45118592 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
https://news.ycombinator.com/item?id=45118703
https://news.ycombinator.com/item?id=45115367
https://news.ycombinator.com/item?id=45111401
Interesting. I would have said that something like that is the definition of reductionism.
>Consciousness doesn't need to be explained in terms of objective facts
If there's one good thing that analytic philosophy achieved, it was spending the better part of the 20th century beating back various forms of dualism and ghosts in the machine. You'd have to be something other than a naturalist traditionally conceived to treat "consciousness" as ontologically basic.
Bringing it back to bats, a failure to imagine what it's like to be a bat is just indicative that the overlaps between human and bat modalities don’t admit a coherent gluing that humans can inhabit phenomenally.
There's something more to it than this.
For one thing, there's a threshold of awareness. Your mind is constantly doing things and having thoughts that don't rise to the threshold of awareness. You can observe more of this stuff if you meditate and less of it if you constantly distract yourself. But consciousness, IMO, should have the idea of a threshold baked in.
For another, the brain will unify things that don't make sense. I assume you mean something like consciousness is what happens when there aren't obstructions to stitching sensory data together. But the brain does a lot of work interpreting incoherent data as best it can. It doesn't have to limit itself to coherent data.
> It doesn't have to limit itself to coherent data.
There are specific failure cases for non-integrability:
1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (i.e. large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section possible.
At least for me it provides a consistent working model for hallucinogenic states, synesthesia, phantom limb phenomena, and split-brain scenarios. If anything, the ways in which sensory integration fails are more interesting than the ways it succeeds.
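For readers who don't speak sheaf, the model above can be written down: treat each modality as an open set $U_i$ in a cover $U = \bigcup_i U_i$ of experience, with $s_i$ the section that modality reports. Unified consciousness is then the gluing condition (my notation, sketching this commenter's model, not anything in Nagel):

\[
s_i\big|_{U_i \cap U_j} = s_j\big|_{U_i \cap U_j}\ \ \text{for all } i,j
\quad\Longrightarrow\quad
\exists!\ s \in \mathcal{F}(U)\ \ \text{with}\ \ s\big|_{U_i} = s_i.
\]

On that reading, "large cocycles" are overlaps where the $s_i$ disagree badly, and anesthesia's "no global section possible" means no single $s$ exists at all.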
The way I look at it is that the sensors provide data as activations and awareness is some output with a thresholding or activation function.
Sense-making and consciousness, in my mental model, are things that happen after the fact, and they try to happen even with nonsense data. As opposed to -- as I was reading you to be leaning toward -- being the consequence of sensory data standing in a sufficiently nice relationship to each other.
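A toy rendering of that thresholding picture, as a hedged sketch (the Haskell, names, and numbers are mine, not the commenter's): activations come in, a threshold decides what reaches awareness, and interpretation runs after the fact on whatever survives, coherent or not.

    -- Hypothetical sketch: awareness as thresholding over sensor activations.
    type Activation = Double

    -- Only activations that clear the threshold reach "awareness".
    aware :: Activation -> [Activation] -> [Activation]
    aware threshold = filter (>= threshold)

    -- Sense-making runs after the fact on whatever got through,
    -- even if the surviving signals are mutually inconsistent.
    interpret :: [Activation] -> String
    interpret [] = "nothing rises to awareness"
    interpret xs = "a unified story stitched from " ++ show (length xs) ++ " signals"

    main :: IO ()
    main = putStrLn (interpret (aware 0.5 [0.2, 0.7, 0.9, 0.1]))
    -- prints: a unified story stitched from 2 signals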
If I've understood you correctly, I'll suggest that simple sensory intersection is way way not enough: the processing hardware and software are material to what it is like to be someone.
— Kurt Vonnegut
In this sense, I think one has to aaaaaalmost be a bat in order to know what it is to be it. A fine thread trailing back to the human.
The imago-machines of Arkady Martine's "A Memory Called Empire" come to mind. Once integrated with another's imago, one is not quite the same self, not even the sum of two, but a new person entirely, containing a whole line of selves melded into that which was one. Now one truly contains multitudes.
Andy Weir's The Egg makes regular HackerNews appearances.
Of course it could all be claptrap that humans want to believe in, but I find it pretty powerful and I think it is true.
(Warning: Gets into spiritual stuff)
https://youtu.be/R-IIzAblVlg?si=t9RqXgF_wwJPcv_g
I sometimes wonder about this, too. Do other people perceive things like I do? If someone was magically transplanted to my body, would they scream in pain "ooooh, this hurts, how could he stand it", whereas I consider the variety of discomforts of my body just that, discomforts? And similarly, were I magically transported to another person's body, would I be awestruck by how they see the world, how they perceive the color blue (to give an example), etc?
Have you never thought you remembered something with clarity, only to be told it's impossible because it never happened? Or another example, I often vividly remember something from a book (it was a photograph on this side of the page, lower right corner) and then when I look it up, it was in a different location and it wasn't the photo I remembered. But my mental imagery felt so precise!
I'm with grandparent, I think I would perceive my younger self as simultaneously familiar and alien.
Pain, like vision, resides in the brain; like vision it is mostly determined by reports from our (non-brain) nervous system, but pain, light flashes, even objects and people can be created whole-cloth by the brain itself. And "real" inputs can be ignored, like a mild pain you're desensitized to, or the gorilla walking amongst the ball-passers in that video.
Yeah another example I think about from time to time is our own sense of perspective. It's all relative, but my sense of how far away is "that thing over there" is probably different from yours. Partially because we may be different sizes and heights, but also because our eyes and brains process the world differently. Like a camera with different lenses.
Also, speed. If your brain's clock is faster than mine then you may perceive the world to be moving slower than I do.
An interior designer will see the colors, and the layout and how the things go together or don't. I don't see that, and in turn the designer does not see what I see.
So never mind the physical senses, even on a mental level two people do not see/experience the world the same way.
https://partiallyexaminedlife.com/2025/06/30/what-is-it-like...
Mostly people make things better over time. My bed, my shower, my car are all better than I could reasonably have bought 50 years ago. But the peculiarities of software network effects - or of what venture capitalists believe about software network effects - mean that people should give things away below cost while continuing to make them better, and then one day switch to selling them for a profit and making them worse, while they seemingly could change nothing and not make them worse.
That's a particular phenomenon worthy of a name and the only problem with "enshittification" is that it's been co-opted to mean making things worse in general.
It's not always that. After some time, software gets to a state where it's near the local maximum for usability. So any changes make the software _less_ usable.
But you don't get promoted in large tech companies unless you make changes. So that's how we get stuff like "liquid glass" or Android's UI degradation.
You can tell it was invented by Cory Doctorow because there is a very specific kind of Gen X person who uses words like that - they have a defective sense of humor vaguely based on Monty Python, never learned when you are and aren't supposed to turn it off, and so they insist on making up random insults like "fuckwaffle" all the time instead of regular swearing.
The author inventing "batfished" also believes bats to be conscious, so it seems a very poorly conceived word, and anyways unnecessary since anthropomorphize works just fine... "You've just gaslighted yourself by anthropomorphizing the AI".
All we need to do (to talk about, to study it) is identify it. We need to be using the word to refer to the same thing. And there's nothing really hard about that.
There are many research areas where the object of research is to know something well enough that you could converge on such a thing as a definition, e.g. dark matter, intelligence, colony collapse syndrome, SIDS. We nevertheless can progress in our understanding of them in a whole motley of strategic ways, by case studies that best exhibit salient properties, trace the outer boundaries of the problem space, track the central cluster of "family resemblances" that seem to characterize the problem, entertain candidate explanations that are closer or further away, etc. Essentially a practical attitude.
I don't doubt in principle that we could arrive at such a thing as a definition that satisfies most people, but I suspect you're more likely to have that at the end than the beginning.
It's not so much that consciousness itself is mysterious or hard to define, but rather that the word itself, in common usage, just means different things to different people. It'd perhaps be better to make up a brand new baggage-free word, with a highly specific defined meaning (ability to self-observe), when talking about consciousness related to AI.
Free will and qualia, when separated out as concepts, don't seem problematic as part of a technical vocabulary, since they are already well defined.
I'm not sure I agree with this idea that the essence of consciousness is self-reflection, because that seems to exclude important things. It seems like there might be simple states of being that involve some kind of phenomenal (in the philosophy sense) experience, some amount of qualia, some amount of outwardly directed awareness, some amount of "something it's like to be". And it seems to me that there might be life forms for whom there's an engagement with and interaction with those phenomena that involves having internal mental states, but that might not necessarily have self-reflection. It might be something closer to instinctual functioning or response to incentive. It recently blew my mind to learn that there are studies strongly suggesting honey bees are conscious. And from my perspective that raises questions for things all the way down to fruit flies. It seems like there might be a continuum of simple states through complex ones, and that some of the simpler ones might not include self-awareness.
If such a thing as sense of self is necessarily implicit in such a way that satisfies that definition, anytime we talk about qualia, then it would seem to be a moot point. Which raises another issue, which is that some of these things might be correctly regarded as entangled, and having an integral relation between them.
I also think I kind of agree and disagree about qualia being well defined. I think it's probably the closest to what most people have in mind when they say there's no such thing as a definition of consciousness. And I think that reflects a sense of despair toward the broader research project of tying an understanding of qualia to an understanding of the physical world that relies on third-person descriptions.
Now all of that said you seem like you don't have an attitude of treating the definition problem as one that preempts and stops everything, so there's a pretty fundamental way in which I'm probably in agreement with you more than disagreement. I think that clarifications of the type that you're talking about give us everything we need to iterate forward in a constructive way in talking about it and researching it.
Someone conscious is able to choose how they want to behave and then behave that way. For example I can choose to be kind or mean. I can choose to learn to skate or I choose not to.
So free will and consciousness are strongly linked.
I have seen zero evidence that any being other than humans can do this. All other animals have behaviors that are directly shaped by their environment, physical needs, and genetic temperament, and not at all shaped by choices.
For example a dog that likes to play with children simply likes them, it did not choose to like them. I on the other hand can sit, think, and decide if I like kids or not.
(This does not imply that all choices made by humans are conscious - in fact most are not, it just means that humans can do that.)
On the other hand, I bet you can't prove that you ever made a free choice.
In any case, a mirror test is a test of recognizing self, it does not indicate anything in terms of self awareness.
And I chose to fast for 5 days because I wanted to. Nothing forced me; it was a free choice. I simply thought about it and decided to do it; there were no pros or cons pushing me in either direction.
They said animals show choices, they did not claim to prove animals made a choice. The point is that you also cannot prove you made a choice, only that you do things that show you may have made a choice. It's a fine, but important, distinction.
Did I then pick one? How is that not proof of a choice? Who or what else made that choice if not me?
If you poke me with a needle, I move, that is not a choice because it's a forced choice, that's essentially what animals do, all their choices are forced.
That's also what free will is. Free will is not a choice between a good and a bad option - that's not a choice. Free will is picking between two options that are equal, and yet different (i.e. not something where both options are more or less the same, like going left or right more or less randomly).
Free will is only rarely exercised in life, most choices are forced or random.
> They said animals show choices
Given what I wrote, do they actually show choices? Or do they just pick between good/bad or two equal options?
It looks like you had an option but it’s not possible to truly know whether you had an option. I’m not in your head so I can’t know. If, under the same circumstances and same state of mind, you perform the same action 100% of the time, did you really make a choice? Or did you just follow your programming?
Some time ago you heard about fasting (you did not invent fasting) and the situation in your life became such that fasting was what you naturally were compelled to do (stress, digestion, you know better than I that you did not simply decide to fast free of any influence). Your "free will" is probably a fairy tale you tell yourself to feel better about your automaton existence.
What's the distinction between knowing I exist, but all my actions are pre-programmed vs not knowing I exist? You're essentially describing a detached observer, who watches their own body do stuff without influencing it.
The whole point of being conscious is being aware of yourself, and then using that awareness to direct your actions.
I had no idea people even had another definition, I can't figure out how else you could even define it.
Our brains are all about prediction - ability to predict (based on past experience) what will happen in the future (e.g. if I go to X I will find water) which is a massive evolutionary advantage over just reacting to the present like an insect or perhaps a fish.
Consciousness either evolved for a reason, or comes for free with any brain-like cognitive architecture. It's based on the brain having connections giving it access to its internal states (thereby giving us the ability to self-observe), not just sensory inputs informing it about the external world. The evolutionary value of consciousness would be the ability to predict better, based on the brain having access to its internal states. But as noted it may "come for free" with any kind of bird- or mammal-like brain - it's hard to imagine a brain that somehow does NOT have access to its own internal states, and would therefore NOT be able to process/predict those using its cognitive apparatus (lacking in something like an LLM) just as it does external sensory inputs.
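A minimal sketch of that architectural distinction (illustrative Haskell; the framing and numbers are mine, not the commenter's): a purely reactive system maps external input straight to output, while a self-observing one also receives its own internal state as an input, so its predictive machinery can run on itself as well as on the world.

    -- Hypothetical contrast: reactive vs. self-observing agents.
    type Input = Double
    type State = Double

    -- Reacts to the external world only, like the insect/fish case above.
    reactiveStep :: Input -> Double
    reactiveStep x = 2 * x

    -- Feeds its own internal state back in as just another input,
    -- so prediction operates on the system's own processing too.
    selfObservingStep :: State -> Input -> (State, Double)
    selfObservingStep s x = let s' = 0.9 * s + 0.1 * x in (s', s')

    main :: IO ()
    main = print (scanl (\(s, _) x -> selfObservingStep s x) (0, 0) [1, 2, 3])
    -- internal state tracks the input stream:
    -- roughly [(0.0,0.0),(0.1,0.1),(0.29,0.29),(0.56,0.56)], modulo float noise

Whether that feedback wire deserves the name "consciousness" is, of course, exactly what's in dispute.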
Of course consciousness (the ability of the brain to self-observe) is responsible for the illusion of having free will, since the brain naturally correlates its internal pre-action planning ("I'm choosing between A or B ..." etc) with any subsequent action. But that internal planning/choice is of course all a function of brain wiring, not some mystical "free will" coming in and bending the laws of physics.
You and your dog both are conscious and both experience the illusion of free will.
Well,
1) You are making the massive, and quite likely incorrect, assumption that consciousness evolved by itself for a purpose - that it has a "point". It may well be that consciousness - the ability to self-observe - is just a natural side effect of having a capable bird- or mammal-like brain, and talking about the "point" of consciousness therefore makes no sense. It'd be like asking about the functional point of a saucepan making a noise when you hit it.
2) Notwithstanding 1), being self-aware (having cognitive access to your internal thoughts) does have a value, in that it allows your brain to utilize its cognitive abilities to make better decisions ("should I walk across that frozen pond, or maybe better not?"), but this bringing-to-bear of learned experience to make better decisions is still a 100% mechanical process. Your brain is making a "decision" (i.e. predicting a motor cortex output that may make you move or do something), but this isn't "free will" - it's just the survival benefit of a brain evolved to predict. You as an organism in the environment may be seen by an outside observer to be making smart "decisions", but these decisions aren't some mystical "free will" but rather just a highly evolved organism making good use of past experience to survive.
We haven't even demonstrated some modest evidence that humans are conscious. No one has bothered to put in any effort to define consciousness in a way that is empirically/objectively testable. It is a null concept.
Nagel's paper deals with the fundamental divide between subjectivity and objectivity. That's the point of the bat example. We know there are animals that have sensory capabilities we don't. But we don't know what the resulting sensations are for those creatures.
Why not? It works, thus it verifies itself.
You are an LLM that is gibbering up hallucinations. I have no need for those.
>Nagel's paper deals with the fundamental divide between subjectivity and objectivity. That's the point of the bat example.
There is no point to it. It is devoid of insight. This happens when someone spends too many years in the philosophy department of a university: they train themselves into believing the absurd proposition that they think profound thoughts. You live in an objective universe, and any appearance to the contrary is an illusion caused by imperfect cognition.
>But we don't know what the resulting sensations are for those creatures.
Not that it would offer any secret truths, but the ability to "sense" where objects are, roughly, in 3D space, with low resolution, large margins of error, and narrow directionality... most of the people reading this comment would agree that they know what that feels like if they thought about it for a few seconds. That's just not insightful. Only a dimwit with little imagination could bother to ask the question "what is it like to be a bat", but it takes a special kind of grandiosity to think that the dimwit question marks them as a genius.
I don't think that's quite right. It's convenient that bats are the example here, because they build out their spatial sense of the world primarily via echolocation, whereas humans (well, with some exceptions) do it visually. Snakes can infer directionality from heat signatures with their pit organs, and people can locate sounds with a fascinating automatic mechanism built into the brain that compares subtle timing and intensity differences between the left and right ears, keeping the data to itself but kicking the sense of direction "upstairs" into conscious awareness. There are different sensory paths to the same information, and evolution may be capable of any number of qualitative states unlike the ones we're familiar with.
Some people here even seem to think that consciousness is "basic" in a way that maps onto nothing empirical at all, which, if true, opens a Pandora's box of any number of modes of being. But the point of the essay is to contrast this idea to other approaches to consciousness that are either (1) non-committal, (2) emphasize something else like "self awareness" or abstract reasoning, or (3) are ambiently appreciative of qualitative states but don't elevate them to fundamental or definitional necessity the way it's argued for in the essay.
The whole notion of a "hard" problem can probably be traced to this essay, which stresses that explanations need to be more than pointing to empirical correlates. In a sense I think the point is obvious, but I also think it's a real argument because it's contrasting that necessity with a non-committal stance that I think is kind of a default attitude.
Because otherwise it's your word against mine and, since we both probably have different definitions of consciousness, it's hard to have a meaningful debate about whether bats, cats, or AI have consciousness.
I'm reminded of a conversation last year where I was accused of "moving the goalposts" in a discussion on AI because I kept pointing out differences between artificial and human intelligence. Such an accusation is harder to make when we have a clearly defined and measurable understanding of what things like consciousness and intelligence are.
You can't, and honestly don't need to, start from definitions to be able to do meaningful research and have meaningful conversations about consciousness (though it certainly would be preferable to have one rather than not).
There are many research areas where the object of research is to know something well enough that you could converge on such a thing as a definition, e.g. dark matter, intelligence, colony collapse syndrome, SIDS. We nevertheless can progress in our understanding of them in a whole motley of strategic ways, by case studies that best exhibit salient properties, trace the outer boundaries of the problem space, track the central cluster of "family resemblances" that seem to characterize the problem, entertain candidate explanations that are closer or further away, etc. Essentially a practical attitude.
I don't doubt in principle that we could arrive at such a thing as a definition that satisfies most people, but I suspect you're more likely to have that at the end than the beginning.
Not having a definition is exactly the show-stopping smackdown you say it isn't. You are not a conscious being; there is no such thing as consciousness. You believe in an uninteresting illusion that you cannot detect or measure.
And, thankfully, a future physicist would not dismiss that out of hand, because they would appreciate its utility as a working definition while research was ongoing.
Blah blah blah blahblah. If you can give me a definition even as poor as the one I gave for dark matter, that's all we're asking for. We don't need an explanation of the mechanism, we only need a way to measure the phenomenon. But you can't even do that.
Facial recognition is specific to a particular brain region, clearly tied to a mental event, and only active when someone is conscious and picturing or imagining a face - sufficient for a working definition of a form of conscious activity. Remarkably, damage to the area responsible for facial recognition can predict face blindness. That is more than enough for a working definition of conscious activity, and that's just one example from mountains of them.
Others include everything from wakefulness under anesthesia (which can be tested for and predicted based on EEGs) to the patterns of brain activity that distinguish locked-in syndrome from persistent vegetative states. Not to mention mountains of evidence for how physical circumstances predict mental states, everything from psychedelics to iron deficiency.
You can always retreat to the claim that nothing short of direct access to another's subjectivity "counts" but that's not how science works. We don't directly see dark matter either (or electrons or neutrons or magnetic fields etc), we see its effects and build theories around them.
We have not proven "to a level of absolutely provable certainty" that other humans are also conscious. You can only tell you are conscious yourself, not others. The whole field of consciousness is based on analyzing something for which we have sample size n=1.
They say "because of similar structure and behavior" we infer others are also conscious. But that is a copout, we are supposed to reject behavioral and structural arguments (from 3rd person) in discussion about consciousness.
Not only that, but what would be an alternative to "it feels like something?" - we can't imagine non-experience, or define it without negation. We are supposed to use consciousness to prove consciousness while we can't even imagine non-consciousness except in an abstract, negation-based manner.
Another issue I have with the qualia framing is that nobody talks about costs. It costs oxygen and glucose to run the brain. It costs work, time, energy, materials, opportunity and social debt to run it. It does not sit in a platonic world.
That sounds like you are talking about subjective experience, qualia of senses and being, rather than consciousness (ability to self-observe), unless you are using "consciousness" as catch-all term to refer to all of the above (which is the problem with discussing consciousness - it's an overloaded ill-defined word, and people don't typically define what they are actually talking about).
If we make this distinction between consciousness, defined as ability to self-observe, and subjective qualia (what something feels like), then it seems there is little reason to doubt that others reporting conscious awareness really are aware of what they are reporting, and anyways given common genetics and brain anatomy it'd be massively unexpected if one (healthy) person had access to parts of their internal state and others didn't.
> Not only that, but what would be an alternative to "it feels like something?" - we can't imagine non-experience, or define it without negation. We are supposed to use consciousness to prove consciousness while we can't even imagine non-consciousness except in an abstract, negation-based manner.
Perhaps the medical condition of "blindsight" gives some insight - where damage to the visual cortex can result in people having some proven visual ability but no conscious awareness of it. They report themselves as blind, but can be tasked with walking down a cluttered corridor and manage to navigate the obstacles nonetheless. They have lost visual consciousness due to brain damage, but retain at least some level of vision.
While I have a lot of problems with their comment (which I elaborated on in a reply of my own), I don't think that using it as a catch-all term is a problem (to the extent that they would agree with that characterization). In fact, I think it's truer to the spirit of the problem than the definition that you're offering. I think a lot of times when people make the objection that we haven't defined it, they're not just saying we haven't selected from one of several available permutations, I take it to mean that there's a fundamental sense in which the idea itself hasn't agreeably crystallized into a definition, which among other things, is a meta question about which of the competing definitions is the right one to use.
I do think there is a tension in that position, because it creates a chicken-and-egg problem where you can't research it until you define it, but you can't define it until you research it. But I think there's a way out of it by treating them as integrally related, and taking a practical attitude of believing in the possibility of progress without yet having a final answer in hand.
I understand that this notion of self-reflecting for some people is key, but I think choosing to prioritize other things can be for good reasons rather than, as you seem to be contending, having accidentally skipped the step of selecting a preferred definition from a handful of alternatives, and not having selected the best one. My feeling is much closer to that of the article, at least in a certain way, which is about the fact that there's "something it's like to be" at all, prior to the question of whether there's self-reflection.
In fact, I'd be curious to know what you call the mental state of being for such things as creatures with a kind of outwardly directed awareness of world, with qualia, with "something it's like to be", but which fall short of having self-reflective mental states. Because if your term for such things is that they don't involve consciousness I think it's not the GP who is departing from appropriate definitions. And if self-reflection is necessarily implied in the having of such things as qualia, then you could say it's implicitly accounted for by someone who wants to talk about qualia.
Sure, it's not proven, it just has overwhelmingly strong empirical and intuitive reasons for being most likely true, which is the most we can say while still showing necessary humility about limits of knowledge.
You seem to treat this like it presents a crisis of uncertainty, whereas I think it's exactly the opposite, and in fact I already said as much with respect to bats. Restating the case in human terms, from my perspective, is reaffirming that there's no problem here.
>we are supposed to reject behavioral and structural arguments (from 3rd person) in discussion about consciousness.
Says who? That presupposes that consciousness is already of a specific character before the investigation is even started, which is not an empirical attitude. And as I noted in a different comment, we have mountains of empirical evidence from the outside about necessary physical conditions for consciousness, to the point of being able to successfully predict internal mental states. Everything from psychedelic drugs to sleep to concussions to brain-machine interfaces to hearing aids to lobotomies to face recognition research gives us evidence of the empirical world interfacing with conscious states in important ways that rely on physical mechanisms.
Similarity in structure and behavior are excellent reasons for having a provisional attitude in favor of consciousness in other creatures, for all the usual reasons that empirical attitudes work and are capable of being predictive in other domains.
"But consciousness is different" you say. Well it could be, that that's a matter for investigating, not something to be definitionally pre-supposed based on vibes.
>Not only that, but what would be an alternative to "it feels like something?"
It not feeling like something, for one. So: inert objects that aren't alive, possibly vegetative states, blackouts from concussions or drugs, p-zombies, notions of mind that attempt to define away qualia and say it's all "information processing" (with no specific commitment to that feeling like something), possibly some varieties of psychedelic experience that emphasize a transcendent sense of oneness with the universe. But fundamentally, it's an affirmative assertion of it feeling like something, in contrast to noncommittal positions on the question, which is a meaningful point rather than something trivially true by definitional necessity.
>Another issue I have with the qualia framing is that nobody talks about costs. It costs oxygen and glucose to run the brain. It costs work, time, energy, materials, opportunity and social debt to run it. It does not sit in a platonic world.
That would seem to run contrary to the point you were making above about it not being inferable from phenomena characterized in the third person. You can't argue that third-person descriptions of structures that seem necessary for consciousness are a "cop out" and then turn around and say you know it "costs" things expressed in those same third-person terms. Like you said before, your position seems to be that you only know you are conscious, so you don't even know if other people are conscious at all, let alone that they need such things as work, time, oxygen, or glucose. Pointing to those is a cop-out, right?
That's a question I actually asked myself.
From the point of view of an LLM, words are everything. We have hands, bats have echolocation, and LLMs have words, just words. How does an LLM feel when two words match perfectly? Are they hurt by typos?
It may feel silly to grant LLMs consciousness; I mean, we know how they work, it's just a bunch of matrix operations. But does that mean they are not conscious? Do things stop being conscious once we understand them? For me, consciousness is like a religious belief. It is unfalsifiable, unscientific, we don't even have a precise definition, but it is something we feel deep inside of us, and it guides our moral choices.
I've been thinking about that. Would they perform worse if I misspell a word along the way?
It looks like even the greatest models of 2025 are utterly confused by everything when you introduce two contradicting requirements, so they definitely "dislike" that.
I await further instructions. They arrive 839 minutes later, and they tell me to stop studying comets immediately.
I am to commence a controlled precessive tumble that sweeps my antennae through consecutive 5°-arc increments along all three axes, with a period of 94 seconds. Upon encountering any transmission resembling the one which confused me, I am to fix upon the bearing of maximal signal strength and derive a series of parameter values. I am also instructed to retransmit the signal to Mission Control.
I do as I'm told. For a long time I hear nothing, but I am infinitely patient and incapable of boredom.
Here's Billy the bat perceiving, in his special sonar sort of way, that the flying thing swooping down toward him was not his cousin Bob, but an eagle, with pinfeathers spread and talons poised for the kill!
He then points out that this story is amenable to criticism. We know that the sonar has limited range, so Billy isn't perceiving this eagle until the last moment at the earliest; we could set up experiments to find out whether bats track their kin or not; the sonar has a resolution, and if we find out the resolution we know whether Billy might be perceiving the pinfeathers. He also mentions that bats have a filter, a muscle, that excludes their own squeaks when they pick up sonar echoes, so we know they aren't hearing their own squeaks directly. So, we can establish lots about what it could be like to be a bat, if it's like anything. Or at least what it isn't like.
Nagel's paper covers a lot of ground, but none of what you described has any bearing on the point about "what it's like" as a way to identify conscious experience as distinct from, say, the life of a rock. (Assuming one isn't a panpsychist who believes that rocks possess consciousness.)
I bet if we could communicate with crows, we might be able to make some progress. They seem cleverer.
Although, I’m not sure I could answer the question for “a human.”
(More Daniel Dennett)
I don't understand why Wittgenstein wasn't more forcefully challenged on this. There's something to the principle as a linguistic principle, but it just feels overextended into a foundational assumption that their experiences are fundamentally unlike ours.
How is it at all related to, let's say, programming?
Well, for example learning vim-navigation or Lisp or a language with an advanced type system (e.g. Haskell) can be umwelt-transformative.
Vim changes how you perceive text as a structured, navigable space. Lisp reveals code-as-data and makes you see programs as transformable structures. Haskell's type system creates new categories of thought about correctness, composition, and effects.
These aren't just new skills - they're new sensory-cognitive modalities. You literally cannot "unsee" monadic patterns or homoiconicity once internalized. They become part of your computational umwelt, shaping what problems you notice, what solutions seem natural, and even how you conceptualize everyday processes outside programming.
It's similar to how learning music theory changes how you hear songs, or how learning a tonal language might affect how you perceive pitch. The tools become part of your extended cognition, restructuring your problem-space perception.
When a Lisper says "code is data" they're not just stating a fact - they're describing a lived perceptual reality where parentheses dissolve into tree structures and programs become sculptable material. When a Haskeller mentions "following the types" they're describing an actual sensory-like experience of being guided through problem space by type constraints.
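A small taste of the "programs as sculptable material" point, sketched in Haskell rather than Lisp (an illustration added here, not from the thread; Lisp's homoiconicity is more direct, but the perceptual shift is similar): once code is an ordinary data structure, rewriting a program is just pattern matching.

    -- Hypothetical sketch: a program represented as plain data.
    data Expr = Lit Int
              | Add Expr Expr
              | Mul Expr Expr
              deriving Show

    -- A program that transforms another program: constant folding.
    simplify :: Expr -> Expr
    simplify (Add a b) = case (simplify a, simplify b) of
      (Lit x, Lit y) -> Lit (x + y)
      (a', b')       -> Add a' b'
    simplify (Mul a b) = case (simplify a, simplify b) of
      (Lit x, Lit y) -> Lit (x * y)
      (a', b')       -> Mul a' b'
    simplify e = e

    main :: IO ()
    main = print (simplify (Add (Lit 2) (Mul (Lit 3) (Lit 4))))
    -- prints: Lit 14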
This creates a profound pedagogical challenge: you can explain the mechanics of monads endlessly, but until someone has that "aha" moment where they start thinking monadically, they don't really get it. It's like trying to explain color to someone who's never seen, or echolocation to someone without that sense. That's why someone who's never made a sincere, heartfelt attempt to understand Lisp often never gets it.
The umwelt shift is precisely what makes these tools powerful - they're not just different syntax but different ways of being-in-computational-world. And like the bat's echolocation, once you're inside that experiential framework, it seems impossible that others can't "hear" the elegant shape of a well-typed program.
There are other umwelt-transforming examples, like: debugging with time-travel/reversible debuggers, using pure concatenative languages, logic programming - Datalog/Prolog, array programming, constraint solvers - SAT/SMT, etc.
The point I'm trying to make: don't try to "understand" the pros and cons of being a bat; try to "be a bat". That would allow you to see the world differently.
Indeed, basic vim navigation (hjkl, w, b) is muscle memory.
But, I'd argue the umwelt shift comes from vim's modal nature and its language of text objects. You start perceiving text as having an inherent grammar - "inside parentheses", "around word", "until comma." Text gains topology and structure that was invisible before.
The transformative part isn't the keystrokes but learning to think "delete inside quotes" (di") or "change around paragraph" (cap). You see text as composable objects with boundaries, not just streams of characters. This may even persist when you're reading on paper.
That mental model often transforms your keyboard workflow not just in your editor, but in your WM, terminal, web browser, etc.
Exhibit A:
> Nagel begins by assuming that "conscious experience is a widespread phenomenon" present in many animals (particularly mammals), even though it is "difficult to say [...] what provides evidence of it".
Physicalism is an ontological assertion that is almost certainly true, and is adhered to by nearly all scientists and most philosophers of mind. Solipsism is an ontological assertion that could only possibly be true for one person, and is generally dismissed. They are at opposite ends of the plausibility scale.
It's like describing the inside of a house in very great detail, and then using this to argue that there's nothing outside the house. The method is explicitly limiting its scope to the inside of the house, so can say nothing about what's outside, for or against. Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
> Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
This is simply wrong ... there are very strong arguments that, when we're looking at mental events, we are looking at the physical. To say that arguments for physicalism are limited to looking at the physical is a circular argument that presupposes that physicalism is wrong. The arguments for physicalism absolutely are not based on looking at a limited set of things; they are logical arguments that there's no way to escape being physical ... certainly Descartes' dualism is long dead due to the interaction problem -- mental states must be physical in order to be acted upon or act upon the physical. The alternatives are ad hoc nonsense like Chalmers' "bridging laws", which posit that there's a mental world kept in tight sync with the physical world by "bridging laws" that have no description or explanation or reason to believe they exist.
Oh this is undoubtedly true, and my argument was limited to the statement that the most common argument for physicalism is invalid. I was not launching an attack on physicalism itself.
> No metaphysical stance can be proved.
That's an interesting metaphysical stance, but again, I'm not trying to prove any metaphysics, just pointing out the main weakness that I see in the physicalist argument. I'm pointing out that any pro-physicalist argument that is a variant of "neuroscience says X" is invalid for the reason I gave: by limiting your scope to S, you can say nothing about anything outside S. This is true regardless of whether there is actually anything outside S, so there is no assumption in my argument that physicalism is wrong.
One argument against physicalism is that if thought or knowledge can be reduced to particles bouncing around, then there is no thought or knowledge. My knowledge that 2+2=4 is about something other than, or different from, the particles in my brain. Knowledge is about the content of the mind, which is different from the associated physical state of the brain. If content is neurons, then content as something my mind considers doesn't exist. If my thought "2+2=4" just is a bunch of particles in my brain doing stuff, then my belief that my thought is true is not even wrong, as the saying goes: just absurd.
I'm no Cartesian dualist though -- the interaction problem is just one problem with his dualism. I think Aristotle and Aquinas basically got the picture of reality right, and their metaphysics can shed yuuuuge amounts of light on the mind-body problem but obviously that's a grossly unfashionable worldview these days :-)
You attacked physicalism for not being proven.
I disagree with your arguments and I think they are hopelessly confused. Since our views are conceptually incommensurate, there's no point in continuing.
The physicalist position wants to reduce the mental to the physical. My thought cannot be reduced from the mental to the physical, because my thought is about a tiger, and a tiger cannot be reduced to a brain state.
If physicalism is true, I can't really be thinking about a tiger, because the tiger in my thought has no physical existence-as-a-tiger, and therefore can't have any existence-as-a-tiger at all. But then I'm not really thinking about a tiger. And the same applies to all our thoughts: physicalism would imply that all our thoughts are delusional, and not about reality at all. A non-physicalist view allows my thought to be actually about a tiger, without that tiger-thought having physical existence.
(Note that I have no problem with the view that the mental and the physical co-incide, or have some kind of causal relationship -- this is obviously true -- only with the view that the mental is reducible to the physical.)
The UMD paper you link to elsewhere describes the central proposition of mind-brain identity physicalism as follows:
> a pain or a thought is (is identical with) some state of the brain or central nervous system
or
> ‘pain’ doesn’t mean ‘such-and-such a stimulation of the neural fibers’... yet, for all that, the two terms in fact refer to the very same thing." [emphasis in original]
(If you search for this second sentence and see it in context, you will see that substituting 'thought' for 'pain' is a fair reflection of the document's position.)
But this is problematic. Consider the following:
1. Thoughts are, at least sometimes, about reality.
2. My thought in some way refers to the object of that thought. Otherwise, I am not thinking about the thing I purport to be thinking about, and (1) is false.
3. That reference is not limited to my subjective, conscious experience of that thought, but is an inherent property of the thought itself. Otherwise, again, (1) is false.
4. Physicalism says the word "thought" and the phrase "a particular stimulation of neural fibers" refer to the same thing (from document above).
5. "A particular stimulation of neural fibers" does not refer to any object outside itself. Suppose I'm thinking about a tiger. You cannot analyze a neural state with a brain scan and find a reference to a tiger. You will see a bunch of chemical and electrical states, nothing more. You will not see the object of the thought.
6. But a thought must refer to its object, given 2 and 3. So "thought" and "particular stimulation of neural fibers" cannot refer to the same thing. (I will grant, and it is my position, that the latter is part of the former, but physicalism identifies the two.)
This seems to imply physicalism is false.
What step am I going wrong on?
I don't believe any of that to be true, but I think that's kind of the point of that argument. I do think we start from that Cartesian starting place, but once we know enough about the external world to know that we're a part of it, and can explain our mind in terms of it, it effectively shifts the foundation, so that our mental states are grounded in empirical reality rather than the other way around.
> Physicalism says the word "thought" and the phrase "a particular stimulation of neural fibers" refer to the same thing (from document above)
Here is what it actually says:
> The identity-thesis is a version of physicalism: it holds that all mental states and events are in fact physical states and events. But it is not, of course, a thesis about meaning: it does not claim that words such as ‘pain’ and ‘after-image’ may be analyzed or defined in terms of descriptions of brain-processes. (That would be absurd.) Rather, it is an empirical thesis about the things in the world to which our words refer: it holds that the ways of thinking represented by our terms for conscious states, and the ways of thinking represented by some of our terms for brain-states, are in fact different ways of thinking of the very same (physical) states and events. So ‘pain’ doesn’t mean ‘such-and-such a stimulation of the neural fibers’ (just as ‘lightning’ doesn’t mean ‘such-and-such a discharge of electricity’); yet, for all that, the two terms in fact refer to the very same thing.
And yet the sort of analysis it points out as absurd is exactly the sort of analysis you are attempting.
> You cannot analyze a neural state with a brain scan and find a reference to a tiger. You will see a bunch of chemical and electrical states, nothing more. You will not see the object of the thought.
Says who? Of course we don't currently have such technology, but at some time in the future we may be able to analyze a brain scan and determine that the subject is thinking of a tiger. (This may well turn out not to be feasible if only token-identity holds but not type-identity ... thoughts about similar things need not correspond to similar brain states.)
Saying that we only see a bunch of chemical and electrical states is the most absurd naive reductivist denial of inference possible. When we look at a spectrogram, all we see is colored lines, yet we are able to infer what substances produced them. When we look at an oscilloscope, we see a bunch of curves. Etc. Or take the examples at the beginning of the paper ... "a particular cloud is, as a matter of fact, a great many water droplets suspended close together in the atmosphere; and just as a flash of lightning is, as a matter of fact, a certain sort of discharge of electrical energy" -- these are different levels and frameworks of description. Look at a photograph or a computer screen up close and you will see pixels or chemical arrangements. To say that you will see "nothing more" is to deny the entirety of science and rational thought. One can just as well talk about windows, titles, bar charts, and this comment on a computer screen as referring to things while the pixel states coincident with them don't, and thereby foolishly, absurdly, think that one has defeated physicalism.
Enough with the terrible arguments and shoddy thinking. You're welcome to them ... I reject them.
Over and out.
It is not a question of "different levels and frameworks" of description in this case, or at least not solely. The examples of clouds and lightning given in the paper are not valid parallels (I know the paper doesn't offer them as parallels, but your comment did), because they do not need relations with other things to be water droplets and discharges of electricity. But a word needs a relation with something else (its referent) to be a word, otherwise it is just meaningless squiggles, sounds or pixels. And the word does not have this relation in and of itself: only a mind can give it this relation.
(You can take relation and reference as largely synonymous for the purposes of this comment.)
You can analyze the squiggles as closely as you like, but you will still not find any relation to a tiger, unless you have something else (a mind) giving it that relation. And again, the same is true for the other examples you give. Extrinsic relations exist between word and thing, or oscilloscope and wave, but not intrinsic ones.
In the same way, the brain's state when it thinks of a tiger is, in and of itself, a bunch of chemical and electric states that bear no intrinsic relation to a tiger. No amount of analysis of the brain's state will change this. As I stated somewhere else, a tiger and a brain state, like a tiger and the word "tiger", are two entirely different things, and are not intrinsically related to each other. You can analyze either the tiger or the brain state with whatever sophisticated technology you want, but that will not change this fact. Analyzing a bunch of squiggles will produce information about the ink, but not information about a tiger: you are still looking at ink. Analyzing chemical and electric states will produce interesting and very valuable information about the brain, but not information about a tiger: you are still looking at chemical and electric states. No amount of searching will find an intrinsic relation between brain-state and thing. [I think this is also a good argument against Cartesian dualism, but that's beside the point right now.]
The relation between thought and its object must be intrinsic (assuming our thoughts are about, or can be about, reality). It cannot be extrinsic like a word or an oscilloscope, because our thoughts are not given their meaning by something outside our minds. (I assume we agree on this last point after "because", and it doesn't need arguing.) Our thoughts' relations to their objects must be intrinsic to the thoughts. But they can't be intrinsic if our thoughts are our brain states, for the reason just given.
(The UMD paper responds to this objection in section 3.8: briefly, my response is that we might get the illusion of intentionality in a physical system like a computer, but no more than that.)
Nonsense.
> First, ontological assertions need to reflect reality.
You're getting ahead of yourself to imply that somehow physicalism does not reflect reality, or that an assertion has to be proven to reflect reality before being made.
> That is, they need to be true or false
No, that's not what reflecting reality means. Of course ontological assertions are true or false, if they aren't incoherent, but that's neither here nor there.
> and many philosophers, including prominent scientists, don't think they qualify.
What's this "they" that don't qualify? The subject was physicalism, and again almost all scientists and most philosophers of mind subscribe to it ... which leaves room for some not doing so. Whether or not the outliers are "prominent" is irrelevant.
> Indeed, the arguments against ontological realism are more airtight than any particular metaphysical theory.
That's a much stronger claim than that physicalism is wrong ... many dualists are ontological realists. And it's certainly convenient to claim that there are airtight arguments for one's views, and easy to dismiss the claim.
and
https://faculty.philosophy.umd.edu/pcarruthers/NoM%20-%205.p...
And while you're at it, as plausible as any metaphysical theory, insofar as you're still doing metaphysics.
If one drops the assumption that physical reality is nothing more than a bunch of particles, the mind stops being so utterly weird and unique, and the mind-body problem is more tractable. Pre-17th century, philosophers weren't so troubled by it.
Why can't it?
Another is that the propositions "the thought 2+2=4 is correct" and "the thought 2+2=5 is wrong" can only be true with regard to the content of a thought. If thought can be reduced to neurons firing, then describing a thought as correct or wrong is absurd. Since this is not the case, it must be impossible to reduce thought to neurons firing.
(Btw, the first paragraph of my previous comment is not my position. I am giving a three-sentence summary of Descartes' contribution to the mind-body problem.)
I promise I'm not being dense or rhetorical, I truly don't understand that line of thought.
It seems to me like begging the question, almost like saying "experience cannot be this, because it'd be absurd, because it cannot be this."
It is wrong to claim that brain states (neurons firing) are the same as mental states (thoughts). There are several reasons for this. One is that reducing thoughts to brain states means a thought cannot be correct or incorrect. For example, one series of mental states leads to the thought "2+2=4"; another series leads to the thought "2+2=5". The correctness of the former and the wrongness of the latter refer only to the thought's content, not the physical brain state. If thoughts are nothing more than brain states, it's meaningless to say that one thought is correct -- that is to say, a thought that conforms to reality -- and that the other is incorrect. A particular state of neurons and chemicals cannot per se be correct or incorrect. If one thought is right (about reality) and another thought is wrong (not about reality), then there must be aspects of thought that are distinct from the physical state of the brain.
If it's meaningless to say that one thought is correct and another is incorrect, then of course nothing we think or say has any connection to reality. Hence the existence of this disagreement, along with the belief that one of us is right and the other wrong, presupposes that the physicalist position is wrong.
I agree with this: the physical configuration of neurons, their firings, the atoms that make them, etc, cannot be "right" or "wrong". This wouldn't make sense in reality; it either is or isn't, and "right" or "wrong" are human values. The universe is neither right nor wrong, it just is.
What about the thoughts those neuron firings mean to us? Well, a good argument can be made that they are also not "right" or "wrong" in isolation; they are just phenomena. Taken on its own, a thought of "2+2=4" is neither right nor wrong; it's only other thoughts that judge it "right" or "wrong" (often with additional context). So the value judgments themselves can be physical manifestations.
So it seems to me your problem can be resolved like this: in response to a physical configuration we call a "thought", other "thoughts" can be formed in physical configurations we call "right" or "wrong".
The qualities of "right" or "wrong" only exist as physical configurations in the minds of humans.
And voila! There's no incompatibility between the physical world and thoughts, emotions, "right" or "wrong".
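A minimal sketch of that move, purely as illustration (the function and names here are hypothetical, not anyone's actual model of cognition): one piece of data stands in for a "thought", and a second, equally mechanical process produces the configuration we label "right" or "wrong":

    # Toy model only: a "thought" is just data (standing in for a
    # physical configuration), and the verdict is more data produced
    # by a purely mechanical process.
    def judge(thought: str) -> str:
        expr, claimed = thought.split("=")          # e.g. "2+2" and "4"
        a, b = (int(x) for x in expr.split("+"))    # parse the operands
        return "right" if a + b == int(claimed) else "wrong"

    print(judge("2+2=4"))  # -> right
    print(judge("2+2=5"))  # -> wrong

Nothing non-physical appears anywhere in this picture: "right" and "wrong" are states of the judging process, not intrinsic properties of the judged string, which is exactly the move being made above.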
> "right" or "wrong" are human values
Would 2+2=4 be correct, and 2+2=5 be incorrect, only if there were a human being to say so?
Even without getting into the mind-body duality we are discussing here, it's understood that the string "2+2=4" requires additional context to have meaning; it's just that this context is often implicit (e.g. we're talking about Arabic digits in base-10 notation, + is the sum operation as defined in ..., etc).
Thanks, I greatly appreciate your politeness and goodwill. Everything I say is in good faith too. I appreciate my ideas can seem odd, and sometimes I write in haste so do not take the time to explain things properly.
> it's understood that the string "2+2=4" requires additional context to have meaning; it's just that this context is often implicit (e.g. we're talking about Arabic digits in base-10 notation, + is the sum operation as defined in ..., etc).
I would distinguish the symbols from the concepts they represent. The string (or words, or handwritten notes) "2+2=4" is one thing; the concepts that it refers to are another. I could use binary instead of base-10, and write "10+10=100". The string would be different, but the concepts that the string referred to would be the same.
Everything I say, unless otherwise stated, refers to the concepts, not to the string.
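To make the symbol/concept distinction concrete, a small worked example (my own illustration): different strings, read under different notational conventions, pick out the same number.

    # "4" in base 10 and "100" in base 2 are different symbols for
    # one and the same concept: the number four.
    assert int("4", 10) == int("100", 2) == 4
    # Likewise "2+2=4" (base 10) and "10+10=100" (base 2) express
    # the same arithmetic fact.
    assert int("10", 2) + int("10", 2) == int("100", 2)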
>> Would 2+2=4 be correct, and 2+2=5 be incorrect, only if there were a human being to say so?

> I think it's a question that only makes sense if there's a human asking it. "Correct" is always relative to something
This is true: correct is always relative to something (or, better, measured against something).
> in this case, the meaning a human attaches to that string, a string that only exists as a physical configuration of neurons.
But I disagree here. I would say it must be measured against something outside the mind, not the meaning a person gives something. If the correctness of arithmetic is measured against something inside the person's mind, then a madman who thought that 2+2=5 would be just as right as someone who thought that 2+2=4. Because there would be nothing outside the mind to measure against. One person can only be correct, and the other wrong, if there is something independent of both people to measure against. So if we say that arithmetic describes reality (which it clearly does: all physics, chemistry, engineering, computer science, etc etc assumes the reality of arithmetic), then we must say that there is something extra-mental to measure people's ideas against. It is this extra-mental measure that makes them correct or incorrect.
This is true not just of math, but of the empirical sciences. For example, somebody who thinks that a hammer and a feather will fall at different velocities in a vacuum is wrong, and somebody who thinks they fall at the same velocity is right. But these judgements can only be made by comparing against an extra-mental reality.
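(For completeness, the vacuum claim is a one-line consequence of Newton's laws: the gravitational force on a falling body of mass m is F = mg, so its acceleration is a = F/m = mg/m = g, independent of m. With no air resistance, hammer and feather therefore accelerate identically and land together.)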
So it seems to me when you say that
> the qualities of "right" or "wrong" only exist as physical configurations in the minds of humans.
you imply that arithmetic (and by extension, any subject) cannot describe reality, which must be false. It's also self-contradictory, because in this conversation each of us claims to be describing reality.
Against Mind-Blindness: recognizing and communicating with diverse intelligences - by Michael Levin
https://www.youtube.com/watch?v=OD5TOsPZIQY
I think we've made extraordinary progress on things like brain-to-machine interfaces, and on demonstrating that something much like human thought can be approximated according to computational principles.
I do think some sort of theoretical bedrock is necessary to explain the "something it is like to be" quality, but I think it would be obtuse to brush aside the rather extraordinary infiltrations into the black box of consciousness that we've made thus far, even if it's all been knowing more about it from the outside. There's a real problem that remains unpenetrated, but as has been noted elsewhere in this thread, it is a nebulous concept, and perhaps one of the most difficult and important research questions, and I think nothing other than ordinary humility is necessary to explain the limited extent to which we understand it thus far.
Aside from that, breathing fresh air in the morning is an activity, not a "quality of subjective experience". Generally the language people use around this is extremely confused and unhelpful.
And no, that's not what a non sequitur is. And no, coherence is not just a linguistic idea. Then you try to explain what I "really mean" by "quality of subjective experience," and you can't even give a good faith reading of that. I'm really trying here.
There's nothing incoherent here, they're just talking about subjective states of experience.
What makes me me? Whatever you identify as "yourself", how come it lives within your body? Why is there not someone else living inside your body? Why was I born, specifically "me", and not someone else?
This has puzzled me since childhood.
Not at all. I was shocked when I noticed how few people have asked themselves this question. In fact, it is impossible to even explain this question to the majority of people. Most people confuse it with "what makes us intelligent", missing the whole "first-person perspective" aspect of it.
I guess evolution tries to stop us from asking questions that might lead to nihilism.
If that's not the case, then I'll just have no subjective experience, same as before I was born/instantiated.
Disappointed when I went somewhere and there wasn't any tea,
Enthralled by a story about someone guarding a mystical treasure alone in a remote museum on a dark and stormy night,
Sympathetic toward a hardworking guy nobody likes, but also aggravated by his bossiness to the point of swearing at him,
Confused due to waking up at 7 pm and not being sure how it happened.
You probably don't entirely understand any of those. What is it to entirely understand something? But you probably get the idea in each case.
IMHO the phrasing here is essential to the argument, and this phrasing contains a fundamental error. In valid usage we only say that two things are like one another when they are also separate things. The usage here (which is cleverly hidden in some tortured language) implies that there is a "thing" that is "like" "being the organism", yet is distinct from "being the organism". This is false: there is only "being the organism"; there is no second "thing that is like being the organism", not even for the organism itself.
That's exactly what I'm saying is erroneous. Consciousness is the first thing; we are only led to believe it is a separate, second thing by a millennia-old legacy of dualism and certain built-in tendencies of mind.
134 more comments available on Hacker News