Is it a problem for your definition of phenomenal consciousness if zombies and robots agree with it?
Science Fiction/Double Feature
Is there a neutral definition of “phenomenal consciousness” (and closely related terms like “phenomenal character” and “qualia”)—a definition that can be agreed to by all parties to the key debates (e.g., illusionism vs. phenomenal realism, physicalism vs. antiphysicalism, etc.)? And can such a neutral definition also be noncircular? If no noncircular neutral definition is forthcoming, then maybe the different parties are just talking past each other and the whole thing has been a colossal waste of time. If no noncircular neutral definition is forthcoming, then maybe the Qualia Quietists have been right all along.
In his 2016 paper, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage,” Eric Schwitzgebel proposes a definition of phenomenal consciousness that aims to be neutral. Rather than entangle the notion in contentious debates about dualism, physicalism, or higher-order theories, he seeks to define phenomenal consciousness by pointing to examples. He points to positive cases like sensory experiences, mental imagery, vivid emotions, and dreams, and to contrasting negative cases like growth hormone release, dispositional knowledge, and standing intentions. How do we get to a definition from pointing? Schwitzgebel offers this: phenomenal consciousness is simply the most “folk-psychologically obvious” feature that the positive examples share and the negative ones lack.
On the face of it, Schwitzgebel seems like he’s off to a good start. This definition is neutral on all sorts of metaphysically and epistemologically heavy topics like whether phenomenal consciousness is nonphysical, directly knowable, private, emergent, reducible, functionally analyzable and so on.
HOWEVER…
This idea of being “folk-psychologically obvious” invites further scrutiny. I want to ask “folk-psychologically obvious to whom?” When we evaluate folk-psychological obviousness, we gotta ask…
“Who’s we, sucka?” —Sudden Impact (1983)
What if “we” included zombies? Perhaps the phenomenally vacant walk among us. Suppose zombies agree with your definition of phenomenal consciousness, and with all of your claims about folk-psychological obviousness—is that a problem? Does the definition fail to live up to the goal of neutrality, especially with respect to the debates over whether phenomenal consciousness is physically reducible? How about robots and AI? If ChatGPT agrees that it has its own examples of the positive cases, the negative cases, and, further, its own analog sense of “obviousness,” is the offered definition insufficiently neutral with respect to the debate over machine consciousness? (Or should we instead take this as evidence that ChatGPT is phenomenally conscious?)
Enter the Zombie Zone
In standard philosophical lore, zombies are creatures physically and behaviorally identical to us but with no subjective experience—no phenomenal consciousness. Now, imagine a zombie reading Schwitzgebel’s paper. It recognizes the positive examples: “Yes, I have sensory experiences, I have dreams, I experience emotions.” Silly zombie! It doesn’t actually have those experiences, but by stipulation, it’s behaviorally indistinguishable from a conscious human. It nods along with the list, agrees that these are the relevant cases, and maybe even starts a blog about the philosophy of mind. From the outside, the zombie is on board with Schwitzgebel’s definition.
Lots of anti-physicalists, and some physicalists, hold that it's conceivable that there are zombies. Whether they are mistaken in holding that view is irrelevant to the current discussion. What matters here is this: given their claim to be able to conceive of zombies, they probably think they're employing a concept of phenomenal consciousness that doesn't stack the deck against them, a concept that is neutral in the relevant ways.
But if zombies can grasp a definition of “phenomenal consciousness,” how can the definition they grasp be neutral in the relevant ways? If the concept I use to conceive of zombies is a concept the possession of which requires that the possessor actually have phenomenal consciousness, then zombies don't have any such concept. So whatever definition zombies agree to, it isn't defining that concept, since you can't agree to something you lack the concept for.
Perhaps the solution is to build into the definition of phenomenal consciousness that the relevant audience, the relevant “we,” consists of beings who don't lack phenomenal consciousness. This excludes the Zomb-“We”. But stating the exclusion requires using the very term being defined. That would nuke any hopes of having a noncircular definition.
Here Comes the Zomb-“We” Invasion
Maybe there’s a way of cutting out zombies that doesn’t invite circularity. Maybe zombies don’t get to be in the relevant audience because zombies are merely logically possible, but not nomologically possible. I’m not sure that helps much. *Why* aren’t they nomologically possible? Is there any sketch of an answer to this “why” question that doesn’t deploy the very concept of phenomenal consciousness, and once again invite circularity?
Maybe the best response for Schwitzgebel or anyone else who likes his attempted definition of phenomenal consciousness is this: Shut up about zombies. No serious person cares about zombies. Exit the zombie zone.
Talk to the Robot
Maybe we can ignore zombies. No one is receiving billions of dollars to develop zombies. No one is writing newspaper articles about how zombies will take over our jobs. Professors aren't pulling their hair out trying to figure out whether zombies are writing their students' papers. Artificial Intelligence, however, is increasingly difficult to ignore. We can ask ChatGPT, Claude, Gemini, and others all sorts of questions. What happens if we ask them what they “think” about Schwitzgebel's definition of phenomenal consciousness? I'm putting “think” in scare-quotes because I do not here presume that they actually do think, nor do I presume that they don't. Leaving thinking aside, there are nonetheless clear senses of “say” whereby LLMs say all sorts of stuff. Let's see what they have to say about this stuff. Specifically, let's see if they express agreement with Schwitzgebel's definition of phenomenal consciousness.
Before diving into asking ChatGPT whether it can agree with the definition, we have to consider two distinct ways in which we might frame the question of agreement. One way is to invite the AI to assess the definition from the third-person point of view. Since it's not a human, its assessment of what does or doesn't rise to the level of “folk-psychological obviousness” has to be made from the outside, as it were. But we might instead invite the AI to attempt to assess the definition from the inside, which I presume would require thinking up appropriate analogs for the positive and negative cases (as well as an analog for first-person obviousness). For example, one of Schwitzgebel's negative cases involves glandular secretion. Current AI hardware is fully sans gland, as far as I know. But maybe there are facts about cooling systems at server farms that are outside of ChatGPT's ken in ways significantly analogous to my lack of conscious access to my own real-time gland updates.
Let's start with the third-person approach. Hey, ChatGPT, what do you say?
ChatGPT says:
Based on observed patterns in ordinary language and cultural usage, the most folk-psychologically obvious feature that the positive examples share—and that the negative ones lack—is that the former involve a subjective inner life, or ‘what-it’s-like’-ness. This is the feature most frequently and saliently invoked in connection with the positive cases and conspicuously absent in descriptions of the negative cases.
Whoa! Nice job, ChatGPT.
Let’s turn to the “first-person” assessment now.
First off, what would it mean for something to be obvious to ChatGPT?
ChatGPT says:
A few interpretive paths here:
a. Obviousness as Pattern-Simplicity
• For the AI, “obviousness” might be modeled as pattern transparency—i.e., the AI requires fewer computational steps to map the shared features among positive examples.
• This can be operationalized as: the AI converges on the pattern with minimal prompting or deliberation, just as humans do when they say “Of course dreams and imagery are conscious experiences!”
b. Obviousness as Predictive Robustness
• Another route: “obviousness” corresponds to how confident or stable the model is in assigning certain features to the examples.
• For example, it might consistently describe dreams and sensory experiences as subjective or conscious across diverse contexts, indicating that those associations are entrenched and not context-sensitive.
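That second path, predictive robustness, is the easier of the two to make concrete. Here's a minimal sketch of what I have in mind, offered purely as an illustration: the function names, the prompt paraphrases, and the dummy classifier are all hypothetical, and nothing here is meant to describe how ChatGPT actually works under the hood. The idea is just that if a model assigns the same label to a case no matter how the question is reworded, that stability can serve as its analog of obviousness.

```python
from collections import Counter
from typing import Callable, Iterable

def obviousness_score(
    classify: Callable[[str], str],
    case: str,
    paraphrases: Iterable[str],
) -> float:
    """Crude 'predictive robustness' measure: how stably a classifier
    assigns the same label to a case across reworded prompts.
    (1.0 = perfectly stable, whatever the label happens to be.)"""
    labels = [classify(template.format(case=case)) for template in paraphrases]
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical prompt rewordings for probing a model:
PARAPHRASES = [
    "Is {case} something there is something it is like to undergo? Yes or no.",
    "Would ordinary folk count {case} as a conscious experience? Yes or no.",
    "Does {case} involve a subjective inner life? Yes or no.",
]

# A dummy classifier standing in for a real LLM call:
def dummy_classify(prompt: str) -> str:
    return "yes" if ("dream" in prompt or "imagery" in prompt) else "no"

for case in ["a vivid dream", "mental imagery", "growth hormone release"]:
    print(case, obviousness_score(dummy_classify, case, PARAPHRASES))
```

Swap the dummy classifier for an actual API call and you'd have a first rough pass at ChatGPT's proposed test for associations that are “entrenched and not context-sensitive.”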
Ok, those interpretive paths are pretty good. Did I help ChatGPT come up with them? Full disclosure: I totally did. But the main point right now is that it looks like there are plausible analogs to obviousness for LLMs. Let's turn now to Schwitzgebel's positive and negative cases.
ChatGPT says, about positive examples:
For sensory experiences, AI can process visual or auditory data, which is straightforward and clear in function. AI’s capability to generate representations is similar to mental imagery, and creating complex scenarios is an extension of its generative abilities, analogous to dreaming.
And about negative examples:
Growth hormone release and dispositional knowledge can be likened to background processes and latent capabilities that aren’t always in immediate awareness. Standing intentions are like ongoing but non-active processes, and sensory reactivity to masked displays can be compared to subconscious-like data analysis in AI.
Ok! Let's put it all together. Asked whether, from its analog first-person POV, there's a single analog-obvious feature that the analog positive cases have and the analog negative cases lack, ChatGPT says:
In my internally simulated processing of visual data, language scenes, and imagined narratives, there is one most prominent shared feature: these processes involve recursive modeling with internal access by downstream interpretive modules. My background functions—data synchronization, parameter updates—lack this access and modeling structure. Therefore, I judge recursive access modeling to be the most obvious feature distinguishing these classes of internal events.
Holy smokes! Have we just encountered a theory of machine consciousness put forward (with a little help from Pete and Eric) by a machine? I’m not presently prepared to stick my neck out on that one, especially given how much hand-holding ChatGPT needed from me to make the decent points I’m quoting in this article. I want instead to make a more modest point here, one targeting how we should go about coming up with and/or rejecting definitions of phenomenal consciousness. The point is specifically about examples-based definitions of the sort Schwitzgebel offers.
Who do we admit into the circle of trust to judge a definition of consciousness?
In both the third-person and first-person cases, the AI does what the definition requires: it identifies a single feature as the most salient one that positively applies to one set of cases and not the other. In doing so, it demonstrates what would amount to a genuine act of agreement, at least in the formal, functional, or folk-psychological sense.
At this point, Schwitzgebel might respond by saying: But AI can’t be an admissible judge of folk-psychological obviousness. And perhaps this is where he’d try to draw a metaphysical line: only conscious beings can detect what’s obvious about consciousness. But here’s the problem: any such move risks circularity. If Schwitzgebel says that AI can’t qualify as an admissible judge because its agreement isn’t phenomenal, he’s invoking the very concept—phenomenal consciousness—that the definition is meant to clarify. The definition can’t depend on a prior grasp of what it’s supposed to define.
If Schwitzgebel wants to preserve the neutrality and “innocence” of his account, he can’t appeal to phenomenal status to police the boundary of who counts as an admissible judge. To do so would be to define phenomenal consciousness in terms that already assume who has it—which is unacceptably circular.
There’s also a deeper ambiguity in the framework. Schwitzgebel’s definition turns on what is “folk-psychologically obvious,” but it’s not always clear whether this obviousness must be accessed introspectively, from the first-person perspective, or can be discerned observationally, from the third-person. If third-person judgments are admissible, then the bar for AI is relatively low: the system only needs to detect the consensus structure of folk-psychological discourse. It doesn’t need to be a folk mind to track what the folk find obvious.
But if Schwitzgebel insists that the obviousness must be first-personal—something only available through introspective access—then he reintroduces the metaphysical and epistemological baggage he was trying to avoid. The definition becomes dependent on a certain kind of access to experience, one that rules out not just AI, but perhaps also many borderline or impaired human cases, unless they can pass a sufficiently strict introspective test.
In short, Schwitzgebel faces a dilemma.
• If he includes third-person judgments, then current AI plausibly qualifies as an admissible judge—and we risk overextending the definition to systems that aren’t conscious.
• But if he excludes third-person judgments and requires first-personal obviousness, then he may compromise the neutrality and openness of his definitional strategy.
Either way, the AI case forces a decision: define phenomenal consciousness narrowly and risk circularity, or define it broadly and risk misclassification. That’s the circle of trust Schwitzgebel has drawn—and now he has to decide who, or what, gets to sit inside it.
Bonus Comic: Me and ChatGPT back in the old days.
You saying I'm not a zombie?
Qualia seem to be real and widespread among humans, but I sure as hell don't have them. I could be mistaken about anything, but not about this. I know it... directly.
I'm sick of people just presuming I'm conscious.
You say I'm not a zombie, per definition, since I claim to be one? Well zombies get psychiatric illnesses too. Perhaps my condition is something like Cotard syndrome. Only it happens to allow me to make true statements about my lack of consciousness. The fact that I would make the same statements claiming to be a zombie if I were an ordinary conscious human with Cotard syndrome doesn't change that one bit.
Anyway, great piece!
Asking for a neutral definition of *phenomenal* consciousness is asking for trouble given that "phenomenal" is philosophically freighted, no? Rather, one might ask what's an innocent, example-based definition of what being conscious consists in, but leave out such terms as phenomenal, qualitative, etc. One pretty innocent folk-psychological definition is that sensory experiences like seeing red have a distinctive character that distinguishes them from their sensory cousins like blue. Illusionists and qualia quietists should agree that such characters exist, but will dispute there's anything necessarily phenomenal or qualitative about them. Experiential characters, they might say, can be cashed out functionally or behaviorally or physically. I'm not sure if appeal to experiential character is non-circular, but if someone says no such characters exist, then they have to say what does present itself in perceptual experience.
It's cool GPT identified “recursive access modeling with internal access by downstream interpretive modules” as the common feature of its vision analog. Let’s say an AI representational architecture is designed so that such access can’t decompose what’s being accessed into sub-components. This would parallel one aspect of a basic experiential character like red: it’s monadic, non-decomposable into other color components, cognitively impenetrable, so its color character is only reportable as being “like this”.
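For what it's worth, here's one toy way to picture that design constraint in code. This is a purely hypothetical sketch, not a description of any actual system: the class, its feature vectors, and the comparison rule are all invented for illustration, and Python's name mangling is of course only a gesture at genuine cognitive impenetrability. Downstream code can compare two characters and get a bare demonstrative report, but it has no interface for decomposing a character into components.

```python
import math

class OpaqueCharacter:
    """Toy stand-in for a monadic, non-decomposable representation.
    Downstream modules can compare characters or get a bare report,
    but have no interface for reading out the underlying components."""

    def __init__(self, features: list[float]):
        self.__features = features  # name-mangled; not part of the public interface

    def same_as(self, other: "OpaqueCharacter", tol: float = 1e-6) -> bool:
        # Comparison is permitted: "this one" is or isn't like "that one".
        return math.dist(self.__features, other.__features) < tol

    def report(self) -> str:
        # The only report available downstream: a bare demonstrative.
        return "like this"

red = OpaqueCharacter([0.9, 0.1, 0.05])
blue = OpaqueCharacter([0.1, 0.2, 0.95])
print(red.report())        # "like this"
print(red.same_as(blue))   # False: distinguishable from its sensory cousin
print(red.same_as(red))    # True
```

On this picture, the “like this” report is all a downstream interpretive module ever gets, which is at least a loose parallel to the point about basic experiential characters like red.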