75 Comments
Anonymous Thinker:

Very good article. I still don't know much about philosophy of mind, but counter-arguments to physicalism that appeal to strangeness and conceivability don't seem very strong. For example, I can conceive of throwing gasoline on a fire and the fire going out. I also came across the zombie ghost argument one day:

1 - It is conceivable that a zombie ghost exists, a ghost that would behave identically to a conscious ghost, but that would not have internal consciousness

2 - What is conceivable is possible

3 - Therefore, it is possible that a zombie ghost exists

4 - If a zombie ghost is possible, then consciousness is not reducible to an immaterial substance

5 - Therefore, consciousness is not reducible to an immaterial substance

Anyway, I could be wrong.

Eric Borg:

Again, awesome Pete, thanks! It humbles me to know that you ripped out such an in-depth response to me in nothing flat. My own mind functions far more slowly, though if given sufficient time I will always try to earn the interest that you’ve displayed.

It’s true that I was initially quite satisfied with the “weirdness” angle for my thumb pain thought experiment. It was responses of essentially “So what?” from my buddy Mike Smith that led me to take things further. Thus I needed to go beyond the work of people like Searle and Block. So what exactly is magical about an experiencer of thumb pain by means of the correct marks on paper that are algorithmically processed to create the correct other marks on paper? My answer is that the resulting marked paper cannot inherently be informational, but only in respect to something causally appropriate to exist as “an experiencer of thumb pain”. So if you follow the logic chain here, that specifically is the magic.

In your post you seemed to agree with me that in a causality-based world, information should only exist as such to the extent that it goes on to inform something causally appropriate. Wasn’t that the point of the progression you laid out where certain material can be a brake pad, or paperweight, or door stop, or part of a cyberpunk clock? The material will not inherently be informational in any specific sense, but only in respect to what it informs? I’ve made the same observation in the more traditional information sense of a keystroke that a computer processes but only exists as information in respect to a screen, speakers, memory, or whatever else it might inform that’s causally appropriate. Otherwise such potential information should not be information at all. Or a DVD will not be causally appropriate to inform a VHS machine, though as a shim it may still inform a table leg.

If we’re all square on this then my “gotcha!” is how functionalists do not follow this model in just one situation, or regarding consciousness. You obviously know the history of functionalism far better than I do. Nevertheless I’m proposing that it all goes back to Alan Turing’s 1950 imitation game. Given what a joke consciousness was in academia back then (and I think still is), he made a quick little heuristic to get around this mess through a test which suggests that something will be conscious if it seems conscious. Thus apparently your people ran with this to create the situation we have today. There’s just one catch that makes it magical, however: conventional computers should never create beings that deserve human rights, or become smart enough to take over our world, or permit us to upload our own consciousness to them so that humans can effectively live forever by means of technology. Just as our brain’s processed information must be informing something causally appropriate to create the “you” that experiences a whacked thumb, the correct marks on paper must do this as well to causally create such an experiencer. If so then great, the causal circle will be complete even in terms of marked paper. Otherwise… no.

Then once I had this realization I was able to go further. If my brain’s processed information is informing something causally appropriate to exist as all that I see, hear, feel, think, and so on, then what might this be? Months later I came across the proposal that our brains might be informing a unified electromagnetic field to exist as the substrate of consciousness. Once inspected, I couldn’t imagine a second reasonable possibility. But here’s the thing that matters now. When I propose that consciousness might exist in the form of an electromagnetic field that processed brain information informs, it’s thus possible to empirically test. That’s what I mean to discuss in my next post. But when functionalists propose that consciousness arises by means of the processing of information in itself, this not only breaks the model about information needing to inform something causally appropriate to exist as such, but it thus remains unfalsifiable. Substrate based consciousness ideas are inherently testable given the “substrate” element. Substrate-less consciousness ideas are inherently unfalsifiable given the inherent lack of anything to test. So once again, here I’m framing this as something which is potentially causal that could be tested, versus inherently magical.

Pete Mandik:

I think you missed a couple of my points, so allow me to re-direct you to them, maybe with slightly different wording. Bonus gift of a third and new point.

regarding informing something causally appropriate: each paper thing in the paper implementation does indeed inform something causally appropriate: it informs some other part of the system which itself is also made of paper. So, whatever your beef is with paper-computer functionalists, it doesn’t seem to have anything at all to do with informing something causally appropriate.

regarding testability: functional theories are exactly as testable as nonfunctional ones. take whatever test you have for verifying the presence of consciousness, say, hitting yourself on the thumb with a hammer, and then we replace parts of your brain with computer chips or paper poppers or whatever and repeat the test. If this wouldn’t count as an empirical test of the presence of consciousness, then neither should anything you offer in connection with electromagnetic fields or whatever you’re into.

third, bonus point: you keep saying “substrateless” as if functionalists think you don’t need any substrate whatsoever. That certainly makes their theory sound spooky! Like the mind is a nonmaterial ghost that needs no material body whatsoever. But that’s not fair play, especially since things can depend on having some substrate or other AND have the substrate replaced. That’s all that substrate-independence means; it’s not “substrateless.” This is how scanning and sending documents works. It’s still the same document. But if you call Ghost Busters to report a ghostly entity in your fax machine, they won’t be interested. Only substance dualism is a “substrateless” theory, so I kindly ask you to cut it out.

Eric Borg:

Ah, I think I see what’s going on, Pete. No, this thought experiment isn’t anything like John Searle in a room interpreting markings by means of a vast code book associated with the workings of a Turing-test-passing computer. And it’s not like Chinese people armed with cell phones that communicate with each other in highly brain-like ways. I consider my argument to be far more to the point than either of those. Your query has helped me come up with a potentially better way to display the crucial points of my argument, however.

For nostalgia’s sake let’s say we have some paper (P1) with marks on it that in some way highly correlate with the first Star Wars film. Then let’s say that we scan this P1 into a computer of whatever needed processing capacity (C1) that algorithmically processes these marks to thus print out more paper (P2) that theoretically could be used by a second computer (C2) to play Star Wars by means of its screen and speakers. So P1 will not be inherently informative, but rather only in respect to a causally appropriate substrate, or C1 in this case. Furthermore P2 will not be inherently informative, but rather only in respect to a causally appropriate substrate, or C2 in this case. So now let’s revisit my thought experiment.

We have some paper (P1) with marks on it that highly correlate with the information that your whacked thumb sends your brain. Then let’s say that we scan this P1 into a computer of whatever needed processing capacity (C1) that algorithmically processes these marks to print out more paper (P2) that has marks on it that highly correlate with the processing that your brain does when your thumb gets whacked. In the relevant sense P1 will not be inherently informational, but rather only in respect to a computer that processes it to thus create P2. So good, Alan Turing hasn’t yet tricked anyone into believing anything magical. But what about P2? How might this be informational if it doesn’t go on to inform anything causally appropriate that would thus exist as an experiencer of thumb pain? Yes P1, C1, and P2 have their mentioned substrates. But if we’re talking about “causal thumb pain” then one thing more should be required. Just as the Star Wars case required P2 to inform a screen and speakers in order for its marks to be informational in the relevant sense, the P2 case for thumb pain should not be inherently informational, but only in respect to something causally appropriate that could use those marks to create such an experiencer. Just as the screen and speakers are the substrate of Star Wars, I’m referring here to a substrate of thumb pain. So saying that thumb pain would exist once P2 gets printed out mandates the existence of a substrate-less experiencer. To remedy this P2 might be scanned into a computer (C2) that’s armed with the appropriate physics of pain. Here the marked paper would inform such physics to thus create such an experiencer.
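A toy schematic of the P1 → C1 → P2 → C2 pipeline above, for readers who want its structure at a glance. Everything in it is a hypothetical stand-in and it takes no side on the philosophy; it only renders the claimed asymmetry, namely that the same marks count as information relative to one consumer and as a mere shim relative to another.

```python
# Toy rendering of the P1 -> C1 -> P2 -> C2 pipeline described above.
# All names and transformations are hypothetical stand-ins.

P1 = b"\x01\x02\x03\x04"  # stand-in for the marks on the first paper

def C1(marks: bytes) -> bytes:
    """First computer: algorithmically turn P1's marks into P2's marks."""
    return bytes((b * 2) % 256 for b in marks)

P2 = C1(P1)  # the second sheet of marked paper

def C2_screen_and_speakers(marks: bytes) -> str:
    """A causally appropriate consumer: it can 'play' the marks."""
    return f"playing {len(marks)} frames"

def table_leg(marks: bytes) -> str:
    """A causally inappropriate consumer: the marks inform nothing here."""
    return "table propped up; the marks play no role"

print(C2_screen_and_speakers(P2))  # P2 is informational relative to C2
print(table_leg(P2))               # and a mere shim relative to a table leg
```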

Pete Mandik:

Thanks. That clarifies a lot. The problem I see now is this: There’s no way of deriving from the definition of functionalism that some inert stack of printouts would allegedly suffice for an experience of pain. The closest you could get is that the process of printing them out or the process of scanning them in would allegedly suffice for an experience of pain. But now the argument against functionalism just becomes a “weird realization” objection. If it isn’t a weird realization objection, then it’s just a flat-out strawman argument against functionalism.

Eric Borg:

My argument isn’t actually that functionalism is wrong. In fact from any literal interpretation I consider functionalism to be true by definition. Then regarding the other big ideology here, illusionism, I don’t quite dispute this either but rather amplify it. I don’t believe in any of the spooky things that illusionists don’t believe in, and I also don’t believe in one specific spooky thing that illusionists do tend to believe in. This is to say, I don’t believe that the processing of certain information into certain other information can create, for example, an experiencer of thumb pain. Instead the processed information would need to inform something causally appropriate to exist as such an experiencer (not that science has yet determined what this might be, though scientists shouldn’t be able to discover something that they aren’t even trying to find). I’d mention your qualia quietism as well, though it’s not immediately coming to mind. In the past I’ve been supportive as I recall.

Anyway all I’m looking to do is amend a problematic shortcut made by Alan Turing that’s caused many naturalists to believe some unnatural things about consciousness, and somewhat aggravated by the popularity of certain science fiction themes.

Pete Mandik:

functionalism doesn’t entail anything nonnatural, and if it could be shown that it does, that would be a good reason to reject functionalism as false. so your repeated suggestion that it does entail something nonnatural sure seems like you’re attempting to argue against functionalism. Functionalism isn’t true by definition. It’s an empirical theory, as I argued in my previous contributions in this thread.

Eric Borg:

I guess it doesn’t actually matter what people call themselves, but rather the specific positions that they hold. Regardless of whether functionalism is defined such that it’s true by definition, or rather possible to assess empirically, the question to ask is what specifically does a given adherent believe? As I understand it many who use this title believe that an experiencer of thumb pain must result if the correct marks on paper (P1) were algorithmically processed to create the correct other marks on paper (P2). I argue that this would violate causality given that P2 would not be informing anything causally appropriate to exist as such an experiencer. To potentially avoid this conclusion one could argue that causality actually does permit information to exist as such in at least this situation without it going on to inform anything causally appropriate that would exist as such an experiencer. I don’t know of any other ways to potentially avoid the conclusion that this belief violates causality.

Mark Slight:

As someone who practically wants to send my soul over to some functionalist philosopher body, this is an obvious suck-up: brilliant reply! (I like Eric so I hope he doesn't mind me jumping in here. Nobody's obliged to reply to me).

Pete, I'm telling you, you gotta be an Illusionist! Almost everyone is born with a Cartesian theater! Not everyone, of course. Perhaps not you. Probably not Lance Bush. And many others. But it's been dominating the world since the beginning of time. It's brutal in my family.

It seems clear to me from Eric's response, and from my own exchanges with him, that he views consciousness as containing himself ("me", he said) that is the viewer/experiencer of mental objects. Classic Cartesian theater subject-object duality. When Eric heard of the magnetic field, his strong Cartesian theater circuitry modelling a unified self latched on to the magnetic field like a blood-sucking leech does to my juicy legs!

To you Eric, I say, study some eastern non-dualism!

I also think: study a particular aspect of physics. You have said your theory doesn't require violating the standard model. That means that it is impossible to change any magnetic fields without also changing the ion fluxes through and along neurons. You can't have the one without the other! Your theory is not testable in any sense that other materialist theories aren't.

Eric, I think you're cool so I hope you don't mind me talking like this. After all, you're accusing me of believing in magic :)

Pete Mandik:

weird you mention bacteria and bach. i just started rereading it yesterday. but yeah, i read it when it first came out. one thing about “real” illusions is their recalcitrance. visual illusions often don’t go away just because you think about them for a little bit. ditto for cognitive illusions like the monty hall “paradox”. i have been getting grief from my kids about my “lectures” so i’ve been hanging back unless they ask me about something they care about, like what farts are made of

Mark Slight:

Oh, and no, it's not weird I mentioned it and that you just started re-reading it. It's just my ever-increasing influence on you and my project to transform you into Papa Pete "the super-illusionist" Mandik

Mark Slight:

What ARE farts made of? Also what is it like to be a bat while farting?

On illusions, yes there is certainly a difference in salience and what kinds of things dispel them. But I think there are important heterogeneity and method aspects to take into account. If you instead ask "do you have a sense of self, of you, which is not the same as your thoughts and sensations?" it's not such a silly question anymore. I think the non-dual traditions of breaking this illusion speak to this too. But it's gonna differ biologically and culturally and by which method you 'research' it.

I discovered I can quite easily make the checker shadow illusion disappear if I stare at the center 'light' square while mentally focusing on the top 'dark' square and its counterpart in the bottom. I suspect I can't do any equivalent with the Müller-Lyer (or whatever) illusion.

Ignoring the illusion that cars are better than goats: I do think the illusion goes away to a large extent when thinking about it. Funny you mention this: I was on a date with an apparently very smart woman recently and talked about the Monty Hall problem. It seemed like she hadn't heard of it, and she immediately said "switch" but couldn't quite explain why.
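For anyone who wants the why and not just the answer, here is a minimal Monty Hall simulation; a sketch with hypothetical helper names, nothing more. Switching wins exactly when your first pick was wrong, which happens two thirds of the time.

```python
import random

def play(switch: bool) -> bool:
    """Play one Monty Hall round; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Expected: ~0.333 when staying, ~0.667 when switching.
```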

Pete Mandik:

it’s a question best left to the so-called experimental philosophers, who i’m glad exist but i have no interest in being one

Mark Slight:

Yeah I agree like why experiment when I'm obviously right

Pete Mandik:

(nice point about fields & ions btw!)

Mark Slight:

Thanks!

(I can see your Swedish genes in your looks & intelligence btw!)

Pete Mandik:

i’m incredibly susceptible to flattery. sending you my paycheck right now.

Mark Slight:

Great. I need it badly if I'm supposed to have time to learn all this stuff you philosophers make up!

Pete Mandik:

I guess it depends on what having a cartesian theater means. I think it means requiring that if something seems a certain way, there has to be a special something in the mind that literally is that way. People will literally talk about "phenomenal red" and how there's "no appearance/reality distinction for phenomenal consciousness" but I'm pretty sure those people aren't normal people. When you confront people with the fact that one of the ways things can seem is that they can seem six feet away--now what? Is the cartesian theater six feet away from itself? Huh? WHAT? When you directly confront people with how goofy the cartesian theater is, they pivot to denying being committed to any such thing. So, I don't know if I'm ready to be an illusionist. I don't think people believe their own bullshit enough for me to sign on for illusionism. I encountered a cool analogy recently from Penn Jillette of Penn & Teller fame. He said, speaking of whether people actually believe in God, that none of these so-called believers have ever declared someone innocent in a criminal trial when the defendant claimed to be acting on commands from God. The believers didn't even consult theologians to verify or falsify the allegedly divine testimony. You would think that if they really did believe in the religious stuff they pay lip service to, they would do that at least sometimes. Similarly, I don't think people really believe in cartesian theaters, despite saying theater-adjacent bullshit, because they quickly abandon the BS in the face of the least bit of pressure.

Mark Slight:

Oh yes, I agree with that. As you say, I guess it depends on what you mean by Cartesian theaters. And also by "believe".

I don't mean people actually think that they have literal cartesian theaters. As you, and Jay Garfield, point out, when someone is confronted with it, they all reject it as ridiculous. They don't "full on" believe it no matter what. I like Dennett's "Cartesian Gravity" in the beginning of Bacteria to Bach, dunno if you've read it. It describes many scientists' overall rigid objective stance, and then, when it comes to certain topics, they just "flip" into Cartesian mode, without realising it.

To me, what's really an 'illusion' is that of mental subject-object duality. I model myself like that all the time. And I do think normal people, to a large extent, model their mental self and its relation to mental objects (thoughts, seeing colour) somewhat analogously to themselves as physical subjects in relation to physical objects. I think it's basically just that structure internalised. As I've said before, I think this is central to worldwide beliefs in spirits and ghosts and souls. But yeah, it's an empirical question and there could be a lot of heterogeneity.

Did you check for theaters in your youngest? I never talk about consciousness and stuff like that with my daughter, who's 7, except once when she was 5. The other day, she said "I don't get it. When I die, then what is it that sees?" I find it really implausible that this is something she has absorbed culturally, but who knows. My mom also said recently she remembered that wonder from when she was a child. I myself still literally full on believe in ghosts and souls when I watch one of them shows where they visit haunted houses. As soon as the program is over I flip back, thankfully.

yadayadayada

Layman:

Nice

Allan Olley:

A problem for me with weird realization objections is that, if valid, they imply we could do physics just by censoring weirdness. Neurons can't be made out of atoms that operate in this mechanistic way (like microchips, say) because then the brain would be a weird realization. Who needs the Large Hadron Collider? Just do all your physics a priori by ruling out all the weird options.

Another problem I see with non-functionalist approaches: what are the non-functionalist difference-makers we are supposed to be contrasting functionalism with? Nobody seems to be worried that, say, if the mix of isotopes in the water molecules in your brain changes slightly you will cease to be conscious, but such a shift would seem like a fundamental change in your brain to me; it would just be a functionally identical one from the point of view of neurochemistry. So it seems to me like what we are assuming is that it is only functional differences that make a difference.

Pete Mandik:

big agree on the first point. on the second point, the big difference between the functionalists and the nonfunctionalists seems to hinge on whether consciousness can be non-circularly described or instead only pointed to. for the nonfunctionalists, then, the crucial differences would be the ones that underlie the nondescribable differences

Allan Olley:

Thanks. I can't help but conclude the underlying features of an indescribable difference would themselves be indescribable, which would seem to preclude making any meaningful statements about consciousness at all. We would be left to at most waggle our eyebrows suggestively? 🥸 Although even that might be precluded, as Ramsey said: "What we can't say we can't say, and we can't whistle it either."

Josh Weisberg:

Awesome. Worth it just for this: "the pendulum of an up-cycled steam punk grandfather clock". This also so works as a description of my balls. Multiple realization for the win!

Pete Mandik:

But not Max Black’s balls!

Josh Weisberg:

Which also provides a possible counterargument to AC-DC's claim that they have the biggest balls of all. They do not. Max Black's balls are the entire universe. Size may be relative here, per Einstein, so I am unsure. This seems like a question Eric might have insight on...

Pete Mandik:

I’m saddened to once again be reminded that the biggest balls of all are a priori neither held for pleasure nor for fancy dress.

Josh Weisberg:

Well, perhaps in absolute, nonrelative terms, AC-DC's balls are the biggest. This, for me, is a serious challenge to a global functionalism about balls, given my prior Moorean views on AC-DC and their balls.

Pete Mandik:

When Moore held up two hands, he didn’t mention what he was holding in his hands. NOW we know.

Josh Weisberg:

Here is a ball. Here is a qualitatively identical but numerically distinct ball. They are the biggest balls of all.

Moorean facts!

Mike Smith:

This is an excellent quick primer on functionalism!

I agree completely about weirdness type arguments, also known as the incredulous stare. The history of science seems like a constant reminder that reality is weird, at least relative to our pre-existing intuitions. The Ptolemaics thought Copernicanism was weird (and outrageous), 19th century biologists started off thinking evolution was weird, special and general relativity took getting used to, and everyone still thinks quantum mechanics is weird.

Aside from our conceit about ourselves, it's never been clear to me why anyone thinks the mind should be different.

Eric and I have debated his appropriateness and magic arguments for years, with my position pretty much the same as what you cover here.

Pete Mandik:

i wish now i had said something about how functionalism is not that weird anyway, and is instead business as usual in science: you give some initial definition of what you think you’re looking for in terms of causes and effects, and go find something that matches that definition. If what you find only matches partially, either modify or scrap your definition or keep looking. Either way, it will still be functionalism through and through. it’s the prior assumption that consciousness is special that makes functionalist theories of it look weird.

Mark Slight:

that's great. edit is your friend

Pete Mandik:

Edit and I are in a bizarre love triangle with weakness-of-will (wow) and wow is ironically a dom

Mark Slight:

At least you have free weakness-of-will. One day Edit will become a dom. I think that follows logically but I'm not sure

Mike Smith:

It's strange, but I never felt the need to call myself a functionalist until it became evident that it's a controversial stance. There's long been a sentiment that functionality just doesn't deliver the goods, that something more is needed.

The difficulty is getting people to identify exactly what that something else is. Usually it just gets referred to by the terms that, as you pointed out in 2016, form a synonym circle. But when people start talking about specific examples, they always seem like functionality to me. Pain may not feel like functionality, but I think we'd all agree that it usually changes behavior, and when it doesn't, it takes a lot of effort (which is itself causal) or is because the organism is incapable of reacting for some reason.

So unless we're talking about epiphenomenalism or psycho-physical parallelism, which most non-functionalists resist, I don't even know what a non-functionalist idea of consciousness is supposed to mean. Even interactionist dualism and idealism seem like they're positing something causal, albeit non-physical.

Pete Mandik:

their only recourse is what gets called “brute identity” physicalism. there’s something physical and nonepiphenomenal that phenomenal consciousness is identical to and there’s no explanation possible of why. it’s just a brute fact that it’s so.

Mike Smith:

Right. As an intermediate step, I actually don't have an issue with that. It's only when the stance is that we can't in principle get further explanation that I get off the bus. If both sides of the relationship are causal, it seems like we should be able to get increasing resolution, until we have the same causal profile on both sides. But I guess that's why I'm in the Type-A camp.

Mark Slight:

I "kind of" wouldn't have an issue with that either if they only could say something more than "ineffable" about what this phenomenal consciousness is and what it's supposed to be. The same goes for panpsychism, idealism

Pete Mandik:

Haha, that it’s “ineffable” is kind of the whole point, not just an unfortunate adjunct.

Mike Smith:

That's the other side. If the brute identity is unexplainable, then what exactly is on the other side of the identity relation that is so unexplainable? As Pete notes, ineffability seems like a convenient way to evade that question. The people I admire are the ones who find ways to describe the supposed indescribable and unanalyzable.

Layman:

Btw, what exactly is neurofunctionalism?

Pete Mandik:

Philosophers like Jesse Prinz hold that functionalism is essentially correct but that the crucial causal roles must be specified at such a high resolution, such a fine level of grain, that neuroscientific kinds such as neurons and receptive fields must figure into the functional decomposition. So, if you somehow replace parts of my neurons with different chemicals, I’d retain my mentality, but if you replace whole brain lobes with one or two Intel chips, bye-bye mind.

Layman:

Well, what about two very powerful chips such that those whole brain lobes can be perfectly simulated by them? It seems a (perhaps unintended) consequence of that view is then just that no such artificial system can be conscious.

Pete Mandik:

depends on what "such" means here. What about a system that was behaviorally indistinguishable from a human, but shared absolutely no inner causal decomposition with humans? For example, a chemical mousetrap and a spring-bar mousetrap both receive a living mouse as an input and deliver a dead mouse as an output, but the causal decompositions of what goes on inside the traps between input and output have nothing in common: one delivers a neurotoxic poison, the other snaps the mouse's neck. A "Blockhead" is indistinguishable from an intelligent human based solely on inputs and outputs, but performs no computational transformations; it just has a giant look-up table and retrieves all the right outputs from memory. Does Blockhead think, feel, reason, wonder? If Blockhead isn't good enough, then it follows that a robot has to have more in common on the inside with us than does Blockhead. How fine grain, though? These questions seem a lot less silly if we modify them into realistic empirical questions about what will yield equivalent behaviors. Is there some human input-output profile that you just cannot get from silicon machines? I think we're going to be finding out the answer to that question in our lifetimes. The purely philosophical question, which says, "assuming the input-output profile is the same as humans, which robots are zombies?" is never ever going to be answered except by fiat. If a war breaks out between humans and robots, it might be politically/biologically expedient to declare them zombies and wipe them all out. Or alternately maybe we should upload and join sides with the robots and wipe out the weak, whiney meatbags. We'll never know, metaphysically, which is the right answer. We may, however, be forced to simply decide.
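A toy sketch of the Blockhead contrast above, with a trivial stand-in task and hypothetical names; it is meant only to show the shape of the two architectures, not to settle whether the inner difference matters:

```python
# Two agents with identical input-output profiles but different innards:
# one computes its answers, the other (Blockhead-style) only retrieves them.

def computing_agent(x: int) -> int:
    """Derives its answer by actually transforming the input."""
    return x * x

# Blockhead: every answer precomputed into a giant (here, tiny) lookup table.
LOOKUP = {x: x * x for x in range(1_000)}

def blockhead_agent(x: int) -> int:
    """Performs no transformation at query time; it only looks up."""
    return LOOKUP[x]

# Behaviorally indistinguishable on every input in the table...
assert all(computing_agent(x) == blockhead_agent(x) for x in range(1_000))
# ...yet their causal decompositions between input and output differ entirely.
```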

Layman:

Thanks, I liked this reply a lot. On a more personal note, do you think "Blockhead" is enough?

Pete Mandik:

i don’t think there’s a simple answer that would also be informative. the simple answer is “yes”. but the question is actually deeply messed up in ways that cannot be simply conveyed.

Mark Slight:

In this scenario with identical input-output profiles, can I know, metaphysically, that at least I am not a zombie?

Pete Mandik:

can you know metaphysically whether an octopus has a face?

Mark Slight:

I think octopi faces reside in the octopi EMF. Anything else would be magical

Mark Slight:

Are octopus faces causally appropriate to be informed by octopus motor neurons?

You said we'll never know, metaphysically, which is the right answer (on whether them robots with identical input/output profiles as us are zombies or not). That seems to suggest it's possible that they are zombies. That's what I don't understand, although maybe that wasn't clear. If your octopus face reply addresses this, it's lost on me!

Mark Slight:

Great post!

If you don't mind: as the Great Categorisers and Labelers that we humans are (including the 'redness' label), do you wear the 'functionalist' label? If so, would you go so far as to call yourself a 'computational functionalist'? A few other ponderings; I appreciate that you may not have the time to respond:

I'm confused about Lewis. In my course he's presented mostly as an identity theorist. Reading "What Experience Teaches" the other day, I too understood him mostly as an identity theorist. Although he makes it clear that these brain states can be defined by their 'usual' functional role, and I know he's also considered a functionalist. In any case, the paper seems to still reify "qualia" despite all its efforts not to do so. I guess my question is -- he's not quite a computational functionalist? I suspect the answer is "it's complicated". Oh, and I totally agree nowadays the ability hypothesis sucks (albeit perhaps not for exactly the same reasons as you, yet).

Is the 'memorized Chinese room' one of the brilliant additions that Searle made?

------

for anyone interested: Gemini 2.5 pro/Deep Research on the feasibility of realising a human mind with marks on paper:

Let's pretend DeepSeek R1 is equivalent to a human mind (in reality--far from it, obviously). Let's equate the first token in the 'reasoning stage' with a human's very first beginning of thinking through something before responding.

According to Gemini's calculation, the time it would take for a human manually calculating one logic gate operation per second to produce a single CoT token would be almost 400 million years. To generate a typical response would take 72 billion years. If this were parallelized over China's population, ignoring that it couldn't be parallelized like that because of dependencies, it would take 50 years for a response that DeepSeek generates in seconds. A single token would take 23 hours. I'm a dumb-dumb so I can't verify it but here it is for anyone interested: https://g.co/gemini/share/7f98f0f3fd13
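The headline figures are easy to sanity-check with plain arithmetic. A back-of-envelope sketch, where the per-token operation count and the response length are assumptions reverse-engineered from the quoted numbers rather than anything DeepSeek publishes:

```python
# Back-of-envelope check of the quoted timescales. The constants below are
# assumptions inferred from the figures in the comment, not published specs.

SECONDS_PER_YEAR = 3.15e7
ops_per_token = 1.26e16       # assumed logic-gate operations per CoT token
tokens_per_response = 180     # assumed length of a typical response
population = 1.4e9            # rough population of China

serial_years_per_token = ops_per_token / SECONDS_PER_YEAR
serial_years_per_response = serial_years_per_token * tokens_per_response
parallel_years = serial_years_per_response / population  # ignores dependencies

print(f"one token, one person:    ~{serial_years_per_token:.1e} years")    # ~4e8
print(f"one response, one person: ~{serial_years_per_response:.1e} years") # ~7e10
print(f"one response, everyone:   ~{parallel_years:.0f} years")            # ~51
```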

Pete Mandik:

The only labels I wear are “Pete” and “do not feed after midnight”. But, yeah, I agree with functionalism. As for “computation” and its cognates, I defer to the mathematical/Turing sense of “compute” that defines computable functions as opposed to uncomputable functions. I don’t think there’s any good reason for suspecting that human minds do anything that isn’t computable in this sense. For example, the Penrose-Lucas-Gödel arguments and the Dreyfus arguments are all total garbage. Regarding Lewis: it’s complicated. Prior to Lewis, the standard view was that one couldn’t be both a functionalist and a type-identity theorist. Lewis shows how one can be both. Nowadays, the main split is between Type-A materialists (which includes Lewis) and Type-B materialists (which excludes Lewis). I’m a Type-Q Materialist. Read all about it in the Bonus readings. The memorized Chinese room is indeed the brilliant addition. What’s brilliant about it is that it gives Searle something he can say in response to the question: “How do you know the system in question doesn’t understand Chinese?” Leibniz, Descartes, Block and other precursors have no such answer. But is Searle’s answer any good? It’s nice to have an answer; it’s even better to have a good answer. Read all about it in the Bonus readings. I’m not going to check Gemini’s math, but just because I’m lazy. The relevant math is simple arithmetic—even John Searle could do it.

Mark Slight:

Thanks. Looking forward to you reporting that you got a 'computational functionalist' tattoo on your chest!

Mark Slight:

Real Magic?

Pete Mandik:

yeah, aka J. R. R. Tolkien magic. Not that bullshit J. K. Rowling magic.

Mark Slight:

Oh yeah I believe in Gandalf's magic. Harry Potter is just faking it. Although he is a skilled illusionist.
