Non-AI art by non-AI Pete Mandik
Thanks for taking my bait, Eric. I was hoping to trick you into engaging in a conversation in which you attempted to answer the sincere questions I have about your view (which I legitimately find interesting), and you fell for it! MUWAHAHAHA!
One thing your response has helped me to see is that the stuff about “appropriateness” and the stuff about “magic” are actually two distinct and (unbeknownst to you) unrelated arguments. Regarding appropriateness, the point of that line of thought seems to be something that a functionalist not only is able to agree with; it’s pretty much just the whole point of functionalism. One standard way of defining functionalism (and this applies to all flavors of functionalism, e.g. computational functionalism, neurofunctionalism, teleofunctionalism) is that what makes some state of a system of states a MENTAL state (e.g. a belief, a desire, a sensation of pain) is not any intrinsic property of the state (e.g. having 22 billion sodium ions in it), but a set of causal relations that state bears to other states in the system. So, for example, a functionalist will hold that what makes a state a desire to drink beer is that it will cause, in concert with a belief that there’s beer in the fridge, an intention to go open the fridge, and so on.

An analogy to such relational specifications is the way machine parts have their functions. What makes some hunk of metal a brake pad isn’t the specific material it is made of, but the causal relations it bears to the brake rotor, wheel, etc. That same hunk of metal might be put into a different set of causal relations, and now it’s not a brake pad any more, but instead a paperweight, a door stop, the pendulum of an up-cycled steampunk grandfather clock, what have you. For mental states that are supposed to be information bearing (for example, pain sensations, which presumably carry information about tissue damage and disturbance), the functionalist says that a state of activation in a nociceptor cell doesn’t carry that information because of its intrinsic properties, but because of its causal relations to the states of the rest of the system: the states that count as short-term memory that there’s been damage in the ankle, the desire to prevent further ankle damage, the intention to get an ice pack from the freezer, etc.

Any worries that arise about such functionalist theories being circular, since they seem to be defining mental states by relations to other mental states, have been put to rest by the logical technique of Ramsification, so-named after its inventor Frank Ramsey, and employed by the functionalist philosopher David Lewis to give rise to what we now call Lewis-Ramsey functionalism. To make a long story short, each of the defined mental state types can receive a unique definition in terms of the other state types, and so all of the resulting definitions are provably non-circular. (Suzi Travis has blogged about Ramsification in her recent post about LLMs and Predictive Processing theories of consciousness.)
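For the curious, here is the trick in miniature: a toy sketch with just three state types, not Lewis’s full construction. Start with a mini-theory T that uses all the mental terms at once, something like “pain is caused by tissue damage, and jointly with a damage-belief causes a relief-desire”. Replace each mental term with a variable and existentially quantify to get the Ramsey sentence:

\[
\exists x_1 \exists x_2 \exists x_3 \; T(x_1, x_2, x_3)
\]

Each mental state type is then defined by its slot in the causal structure, with no mental vocabulary left on the right-hand side:

\[
s \text{ is a pain} \;=_{df}\; \exists x_1 \exists x_2 \exists x_3 \, [\, T(x_1, x_2, x_3) \wedge s \text{ occupies the } x_1 \text{ role} \,]
\]

Since the vocabulary remaining in T (tissue damage, causal relations) is non-mental, the definitions come out non-circular.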
So, anyway, the story so far: the best sense to be made of your “appropriateness” just is the core point that all functionalists would endorse: no part of a system of states is going to carry information for the rest of the system all by itself; it is only in virtue of its relations to the rest of the system that it counts as carrying the information that your ankle is sprained. None of this is magic, unless it seems magical to say that a hunk of metal cannot all by itself be a brake pad.
So, what’s going on with your accusations that computational functionalists are committed to magic or the supernatural? As best as I can tell, your paper-computer argument is just another version of the “weird realization” objections to functionalism like Ned Block’s Chinese Nation argument, which is an heir of Leibniz’s “mill” argument from section 17 of his Monadology, and an inspiration for Searle’s Chinese Room argument (which has some brilliant additional elements that make it more than just a “weird realization” objection). Such arguments are also close cousins of explanatory gap arguments against physicalism.
Weird realization arguments exploit the apparent multiple-realizability entailment of functionalist theories: since it’s the relations between physical states that matter, it’s left open that some other physical states could play the same causal roles. This, of course, is a big IF. IF microchip states could play the same causal roles as neuron states, THEN you could replace my neurons with microchips and I would feel pain. IF Chinese people with walkie-talkies could play the same causal roles as my neurons, THEN a Pete consciousness could be realized by a suitably networked nation of China. IF a bunch of slips of paper being passed around could play the same causal roles… you can see where this is going. The functionalist doesn’t assume and doesn’t assert that, e.g., a bunch of shuffled papers COULD play such causal roles; the claim is just that IF they did, THEN they would, by the lights of the theory, give rise to pains, hopes, fears, the thought “I am Pete”, etc.
Many functionalists have, of course, taken the “bait” and said, yeah, sure, ok. What’s the problem? Let’s say, yeah, the whole Chinese room system, with papers, book, reader, understands Chinese. Yeah, the whole Chinese nation communicating via walkie-talkie instantiates a group mind that thinks it’s the red-blooded all-American hero, Pete Mandik. Yeah, the paper computer feels pain. Why not? What’s the problem?
Here, the anti-functionalist seems to not have anything to say beyond “it’s weird!”. It’s unintuitive! “I don’t see how a bunch of shuffled papers could give rise to a feeling of pain, or an experiencer of said pain, or an understander of the Mandarin word for pain, or whatever”.
Besides being exceedingly weak, “it’s weird”-style objections to functionalism have a further problem: their adjacency to explanatory gap (aka hard problem) arguments against physicalism, and to their cousins, the zombie arguments and the Mary arguments. If the failure to see why or how a bunch of physical anythings can give rise to an experiencer of pain is a good reason for thinking pain wouldn’t arise under those conditions, then it doesn’t matter which physical thing you plug into the theory. Explanatory gap arguments can’t be selectively wielded against just one particular physicalist theory (like a paper-computer theory); they can be wielded against all physicalist theories. Suppose someone were to propose some allegedly nonfunctionalist physicalist theory, say, that a state of experiencing pain just was a fluctuation in a certain kind of electromagnetic field. An explanatory-gap anti-physicalist could just say “that’s weird! I don’t see how a fluctuation in an electromagnetic field could be an experience of pain”. A zombie-conceiver could say, for pretty much the same “reasons”, “I can conceive of the field giving rise to no conscious experiences whatsoever.” “You physicalists ALL believe in magic, as far as we can tell,” say the anti-physicalists.
One common retreat among physicalists who reject functionalism is to lean on the a posteriori/empirical nature of their theory. Such strategies are what Chalmers calls Type-B Materialism. They concede that there will be no transparent explanation of WHY their favorite physical state gives rise to an experience of pain, but they console themselves with the possibility of empirically proving that the particular physical state type goes along with the particular mental state type. It can’t be magic if it’s empirically supported!
This is cool as far as it goes. Something that happens as a matter of empirical regularity just is what it means for something to be non-magical or non-miraculous, as Hume pointed out in his “Of Miracles”. But it cannot go so far as to cut any ice against the OG functionalists (whom Chalmers lumps in with Type-A Materialism). As Chalmers himself will gladly allow, you can have hypothetical empirical evidence for any functionalist theory you want, via a “gradual replacement” scenario. Take someone who is clearly a conscious experiencer, and gradually replace each of their neurons with microchips, or Chinese people with walkie-talkies, or automatic paper shufflers. Whatever empirical evidence you have at the beginning of the process that the person has conscious experience is evidence that you will continue to have after all of their neurons have been replaced with, e.g., microchips, even if that person is you yourself! Regarded strictly as an empirical theory, as opposed to a philosophical one, functionalism has no disadvantage against its nonfunctionalist physicalist rivals. The differences only show up when we presume that what we are trying to do is purely philosophical: armchair analysis of what “conscious” means. In the armchair, the rules of the game include “if you can conceive of the physical stuff without conceiving of it giving rise to consciousness, then you win and it loses”.
Bottom line: “it’s weird” objections only cut ice if you’re playing a non-empirical game. If you want to play an empirical game, you need to lay off the thought experiments. No one has built a paper computer and called it conscious. Probably no one ever will. And if they did, you’d have to apply strictly empirical reasons for rejecting their claims. “It’s weird!” won’t cut any ice.
Bonus Reading Recommendations:
Mandik, Pete. (2017). Robot Pain. In: Corns, J. (ed.), The Routledge Handbook of Philosophy of Pain. New York: Routledge. pp. 200-209. https://philpapers.org/archive/MANRP-4.pdf
Mandik, Pete and Weisberg, Josh. (2008). Type-Q Materialism. In: Wrenn, C. (ed.), Naturalism, Reference, and Ontology: Essays in Honor of Roger F. Gibson. New York: Peter Lang Publishing. pp. 223-246. https://philpapers.org/archive/MANTM.pdf
Very good article! I still don't know much about philosophy of mind, but counter-arguments to physicalism appealing to strangeness and conceivability don't seem very strong. For example, I can conceive of throwing gasoline on a fire and the fire going out. I also came across the zombie ghost argument one day:
1 - It is conceivable that a zombie ghost exists, a ghost that would behave identically to a conscious ghost, but that would not have internal consciousness
2 - What is conceivable is possible
3 - Therefore, it is possible that a zombie ghost exists
4 - If a zombie ghost is possible, then consciousness is not reducible to an immaterial substance
5 - Therefore, consciousness is not reducible to an immaterial substance
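Put compactly (the symbols are just shorthand for the premises above), with Z for “a zombie ghost exists” and R for “consciousness is reducible to an immaterial substance”:

\[
\begin{aligned}
&1.\ \mathrm{Conceivable}(Z) \\
&2.\ \mathrm{Conceivable}(p) \rightarrow \Diamond p \\
&3.\ \Diamond Z \quad \text{(from 1, 2)} \\
&4.\ \Diamond Z \rightarrow \neg R \\
&5.\ \neg R \quad \text{(from 3, 4)}
\end{aligned}
\]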
Anyway, I could be wrong.
Again, awesome Pete, thanks! It humbles me to know that you ripped out such an in-depth response to me in nothing flat. My own mind functions far more slowly, though if given sufficient time I will always try to earn the interest that you’ve displayed.
It’s true that I was initially quite satisfied with the “weirdness” angle for my thumb pain thought experiment. It was responses of essentially “So what?” from my buddy Mike Smith that led me to take things further. Thus I needed to go beyond the work of people like Searle and Block. So what exactly is magical about an experiencer of thumb pain arising by means of the correct marks on paper being algorithmically processed to create the correct other marks on paper? My answer is that the resulting marked paper cannot inherently be informational; it can only be informational in respect to something causally appropriate, something that could exist as “an experiencer of thumb pain”. So if you follow the logic chain here, that specifically is the magic.
In your post you seemed to agree with me that in a causality-based world, information should only exist as such to the extent that it goes on to inform something causally appropriate. Wasn’t that the point of the progression you laid out, where a given hunk of material can be a brake pad, or paperweight, or door stop, or part of a steampunk clock? The material will not inherently be informational in any specific sense, but only in respect to what it informs? I’ve made the same observation in the more traditional information sense: a keystroke that a computer processes only exists as information in respect to a screen, speakers, memory, or whatever else it might inform that’s causally appropriate. Otherwise such potential information should not be information at all. Or a DVD will not be causally appropriate to inform a VHS machine, though as a shim it may still inform a table leg.
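To make that concrete, here’s a minimal toy sketch (the names and numbers are mine, purely illustrative): the same keystroke byte plays different informational roles depending on which causally appropriate consumer it informs, and no informational role at all for a consumer that isn’t causally appropriate.

```python
# Toy illustration: one physical event (a byte from a keystroke) counts as
# different "information" depending on the downstream consumer it informs.

KEYSTROKE = 65  # a single physical event: the byte produced by pressing "A"

def screen(byte: int) -> str:
    # A display is causally appropriate: it takes the byte as a character code.
    return f"render the glyph {chr(byte)!r}"

def speaker(byte: int) -> str:
    # An audio system is also causally appropriate, but the same byte now
    # informs something different: here, an arbitrary tone frequency.
    return f"play a tone at {byte * 10} Hz"

def table_leg(byte: int) -> str:
    # A table leg is not causally appropriate to be informed by a byte at all;
    # like the DVD used as a shim, an object can matter physically without
    # anything it carries playing an informational role.
    return "no informational role here"

if __name__ == "__main__":
    for consumer in (screen, speaker, table_leg):
        print(consumer(KEYSTROKE))
```

The byte is identical in every case; what differs is the downstream causal structure that determines whether, and what, it informs.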
If we’re all square on this, then my “gotcha!” is that functionalists fail to follow this model in just one situation: consciousness. You obviously know the history of functionalism far better than I do. Nevertheless I’m proposing that it all goes back to Alan Turing’s 1950 imitation game. Given what a joke consciousness was in academia back then (and I think still is), he made a quick little heuristic to get around this mess: a test which suggests that something will be conscious if it seems conscious. Thus apparently your people ran with this to create the situation we have today. There’s just one catch, however, and it’s what makes the position magical. Without the magic, conventional computers should never create beings that deserve human rights, or become smart enough to take over our world, or permit us to upload our own consciousness to them so that humans can effectively live forever by means of technology. Just as our brain’s processed information must be informing something causally appropriate to create the “you” that experiences a whacked thumb, the correct marks on paper must do this as well to causally create such an experiencer. If so then great, the causal circle will be complete even in terms of marked paper. Otherwise… no.
Then once I had this realization I was able to go further. If my brain’s processed information is informing something causally appropriate to exist as all that I see, hear, feel, think, and so on, then what might this be? Months later I came across the proposal that our brains might be informing a unified electromagnetic field to exist as the substrate of consciousness. Once I’d inspected it, I couldn’t imagine a second reasonable possibility. But here’s the thing that matters now. When I propose that consciousness might exist in the form of an electromagnetic field that processed brain information informs, the proposal becomes possible to test empirically. That’s what I mean to discuss in my next post. But when functionalists propose that consciousness arises by means of the processing of information in itself, this not only breaks the model whereby information needs to inform something causally appropriate in order to exist as such; it also remains unfalsifiable. Substrate-based consciousness ideas are inherently testable given the “substrate” element. Substrate-less consciousness ideas are inherently unfalsifiable given the inherent lack of anything to test. So once again, here I’m framing this as something potentially causal that could be tested, versus something inherently magical.