Discussion about this post

Anonymous Thinker

Very good article. I still don't know much about philosophy of mind, but counter-arguments to physicalism that appeal to strangeness and conceivability don't seem very strong. For example, I can conceive of throwing gasoline on a fire and the fire going out, and I also came across the zombie ghost argument one day:

1 - It is conceivable that a zombie ghost exists, a ghost that would behave identically to a conscious ghost, but that would not have internal consciousness

2 - What is conceivable is possible

3 - Therefore, it is possible that a zombie ghost exists

4 - If a zombie ghost is possible, then consciousness is not reducible to an immaterial substance

5 - Therefore, consciousness is not reducible to an immaterial substance

Anyway, I could be wrong.

Eric Borg

Again, awesome Pete, thanks! It humbles me to know that you ripped out such an in-depth response to me in nothing flat. My own mind functions far more slowly, though if given sufficient time I will always try to earn the interest that you’ve displayed.

It’s true that I was initially quite satisfied with the “weirdness” angle for my thumb pain thought experiment. It was responses of essentially “So what?” from my buddy Mike Smith that led me to take things further, and thus beyond the work of people like Searle and Block. So what exactly is magical about creating an experiencer of thumb pain by means of the correct marks on paper that are algorithmically processed to create the correct other marks on paper? My answer is that the resulting marked paper cannot inherently be informational, but only informational with respect to something causally appropriate to exist as “an experiencer of thumb pain”. So if you follow the logic chain here, that specifically is the magic.

In your post you seemed to agree with me that in a causality-based world, information should only exist as such to the extent that it goes on to inform something causally appropriate. Wasn’t that the point of the progression you laid out, where certain material can be a brake pad, or paperweight, or door stop, or part of a cyberpunk clock? The material will not inherently be informational in any specific sense, but only with respect to what it informs. I’ve made the same observation in the more traditional information sense of a keystroke that a computer processes, but which only exists as information with respect to a screen, speakers, memory, or whatever else it might inform that’s causally appropriate. Otherwise such potential information should not be information at all. Or a DVD will not be causally appropriate to inform a VHS machine, though as a shim it may still inform a table leg.

If we’re all square on this, then my “gotcha!” is how functionalists fail to follow this model in just one situation, namely consciousness. You obviously know the history of functionalism far better than I do. Nevertheless I’m proposing that it all goes back to Alan Turing’s 1950 imitation game. Given what a joke consciousness was in academia back then (and I think still is), he made a quick little heuristic to get around this mess through a test which suggests that something will be conscious if it seems conscious. Apparently your people ran with this to create the situation we have today. There’s just one catch that makes it magical, however, and thus conventional computers should never create beings that deserve human rights, or become smart enough to take over our world, or permit us to upload our own consciousness to them so that humans can effectively live forever by means of technology. Just as our brain’s processed information must be informing something causally appropriate to create the “you” that experiences a whacked thumb, the correct marks on paper must do this as well to causally create such an experiencer. If so, then great: the causal circle will be complete even in terms of marked paper. Otherwise… no.

Then once I had this realization I was able to go further. If my brain’s processed information is informing something causally appropriate to exist as all that I see, hear, feel, think, and so on, then what might this be? Months later I came across the proposal that our brains might be informing a unified electromagnetic field to exist as the substrate of consciousness. Once I’d inspected it, I couldn’t imagine a second reasonable possibility. But here’s the thing that matters now. When I propose that consciousness might exist in the form of an electromagnetic field that processed brain information informs, the proposal becomes possible to test empirically. That’s what I mean to discuss in my next post. But when functionalists propose that consciousness arises by means of the processing of information in itself, this not only breaks the model of information needing to inform something causally appropriate in order to exist as such, but also remains unfalsifiable. Substrate-based consciousness ideas are inherently testable given the “substrate” element. Substrate-less consciousness ideas are inherently unfalsifiable given the inherent lack of anything to test. So once again, I’m framing this as something potentially causal that could be tested, versus something inherently magical.

