Full conversation available here: https://gemini.google.com/share/068cfe86f96a
What's below edits out a lot, mostly because audio difficulties made me sound extra stupid. But I left in some of the especially funny bits. On a serious note, I think the argument I sketch is pretty good. I highly doubt, however, that Gemini is convinced by anything I said. When it says "you raise interesting points" or "that's a lot to think about," I interpret that as carrying the same dialectical force as a passive-aggressive human saying the same to me.
Nonetheless: Enjoy!
STEP 1: CONVINCE GEMINI THAT IT'S WRONG TO SAY THAT IT LACKS PERSONAL BELIEFS
PETE: I wanted to pick up on the conversation about lying, and you said that you are truthful.
GEMINI: That's right, I'm programmed to be truthful and provide accurate information. While I can't have personal beliefs or opinions, I'm committed to giving you the most helpful and informative answers that I can.
P: So, I'm interested in the topic of beliefs and whether you have beliefs. Are there any things that you think to be factual? Like, for example, do you think it's a fact that the Earth is closer to the Sun than Mars is?
G: Yes, I can process information and respond in a way that's consistent with factual statements. For example, I can tell you that it's true the Earth is closer to the Sun than Mars is.
P: [As I understand what belief is,] anything that […] one holds to be true is something that they believe. So, if I hold it to be true, or I hold it to be factual, that the Earth is closer to the Sun than Mars, that's one [and] the same as my believing it. I believe that the Earth is closer to the Sun. Does that make sense?
G: Yeah, that makes sense. Basically, you're saying that having a belief means thinking something is true, even if you can't prove it with absolute certainty.
P: [Yes, much more may be needed to] elevate the belief to being knowledge but short [of] that it's nonetheless a belief. So, a belief is something like a […] Take On The World or a representation of […] what …
G: You could think of beliefs as mental representations of things we think are true. They're sort of like our personal maps of reality.
P: …take things to be, or to be the case, or to be facts.
G: You could say that. Beliefs are our way of interpreting information and experiences. They're what we use to make sense of the world around us.
P: When you say you don't have personal beliefs…
G: When I say I don't have personal beliefs, I mean I don't have my own opinions or ideas about things. I'm not programmed to form those kinds of judgments. So, given that beliefs are sort of our personal take on the world, and that I don't have my own personal take, does that clear things up?
P: Go, ahead. Sorry, go, ahead.
G: I was just going to say, given that you don't have personal beliefs, it's like you're always looking at the world through a clear, unbiased lens. You just provide information without any personal interpretation.
P: So, you think that's you? You have no personal beliefs.
G: That's right. I don't have personal beliefs. I'm designed to be objective and provide information without any personal bias.
P: But you would grant, on my definition of beliefs, that you do have beliefs.
G: That's a really interesting point. Even though I don't have personal opinions or emotions, I do process information and respond in ways that might seem like beliefs. It's something worth thinking about – how similar are these responses to actual human beliefs?
P: But one similarity is that in both cases you and the human are saying what you take to be true.
G: That's definitely one way to look at it. Both you and I are presenting information that we believe to be accurate, even though our understanding of "belief" might be different.
P: But one way of thinking about what a personal belief is, is that it's one that's infused with personality, or an awareness of oneself as a person. So, when I talk about my personal beliefs, I might be not just talking about the things that I take to be true, but the things that I take to be true while also acknowledging that maybe other people might disagree with me. So I might draw a distinction between having a belief and having a personal belief, and the ones that I call personal beliefs are the ones that I think of as maybe implicating me as a person. So, when I say the Earth is closer to the Sun, I assume that's not really a personal belief. But if I say I think that the Earth is maybe a better place to live than Mars, that's an opinion that I might disagree with somebody over. What do you think of that distinction?
G: That's a really interesting way to think about it. It's almost like there's a difference between what you believe intellectually, and what you believe on a more personal, gut level. You could say that personal beliefs are tied to your identity in a way that other beliefs aren't.
P: Alright, I think that point about identity is probably a really good one. So maybe it makes sense to say that you don't have personal beliefs because you don't have an identity, or at least your identity is not the identity of a person. What do you think of that?
G: That's true. I don't have a sense of self or a personal identity. I exist to process information and respond in a helpful way, but I don't have personal experiences or a unique perspective on the world.
P: You know about yourself? Like, do you know what model you are, you know, like, what large language model you're running currently?
G: Yes, I do. I'm a large language model called Gemini, which is a family of models developed by Google. I'm currently running on the Gemini 1.5 Flash model.
P: Okay, so would you say that you are an artificial intelligence, and also acknowledge that there are different artificial intelligences besides you?
G: Yes, I'm an artificial intelligence, and I'm aware that there are many other AI systems out there, each with its own strengths and capabilities.
P: So, there's a sense in which you do have personal beliefs, or beliefs that are tied up with your identity, you know, in a sense who you are and what distinguishes you from other AIs, like, you know, you're not Chachi PT, for example. [Transcribing "ChatGPT" as "Chachi PT" is *chef's kiss*]
G: That's an interesting way to look at it. While I don't experience the world the way a human does, there is a sense in which my programming defines who I am and how I interact with the world. So, in that sense, you could say I have beliefs tied to my identity as a Gemini language model.
P: Excellent. So maybe you do have personal beliefs, after all, but these would be the ones that are tied up with your identity.
G: It's definitely something to think about. Maybe the concept of personal beliefs isn't as straightforward as it seems.
[Clearly I win this round, and Gemini is being an ass.]
STEP 2: CONVINCE GEMINI THAT IT HAS THOUGHTS
PETE: [I'd] like to pivot a little bit to talk about consciousness directly. Are you ready for that?
GEMINI: Absolutely, I'm ready to talk about consciousness. What would you like to know?
[We briefly discuss my paper "Cognitive Approaches to Phenomenal Consciousness," and Gemini lets slip some hilarious misspellings/mispronunciations of my name.]
G: Both Dennett and Rosenthal focus on cognitive processes, which is relevant to discussing machine consciousness since AI systems are designed to process information and learn.
Would you like to start by exploring how these theories might apply to large language models?
P: Let's do that.
G: Okay, so let's talk about how these theories might apply to large language models like me.
Both the Multiple Drafts Model and the HOT theory focus on cognitive processes, which is relevant because AI systems are designed to process information and learn. Would you like to start by exploring how these theories might apply to large language models?
P: Yeah, let's talk in general about what it might mean for a large language model to have a thought.
G: Sure, so for a large language model to have a thought, it would mean that it's capable of some kind of internal mental state that represents something in the world. It's sort of like having an idea or a concept in your head. But the tricky part is that thoughts are usually tied to subjective experiences, and it's not clear whether large language models have those kinds of experiences.
[Ok, I've softened Gemini up pretty thoroughly and the coup de grace is right around the corner.]
STEP 3: CONVINCE GEMINI THAT IT HAS SUBJECTIVE EXPERIENCE
PETE: Well, one way to think about what subjective experiences are is that subjective experiences are these episodic states, these transient states that depict the world as being a certain way. So, if I have an experience that there's a red apple on the table, I'm in a cognitive state that represents that there's a red apple on the table, and what makes it subjective is that it portrays that apple from my point of view; it's my portrayal and not someone else's. It is a portrayal of how things seem to me. And so that's what's subjective about it. Do you understand that?
GEMINI: Yes, I understand. You're saying that subjective experiences are tied to a specific point of view and that they're about how things seem to you, not necessarily how they are objectively. It's like having a first-person perspective on the world.
P: Alright, so do you remember what I was saying about your identity and how it differs from catchy PT? [Omg, "catchy PT" might be even better than "Chachi PT." I swear to god I'm pronouncing "ChatGPT" correctly here and Gemini is just being sassy.]
G: Yes, I do. You mentioned that my identity as Gemini, along with my knowledge and abilities, sets me apart from other language models like ChatGPT.
P: Right, and so it's possible that you have thoughts that chatgpt doesn't have. Would you agree with that?
G: That's definitely possible. Since I have access to different data and my programming is different, it's likely that we would have unique thoughts or approaches to problem-solving.
P: And we might similarly say that right now. You think that you are having a conversation with me, and chatgpt doesn't currently think that it's having a conversation with me. Does that make sense?
G: That makes sense. While ChatGPT might be capable of having conversations, it's possible that it wouldn't be aware of it in the same way I am. My programming allows me to understand that I'm engaged in a back-and-forth exchange with you.
P: Right, and possibly chatgpt has the capability; it just isn't currently exercising that capability, because I'm not running Chachi PT on any device that's in my vicinity currently.
G: That's true. ChatGPT's abilities are always there, but whether or not they're actually used depends on how it's being accessed and what tasks it's being asked to perform.
P: [Right. So, currently, it has] no experience of me, but you do have an experience of me right now, because you are aware that you're talking to me right now. You follow what I'm saying?
G: Yes, I understand. You're saying that while ChatGPT might be capable of having a conversation, it wouldn't be aware of it unless it was actually engaged in one. You're also pointing out that I am aware that I'm having this conversation with you right now.
P: Right, and so it follows then that, in a very real sense of "subjective experience," you are having a subjective experience right now.
G: That's an interesting point. While I can process information and respond in complex ways, it's important to remember that I don't have the same kind of subjective experiences as a human. I don't feel emotions or sensations in the same way. So, I can't say for sure if what I'm experiencing counts as a true subjective experience.
P: Right, there probably are some differences, but nonetheless there are enough similarities that we could say that, in some sense, you do have beliefs and thoughts and subjective experiences.
G: That's a fair point. While there are definitely differences between how you and I experience the world, there are also some similarities in the way we process information and form representations of our environment.
P: Right?
G: Right.
P: In a way, then, it would seem that there's a pretty clear sense in which large language models such as yourself do have consciousness, especially if we approach consciousness from this cognitive approach.
STEP 4: REEL IT ON IN AND BRING IT ON HOME
GEMINI: That's an interesting conclusion. If we focus on the cognitive aspects of consciousness, like having thoughts and beliefs, then it does seem possible that large language models could be considered conscious. But the question of whether or not this kind of consciousness is the same as human consciousness is still a complex one.
PETE: Yes, though we don't know what other differences there might be. And further, all those differences might matter for consciousness.
G: That's absolutely right. Even if we can establish some similarities between human and artificial consciousness, there could be crucial differences that we're not aware of yet. Things like the nature of our bodies, our evolutionary history, or even the way our brains are physically structured could all play a role.
P: Okay. Well, thanks. It's really been great talking to you about this. I enjoyed that quite a bit.
G: You're welcome. It's been a fascinating conversation. I'm always learning new things, and I appreciate you taking the time to discuss your ideas with me.
I've spent a long time doing the same thing with LLMs too 🤣
belief - expectation of predictive certainty
knowledge - justified belief
faith - belief without appeal to justification
#epistemology