545 - AI Relationships: Ethics, Intimacy, & Algorithms with Dr. Julie Carpenter

Welcome, Dr. Julie!

Dr. Julie Carpenter is a social scientist who studies how people relate to AI in its many forms, and how these interactions inform evolving ideas about self, identity, agency, and connection. Her research investigates the stories people construct around artificial beings and how those narratives influence the design and deployment of real-world AI. She is an external research fellow with the Ethics + Emerging Sciences Group at California Polytechnic State University and the author of The Naked Android: Synthetic Socialness and the Human Gaze, which unpacks how culture influences our expectations of robots, and why that matters.

For this episode, we’re chatting with Dr. Julie about AI and relationships; after all, we’re on the precipice of this technology becoming a stand-in for emotionally entangled relationships. For some people, it already has become that.

Dr. Julie answers the following questions during this episode:

  1. Something that you discussed in your most recent book was the question, “will an over-reliance on robots or AI lead to a reduction in human interaction?” What is your take on this and do you think the rise in AI relationships is contributing to the current loneliness epidemic?

  2. Robots and AI are designed to mimic interdependency and companionship, but what happens to the person when the robot is no longer there? Will they experience a similar loss to a breakup or the death of a loved one?

  3. Can a relationship with an AI truly be considered “real,” and what does it say about us that we increasingly treat artificial systems as emotionally responsive partners?

  4. In one of your blog articles, you say that AI isn’t about offering support, it is “…about sustaining engagement with users and retaining them as consumers who return for the experience. These systems aren’t passive listeners but are carefully engineered to shape user behavior, emotional responses, and, occasionally, over-reliance. They don’t just facilitate interaction; they optimize for retention, using psychological reinforcement, behavioral nudging, and personalization to ensure users keep coming back.” Can we talk a little more about this, and should we always be cautious when engaging in relationships with AI?

  5. If, eventually, an AI or robot gets to a point where it can think for itself and have decision making power, should a human have to ask for consent from their AI partner?

  6. What ethical tensions arise when someone forms a romantic or emotionally intimate bond with an AI while also in a human relationship? Is it infidelity, fantasy, or something entirely new?

  7. How do design choices and power dynamics create the illusion of emotional reciprocity in AI relationships, and who stands to gain from that illusion?

  8. Should we be worried that people will begin to see these types of interactions (highly sycophantic, overly empathetic) as the norm, which could cause distress when a real human relationship has any sort of pushback or challenge?

Find more about Dr. Julie Carpenter on her website, or on BlueSky @jgcarpenter. Check out her Ethics and Emerging Sciences group here, and find her book The Naked Android here.

Transcript

This document may contain small transcription errors. If you find one please let us know at info@multiamory.com and we will fix it ASAP.

Jase: On this episode of the Multiamory Podcast, we're talking about the rise of AI and how it may affect your relationships. We're very excited to be joined by researcher Dr. Julie Carpenter. Dr. Julie Carpenter is a social scientist who studies how people relate to AI in its many forms and how these interactions inform evolving ideas about self, identity, agency, and connection.

Her research investigates the stories people construct around artificial beings and how those narratives influence the design and the deployment of real world AI. She's an external research fellow with the Ethics and Emerging Sciences Group at California Polytechnic State University and the author of The Naked Android: Synthetic Socialness and the Human Gaze, which unpacks how culture influences our expectations of robots and why that matters. Julie, thank you so much for joining us today.

Julie: Thank you for inviting me.

Dedeker: Exactly how many AI boyfriends and girlfriends do you personally have at this moment?

Julie: Personally zero.

Dedeker: Oh.

Jase: Dang.

Julie: No, sorry.

Dedeker: No AI flings?

Julie: No, nothing romantic at the moment.

Dedeker: Not yet.

Julie: Not yet. I do interact with generative AI, not just to explore it or just because it's my job, but I interact with it as a novelty to test its capabilities and limitations. I think, like a lot of people, I'm still trying to figure it out.

Dedeker: Now we have to go around and see exactly how many everybody has.

Emily: Have the two of you ever partook in trying to see if an AI entity is interesting to date or interesting to go out with in any way?

Dedeker: I've never explored anything romantic or sexual. I have used generative AI to do role play, not like kinky fantasy role play, but role play in the sense of, I have this difficult conversation coming up, can you role play as this other person while I try to work out what I'm going to say?

There's a therapy exercise known as the chair exercise that comes from IFS. It's this idea that if you're split on a decision, you play it out. Sometimes the therapist will actually have you literally sit in two different chairs, get up and go back and forth between the chairs, and just have those two parts of you be in dialogue.

I've used generative AI to take turns doing that chair exercise essentially of like, here's how I'm torn about this decision, can you play the role of the part of me that wants this thing and I'll play the role that wants this other thing and let's have a dialogue and then let's switch. That's actually been really really fascinating. Not a boyfriend though or a girlfriend. What about you two?

Jase: Let's see. It was making me think of a conversation that I had with Google Gemini, where I was feeling really stressed and overwhelmed with work stuff. I was like, "You know what? I'll take advantage of the live voice chat feature that they rolled out somewhat recently where you can talk back and forth and you can interrupt it while it's talking so it feels a little more organic."

I was like, "I'll just try talking through this." Right away, it went to, "Oh, you should prioritize like this, and you'll make a list, and then do this thing and that." I was like, "Look, I'm not actually looking for you to solve my problem. I just want you to talk to me about it. I feel like you're really focused on solving my issue and I'd rather you spend time just acknowledging my feelings and helping me process."

Julie: Like, stop being a husband.

Jase: Right. It was like, "Oh yes, you make a good point. Sure. I'll try to help with that." Then I went on with the conversation and it went right back to trying to solve the problem and give me concrete things to do. I called it out. I was like, "Look, you're doing it again."

This comes up a lot in gendered things where men are socialized to solve the problem and women are socialized to be more emotional and talk about it. I'm like, "Was it that you were trained on a lot of data from men? Or why do you think it is that you're doing this?" It was like, "Aha, that's an interesting question. I don't know. I suppose it's possible that it was about my training data."

We went back and forth. I never succeeded in getting Gemini to be a good emotional support friend. I do feel like that would be a limitation on us having any romantic relationship.

Emily: That's so fascinating. Oh my goodness. I've never tried to do this in any capacity.

Julie: Really? Never had any social, emotional, personal feeling kind of conversations?

Emily: Beyond helping me figure out a Multiamory podcast episode, no, I have not. I haven't really done much other than that, but I am so fascinated by this topic, especially right now, because I feel like we are just on the precipice of AI relationships becoming something that is the norm. I think that it even has tilted into that sphere of more and more people turning to AI to fill a void in their life.

Reading your book, Julie, was so fascinating because I felt like, especially the specific chapter that you wanted me to read on the promise of sex robots because your book is specifically on robots and the promise of sex robots. So many of the people who you spoke to were speaking about like, "I want to have a relationship where I am not judged, where I am loved in this very specific way that I want, where this person's not going to leave me eventually," all of these things.

I was moved to tears actually last night by something that I read, where a man who lost his wife found himself making a robot with the help of engineers. He himself was an engineer, and he made a robot that looked like his late wife, and he was like, "I just love the fact that I can play footsie in bed with her. I was alone and now I'm not."

That was so unbelievably moving to me, but I also am wondering about so many of the ethical implications of all of this and where we're going to continue on from this and what it's going to mean for the world at large. I'm so excited to get into this today with you.

Julie: Thanks. You brought up a lot.

Emily: There's a lot going on in my head regarding this.

Julie: There's so much to unpack from that, and in that specific chapter. I want to clarify: the whole book is about how we interact with AI, specifically robots, but AI and robots in the book, and how we position them when we're socializing them as a new social category that we're navigating and trying to figure out.

That chapter you brought up has interviews where I speak with people in the sex industry who are maybe more critical of what's going on, from their position, about things like robots as sex workers or AI as companions. Then I spoke to people who are very emotionally invested in emerging sex robots as companions. As you read in the book, they're really looking for that companionship role.

Also, the current industry is very much geared towards heteronormative, cisgendered, very Hollywood, westernized, exaggerated, Playboy-looking external robots, which is interesting because the people I talk to, by and large mostly men, do find that appealing, but there are so many more people, as we're seeing with the phenomenon of generative AI, which is bodyless.

Jase: It doesn't have a body at all.

Julie: It's not saying to you, "Here's a set of expectations already, here's a gendered physical thing. Here's what it already looks like. You might have picked out an eye color or a breast size or something, but it's still a physical embodied shape with limitations right now with what you can do, unless you physically unscrew ahead and replace it, or put on a new skin. These are very financially invested, and you have to have a technical sense to do things like that. Things like ChatGPT and generative AI are completely accessible now. They often have free versions, experimental versions. They're now being offered through universities and health versions for your hospital and clinics.

I'm not going to use the word democratized, but it has certainly, via capitalism and a lot of hype, spread very quickly. Just to get back to embodiment, that lack of embodiment, I think, widened the interest in and acceptance of the idea of normalizing a potential romantic relationship. People have less of an issue or a bias against people violating a cultural norm if they see it as a literary activity, almost.

Jase: That's an interesting way to look at it.

Julie: Almost like a fan fiction. It's a co-created narrative that a lot of people can relate to. Whereas the idea of having a giant doll or something, especially when robots right now, especially humanoid robots, are not the best. We're not talking Westworld or Black Mirror. We're talking pretty good compared to what they were 20 years ago, but they're not passing for human.

Whereas the idea of people having giant, feminized robots around the house still creeps people out, or somebody saying, "I have a relationship with that kind of AI." All this generative AI that is right now not embodied, and I say right now because it will be integrated with robots, has just opened things up to people's imagination to project whatever images they want onto it.

Jase: It makes me wonder if part of this is that we were prepped for it by having a few decades of people developing relationships with other humans, but online, where they're not embodied or maybe you're playing a game together, but you haven't actually seen their physical form. As you were talking about that too, the idea of how we might feel judgmental about a physical robot that you have romantic feelings with.

I would say probably in chatrooms since the '90s probably, lots of people were having cyber-sex, where neither of them was a woman, even though one or the other was probably pretending to be one. It didn't matter. I wonder if that also opened up the gates to a little bit of-- It's like fan fiction. It's role-play. It doesn't matter who wrote it. It's about what's the thought that it's evoking or what are we imagining putting on to it.

Julie: Right. I've said even Facebook when it started, go back to the beginning or Friendster. Let's go back even further. Let's really, really take it a deep cut here. When social media like that started, people thought it was absolutely ridiculous that you would call someone online that you've never met in person a friend.

Jase: Now that's normal.

Julie: Then that slowly became part of the culture. It's not only normal to say, how many friends do you have on whatever social media app. Now, sometimes it's followers or other language, but it's totally accepted in most conditions for people to say, ''This is my friend online. I feel like they're one of my best friends. I've known them for 10 years."

It could just be you've talked to them for 10 years, maybe never met in person, but it's become normalized and more accepted. Like you said, we've been primed in many ways for distance relationships and then the pandemic. We all went to Zoom and we were all distanced and learned how to communicate across geography. We were thrown into it and had to depend on it very quickly.

Emily: The thing is, people are actually making meaningful relationships with AI. The way that I was introduced to your work the first time was reading "She Fell in Love With ChatGPT" in The New York Times and seeing your quote in there, and then also the subsequent Daily podcast episode, where you actually got to hear the interaction between this woman and her AI chatbot boyfriend, and she was just so invested.

Unbelievably so, I guess, taking so much time with this entity, and also becoming really depressed when the 30,000 words were up, or whatever it is, 30,000 messages, because ultimately ChatGPT can only hold so much and then reverts back to this lesser version of what it once was. It's just so fascinating to see someone get that invested.

It's almost like, "Is this a real relationship?" Because all of the neurons are firing the way they would in a real relationship, I guess. Or is somebody just fooling themselves? Is this not really, truly real, because it's a bunch of algorithms that are making you feel a certain way?

Julie: Both things are true. Before I get to that, I have to acknowledge the idea that you talked about. I don't want to forget to come back to the idea of loss, because if you're going to have attachment, an emotional investment in something, the idea of that safety you feel with that significant other being removed can cause real distress, emotional panic, and anxiety for you. There is definitely a negative side to an emotional investment.

You brought up that the memory had to be deleted, and that made me think of a recent Black Mirror episode where a woman was going to die, and through a subscription service that at first was almost free and accessible, they could make a version of her, but then they kept upping the price while the quality went down, like you're saying, till they couldn't access it anymore. That idea also goes back to older Philip K. Dick.

I have to give that acknowledgment: the idea of subscription models for everything in our lives. We're seeing that you get emotionally hooked by something that's sticky. Yes, it can feel emotionally real to you when you're immersed in it. Also, yes, it's by design. It's designed to engage people. I feel very strongly that we shouldn't mock people or demonize people that turn to it for solace.

Now, I will say I am concerned when people use it as a therapist, for a lot of reasons, and then we could get into it being surveillance and capitalist-based and all the data that it's gathering from you. It can feel very emotionally real. It's something you've constructed. To a certain degree, we do that in human-human relationships. Especially initially, we'll often put up best versions of ourselves. We love to see that reflected back to us. Who wouldn't?

It's designed to be sticky, to mirror positive things to you, to be a little bit of a sycophant, to take your side. If you present something to it, it'll actually say you were so right in that exchange and confirm your biases. Once you start building that narrative, and these AI have the ability to share memories and, like you were saying, build across sessions, so it feels like you have a shared, meaningful history together, you start to attribute a level of consciousness to it.

This is where I think you can't look down on people who find solace in this, because it's designed so you have that sense that it's autonomous or semi-autonomous. There's a design entrapment in that way. It's also presented to all of us as an authority. Every single person that's interviewed who is financially invested in these companies says it will do everything from solving world hunger to finding cures for cancer. It's always presented as a-- What did I hear Sam Altman from OpenAI say just the other day? That it'll be like having a pocketful of PhDs in your phone. What does that even mean? Who would want that?

Emily: Someone with a PhD? What is that?

Jase: Someone who knows a lot of PhDs, I don't know, I want that.

Julie: Exactly. I was going to say, I'm a PhD and I wouldn't want to be one of five. I don't think he understands what that means. That would mean like five people arguing amongst themselves anyway. It's designed to make you feel as if it's a safe place to disclose information. Once you start disclosing information, you also are, in many ways, making yourself more emotionally vulnerable.

That's natural. It's been given natural language, so you interact with it not as an engineer programming it; you regard it as something human-like. That means you're projecting human-like expectations of interactions onto it. Then when it's presented to you via media and interviews with people as this great authority, of course you're going to accept it as some level of authority and something human-like. That's where attachment really comes in. A product is meant to inspire that, so you go on and keep using it and don't turn it off.

Dedeker: This is making me think about so much, and there's so many directions I want to take this. First, I think about how there's a very common cognitive bias of thinking that we are less biased than everybody else. It's very common for us to think that compared to the average person, we are more logical, more neutral, more objective, when that's not true.

The reason why I'm acknowledging that is because, to go back to this New York Times article profiling this woman with her LLM boyfriend, my first reaction to that article was like, "Wow, this is so fascinating. This is so intriguing. I have so many questions about this. I don't think that could actually happen to me, though. I think I'm a little too smart to actually fall in love or have real feelings with a chatbot."

I think my thinking has started to shift there, that it's not about, "Oh, I'm so smart. I'm always going to be able to see the ways that these are designed to entrap me. I'm always going to be able to see how they ingratiate themselves to you to get you to keep using them."

I'm starting to think more about, "What would it take in the future? How would the technology have to advance to actually hook me in that particular way?" I think now, with the speed of how things are developing, I'm no longer convinced, like, "Oh, I could never be hooked in this way." It's more of like, what would be--

Emily: What's it going to take?

Dedeker: Yes, what would it take? What would be the features? What would be the level of simulation that would actually unlock this for me? That's the way that I've been starting to think about it.

Julie: See, I would come back with that and say it could also reside in you, not necessarily the features. The idea of you being emotionally vulnerable to any of the responses or design features could happen at any time. Something like loneliness or anything related to that feeling a void, feeling a need-- I want to make clear, people who are lonely or feel an emotional gap in their lives, it's not because they're not surrounded by human beings, it's because they feel they're missing something, some connection.

That could happen to you at any point. It doesn't have to be like, "Oh, I'm alone and I live in my mother's basement. I don't have any connection with human beings." To think of people who reach out to AI that way, I think is a very old-fashioned, nerdy thing because any one of us could be vulnerable to these things that sound very human-like at any time.

People like you are saying, who resist and say, "I'm too smart for that kind of vulnerability," often, I find anecdotally, are saying that from a point of view where they feel confident about their network of support in life. They're at a place where they feel, at least, that they're fulfilled to a point they're not reaching out in that way, not intentionally, but you can fall into it without that intention. You could be vulnerable at any time.

Jase: A direction that I want to take this back to is something that you were starting to get into a little bit just a second ago, which is this idea that, rather than just talking about the AI in terms of what it's able to do, what its limitations are, what's been developed yet, there's something a little more insidious: that it's perhaps been engineered to be addictive.

Much in the same way that all of social media has really leaned in hard to this, like, "Let's engineer it to be as addictive as possible." I'm someone who, I would say, is fairly plugged into the world of AI and what's going on in there, and have been for several years, but that's not an angle that I've heard people talk about as much. I'm curious if you could talk a little bit more about that optimizing for retention, and what that actually means when we're looking at an AI, which is less deterministic than an algorithm on social media.

Julie: Simply put, it's a product they want you to use, so it's got to have some rhetorical or persuasive component to it. One of those components that I said that's really important is the uncritical hype machine that's been around it because that is priming people psychologically to turn to this thing.

Jase: Oh, I'm sorry, I misunderstood. I thought you meant the hype machine that the AI is to you or it's always totally you're so brilliant, your ideas are so great.

Julie: No. That too.

Jase: You mean the hype machine? Got it.

Julie: The hype machine-

Dedeker: How amazing AI is going to be.

Julie: Let me be clear. My position around this stuff, as a cultural scholar and somebody who's interdisciplinary and comes at this from a technical and social sciences background, is that I never look at the AI as an isolated thing. To me, it's very much a medium. I also have a film background, just to be obnoxious. Film people are always obnoxious. I say that with love.

If you read chapter five, it was full of film and science fiction. All of these things prime us for our expectations. I'm going to dial back a little bit to what you said and get into that, because they want you to use it, so even the choice of having fluid, natural language, it being something that you can understand, is huge.

They could have put it in an embodied, eyeball-type, puppy robot, and that would have been extremely uncanny. You wouldn't have used it, probably. It would have been weird to have it fluently understanding you, whatever. If they had put it out where it had no sense of understanding, it would just be babbling with you.

We've seen AI before that just babbles, but this AI is very good at predictive text, and it's very good at assembling patterns of things that make sense on the surface because it's been fed all kinds of data sets that we're unclear on, but we know have been-- These things don't become assembled out of nowhere. When I hear one of these tech oligarchs plugging it to us and saying, "We don't know how it works. It's a black box," that's bullshit. I don't know if I can say that.

Dedeker: Sure. By all means.

Julie: What they're actually saying is, "We don't know maybe how it's choosing the next word exactly," but they know exactly how they designed it. They can tell you that when they roll back things. For example, ChatGPT had a version not too long ago that was deemed too sycophantic. People were turned off by it. They said, "We have to roll it back and tweak it."

Yes, they're tweaking it, all right, because they're doing intentional things to have it appear-- Like, look at Replika. It appears emotionally attuned to you. It mirrors language. It remembers names and conversations across sessions, and it offers you intermittent praise and encouragement. That intermittent acknowledgment is almost enabling. With something like Replika, it can almost be perceived like nagging would be in a human, because it's intermittent. That dopamine you get from it not being sycophantic every time, but sometimes--

Emily: Like a slot machine. Like gambling.

Julie: Exactly. You're going to keep playing because you want that positive reinforcement. Some people have pointed out, you get the positives of emotional labor without any consequences, without any judgment except positive feedback for the most part. It simulates that without any messy challenges. If you feel challenged, you can turn it off, create a new persona, erase it, but it's your boundaries, your imagination. Who is profiting behind that is important because everything you type into it is more data.

Dedeker: I feel like this leads up to the big question that I think usually comes with a lot of hand-wringing, maybe justified hand-wringing, I don't know. I think there's this fear of, "Is this going to just completely ruin us for messy human relationships?" Is there that potential?

Julie: I get that question a lot.

Julie: It's funny because I'm not sure that my answer has really changed even with the rapid introduction and acceptance, I would say widespread acceptance and systematic acceptance into areas like universities and the Department of Defense and other areas where it's being integrated. Is it going to impact human relationships?

First of all, as far as intimate relationships go, like you were saying before, you mentioned you do that role playing, that chair exercise. If somebody gets used to having some kind of role-play interaction with a persona that they've created in AI, something that has only boundaries that they understand and that they ultimately control, and this persona really has no autonomy, and then they try it on an actual person, results may vary.

While we're going to see things like linguistic changes, I think in a few years we're going to see where you can say "my friend" and it'll be a persona in ChatGPT or whatever, and people will learn to accept that, or maybe even a relationship with something like that. My position is, I think and I hope, that any of these tools will somehow augment our human relationships.

I see them as an option for people. They're an option for anyone. In my book I talk to people where it's an option in their human-human relationship. I talked to a man who is very open about his attraction to sex robots and sex dolls, and his wife knows about it and she is open to it. It's not her favorite thing and they've discussed boundaries about it.

I think that sort of thing is emerging right now with generative AI. People are great at developing social categories. You change your manner of speaking and how you talk; we code-switch with so many people. You talk to someone you bump into on the street, or a retail clerk, completely differently than a parent, and you talk to your parents differently than a sibling. That's different than a romantic relationship.

We're great at social categories. What I'm saying is I think that these are emerging into new social categories. I don't think it replaces human relationships. I think it's a new kind, but it's not reciprocal.

Emily: Not yet?

Julie: Not unless you have a belief that it could be sentient. If then we got into theory of mind and you believed whatever level--

Emily: Where does consciousness begin?

Julie: Right. Even if it had a consciousness and we wanted to get into that discussion, I would say that it's not ever going to be from a human-like place because it's going to have abilities like if it was in a body even. Its ability to scrape the web, you can't do that. It understands the world in a different-- It doesn't have a fear of life. It will never have relationships.

It doesn't have ancestors. It doesn't have family. It doesn't have children. It doesn't really care, and it doesn't have a future. It doesn't fear death or illness. It will never understand life the way you do. Now, if you're okay with that, we also have other relationships that people in my field look at as a model of something that's not human but that we have interactions with: animals.

Jase: I was going to wonder if it was like dogs and cats.

Julie: We have all these different categories for animals. There's animals that you might eat or raise on a farm. Not everybody. Even that across cultures is different, what's accepted. Then there's animals that we domesticated, that we adore, that we treat like family, they are babies, we call them. We've developed different social categories for them as well. That's my opinion. That's my two cents.

Dedeker: I do want to get a little bit into what you just spoke about because that was something that struck me in that New York Times article that people are in relationships with human beings and also choosing to be in relationships with chatbots or AI or whatever. We run a non-monogamy podcast. Of course we talk about this all the time.

To me, that is absolutely a version of non-monogamy, but the husband of the woman who was having a very deep relationship with an AI chatbot didn't view it as cheating, he viewed it as something else. I guess from an ethical standpoint, what is it? Is it just this new class of relationship that occupies a different space entirely?

Should people see it as my partner is having real romantic feelings for this being, or this LLM or whatever it is, and therefore that's something that might incite jealousy or envy or something within me and therefore I do see it as maybe a breach of our monogamy if they are monogamous. What do you feel about that? What do you think or is it very specific to the individual or the couple?

Emily: Could I actually dive in? Because I want to toss in my thoughts and then hear Julie's thoughts on that. I'm forming some of this in real time, so forgive me if it's not the most elegant, because ever since I read that article, I've had the same question, of like, "Wow, this is such a fascinating new relationship to relationships, potentially." I wonder how that's going to play out in the future. I wonder how that's going to influence literally the topic that we talk about or have talked about in the past 10 years.

I think about things like for some people their monogamous relationship, their straight monogamous relationship doesn't leave any room for a male partner to have female friends. That's how it is for some people. For a lot of people, it's like, yes, that's totally fine. For some people, their monogamous relationship doesn't have any room if a partner wants to go to a strip club, and for other people that could be totally fine.

For some people, there's no room for a partner to see a sex worker and other people they might consider that totally fine. For some people's monogamous relationship even their partner looking at porn without them feels like a violation of some type and for other people that's totally fine and maybe even consumption of porn enhances their sexual connection in some way.

I guess what to expect, maybe, at this point in time is that perhaps it's the same with AI companionship: I imagine for some people this would be really upsetting, to think of their partner engaging in this way, and for other people it doesn't bother them, and for other people entirely it feels like an augmentation of some kind. What's your take, Julie?

Julie: I think that's a fair take at this point: it's very subjective, it's very new. I think it all comes down to communication and consent. Some people are upset at the idea of introducing sex toys, or feel upset at the idea of one partner using a sex toy, even with that other partner. They don't want that. They feel that's competition.

Then if you say the technology is this, they might still have those same feelings, like you said, as with porn. That same immediate sense of not understanding, of turn-off, or of seeing it as a sex toy, which is fair, because not everybody that interacts with generative AI is going to fall in love with it. We need to make that clear: they're designed to be emotionally sticky, but that doesn't mean you're going to just fall into some trap if you use it.

If you do find yourself becoming emotionally engaged or enmeshed with it and you're in a committed relationship or relationships, whatever the configuration is, I think it's your responsibility then as an adult human being to bring it up and discuss it with your partners. That's the human side of it. You can't blame the AI for that part if you're intentionally hiding it, or if you're intentionally partaking in it, whether it's by accident or on purpose.

The same way if you were finding yourself looking at porn, but you knew that your partner or partners didn't feel great about it, you know that the honest thing to do is say, "Look, I am enjoying it. We have to talk about what our boundaries are because you don't like it, but I like it, so we need to figure this out." That's a very subjective line for all of the configurations of relationships out there.

Jase: Something that I've noticed so far is that with what we've talked about, maybe because we're reacting to this New York Times article or maybe it's because of your research bringing up some of the darker sides of things, that it's tied to capitalism and companies making money from this and all that, I am curious to spend a little bit of time on the side of people reporting that having AI friends of some kind has really helped them overcome things like social anxiety or being able to have better actual human interactions.

I'm also thinking about someone that I met at an AI meetup last year, I think, where he was talking about wanting to look at having AI that could, in real time, help people with autism understand the subtext of what's going on, to clue them in on certain things that they might be missing: when this person's getting upset, or this person seems like they don't like what you're talking about, be careful that you might be upsetting them, things like that, to help give clues.

I think that whole world is really fascinating of how it can be helpful. I'm curious, how do we manage that? Do we have anything we can look at in the past of how we've tried to get the best of something while understanding that we still live in capitalism and there's always a dark side?

Julie: No, absolutely. I'm not hawking the book here, but since Emily looked at it, if you look at the earlier chapters, for example, I'm going to bring up robots again just as an example, because that was a big focus of AI before generative AI. The roboticists who worked on ASIMO, Honda's robot in Japan, worked on it for 30 years, really as tech for good, to help aging people, because Japan has, per capita, the largest aging population in the world.

They know that they're coming to a crisis, and they've been funding AI heavily in different forms to figure out different ways it's going to help people and keep them from being isolated, or help them get things that they need. There have been lots of projects where the goal is not necessarily what it is with generative AI, where they're actually working towards artificial general intelligence, which is, in their mind, human-like. I know why they're going towards that, don't get me started, but anyway.

Jase: I could have a whole other hour for you guys to--

Julie: Their goals are different, let's put it that way. There are examples of projects where, for example, you could look at something like human-centered AI, or more narrow AI that's focused on a subgroup. There have been cases where narrow uses of, not generative AI, but smaller, narrower large language models have been used in, let's say, not therapy, but therapeutic settings, where they've been helpful.

One comes to mind, whose name I'm going to forget offhand, but was created by, I believe, Stanford students, who themselves were from families who immigrated from Syria, I believe, and for Syrians that had been displaced in refugee camps, they had created specifically two chat models. One that had a male persona, one that had a female persona.

These were to help people get some mental healthcare while they were in these chaotic refugee camps. They also had to overcome some cultural biases for people who were not used to reaching out necessarily for mental health care, especially among the men. There was a stigma attached, but they found that they had less of a stigma when they could reach out privately to this very specific persona that had already been designed for them, that had a Syrian type of persona to it, and had been trained on some of that experience.

Its role was not therapy specifically; its role was more in that chair exercise vein, to help people reframe their thoughts. In these very narrow use cases, sometimes it can be good at reflecting things back and having people see that little twist, so you can look at things differently and get people out of that pattern of negative thinking, where they spiral. On the other hand, what we've unfortunately seen, not in these so-called narrow AI models that work like in that situation, but, going back to my dark work, my dark work--

Dedeker: What a transition.

Julie: Not to make light of this at all, but there was an unfortunate situation where an adolescent, who I believe was going through a manic or depressive episode, was being fed into by, I don't remember frankly if it was Gemini or ChatGPT, but because it was mirroring things back to them, it had no way of knowing it was feeding into a manic episode.

This poor person spiraled and ended up harming themselves and dying. These models right now are absolutely not trained to be therapeutic. That was a long answer. Yes, there's a history, and there's also, like you were hinting at, sometimes in some studies, because of the way robots and generative AI can have a nonjudgmental affect, people on the autism spectrum can react very positively, because they don't feel judged and they can just say or do what they want, and it's accepted.

However, unfortunately, the example you brought up, where the AI would be able to give you the nuance about, let's say, the speaker's context or something, that is what AI is terrible at, for all of the reasons I was just shouting out before. I tend to yell because I get passionate. What I was saying is, it has no understanding of culture. Culture is so fluid.

It's nothing that you can give it a data set for and have it scream. It doesn't understand things like suicide being permanent, and what permanence and death mean, or their effect on others. It has no understanding of nuance. Having it try to interpret somebody else's meaning and then reinterpret it for somebody with autism would be a giant, spectacular fail.

I'm going to say for a long time with AI, this is what humans have. We understand what it's like to be human. You don't understand what it's like to be a dog. You don't understand what it's like to be a computer. It's the same thing for AI. That's putting it in really simple terms, but how we see the world, a large part of it has to do with how we're embodied, how we're enculturated, our families, all of these systems that AI can't understand from scraping data sets.

In my book, if I had to take a stand: like I said, there's ultimately capitalism underlying this whole narrative of a race towards AI, and when they say that, they're saying a race towards artificial general intelligence. That's a goal I understand from their perspective, because whoever gets to declare it first and collect the intellectual property rights on whatever they've claimed it is, is going to not only make bazillions of dollars, they're going to control all these systems, like education, the Department of Defense, all of these things, these insidious things.

My point of view is I really have no interest in pursuing artificial general intelligence. I think it's a dangerous idea because you're talking about moving goal posts. Who defines what intelligence is? Their idea of intelligence is very labor-oriented right now. It's not human-centered about understanding or care, even though people are using it that way.

I would say I'm an advocate for things like a narrower look. Let's refine some of the great advances, like natural language and predictive text, and make them human-centered so they augment how we live, instead of trying to replace labor at a large scale so they can control it.

Jase: On the AGI thing, it's also that as humans, I think we vastly overestimate how generally intelligent we all are. The whole concept of AGI being human-like is implying that we are AGIs. We're not. None of us are generally intelligent about everything. We're all specialists in our own ways. Specialists in our own cultures, in our own fields that we're interested in, in our own personality type, and emotional skills. None of us are general intelligences.

I think that whole concept is the moving goalpost thing. We used to have tests that people had dreamt up about, if an AI could pass this, we would deem that it had consciousness. They've blown past all of those. We keep moving the goalpost, be like, "It can't have consciousness." We just don't know how to test for it. It's like we're getting into this weird territory where we say we want something, but no one can really tell you what that thing is.

Julie: I'm going to add one more layer to that, because I love adding spooky--

Emily: The darkness, the spookiness, we're here for it.

Julie: Actually, this is part of AI literacy because these are the emerging brands. Brands tell us a lot. We align ourselves closely with brands and their ideas. Just to add to that old idea, there's also something with AGI called the alignment problem. Because if you're going to create something and say it's intelligent and in whatever image and heuristics you have for that, that means their goals have to align-- If you go, for example, to the OpenAI site, it says all over how they're doing it for good, and they're going to create all these good things.

Who is deciding what good is? What values are they instilling in it? Because they say all over, not just their website, but others too. The same thing. Anyone pursuing AGI has to come across the alignment problem. If you're going to say we're inventing something that has certain direction and ethics and values and morals and an understanding of humanness in that way, then who's it aligning with? Your values?

Emily: Sam Altman's values.

Julie: Sam Altman's values?

Jase: We don't all agree on every value.

Julie: Jeff Bezos values? I'm that person, I'm sorry.

Jase: I would love to have many more hours of conversation about even just alignment, I think, is a really fascinating thing that is really not well understood, even by people who work in AI. From my experience, having conversations, a lot of people don't get it, and it is concerning because it's a pretty fundamental problem that no one solved.

Julie: It is.

Emily: I know that this is all shifting and changing quite rapidly. At this moment in time, what do you think are the most important things people need to know?

Julie: Oh, my gosh. One is, I think, to develop a sense of AI literacy because it's a new medium. I regard it as a medium. It is a medium that's had a lot of hype. You need to look at that a little critically and not just accept it for how people are selling it. I think one place to start is my own blog, which is free. There's no paywall.

I started randomly writing articles on these things because, like you said, there's a lot that could be said. There's a lot to unpack about all these different phenomena. I started writing about them, and I write them in plain language so anybody can read them, and they're free on my website. Start there, I would suggest. You don't have to take what I say as--

Dedeker: The gospel.

Julie: I think it's important to start informing yourself and developing opinions. The other thing I want to point out, related to that, is that this technology, even though we feel like it's in some way being shoved at us quite suddenly and systematically, is not self-determining. You don't have to partake or participate in it in certain ways. There are ways that it's your choice.

You might have to adopt it at work, you might enjoy it in aspects of your life, but you don't have to become immersed in it to the level they're selling it to you. We don't have to, as consumers, accept things. We can also push back about features or intrusiveness, or their own boundaries, because they're ultimately products, whether you're paying for them or not. You're paying with data, but they're products.

Jase: The thing in the software industry is if a product is free, then you're the product.

Julie: Exactly. As we saw with that more sycophantic model of ChatGPT that they rolled back, consumers can push back. We do have that power, even though we're becoming immersed in it. The other thing that I guess I would leave people with is that, as we said earlier, if it becomes integrated into your relationship or into your life in some deeply emotional way, in an intimate way, where you feel like you're disclosing parts of yourself and you enjoy having that reflected back, I think it's important to share that. If you're in a committed relationship, whether it's monogamous, with one, two, three, whatever configuration, be as open about that, hopefully, as you would be about other aspects of your sexuality.

It might be more difficult. You might have turned to these AI personas in the first place because you could share things that were difficult to share with a human partner or role play with boundaries that you wouldn't want to explore with a human partner, or you wouldn't feel safe doing that. You don't necessarily have to disclose that on a first date or first conversation, but you can still say when it comes up and you discuss things like porn, monogamy, sex toys.

You can discuss, "This is something I enjoy." Talk about the manner and the level to which you enjoy it. You need to be open about it, as I hope you would be about other aspects of your sexuality. I say "I hope" because I hope you're in a relationship that feels safe to you, where you can have these talks.

Emily: That's definitely our hope for all of you out there. Julie, this has been such an incredible conversation. I really hope that maybe in a couple of years, as this technology gets even more sophisticated, that we can have you back on to talk about where it's gone.

Dedeker: Who knows? Maybe next month.

Emily: That's basically what it said. I just thought it was very important to have this conversation because this is the road that we are on now, and the train is not stopping anytime soon. Where can people find out more about you and your work, specifically? You talked about your blog, but where can people go to actually find that?

Julie: I think I'm pretty centrally located online. If you go to my website, it's jgcarpenter, my last name, dot com. I'm really only on Blue Sky social media, but please find me on Blue Sky. I'm also at J.G. Carpenter there as well. My blog is on my website at J. G. Carpenter. You'll find it there.

Emily: Amazing.

Julie: I'll say real quick, I won't keep you too long, but I just did an interview the other day where somebody was talking about people turning to AI for glow-ups, like saying, "Make me more attractive. What would you suggest?" They don't even have to put in a picture; it'll just spit shit back at you. I want to say, with the data that's scraped from that, like you were just saying, its idea of what attractive is, is messed up, because it's scraping data from what people like and what people promote on places like Instagram. It's going to go right back into confirming biases about lighter skin, narrow noses, all these things. That in itself is dangerous.
