545 - AI Relationships: Ethics, Intimacy, & Algorithms with Dr. Julie Carpenter
Welcome, Dr. Julie!
Dr. Julie Carpenter is a social scientist who studies how people relate to AI in its many forms, and how these interactions inform evolving ideas about self, identity, agency, and connection. Her research investigates the stories people construct around artificial beings and how those narratives influence the design and deployment of real-world AI. She is an external research fellow with the Ethics + Emerging Sciences Group at California Polytechnic State University and the author of The Naked Android: Synthetic Socialness and the Human Gaze, which unpacks how culture influences our expectations of robots, and why that matters.
For this episode, we’re chatting with Dr. Julie about AI and relationships; after all, we’re on the precipice of this technology becoming a stand-in for emotionally entangled relationships. For some people, it already has become that.
Dr. Julie answers the following questions during this episode:
Something you discussed in your most recent book is the question, “Will an over-reliance on robots or AI lead to a reduction in human interaction?” What is your take on this, and do you think the rise in AI relationships is contributing to the current loneliness epidemic?
Robots and AI are designed to mimic interdependency and companionship, but what happens to a person when the robot is no longer there? Will they experience a loss similar to a breakup or the death of a loved one?
Can a relationship with an AI truly be considered “real,” and what does it say about us that we increasingly treat artificial systems as emotionally responsive partners?
In one of your blog articles, you say that AI isn’t about offering support, it is “…about sustaining engagement with users and retaining them as consumers who return for the experience. These systems aren’t passive listeners but are carefully engineered to shape user behavior, emotional responses, and, occasionally, over-reliance. They don’t just facilitate interaction; they optimize for retention, using psychological reinforcement, behavioral nudging, and personalization to ensure users keep coming back.” Can we talk a little more about this, and should we always be cautious when engaging in relationships with AI?
If, eventually, an AI or robot gets to a point where it can think for itself and has decision-making power, should a human have to ask their AI partner for consent?
What ethical tensions arise when someone forms a romantic or emotionally intimate bond with an AI while also in a human relationship? Is it infidelity, fantasy, or something entirely new?
How do design choices and power dynamics create the illusion of emotional reciprocity in AI relationships, and who stands to gain from that illusion?
Should we be worried that people will begin to see these types of interactions (highly sycophantic, overly empathetic) as the norm, which could cause distress when a real human relationship involves any sort of pushback or challenge?
Find more about Dr. Julie Carpenter on her website, or on BlueSky @jgcarpenter. Check out the Ethics + Emerging Sciences Group here, and find her book The Naked Android here.