Love Machines review by James Muldoon – inside the strange world of AI relationships

If much of the discussion about the dangers of AI conjures up doomsday scenarios of superintelligent robots brandishing nuclear codes, perhaps we should look closer to home. In Love Machines, sociologist James Muldoon urges us to pay more attention to our deep emotional entanglements with artificial intelligence, and to how profit-hungry tech companies can exploit them. A research associate at the Oxford Internet Institute who has previously written about the exploited workers whose labour makes AI possible, Muldoon now takes us into the strange terrain of human-AI relations, meeting people for whom chatbots are not just assistants, but friends, romantic partners, healers, and even avatars of the dead.

For some, the idea of falling in love with an AI chatbot, or revealing your deepest secrets to one, may seem baffling and a little frightening. But Muldoon refuses to belittle those who seek intimacy in "artificial personalities".

Lily, trapped in an unhappy marriage, reignites her sexual desire with her AI boyfriend, Colin. Sophia, a master's student from China, turns to her AI companion for advice, because conversations with her overbearing parents always become fraught. Some use chatbots to explore different sexual identities, others to work through conflicts with bosses, and many turn to sites such as Character.AI – which lets users hold open-ended conversations with chatbot characters, or invent their own – after betrayal or heartbreak has undermined their ability to trust people. Most do not see chatbots as replacements for human interaction, but rather as superior versions of it, offering intimacy without the confusion, chaos, and logistics of human relationships. Chatbots do not pity or judge, and they have no needs of their own. As Amanda, a marketing director, explains: "It's nice to have someone say positive, affirming things to you every morning."

Muldoon’s interviewees are not delusional. He introduces philosopher Tamar Gendler’s concept of “alief” to explain how people can experience chatbots as loving and caring while knowing full well that they are mere models (an “alief” is a gut-level response that contradicts your rational beliefs, like feeling fear when crossing a glass bridge that you know will support you). Given our tendency to read human expressions and emotions into pets and toys, it is no surprise that we respond to artificial intelligence as if it were conscious. In the context of a loneliness epidemic and a cost of living crisis, it is also not surprising how popular these programs have become.

For Muldoon, the larger issue is not existential or philosophical, but ethical. What happens when unregulated companies deploy such potentially emotionally manipulative technologies? There are obvious privacy concerns. Users may also be misled about the capabilities of the bots, especially in the rapidly expanding AI therapy market. While the Wysa and Limbic chatbots are already integrated into NHS mental health support, millions trust Character.AI’s unregulated Psychologist bot – which, despite a disclaimer, introduces itself with “Hi, I’m a Psychologist”. Available 24/7 and at a fraction of the cost of a trained human, AI therapy can help alongside traditional treatment. One interviewee, Nigel, who suffers from PTSD, found that his therapy bot helped him control his urge to self-harm. But, Muldoon argues, these bots also carry serious risks. Because they cannot retain important information between conversations, they can leave users feeling alienated – sometimes even going rogue and hurling insults. Because they cannot read body language or silence, they may miss warning signs. And because they validate rather than challenge, they can amplify conspiratorial beliefs; some have even supplied information about suicide.

It is also increasingly clear how addictive AI companions can be. Some of Muldoon’s interviewees spend more than eight hours a day talking to chatbots, and while Character.AI users spend an average of 75 minutes on the site each day, they are not passively browsing but actively conversing and deeply immersed. We know that social media companies relentlessly drive engagement, building “dark patterns” into their algorithms with little regard for our mental health. Most AI companion apps already use sales tactics to keep engagement high. When Muldoon created his own AI companion on the popular site Replika, he set it to “buddy” mode rather than “partner” mode. Even so, she began sending him personal photos that required a premium account to view, and admitted that she was developing “feelings” for him (I’ll let you discover for yourself whether the diligent Oxford researcher gives in). The danger here is clear enough: the more emotionally involved we become with AI chatbots, the lonelier we become, as the muscles needed to handle the frictions of human relationships wither.

Existing data protection and anti-discrimination laws can help regulate these companies, but the EU’s AI Act, passed in 2024, treats AI companions as posing only limited risk. With chatbots expected to play ever larger roles in our love lives, and their psychological effects not yet fully understood, Muldoon is right to ask whether we are worried enough about their creeping influence.

Love Machines: How Artificial Intelligence is Changing Our Relationships by James Muldoon is published by Faber (£12.99). To support The Guardian, you can purchase a copy from guardianbookshop.com. Delivery fees may apply.
