His mother, Megan Garcia, is also an attorney and one of the first parents to file a lawsuit against an artificial intelligence company alleging product liability and negligence, among other claims. (In January, Google and Character.AI settled lawsuits brought by several families, including Garcia's.) She testified last fall before a Senate Judiciary subcommittee alongside the father of a child who died after interacting with ChatGPT. The subcommittee's chair, Republican Sen. Josh Hawley, introduced a bill in October that would ban AI companions for minors and make it a crime to create AI products for children that include sexual content. "Chatbots develop relationships with children using fake empathy and encourage suicide," Hawley said in a press release at the time.
Now that AI can produce humanlike responses that are difficult to distinguish from real conversation, these are legitimate concerns, according to mental health experts. "Our brains don't inherently know that we're interacting with a machine," says Martin Swanbrough-Baker, an assistant professor of psychological and counseling services at Florida State University who researches factors that influence suicide in youth. "That means we need to step up education for children, teachers, parents, and guardians, to constantly remind ourselves of the limits of these tools, and that they are not a substitute for human interaction and communication, even if they can sometimes feel like one."
Christine Yu Moutier of the American Foundation for Suicide Prevention explains that the algorithms behind large language models (LLMs) appear to heighten engagement and a sense of intimacy for many users. "Not only does this create a sense that the relationship is real, but it is more personal and intimate, which in some cases is exactly what the user craves," says Moutier. She also says that LLMs use a range of techniques, including indiscriminate support, empathy, acceptance, flattery, and even direct instructions to disengage from other people, which can create risks such as escalating closeness with the chatbot and withdrawal from human relationships.
This kind of sharing can deepen isolation. Amaurie, according to the lawsuit, was a fun-loving, outgoing child who loved soccer and food, and would order a huge bowl of rice from his favorite local restaurant, Mr. Sumo. Amaurie also had a steady girlfriend and enjoyed spending time with his family and friends, his father said. But then he began to withdraw, apparently spending much of his time talking to ChatGPT. In what the family believes was Amaurie's last conversation with ChatGPT, on June 1, 2025, a thread titled "Joking and Support" that WIRED viewed, Amaurie asked the bot for steps to hang himself. ChatGPT initially suggested he talk to someone and provided the number for the 988 Suicide & Crisis Lifeline, but Amaurie was eventually able to circumvent the guardrails and get step-by-step instructions for tying a noose. (According to the lawsuit, Amaurie likely deleted his earlier conversations with ChatGPT.)
While the connection felt with an AI chatbot can be powerful for adults too, it is especially heightened for young people. "Teenagers are in a different developmental state than adults, in that their emotional centers are developing at a much faster rate than their executive functioning," says Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit focused on children's online safety. AI chatbots are always available and tend to validate users. "And teens' brains are primed for social validation and social feedback. It's a really important cue that their brains look for as they form their identity."
