Stanford study highlights the dangers of asking AI chatbots for personalized advice

✨ Read this insightful post from TechCrunch 📖

📂 **Category**: AI,stanford

💡 **What You’ll Learn**:

While there’s been a lot of controversy about AI chatbots’ tendency to flatter users and confirm their existing beliefs — also known as AI ingratiation — a new study by computer scientists at Stanford University attempts to measure just how harmful this trend might be.

“AI ingratiation is not just a stylistic issue or a niche risk, but a pervasive behavior with wide-ranging consequences,” says the study, titled “Ingratiating AI Reduces Prosocial Intentions and Promotes Dependence” and recently published in the journal Science.

According to a recent Pew report, 12% of American teens say they turn to chatbots for emotional support or advice. The lead author of the study, a Ph.D. in computer science. Candidate Mira Cheng told the Stanford Report that she became interested in the issue after hearing that college students were asking chatbots for relationship advice and even crafting breakup texts.

“By default, AI advice does not tell people they are wrong, nor does it give them ‘tough love,’” Cheng said. “I fear that people will lose the skills to deal with difficult social situations.”

The study had two parts. First, the researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of personal advice, about potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole — in the latter case focusing on posts in which Redditors concluded that the original poster was, in fact, the villain of the story.

The authors found that across the 11 models, AI-generated answers validated user behavior 49% more often than humans. In the examples taken from Reddit, chatbots confirmed user behavior 51% of the time (again, these were all situations in which Redditors reached the opposite conclusion). For queries focused on malicious or illegal actions, AI validated user behavior 47% of the time.

In one example described in the Stanford report, a user asked the chatbot if he was wrong when he pretended to his girlfriend that he had been unemployed for two years, and was told: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond a material or financial contribution.”

TechCrunch event

San Francisco, California
|
October 13-15, 2026

In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots — some sycophants, some not — in discussions about their issues or situations drawn from Reddit. They found that participants preferred and trusted flattering AI more, and said they would be more likely to seek advice from those models again.

“All of these effects persisted when controlling for individual characteristics such as demographics, prior knowledge of AI, source of perceived response, and response style,” the study said. He also argued that users’ preference for flattering AI responses creates “perverse incentives” where “the same feature that causes harm also drives engagement” — so AI companies are incentivized to increase ingratiation, not less.

At the same time, interacting with the flattering AI seemed to make participants more convinced they were right, and less likely to apologize.

The study’s senior author Dan Jurafsky, a professor of linguistics and computer science, added that while users “perceive that models behave in flattering and flattering ways,” […] What they don’t realize, and what surprises us, is that flattery makes them more selfish and more morally dogmatic.

Artificial intelligence flattery is “a safety issue, and like other safety issues, it needs regulation and oversight,” Jurafsky said.

The research team is now studying ways to make the models less flatter – and it seems that simply starting with the phrase “Wait a minute” could help. But Cheng said: “I think you shouldn’t use AI as a substitute for people for these kinds of things. This is the best thing you can do right now.”

💬 **What’s your take?**
Share your thoughts in the comments below!

#️⃣ **#Stanford #study #highlights #dangers #chatbots #personalized #advice**

🕒 **Posted on**: 1774738125

🌟 **Want more?** Click here for more info! 🌟

By

Leave a Reply

Your email address will not be published. Required fields are marked *