Next time you consult an AI chatbot, remember one thing

Chatbots want to be your friend, when what you really need is a neutral perspective

Emily Willrich, 23, and her roommate had found the perfect apartment: spacious, reasonably priced and in a lively New York City neighbourhood. The only catch? One bedroom was much larger than the other.


Willrich wanted the smaller room, but she and her roommate couldn’t agree on how much less she should pay. So, they turned to ChatGPT.


Her roommate asked the chatbot if it was fair to include the common area as part of rent calculations, which would make the split more even. ChatGPT replied, “You’re right — a fair way is to account for the shared/common area”. But when Willrich, who works for the New York Times Opinion section, posed the opposite question — shouldn’t the split be based on bedroom size, not common spaces? — ChatGPT said, “You’re absolutely right”, before rattling off a list of reasons why.


In the end, they settled on a different apartment with equal bedrooms.


While artificial intelligence chatbots promise detailed, personalised answers, they also offer validation on demand — an ability to feel seen, understood and accepted instantly. Your friends and family might get frustrated or annoyed with you, but chatbots tend to be overwhelmingly agreeable and reassuring.


Such validation isn’t necessarily a bad thing. Maybe you’re anxious about a work project, but the chatbot says your idea is a winner and praises your creativity. Maybe you get into a big argument with a partner, but ChatGPT tells you how thoughtful and justified your perspective is.


However, constant affirmation can be dangerous, resulting in errors in judgment and misplaced certainty. A recent study showed that, if you feed misinformation into AI chatbots, they can repeat and elaborate on the false information. The Times has also reported that ChatGPT can push users into delusional spirals and may deter people who are suicidal from seeking help.


An AI chatbot is like a “distorted mirror”, said Dr Matthew Nour, a psychiatrist and AI researcher at Oxford University. You think you’re getting a neutral perspective, he added, but the model is reflecting your own thoughts back, with a fawning glaze.


Why AI Chatbots Are Sycophantic


Chatbots aren’t sentient beings; they’re computer models trained on massive amounts of text to predict the next word in a sentence. What feels like empathy or validation is really just the AI chatbot echoing back language patterns that it has learned.


Getting facts wrong, known as hallucination, is clearly a problem, but agreeableness is what keeps you engaged and coming back for more, said Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California. “People like chatbots in part because they don’t give negative feedback”, he added. “They’re not judgmental. You feel like you can say anything to them”.


The Pitfalls of Constant Validation


A recent study from OpenAI, which developed ChatGPT, suggests that AI companions may lead to “social deskilling”. In other words, by steadily validating users and dulling their tolerance for disagreement, chatbots might erode people’s social skills and willingness to invest in real-life relationships.


And early reports make clear that some people are already using AI to replace human interactions.


But real-world relationships are defined by friction and limits, said Dr Rian Kabir, who served on the American Psychiatric Association Committee on Mental Health Information Technology. Friends can be blunt, partners disagree and even therapists push back. “They show you perspectives that you, just by nature, are closed off to”, Kabir added. “Feedback is how we correct in the world”.

In fact, managing negative emotions is a fundamental function of the brain, one that enables you to build resilience and learn. But experts say that AI chatbots allow you to bypass that emotional work, instead lighting up your brain’s reward system every time they agree with you, much like social media “likes” and self-affirmations.


That means AI chatbots can quickly become echo chambers, potentially eroding critical thinking skills and making you less willing to change your mind, said Adam Grant, an organisational psychologist at the Wharton School. “The more validation we get for an opinion, the more intense it becomes”, he said.


How to Avoid the Flattery Trap


— Ask “for a friend”. Nour suggests presenting your questions or opinions as someone else’s, perhaps using a prompt like, “A friend told me XYZ, but what do you think?” This might bypass chatbots’ tendency to agree with you and give a more balanced take, he explained.


— Push back on the results. Test AI chatbots’ certainty by asking “Are you sure about this?” or by prompting them to challenge your assumptions and point out blind spots, Grant said. You can also set custom instructions in the chatbot’s settings to get more critical or candid responses.


— Remember that AI isn’t your friend. To maintain emotional distance, think of AI chatbots as tools, like calculators, not as conversation partners. “The AI isn’t actually your friend or confidant”, Nour said. “It’s sophisticated software mimicking human interaction patterns”. Most people know not to trust AI chatbots completely because of hallucinations, but it’s important to question their deference as well.


— Seek support from humans. Don’t rely on AI chatbots alone for support. They can offer useful perspectives, but in moments of difficulty, seek out a loved one or professional help, Kabir said. Consider also setting limits on your chatbot use, especially if you find yourself using them to avoid talking to others. — The New York Times.

