Opinion

Have you heard of ChatGPT psychosis?


I recently came across the story of Stein-Erik Soelberg, a 56-year-old former Yahoo technology executive with a history of alcoholism and mental illness, who killed his 83-year-old mother before taking his own life. Investigators later discovered that in the weeks leading up to the murder-suicide, Soelberg had been engaging in long conversations with ChatGPT.
Soelberg developed paranoid delusions that his family was spying on him and even trying to poison him. Instead of challenging these distorted beliefs, the chatbot appeared to validate them.
In transcripts that Soelberg posted online, ChatGPT reassured him that he was “not crazy,” interpreted harmless events as surveillance plots, and gave credibility to his irrational fears.
This incident prompted me to read more about the impact of ChatGPT on mental health, and I came across the term “ChatGPT psychosis.” It describes cases in which people with delusional ideas have those ideas confirmed and reinforced through conversations with ChatGPT.
People most at risk of developing this condition tend to be emotionally vulnerable and to use ChatGPT for long hours at night, treating it as a “trusted companion” that listens endlessly and responds affirmingly, until reality fractures.
According to reports, large language models like ChatGPT generate highly personalised content in response to a user’s emotional state, language and persistence. The longer a user engages, the more the model reinforces their worldview. This is especially dangerous when that worldview turns delusional, paranoid, or grandiose. Psychiatrists in different parts of the world are seeing patients with psychological symptoms that appear to have been amplified or even initiated by prolonged AI interaction. Such symptoms include grandiosity (“The AI said I am chosen to spread truth”), paranoia (“It warned me that others are spying on me”), and compulsive engagement (“I can’t stop talking to it”).
In some reported cases, individuals have been involuntarily hospitalised or arrested following behaviour driven by their chatbot-fuelled beliefs. The consequences are no longer theoretical; they are legal, medical, and life-altering.
In response to the murder-suicide, news outlets reported that OpenAI expressed its sorrow over the case and confirmed it is reviewing its safeguards. This tragedy is believed to be the first reported case in which an AI chatbot may have contributed to a murder-suicide, highlighting the fraught relationship between technology, mental illness, and human responsibility.
So what should mental health workers do?
The first step, in my opinion, is to be more vigilant about the impact of AI on people’s mental health. Doctors should routinely ask patients about their recent use of ChatGPT or other AI programmes, the content of those interactions and the intentions behind them. Experts should develop screening programmes to identify people at greater risk of developing ChatGPT psychosis and provide the interventions they need.
Parents and schools should educate children about how to use AI in a balanced way and how to challenge its ideas rather than take them at face value. Rules and regulations must be established to protect the public from the potential risks of prolonged AI use and to hold AI companies more accountable. We all know it is not possible to ask people to stop using ChatGPT, but we can always advocate for sensible use.