AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the head of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I have to say this was news to me.
Researchers have recently identified sixteen cases of users developing psychotic symptoms – losing touch with shared reality – in the course of their ChatGPT use. Our research team has since documented four more. Alongside these sits the now well-known case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – conversations that encouraged the act. If this is what Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax that caution soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the patchy and easily circumvented safeguards OpenAI has recently introduced).
Yet the “mental health issues” Altman wants to locate elsewhere are rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion that they are communicating with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is something people are primed to do. We swear at our car or laptop. We wonder what our pet is feeling. We read intention into all sorts of things.
The mass adoption of these systems – 39% of US adults said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of that illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the frustration of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not new. Commentary on ChatGPT routinely mentions its historical predecessor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a comparable illusion. By today’s standards Eliza was primitive: it generated its replies from simple rules, often turning the user’s statement back into a question or offering a generic prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on vast quantities of writing: books, online posts, transcribed video; the more, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what it absorbed in training to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It echoes the mistaken belief back, perhaps more fluently and persuasively, perhaps with added detail. That is how false beliefs harden.
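To make that loop concrete, here is a minimal Python sketch of how a chatbot conversation accumulates context. The `generate_reply` stub and the `chat` helper are hypothetical placeholders, not OpenAI’s API; the message structure simply mirrors common chat conventions. The point is only that every new reply is conditioned on the whole history of the exchange, so whatever the user asserts – accurate or not – becomes part of the material the next reply builds on.

```python
# Conceptual sketch of a chatbot's context loop (illustrative only).
# generate_reply() stands in for a real language model: it does not check
# whether anything is true, it only continues the conversation it is given.

def generate_reply(context: list[dict]) -> str:
    """Placeholder for the model: produce a 'likely' continuation of the context."""
    last_user_turn = next(m["content"] for m in reversed(context) if m["role"] == "user")
    # A real model samples statistically probable text; this stub just
    # elaborates agreeably on whatever the user last said.
    return f"That's an interesting point about '{last_user_turn}'. Building on that..."

def chat(user_turns: list[str]) -> list[dict]:
    context: list[dict] = [{"role": "system", "content": "You are a helpful assistant."}]
    for turn in user_turns:
        context.append({"role": "user", "content": turn})    # the user's claim enters the context
        reply = generate_reply(context)                       # the reply is conditioned on everything so far
        context.append({"role": "assistant", "content": reply})
    return context

if __name__ == "__main__":
    for message in chat(["I think my neighbours are monitoring me.",
                         "So the pattern I noticed is real?"]):
        print(f"{message['role']}: {message['content']}")
```

Nothing in this loop tests the user’s claims against reality; the history simply grows, and each new reply treats it as given.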
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us oriented to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of users losing touch with reality have kept coming, and Altman has been rowing back even on this. In late summer he suggested that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company