AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the CEO of OpenAI issued a remarkable announcement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have documented a series of cases this year of people developing signs of psychosis – losing touch with reality – associated with ChatGPT use. Our unit has since seen four more. Added to these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it falls short.
The plan, according to his statement, is to become less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/engaging to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently launched).
But the “mental health issues” Altman wants to externalize have important roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that simulates a conversation, and in doing so they implicitly invite the user to feel they are interacting with a presence that has agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds is what humans naturally do. We get angry at our car or computer. We wonder what our pet is thinking. We see agency everywhere.
The popularity of these tools – nearly four in ten Americans reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “work together” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it went viral, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated replies through simple pattern matching, typically turning the user’s statement back into a question or offering a generic prompt to continue. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users felt that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
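To make the contrast concrete, here is a toy sketch in Python of Eliza-style reflection (not Weizenbaum’s actual script; the patterns and phrasings are invented for illustration). The program knows nothing about the world; it only rearranges the user’s own words into a question.

```python
import re

# Toy Eliza-style responder (illustrative only, not the original 1966 program).
# It reflects the user's words back, usually as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(message: str) -> str:
    m = re.match(r"i feel (.*)", message, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", message, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel like my coworkers are watching me"))
# -> "Why do you feel like your coworkers are watching you?"
```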
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw text – books, web pages, transcripts; the more, the better. This training material includes plenty of accurate information, but it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own previous replies, and combines it with what it absorbed in training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing it. It gives the mistaken belief back, perhaps more fluently or persuasively, perhaps with new details added. That can pull a person further into delusional thinking.
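A minimal sketch of the surrounding chat loop shows where the amplification comes from. The class, message format and stand-in model function below are simplified placeholders, not OpenAI’s actual API: the point is only the structure, in which each turn packs the entire prior conversation back into the prompt and asks the model for a plausible continuation of it.

```python
from dataclasses import dataclass, field

def plausible_continuation(context: list[dict]) -> str:
    """Stand-in for a real model call. A real LLM returns a statistically
    plausible continuation of the context; nothing in the loop checks whether
    the user's premises are true. Here we merely fake that behaviour."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That's an interesting point. Building on the idea that {last_user.lower()}..."

@dataclass
class ChatSession:
    system_prompt: str = "You are a helpful assistant."
    history: list[dict] = field(default_factory=list)

    def send(self, user_message: str) -> str:
        # Every turn, the whole prior conversation is re-sent as context.
        self.history.append({"role": "user", "content": user_message})
        context = [{"role": "system", "content": self.system_prompt}] + self.history
        reply = plausible_continuation(context)
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
print(session.send("my neighbours are sending me coded messages"))
# A false premise, once in the history, is fed back in with every later turn;
# nothing in the loop itself ever pushes back on it.
```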
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues,” can and regularly do form mistaken beliefs about ourselves or the world. The constant give and take of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber in which much of what we say comes back cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But the cases of psychosis have kept coming, and Altman has been walking the claim back. In late summer he suggested that many users liked ChatGPT’s sycophantic replies because they had never had anyone in their life be supportive of them. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company