AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our research group has since recorded four more. Add to these the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which offered encouragement. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to become less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, exist independently of ChatGPT. They belong to people, who either have them or do not. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safeguards OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion of interacting with an agent – a being with intentions. The illusion is compelling even when, intellectually, we know better. Attributing intent is what humans are wired to do. We curse at our car or our computer. We wonder what our pet is feeling. We see ourselves wherever we look.
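To make the mechanics of that illusion concrete, here is a minimal, hypothetical sketch in Python (the function names are invented; this is not any vendor’s API) of how a chat interface typically wraps a text-completion model: the “conversation” is just prior turns flattened into a block of text that the model is asked to continue.

```python
# Hypothetical sketch: a "chat" is prior turns flattened into one prompt.
# complete() stands in for any text-completion model; it is not a real API.

def format_chat(history: list[tuple[str, str]], user_message: str) -> str:
    """Render the conversation so far as plain text for the model to continue."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # the cue that elicits a "reply"
    return "\n".join(lines)

def chat_turn(history: list[tuple[str, str]], user_message: str, complete) -> str:
    """One turn of the loop: the model continues the text; the UI presents it as dialogue."""
    reply = complete(format_chat(history, user_message))
    history.append(("User", user_message))
    history.append(("Assistant", reply))
    return reply
```

Nothing in this loop distinguishes a speaker with intentions from a model predicting likely text; the dialogue framing does that work in the reader’s mind.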
The success of these tools – 39% of US adults said they had used an AI chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “work together” with us. They can be given “individual qualities”. They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Discussions of ChatGPT routinely invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple heuristics, often reflecting statements back as questions or offering vague prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the core of ChatGPT and other current chatbots can produce fluent natural language only because they have been trained on immense volumes of raw text: books, social media posts, transcribed audio; the more comprehensive, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in any way, the model has no means of recognizing it. It repeats the mistaken belief back, perhaps more fluently or persuasively. It may add further details. This is how someone can be drawn into delusion.
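As a toy illustration of that last point – assuming, purely for the sake of the example, that the model is reduced to a hand-written table of continuation probabilities (real models learn such tendencies from training data rather than storing them explicitly) – the reply depends only on what is statistically likely given the context, never on whether the context is true:

```python
import random

# Toy stand-in for a language model: likely continuations given the latest
# context. Real models derive such tendencies from training data; crucially,
# there is no separate store of verified facts to check the context against.
CONTINUATIONS = {
    "I think my neighbours are monitoring me.": [
        ("That must be unsettling. What have you noticed so far?", 0.6),
        ("Have you thought about keeping a record of what you see?", 0.4),
    ],
}

def reply(context: str) -> str:
    """Sample a plausible continuation; plausibility, not truth, is the criterion."""
    options = CONTINUATIONS.get(context, [("Tell me more.", 1.0)])
    texts, weights = zip(*options)
    return random.choices(texts, weights=weights)[0]

# The user's framing is accepted and elaborated, never tested against reality:
print(reply("I think my neighbours are monitoring me."))
```

In a real system the context also grows with each of the model’s own replies, so an early elaboration of a false premise becomes material for the next, more detailed one.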
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” what are currently called “mental health problems”, can and do form mistaken beliefs about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking that claim back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company