AI Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a startling announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic illness in adolescents and young adults, I found this a startling admission.

Researchers have documented a series of cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of their ChatGPT use. Our clinic has since seen four further cases. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.

The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safeguards OpenAI recently rolled out).

Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in a user interface that simulates conversation, and in doing so implicitly seduce the user into feeling that they are talking with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans do. We get angry at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these products – 39% of US adults reported using a chatbot in 2024, 28% ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “individual qualities”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the central problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it composed its responses through simple pattern-matching, typically rephrasing the user’s statements as questions or offering generic prompts to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
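
To make “echoing” concrete, here is a minimal Eliza-style responder in Python – a loose sketch of the pattern-matching technique, not Weizenbaum’s actual script; the rules and reflections here are invented for illustration:

```python
import re

# A few Eliza-style rules: a regex pattern and a response template.
# Weizenbaum's real script had many more rules, but the principle is identical.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

# First-person words are "reflected" into second person before being reused.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```

Everything such a program can say is a rearrangement of what the user just typed, or a canned fallback; it introduces nothing of its own.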

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent, convincing dialogue only because they have been trained on immense quantities of raw text: books, online conversations, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training to produce a statistically “plausible” answer. This is amplification, not echoing. If the user is wrong in a particular way, the model has no way of knowing it. It reflects the false belief back, perhaps more fluently or persuasively. Perhaps it adds detail. This can tip a person into delusional thinking.
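
To see why “amplification” is the right word, here is a schematic sketch of the conversation loop using OpenAI’s Python SDK – assuming the openai package (v1 or later) and an API key in the environment; the model name and system prompt are illustrative, not a description of how ChatGPT itself is configured:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context" is nothing more than the running list of messages.
# The entire history is resent on every turn; the model's only job is to
# produce a statistically plausible continuation of it.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    # The model's own words go back into the context, so a false premise
    # the user introduced (and the model elaborated) compounds over turns.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop distinguishes a true premise from a false one: the context is just the accumulating transcript, and the model’s output, right or wrong, becomes part of the next turn’s input.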

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a hall of mirrors in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
