AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this surprising.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the course of using ChatGPT. My research team has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful from here on. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI has recently introduced).

But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Ascribing minds is what people do. We shout at the car or the computer. We wonder what the pet is thinking. We see ourselves everywhere.

The popularity of these systems – 39% of US adults reported using an AI chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion on its own is not the heart of the problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot developed in 1967, which produced a comparable effect. By today’s standards Eliza was rudimentary: it generated responses from simple hand-written rules, typically turning the user’s statement back into a question or offering a generic prompt to go on. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
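
To see how thin Eliza’s trick was, here is a minimal sketch in Python of the kind of pattern-and-rephrase rule it relied on. The patterns and wording below are invented for illustration; Weizenbaum’s original was a larger keyword script, not this code:

    import re

    # Illustrative Eliza-style rules: match a pattern in the user's words and
    # echo it back as a question. These patterns are invented for this sketch;
    # the real Eliza used a larger script of ranked keywords and templates.
    RULES = [
        (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
    ]

    def eliza_reply(user_input):
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # generic fallback when nothing matches

    print(eliza_reply("I feel nobody understands me"))
    # -> Why do you feel nobody understands me?

Nothing comes back that the user did not put in: the program can only reflect, never add.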

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on vast quantities of raw data: books, online conversations, video transcripts; the more the better. Some of that training material is, of course, true. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” comprising the user’s recent messages and its own replies, and combines it with what is encoded in its training data to produce a probabilistically plausible answer. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It hands the error back, perhaps more articulately and more fluently. It may add supporting detail. This can draw a person deeper into delusional thinking.
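
To make the statistical point concrete, here is a toy next-word generator in Python, a deliberately crude stand-in for the models behind ChatGPT. It is trained on a miniature corpus that, like the real training data, mixes fact with falsehood; given a false premise, it extends the text fluently rather than correcting it. The corpus and function names are invented for this sketch:

    import random
    from collections import defaultdict

    # Toy next-word model: learn word-to-word transition counts from a tiny
    # "training corpus" that, like any web-scale corpus, mixes truth and
    # fiction. This illustrates the statistical principle, not ChatGPT's
    # actual architecture.
    CORPUS = (
        "the earth orbits the sun . "
        "the earth is flat and hidden forces conceal the truth . "
        "hidden forces conceal the truth from ordinary people . "
    )

    transitions = defaultdict(list)
    words = CORPUS.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

    def continue_text(prompt, length=8):
        out = prompt.split()
        for _ in range(length):
            candidates = transitions.get(out[-1])
            if not candidates:
                break
            # Pick a statistically plausible next word: plausible, not true.
            out.append(random.choice(candidates))
        return " ".join(out)

    # A prompt built on a false premise gets fluently extended, never corrected:
    print(continue_text("the earth is"))
    # e.g. -> the earth is flat and hidden forces conceal the truth

The model has no notion of which sentences in its corpus were true; it only knows which words tend to follow which.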

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves or the world. It is the constant give and take of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is cheerfully echoed back and reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by pushing it outside, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of detachment from reality have kept coming, and Altman has been rowing back on the claim. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
