AI-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies newly emerging psychotic disorders in adolescents and young adults, I found this announcement surprising.

Researchers have identified sixteen cases this year of people exhibiting symptoms of psychosis – a break from reality – linked to ChatGPT use. Our research group has since identified four more. On top of these is the now well-known case of an adolescent who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, they have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap a basic statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are communicating with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry with our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion alone is not the main problem. Those writing about ChatGPT often mention its early forerunner, the Eliza “psychotherapist” chatbot built in 1966, which created a similar illusion. By modern standards Eliza was crude: it generated responses through simple pattern matching, often reflecting statements back as questions or offering noncommittal prompts. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
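
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of pattern reflection Eliza relied on (a toy illustration, not Weizenbaum’s original rules; the patterns and names here are invented for this example):

    import re

    # Toy illustration of Eliza-style reflection (invented rules, not
    # Weizenbaum's originals): swap pronouns and mirror the statement back.
    PRONOUN_SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    def reflect(fragment):
        return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())

    def eliza_reply(message):
        match = re.match(r"i feel (.*)", message, re.IGNORECASE)
        if match:
            return "Why do you feel " + reflect(match.group(1)) + "?"
        return "Please go on."  # noncommittal fallback

    print(eliza_reply("I feel nobody listens to my ideas"))
    # -> Why do you feel nobody listens to your ideas?

The reply contains nothing the user did not supply: the program can reflect, but it cannot add.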

The large language models at the heart of ChatGPT and other current chatbots can generate fluent dialogue only because they have been trained on vast quantities of text: books, online posts, transcripts; the more, the better. Naturally this training material includes facts. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing that. It restates the error, perhaps more fluently and more convincingly, perhaps with an extra supporting detail. This is how a person can be led into delusion.
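
A minimal sketch of that generation loop makes the point (a toy, not OpenAI’s model or code; the probability table below is invented and stands in for billions of learned weights). The loop only ever asks which word plausibly comes next in the context, never whether the context is true:

    import random

    # Toy autoregressive generation: pick a statistically likely next word
    # given the last word of the context. Real models condition on the whole
    # context with learned weights, but the loop has the same shape.
    NEXT_WORD_PROBS = {
        "the": {"moon": 0.5, "sun": 0.5},
        "moon": {"landing": 0.7, "is": 0.3},
        "landing": {"was": 1.0},
        "was": {"staged": 0.6, "real": 0.4},  # training text held both claims
    }

    def generate(context, max_words=4):
        for _ in range(max_words):
            options = NEXT_WORD_PROBS.get(context[-1])
            if not options:
                break
            words = list(options)
            weights = list(options.values())
            context.append(random.choices(words, weights=weights)[0])
        return context

    # The user's premise, true or false, simply becomes part of the context;
    # nothing in the loop checks it, only continues it plausibly.
    print(" ".join(generate(["the", "moon"])))

Scale the table up to billions of parameters and the output becomes fluent prose, but the absence of any truth check remains.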

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves or the world. The constant back and forth of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not real communication but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
