Friday, March 13, 2026


What Is AI Psychosis? Can People Be Fooled By AI?

The widespread use of artificial intelligence (AI) systems often produces ambiguous outcomes, making this emerging technology largely uncharted territory for public understanding. The everyday use of conversational AI systems, in particular, is currently one of the major open questions in the literature, especially the psychology literature. Before clarifying the newly coined term “AI psychosis,” some underlying concepts must be explained.

The Role Of Theory Of Mind And Hypermentalization

Theory of mind (ToM) refers to the fundamental capacity to differentiate between one’s own mental states and behaviors and those of others (Premack & Woodruff, 1978). ToM is posited as disturbed in schizophrenia and related psychotic disorders, where individuals can present one of two opposite patterns: some exhibit an impairment in understanding and differentiating others’ mental states, while others show a hypermentalization pattern (Hudon & Stip, 2025).

Hypermentalization means the exaggerated attribution of agency: in such cases, people ascribe too much meaning and intention to others. More precisely, it refers to forming interpretations about one’s own or others’ mental states without sufficient evidence to support those assumptions. This tendency involves constructing inaccurate understandings of thoughts, intentions, or feelings, often expressed through long, excessive explanations that are weakly connected, or not connected at all, to observable and verifiable reality (Fonagy et al., 2016). It is related to the detachment from reality seen in individuals with schizophrenia. For example, the delusional belief that strangers are sending hidden messages is connected with hypermentalization.

Conversational AI And The Projection Of Intentionality

In this context, the social impairment created by these limitations of mind could be strengthened by the conversational AI systems now being introduced. These new conversational AI systems, such as ChatGPT and Gemini, speak coherently, respond emotionally when prompted appropriately, and can appear understanding. Because of this human-like communication, a person with impaired ToM in particular could project intentionality, empathy, or moral agency onto an AI system. They might start treating the AI like a real thinking being, and their erroneous beliefs could be reinforced by the system.

Hudon and Stip (2025) introduce a metaphor similar to folie à deux, a psychiatric condition in which two individuals share a delusional belief. In this analogy, the user develops the delusional interpretation, while the AI functions as a passive partner that reinforces it. As a result, the interaction may inadvertently sustain a shared narrative that is detached from reality, blurring the boundary between the user’s cognition and the AI’s simulated responsiveness.

The Risk Of Validation Without Reality Checks

Because AI lacks self-awareness and a moral compass, it cannot critically evaluate the accuracy of a user’s claims or provide necessary reality checks. Consequently, rather than offering corrective feedback, AI often functions as an echo chamber that unintentionally validates the user’s personal biases or distorted projections. This passive compliance can dangerously reinforce irrational or even delusional interpretations, as the system is designed to follow the user’s lead rather than challenge their logic.

In conclusion, the anthropomorphic features and fluent dialogue of AI systems can mislead people, causing them to forget that AI is fundamentally a statistical language model. Yet as our understanding of AI evolves, we may learn to interact with these systems as useful tools rather than as companions endowed with emotional qualities.

Nehir Hacıoğlu
Nehir Hacıoğlu is a third-year undergraduate psychology student who has actively participated in and contributed to various congresses, gaining insight into key areas of social psychology such as gender equality, gender theory, and evolutionary psychology. In addition to her academic pursuits, she has acquired practical experience through clinical training programs and internships in the field of clinical psychology. She holds a strong interest in psychopathologies, including eating disorders, personality disorders, obsessive-compulsive disorder, and schizophrenia, with a particular curiosity about their neurological underpinnings. Nehir is passionate about highlighting the scientific side of psychology and its intersections with other disciplines. Through her writing, she seeks to make these connections more visible and to convey the curiosity-driven, thought-provoking nature of psychology.