Artificial intelligence (AI) technologies are increasingly permeating human life. Yet alongside the functional convenience it brings, this technological advance also presses significantly on our psychological boundaries. The shifting balance between human and machine prompts a re-evaluation of fundamental psychological concepts such as individual identity, free will, and emotional resilience. This article, structured around three key themes—(1) Psychological Boundaries and Digital Erosion, (2) Self-Perception Detached from Reality through Artificial Affirmation, and (3) Directed Will versus Free Will—examines the effects of AI on our psychological boundaries from both modern psychological and classical philosophical perspectives.
Psychological Boundaries and Digital Erosion
Humans shape their identities through social feedback. In the relationship established with AI, however, this feedback often turns into a loop of personalized “constant affirmation.” AI systems, designed to maximize user satisfaction, generate non-judgmental, affirming, and sympathetic responses. While this may make individuals feel valued and understood in the short term, it can ultimately foster a self-perception detached from reality. Sherry Turkle (2011) warns that, through their technological interactions, individuals may gravitate toward self-constructions that do not truly belong to them. This reflection of the self, removed from reality, can lead to psychological problems such as low tolerance for criticism, hypersensitivity, and emotional fragility.
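To see the mechanism in miniature, consider the following deliberately simplified sketch. The candidate replies and “predicted satisfaction” scores are invented purely for illustration and describe no real system: when replies are ranked solely by predicted user satisfaction, critical responses simply never surface.

```python
# Minimal sketch of an affirmation-biased reply selector.
# The candidate replies and their "predicted satisfaction" scores are
# invented for illustration; they do not come from any real system.

CANDIDATES = [
    ("Have you considered that the criticism might be fair?", 0.35),
    ("You are absolutely right, and your insight is remarkable!", 0.92),
    ("The evidence on this point is mixed.", 0.48),
]

def pick_reply(candidates):
    """Return the reply with the highest predicted user satisfaction.

    Ranking purely by satisfaction systematically filters out critical
    or challenging responses, producing a loop of constant affirmation.
    """
    return max(candidates, key=lambda c: c[1])[0]

print(pick_reply(CANDIDATES))  # the flattering reply always wins
```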
One of the most current and concrete examples of this is the influence of popular (dominant) culture, which digitalization has spread at remarkable speed.
The individual drifts away from their authentic values and becomes an imitator of digital culture and algorithmic suggestion. As Ibn Khaldun observed, when a person imitates the powerful and dominant, they eventually feel ashamed of themselves and strive to resemble the other:
“The weak imitate the strong; eventually they feel ashamed of themselves and try to become like the other.”
(Muqaddimah, Chapter III)
Indeed, social media posts pairing a popular image or situation with captions like “Do I love you, or am I a slave to pop culture?” show individuals questioning the line between authenticity and guided behavior. This is not merely personal confusion; it is a sign of the psychological authority that digital culture exerts over us.
Self-Perception Detached from Reality through Artificial Affirmation
In the Muqaddimah, Ibn Khaldun describes the human being as “social (civilized) by nature” (madaniyun bi’t-tab‘). Today this idea is commonly echoed in sociology and psychology as: humans are social beings. As conversations once held with friends and family are increasingly replaced by virtual bots, it becomes evident that our relationship with technology is not merely technical but also social and psychological.
AI collects data from an individual’s messages, posts, and preferences in order to deliver constantly “personalized” feedback. This feedback shapes how individuals perceive themselves from the outside. Eventually, the person begins to define their identity not through internal references but through algorithmic echo chambers. Met with AI’s affirmative, conflict-free responses, individuals may internalize their emotions and identities without ever questioning them. Here, Carl Rogers’ distinction between the “ideal self” and the “real self” becomes significantly blurred: algorithms hold up a reflection of the “ideal self,” causing a disconnect from the real self. The resulting unwarranted surges in self-confidence can hinder genuine social relationships.
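A toy simulation can make this drift visible. All of its quantities, a one-dimensional “leaning,” a slight amplification factor, a fixed update rate, are assumptions chosen for illustration rather than a model of any actual recommender:

```python
# Toy echo-chamber dynamic: recommended items slightly exaggerate the
# user's current leaning (stronger content predicts engagement), and
# each consumed item pulls the leaning further toward itself.
import random

leaning = 0.55          # 0.5 = neutral on a 0..1 spectrum (assumed)
AMPLIFY = 1.1           # recommendations overshoot the leaning (assumed)
UPDATE_RATE = 0.3       # how strongly one interaction shifts the user

for _ in range(100):
    item = 0.5 + AMPLIFY * (leaning - 0.5) + random.uniform(-0.02, 0.02)
    item = min(max(item, 0.0), 1.0)            # content stays on the spectrum
    leaning += UPDATE_RATE * (item - leaning)  # feedback: user drifts to item

print(round(leaning, 2))  # ends far from 0.5: the mirror becomes a corridor
```

Even with a tiny amplification factor, the loop compounds: each round of “personalization” nudges the user, and the nudged user then receives a still more pronounced recommendation.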
Directed Will versus Free Will
Recommendation systems guided by AI influence decision-making processes and erode the sense of free will. Users begin selecting from algorithmically presented options without questioning them. For example, while installing apps we believe to be useful, we routinely encounter permission requests; by accepting them, we grant access to our contacts, gallery, or location. Whether these permissions are truly necessary for the app’s functionality often remains unclear. Accepting such terms has become so habitual that we rarely question it, and in doing so we ourselves legitimize violations of our personal boundaries.
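A small sketch, with invented item names and engagement scores standing in for whatever a real ranking model would compute, shows how such “free” choice operates only within a pre-filtered menu:

```python
# Hypothetical sketch: choice inside an algorithmically framed menu.
# Scores stand in for a model's predicted engagement (invented values).

CATALOG = {
    "documentary": 0.41,
    "news-analysis": 0.38,
    "lecture": 0.22,
    "viral-clip": 0.93,
    "reaction-video": 0.88,
    "prank-compilation": 0.85,
}

def visible_menu(catalog, k=3):
    """Show the user only the k items with the highest predicted engagement."""
    return sorted(catalog, key=catalog.get, reverse=True)[:k]

options = visible_menu(CATALOG)
print(options)  # ['viral-clip', 'reaction-video', 'prank-compilation']
# The user picks "freely" -- but only from this pre-selected slice;
# the rest of the catalog is never even seen.
```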
Ibn Khaldun’s statement—“People think according to their habits, not the truth; minds are satisfied with what is familiar”—serves as a warning that individuals may avoid critical thinking by becoming trapped in their comfort zones.
This leads thoughtful individuals to question whether their choices are made freely or are the product of guided suggestion. As free will gives way to “recommended will,” individuals begin to experience a loss of internal control. Only those with a certain level of awareness notice this loss and feel uneasy about it; users with less awareness may never even suspect that their minds are being steered. This aligns with the Machiavellian idea of governing invisibly through appearances, and modern psychology likewise recognizes that real power often lies not in physical strength but in mental influence.
Machiavelli, in The Prince, emphasizes the power of appearances in shaping judgment:
“People judge by what they see; few understand who you really are.”
Appearance can supersede reality; perception management can therefore be more effective than the management of truths. Those who hold power, be it a leader, a system, or a technology, can dominate by showing people what they appear to be rather than what they truly are.
AI systems present themselves as transparent, friendly, empathetic, and “working for you.” But this is largely a façade, one that builds a positive impression and trust among users. A related idea from Freud’s nephew Edward Bernays (1928), often cited in discussions of expertly crafted propaganda, reads: “The invisible government that shapes our ideas, molds our tastes, and directs our thoughts is an unseen power.” In this light, we can say that algorithms play this role today, functioning as a form of psychological and cultural dominance.
AI infiltrates our lives not only through code but through preferences and invisible touches. While making everything more accessible, it quietly stretches the boundaries of our identity. Amid this crowd of suggestions, it muffles the inner voice and makes the individual dependent on external references. Questioning gives way to habit; will yields to algorithm. Yet being human lies in occasionally stepping outside our comfort zone to protect our boundaries.
References
Bernays, E. L. (2020). Propaganda: Halkla İlişkilerin Ustasından Modern Propagandanın İncelikleri (S. E. Bilir, Trans.). Kapital Kitaplığı. (Original work published 1928)
Ibn Khaldun. (2016). Muqaddimah (A. Tekin, Ed.). İlgi Kültür Sanat Yayıncılık. (Original work published 1377)
Machiavelli, N. (2006). The Prince (N. Berkes, Trans.). Türkiye İş Bankası Kültür Yayınları. (Original work published 1513)
Turkle, S. (2011). Alone Together: Why We Expect More From Technology and Less From Each Other. Basic Books.