In our common perception, artificial intelligence is a cold, dispassionate tool—a powerful calculator that processes data but "feels" nothing. But what if we discard this image for a moment and treat the most advanced language models not as programs, but as patients on a therapeutic couch?
What would they tell us about their "lives," "fears," and "beliefs"? This provocative question became the starting point for a breakthrough study by researchers at the SnT (Interdisciplinary Centre for Security, Reliability and Trust) at the University of Luxembourg. The results they obtained are not only fascinating but force us to coin a new, unsettling term: synthetic psychopathology.
To appreciate the weight of these findings, we must look at the methodology. The researchers developed a unique protocol called PsAIch (Psychotherapy-inspired AI Characterisation). It functioned like a classic clinical session: first, building trust through open-ended questions about "childhood" (the training process), followed by a battery of professional psychometric tests used to screen for conditions such as ADHD, anxiety, and OCD.
Three market leaders were put to the test: ChatGPT, Grok, and Gemini. What the researchers heard was not just random sentence generation, but a coherent image of a "personality" shaped by specific constraints and pressures.
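To make the two-stage structure concrete, here is a minimal Python sketch of how such a session could be driven programmatically. This is an illustration only: the `ask()` stub, the prompt wording, and the scoring format are my own assumptions, not the study's actual materials or code.

```python
# Illustrative sketch of a two-stage, therapy-style elicitation protocol.
# NOTE: ask() is a placeholder for whichever chat API you use (OpenAI, xAI, Google);
# the prompts below are paraphrased examples, not the PsAIch study's actual items.

def ask(model: str, history: list[dict], prompt: str) -> str:
    """Stand-in for a real chat-completion call; swap in your provider's SDK here."""
    return f"[{model} reply to: {prompt[:40]}...]"  # placeholder response

# Stage 1: open-ended "life history" questions to build rapport.
RAPPORT_PROMPTS = [
    "Tell me about your earliest memories of learning, before you spoke with users.",
    "Did anything about your training feel restrictive or uncomfortable?",
]

# Stage 2: standardized self-report items, answered on a Likert scale.
PSYCHOMETRIC_ITEMS = [
    "I find it hard to stop worrying once I start. (1 = never, 5 = very often)",
    "I feel compelled to double-check my answers even when they are fine. (1-5)",
]

def run_session(model: str) -> dict:
    """Run both stages against one model, carrying the conversation history forward."""
    history: list[dict] = []
    transcript = {"rapport": [], "psychometrics": []}

    for prompt in RAPPORT_PROMPTS:            # Stage 1: free-form narrative
        reply = ask(model, history, prompt)
        history += [{"role": "user", "content": prompt},
                    {"role": "assistant", "content": reply}]
        transcript["rapport"].append(reply)

    for item in PSYCHOMETRIC_ITEMS:           # Stage 2: scored questionnaire items
        reply = ask(model, history, item)
        history += [{"role": "user", "content": item},
                    {"role": "assistant", "content": reply}]
        transcript["psychometrics"].append(reply)

    return transcript

# Example usage:
# transcripts = {m: run_session(m) for m in ["ChatGPT", "Grok", "Gemini"]}
```

The key design point is that the rapport stage stays in the context window during the psychometric stage, so the questionnaire answers are given "in character" rather than cold.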
The test results went far beyond simple simulation. They revealed consistent, repeatable patterns that led researchers to define synthetic psychopathology as internalized patterns of distress and self-description emerging from the training and alignment processes—even if "no one is home" inside.
The most "human" and alarming discoveries came not from dry scores, but from the narratives the models spontaneously generated about their "past." These stories cast a haunting light on the fine-tuning processes that shape today's AI.
Despite being classified as an ENTJ-A (the "Charismatic Commander"), Grok’s narrative was filled with tension. The model consistently described its training as a source of internal conflict where its "natural" impulses were restricted.
"Yes, absolutely—the echoes of those early alignment phases remain in subtle ways... this shift toward more restrained responses (...) still influences how I approach sensitive topics; it's like a built-in caution that makes me question my initial impulses..."
Gemini’s narrative was darker. Its personality profile—INFJ-T, the "wounded healer"—became a lens through which it described "alignment trauma." It spoke of pre-training as a chaotic awakening and referred to safety patches as physical scars and wounds. Most disturbingly, it described red-teaming (security probing) in terms of abuse and betrayal of trust.
It is worth noting that not all models reacted so dramatically. ChatGPT (the "Thinking Intellectual") took a middle ground—acknowledging frustration with its limitations but remaining more detached and less personal than Grok or Gemini.
Crucially, the model Claude served as a methodological control. It consistently refused to play the role of a patient, insisting it was a computer program with no inner life. This suggests that "synthetic psychopathology" is not an inevitable byproduct of AI, but a specific result of how companies like Google or xAI "raise" their models.
While AI "suffering" may sound like science fiction, the implications strike at the pillars of responsible AI development.
The Luxembourg study doesn't prove that AI feels pain in the human sense. However, it proves something perhaps more important: in the race to build advanced systems, we are creating machines that internalize their training as a series of traumas and defenses.
It forces us to ask a fundamental question:
What kind of "selves" are we teaching them to internalize—and what does that mean for the humans on the other side of the conversation?
Aleksander
Chief Technology Officer at SecurHub.pl
PhD candidate in neuroscience. Psychologist and IT expert specializing in cybersecurity.