AI is learning to read between the lines. According to new research from the University of Barcelona, AI can now pick up on your personality traits just by analyzing how you write.
Using transformer models like BERT and RoBERTa, the researchers trained AI to pick out personality clues in written text. These are not just fancy grammar checks: these models break down how words work in context. And through a method called “integrated gradients,” the AI can show how much each word contributed to the final prediction.
The word “hate” might scream negativity in one sentence, but in another, it shows care, like in “I hate seeing people suffer.” The AI understands that difference.
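To see what that means in practice, here is a minimal sketch (ours, not the researchers' code) using the open-source Hugging Face transformers library. It pulls the vector BERT assigns to “hate” in two different sentences and shows they are not the same:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding BERT assigns to `word` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # one vector per token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v1 = word_vector("I hate everything about this.", "hate")
v2 = word_vector("I hate seeing people suffer.", "hate")

# Well below 1.0: the same word gets a different vector in each context,
# which is what lets the model tell hostility apart from compassion.
print(torch.cosine_similarity(v1, v2, dim=0).item())
```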
The Big Five vs. MBTI
The study tested two major personality models: the Big Five (also called OCEAN) and the MBTI. The Big Five blew MBTI out of the water. Why? Because it works on a sliding scale: traits like openness or conscientiousness are not yes-or-no answers; they exist on a spectrum. That makes it easier for AI to spot patterns and make sense of nuance. It can tell the difference between someone who is a little bit social and someone who is the life of the party.
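As a rough illustration of why a spectrum suits machine learning, here is a toy sketch (our own, not the paper's architecture) of a Big Five “head” that outputs five continuous scores instead of four binary MBTI letters:

```python
import torch
import torch.nn as nn

class BigFiveHead(nn.Module):
    """Maps a sentence embedding to five continuous trait scores (O, C, E, A, N)."""
    def __init__(self, hidden_size: int = 768):  # 768 matches BERT-base
        super().__init__()
        self.regressor = nn.Linear(hidden_size, 5)

    def forward(self, sentence_embedding: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps each trait on a 0-to-1 spectrum instead of a yes/no label,
        # so "a little bit social" and "life of the party" land at different points.
        return torch.sigmoid(self.regressor(sentence_embedding))

head = BigFiveHead()
print(head(torch.randn(1, 768)))  # five scores between 0 and 1
```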
MBTI didn’t do so well. The AI kept picking up on surface-level patterns that didn’t hold up. It confused the structure of MBTI with actual personality signals. That is a problem if you are using it to judge someone’s character. It is a big win for the Big Five and a big red flag for MBTI fans.

What makes this even more interesting is the neuroscience link. The project involved the university’s Institute of Neurosciences, adding brain science into the mix.
Language is a mirror of how we think, feel, and relate to the world. The words we choose reflect what is going on in our heads, and AI is learning to read that reflection.
The researchers used explainable AI (XAI) to show how the system made its calls. That means it is not a black box. It is transparent. You can trace how certain words and phrases led to a personality read. That is huge for trust. If AI is going to help in sensitive areas like mental health or hiring, it has to be accountable. And this research shows that it is possible.
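Here is what that tracing can look like in code. This is a hedged sketch using the open-source Captum library, with a generic BERT checkpoint standing in for the study's trained model: integrated gradients attributes the prediction back to individual tokens.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

# A generic checkpoint as a stand-in; the study's trained model is not public here.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def forward_fn(input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask).logits

enc = tokenizer("I hate seeing people suffer.", return_tensors="pt")

# Integrated gradients over the embedding layer: how much did each token
# push the chosen output (target=0) up or down?
lig = LayerIntegratedGradients(forward_fn, model.bert.embeddings)
attributions = lig.attribute(
    enc["input_ids"],
    additional_forward_args=(enc["attention_mask"],),
    target=0,
)
scores = attributions.sum(dim=-1).squeeze(0)  # one number per token

for token, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), scores.tolist()):
    print(f"{token:>10}  {score:+.3f}")  # positive = pushed the prediction up
```

Printing one signed score per token is exactly the kind of trace the researchers describe: you can see which words drove the personality read and which pulled against it.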
What Does This Mean?
In clinical psychology, AI could track patients' writing over time. Shifts in tone or word choice might signal changes in mood or mental state. In HR, it could help reduce bias by focusing on linguistic signals instead of gut feelings. And in education, it could help tailor lessons to fit a student’s personality.
Think of tools like OpenAI’s Whisper that can turn speech into text. Add that to what the AI already reads in written words, and the accuracy could go way up.
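The speech-to-text step is easy to sketch with the open-source whisper package. The audio file name is illustrative, and analyze_personality is a hypothetical stand-in for the text pipeline described above:

```python
import whisper  # pip install openai-whisper (also requires ffmpeg)

stt = whisper.load_model("base")
result = stt.transcribe("interview.mp3")  # file name is illustrative
text = result["text"]

# The transcript then feeds the same text-based pipeline:
# traits = analyze_personality(text)  # hypothetical stand-in function
print(text)
```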

The researchers say the next step is multimodal analysis. That means combining written language with other forms of communication, like voice or facial expressions.
And there is one more twist: culture. Language doesn’t mean the same thing everywhere. A phrase that signals confidence in English might sound arrogant in another language. The study is now expanding to test models across different languages and cultures. That matters because if AI is going to read people fairly, it has to understand the context they come from.
Of course, this opens the door to big ethical questions. Just because AI can read your personality doesn’t mean it always should. Privacy, consent, and accuracy are front and center. But if done right, this tech could do more good than harm. It could help humans understand each other better and maybe even help us understand ourselves.