
Recent studies have shown that modern artificial intelligence can create faces that are increasingly perceived as real. People not only mistake these images for genuine ones but often rate them as more "plausible" than real photographs. The research points to a key reason for this: AI generates not unique faces but statistically "ideal" variants.
Researchers focused on how people recognize generated portraits and why the task proves so difficult. At the center of attention was a small group of "super-recognizers": individuals with exceptional memory for faces and outstanding facial recognition abilities.
The study involved 36 such experts and 89 volunteers with high scores on perception tests. Participants were shown 200 images: half created by a neural network, half real photographs. The two sets were matched on attributes such as gender and facial expression so that these could not serve as cues.
The results were striking. Ordinary participants could hardly distinguish "fakes" from originals, with accuracy close to chance. Super-recognizers performed significantly better, but their accuracy was still only 57%.
This indicates that the task remains challenging, even for highly skilled experts.
Moreover, researchers noted an important pattern: individuals who are good at recognizing real faces also show an increased ability to identify artificial images. There is a consistent link between these skills, suggesting that AI portrait recognition is based not on searching for technical flaws but on deeper mechanisms of facial perception.
An interesting effect emerged in group evaluations. When eight super-recognizers pooled their opinions, accuracy rose significantly. In the control group, the "wisdom of the crowd" did not work, suggesting that the experts' advantage lies partly in calibration: their confidence tracks their actual accuracy, so they have a more realistic sense of when they might be mistaken.
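The gain from pooling can be estimated with a simple model. Assume the judges vote independently, each is correct with the study's reported 57% probability, and a simple majority decides (ties broken by a coin flip); the independence assumption and the tie-breaking rule are simplifications for illustration, not details from the study:

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a simple majority of n independent judges,
    each individually correct with probability p, reaches the right
    answer; ties (possible when n is even) are broken by a coin flip."""
    q = 1.0 - p
    acc = sum(comb(n, k) * p**k * q**(n - k) for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        acc += 0.5 * comb(n, n // 2) * p**(n // 2) * q**(n // 2)
    return acc

print(f"single judge:  {0.57:.0%}")
print(f"pool of eight: {majority_accuracy(0.57, 8):.0%}")  # roughly 65%
```

Under these assumptions, eight 57%-accurate judges voting by majority reach about 65% accuracy: a modest edge per judge compounds once their errors are not perfectly correlated, which is exactly where calibration matters.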
To gain a deeper understanding of the differences, scientists analyzed images using neural networks trained for facial recognition. This allowed them to create a map of the so-called "face space" — a multidimensional model where each face is represented by a set of features.
It turned out that real faces are distributed unevenly and diversely in this space. They differ from each other by numerous small and unique details, while generated images are concentrated closer to the center — in the area of the "average" face.
Thus, AI tends to produce maximally averaged, statistically typical portraits. Researchers termed this effect "hyper-averaging." It follows from how generative models work: algorithms suppress rare, unstable traits in favor of the most common ones. The result is not a specific individual but an idealized portrait with minimal deviations from the norm.
Paradoxically, this is what makes AI-generated faces convincing. Most people possess unique combinations of features that rarely occur together, making such faces statistically "uneven." The neural network generates more harmonious and "correct" images than living people.
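The face-space analysis described above can be sketched numerically. In the toy model below, real and generated faces share the same mean, but generated embeddings cluster more tightly around it; the 128-dimensional embedding size, the spread values, and the simple distance-to-centroid detector are all illustrative assumptions, not details of the study's method:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 128  # assumed embedding size; real face-recognition nets vary

# Toy "face space": real faces scatter widely around the population
# mean, while generated faces concentrate near it ("hyper-averaging").
real = rng.normal(0.0, 1.0, size=(500, dim))
fake = rng.normal(0.0, 0.4, size=(500, dim))  # same mean, less spread

centroid = real.mean(axis=0)
d_real = np.linalg.norm(real - centroid, axis=1)
d_fake = np.linalg.norm(fake - centroid, axis=1)

# A one-number detector: flag images whose embedding sits
# suspiciously close to the "average" face.
threshold = (d_real.mean() + d_fake.mean()) / 2
acc = ((d_real > threshold).mean() + (d_fake < threshold).mean()) / 2
print(f"mean distance to centroid - real: {d_real.mean():.1f}, fake: {d_fake.mean():.1f}")
print(f"distance-to-centroid detector accuracy: {acc:.0%}")
```

In this deliberately clean toy setting the detector separates the two groups almost perfectly; with real embeddings the distributions overlap far more, which is why even super-recognizers top out near 57%.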

According to the analysis, super-recognizers intuitively understand this feature. They focus not on the attractiveness or emotionality of a face but on its "similarity to an averaged model." This criterion helps them distinguish generated images.
However, experts cannot clearly explain how they make decisions. Their approach is intuitive and formed based on unconscious experience.
The authors of the study note that even the best observers encounter the limits of their abilities. With the development of generative models, the task will become increasingly complex.
The practical implications of this research are significant for various fields. Scientists warn that the use of AI-generated faces in psychological research, training, and legal processes can distort perception and influence decision-making. Such images are not neutral — they are systematically biased towards the "ideal norm."
In the future, researchers suggest developing hybrid detection systems that combine algorithms with human expertise. Computers will analyze statistical patterns, while specialists will interpret complex cases. The ability to notice subtle deviations from the norm will become an important skill in the digital age. The study concludes: identifying "fakes" is not only a technological challenge but also a question of adapting human perception to new conditions.
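One way such a hybrid system could be organized is a triage rule: the algorithm settles confident cases automatically and escalates borderline scores to a human expert. The function and thresholds below are hypothetical, offered only as a sketch of the division of labor the authors describe:

```python
def triage(fake_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Route a detector's fake-probability score (0.0 = surely real,
    1.0 = surely generated). Confident scores are decided automatically;
    uncertain ones go to a human reviewer. Thresholds are illustrative."""
    if fake_score >= high:
        return "auto-flag: likely generated"
    if fake_score <= low:
        return "auto-pass: likely real"
    return "escalate to human reviewer"

for s in (0.05, 0.5, 0.95):
    print(f"score {s:.2f} -> {triage(s)}")
```

The design choice here is that human attention, the scarce resource, is spent only on the middle of the score distribution, where statistical pattern-matching is least reliable and expert interpretation adds the most.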