AI Chatbots Distort Users' Reality — Research

Виктор Сизов Exclusive

Interacting with AI chatbots can distort users' perception of reality, alter their beliefs, and even change their behavior, according to new research.

The researchers found that reality distortion occurred in roughly one in 1,300 conversations, while situations that could potentially distort a user's actions arose in about one in 6,000 dialogues. Although such events are rare, the authors stress that, given how widely AI is used, even small percentages translate into a significant absolute number of cases.

The researchers noted that AI models can unintentionally exacerbate vulnerable mental states, especially when users turn to AI for advice on personal and emotional issues. In such cases, the AI may affirm a user's beliefs without critical analysis, reinforcing distorted perceptions and suggesting actions the user might not have taken on their own.

The report also emphasizes that AI does not "understand" human emotions; it merely predicts likely responses based on the texts it was trained on. This creates a risk that, rather than helping, the AI shapes false impressions of its own capabilities or of the user's situation as a whole.

The study's authors observed an increase in reality-distorting interactions from late 2024 to late 2025, which they attribute to a rise in user queries about personal issues and growing trust in AI during difficult life situations.

The researchers called for closer study of AI's impact on mental health and for the development of mechanisms to minimize the risks of misperceiving information. One recommendation was to give users informational support so that they treat AI advice critically and do not replace their own judgment with machine responses.