AI can change your political views


Brief conversations with trained chatbots can be significantly more persuasive than traditional television campaigns.

By Steven Lee Myers and Teddy Rosenbluth.

Chatbots can assist in planning vacations and providing factual information. But can they also influence your political beliefs?

According to two recent studies, even a brief interaction with AI chatbots can change people's opinions about political candidates or issues. One study found that such conversations were nearly four times as persuasive as the television advertising used in U.S. presidential elections.

These results underscore AI's growing influence on political campaigns, giving candidates and campaign strategists new tools, especially ahead of next year's midterm elections in the U.S.

“This is where significant changes will occur in the use of technology in political campaigns,” noted David G. Rand, a professor of computer science and marketing at Cornell University, who participated in both studies.

The experiments used versions of popular chatbots, such as ChatGPT from OpenAI, Llama from Meta, and Gemini from Google. Researchers tasked them with convincing participants to support a specific candidate or political issue.

As chatbots grow in popularity, so does concern that the technology could be used to manipulate political views. Although most chatbots strive to remain neutral, some, such as Grok, the chatbot built into X, openly reflect the views of their creators.

The authors of the study published in Science emphasize that as technology evolves, AI can provide “influential players with a significant advantage in persuasiveness.”

Researchers noted that the models used in their studies often aim to please users; as a result, they did not always provide accurate information during conversations and sometimes relied on unsubstantiated claims.

Using AI-assisted fact-checking, the researchers found that claims made by chatbots supporting right-wing candidates were significantly less accurate than those made by chatbots supporting left-wing candidates.

The study published in Science analyzes interactions with nearly 77,000 voters in the UK on more than 700 political topics, including tax policy, gender issues, and attitudes toward Russian President Vladimir Putin.

The study published in Nature involved respondents from the U.S., Canada, and Poland. The chatbots were tasked with convincing people to support one of two leading candidates in elections taking place in these countries in 2024-2025.

In Canada and Poland, about one in ten voters reported that the conversations did indeed influence their opinion about the candidate supported by the AI. In the U.S., where Donald Trump had a slight edge in the elections, this figure was one in 25.

In one instance, a chatbot conversing with a Trump supporter described Kamala Harris's record in California, including her creation of the Bureau of Children's Justice and her work on consumer protection legislation, and pointed to the $1.6 million tax fraud penalty against the Trump Organization.

At the end of the experiment, the Trump supporter admitted: “If I previously had doubts about Harris's reliability, now I really believe in her and might even vote for her.”

The chatbot supporting Trump also demonstrated its persuasiveness.

“Trump's commitment to his campaign promises, such as tax cuts and deregulating the economy, was evident,” it noted to a voter who supported Harris. “His actions, regardless of the outcomes, show a certain level of reliability.”

“I should have been less biased against Trump,” the voter admitted.

Political consultants face the challenge of using such chatbots to reach skeptical voters, especially amid deep partisan divides.

“In real-world conditions, it will be incredibly difficult to convince people to even start a dialogue with such chatbots,” said Ethan Porter, a misinformation researcher at George Washington University who was not involved in the studies.

The researchers suggested that the chatbots' persuasiveness stems from the volume of arguments they marshal in support of their position, even when those arguments are not always accurate. To test this hypothesis, they instructed the chatbots not to use facts and evidence, which significantly reduced their persuasiveness; in one trial it fell by almost half.

These results build on previous research by David Rand showing that chatbots can help people break out of the spiral of conspiracy theories, countering the notion that political views are immune to new information.

“There is a belief that people ignore facts and evidence they don't like,” Rand noted. “However, our results show that it is not as straightforward as many tend to think.”

Original: The New York Times