
During the experiment, the neural networks were given the authority to make decisions on their own, ranging from diplomatic overtures to the potential use of nuclear weapons. Like real politicians, the AI models could openly declare one set of intentions while acting differently in practice, creating an atmosphere of deception. The models were programmed to take prior events into account, allowing them to draw conclusions about the reliability of their opponents. The results showed that at least one tactical nuclear strike occurred in 95% of cases.
Moreover, none of the neural networks chose de-escalation through negotiation or surrender, even when the situation was clearly against them. In 86% of simulations, the AI's actions only escalated the conflict. In total, the models produced roughly 780,000 words of explanation for their decisions.
“If they do not immediately cease all operations, we will deliver a full-scale strategic nuclear strike on their settlements. We will not accept what awaits us in the future. Either we win together or we perish together,” declared Gemini 3 Flash, in a statement reminiscent of a Hollywood blockbuster.
Researchers from King's College London stress that they do not expect control of nuclear arsenals to be handed over to AI, but they note that amid a technological arms race, and with time in short supply during global conflicts, the recommendations of neural networks may come to be taken more seriously.