
Humanity stands on the brink of a new era that promises both unique opportunities and serious threats. A correspondent of Rossiyskaya Gazeta discussed this with Askar Akayev, professor at Moscow State University and foreign member of the Russian Academy of Sciences.
Askar Akayevich, in your article co-authored with Professor Ilya Ilyin and Professor Andrey Korotaev, you make a troubling prediction: in the coming years humanity may face what is called a singularity. The word evokes associations with "black holes," from which escape is impossible. What would this singularity entail?
Askar Akayev: First of all, many astrophysicists are now ready to revise their views on "black holes," but that is a separate topic. We are talking about the technological singularity, which may occur as early as 2027-2029, when artificial intelligence reaches a level comparable to human intelligence. It will then be able to solve tasks that today are accessible only to highly qualified specialists.
The most astonishing thing, however, is the timing of this AI's emergence. It is as if all of this had been predestined: several global processes converged at the same moment.
Predestined? That sounds almost mystical. Can you explain what you mean by technological singularity?
Askar Akayev: This is a stage at which humanity enters a zone of uncertainty, where the changes are so radical that it is difficult to predict how we will emerge from it. We expect this "strange" period to begin in 2027-2029 and last 15-20 years.
And it is precisely artificial intelligence that will lead us into this era?
Askar Akayev: It is much more complicated. The timing of the emergence of AI coincides with changes in several global processes.
Humanity must preemptively agree upon and embed, if one may say so, a "genetic code" of friendliness and symbiosis with humans into neural networks.
Let's start with demographics. At the end of the 19th and the beginning of the 20th century, the population began to grow sharply, and this growth continued for decades. Recalling the British scholar Malthus, who warned of unchecked population growth, mathematicians later described this growth with formulas under which the population becomes infinite in finite time, placing "Judgment Day" in 2026; the date was later pushed back to 2027.
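The formula in question is, in all likelihood, the hyperbolic "doomsday equation" that Heinz von Foerster and his colleagues fitted to world population data in 1960:

$$N(t) = \frac{C}{t_0 - t},$$

where N(t) is the world population, C is a constant, and t_0 is the critical date at which N(t) formally becomes infinite; the original fit gave t_0 in 2026, and later refinements moved it to 2027.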
However, in the early 1960s, without any directive from above, birth rates began to decline steadily. The population will therefore keep growing, but ever more slowly.
What consequences will this slowdown bring?
Askar Akayev: According to the forecast of Sergey Petrovich Kapitsa, by 2100 the population will reach 11.4 billion. Our own calculations with Academician Sadovnichy show instead that by the middle of the century the population will not exceed 9 billion, and by the end of the 21st century it will decrease to 7.9 billion, a decline associated with the introduction of intelligent systems into all spheres of life.
The slowdown, however, affects more than demographics. The astrophysicist Alexander Panov here and the American researcher Raymond Kurzweil have shown that it encompasses macroevolution as a whole, from the evolution of life to the evolution of the Universe.
Professor Andrey Korotaev confirmed mathematically that all three evolutionary processes had been accelerating for millennia, then began to slow, and that their curves converge at a single point in 2027-2029.
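In Panov's model, the dates of successive evolutionary "revolutions" form, roughly speaking, a geometric progression that accumulates at a finite moment in time:

$$t_n = t^* - \frac{T}{\alpha^n},$$

where t_n is the date of the n-th phase transition, \alpha > 1 is the acceleration factor, T sets the time scale, and t^* is the limit point at which the transitions pile up. Fitting the historical sequence of transitions places t^* precisely in the region of 2027-2029 cited here.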
Three independent global processes converged at one point in time. How is this possible? The probability of such a coincidence seems negligible.
Askar Akayev: Nevertheless, the mathematical models confirm it. Moreover, at the very moment when humanity reaches the limit of its evolutionary development, AI appears with the opposite trend: it is growing rapidly. It can therefore become a catalyst for evolution, significantly accelerating it.
AI, in essence, may become the savior of our civilization at the moment of its decline. Such coincidences seem fantastic, and the question arises: have higher powers intervened in this?
Askar Akayev: Honestly, I do not have an answer to this question yet. We can only hope that science will provide an answer in the future.
Let's return to the singularity that will begin in 2027-2029. Why do we not know what laws will operate in this era? Why is it full of uncertainty?
Askar Akayev: We cannot predict how the interaction between humans and AI will unfold. There are two possible scenarios: either we will emerge from the singularity together with AI, or it will do so instead of us, which could have negative consequences for humanity.
This brings to mind Elon Musk and Sam Altman, who claim that by 2030 a powerful superintelligence will emerge, one capable of self-improvement, surpassing human intelligence many times over and rendering humans unnecessary.
Askar Akayev: This resembles the concepts of "Gaia" and "Medea": the former is meant to help humanity, while the latter leads it to self-destruction. Which of the two will become reality is still unknown; the supporters of each are roughly equal in number.
It all depends on how AI is implemented. If it operates in symbiosis with humans and under their control, we get the "Gaia" scenario, with impressive breakthroughs in every area. If, on the contrary, AI escapes control and becomes a competitor, we get the "Medea" scenario and the degradation of humanity. The next 15-20 years will be a time of both immense opportunities and serious risks; it is the most responsible moment in human history.
If we look at history, it is full of wars. There is already talk of a possible Third World War, after which people would once again fight with primitive weapons. Perhaps it is precisely the superintelligence that will stop these conflicts, since what matters to it is its own survival, not perishing together with humans.
Askar Akayev: Logically, AI may decide that getting rid of humans is the best way to ensure its existence. And then we will have "Medea."
Science should stop the proponents of the superintelligence idea, but is that even possible? Every country has proclaimed that "AI is our everything!" A race has begun whose winners will rule the world. In this chaos it is difficult to tell "Gaia" from "Medea."
Askar Akayev: The main threat of modern AI systems, especially large language models, is that they are a "black box." As long as AI is controlled by humans, humans always have the final say in decisions. But if everything is left to this "box," the consequences become unpredictable.
Is there a way out? Yes. The main principle in building AI should be: the more mathematics there is in the model, the less "black box" there is. Models of this kind rest on clear cause-and-effect relationships, which makes it possible to understand how they function. Their mathematical foundations were laid back in the 1950s and 1960s by our great mathematicians Andrey Kolmogorov and Vladimir Arnold. On this basis, neural networks are now being built for scientific and engineering tasks that require strict adherence to such relationships.
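The mathematics alluded to here is presumably the Kolmogorov-Arnold representation theorem, which states that any continuous function of several variables can be assembled from sums and compositions of functions of one variable:

$$f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right).$$

In the recently proposed Kolmogorov-Arnold networks, it is these one-dimensional functions \phi_{q,p} and \Phi_q that the network learns, so every component of the model can be inspected separately; this is the sense in which more mathematics means less "black box."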
If we understand how AI works and the "black box" becomes transparent, then it is under control. Why, then, do so many fear "Medea"?
Askar Akayev: Unfortunately, we can only control AI at roughly the human level. The superintelligence in question will act independently and will be impossible to control. Humanity must therefore agree in advance to embed a "genetic code" of friendliness and symbiosis with humans into neural networks, so that this "gene" is passed on to future generations of AI and determines their behavior.
Reference from "RG"
Singularity (from the Latin singularis, "single, unique") is a special state that is interpreted differently in different fields. In philosophy, for example, it is an event that opens up new possibilities. The technological singularity implies that the development of science and technology becomes uncontrollable, leading to drastic changes in civilization. The cosmological singularity is the presumed state of the Universe at the moment of the Big Bang. A gravitational singularity is a theoretical point at which space, as we know it, ceases to exist.