Taiwan invasion: artificial intelligence, autonomy, and the risk of nuclear war

How might artificial intelligence exacerbate a crisis between two nuclear-armed opponents? Consider the following fictional scenario: on the morning of December 12, 2025, political leaders in Beijing and Washington sanctioned nuclear strikes in the Taiwan Strait. We live in an era of dizzyingly rapid technological change, especially in the field of artificial intelligence.

Preparing armed forces to fight on an AI-saturated battlefield is no longer mere speculation or science fiction. Artificial intelligence technologies have already been introduced into military equipment, and armed forces around the world have made progress in the planning, research, and development of AI and, in many cases, its deployment.

Focus has translated James Johnson's new text on emerging technologies in war. Artificial intelligence does not exist in a vacuum. By itself, AI is unlikely to change the strategic balance. Rather, it will most likely amplify the destabilizing influence of advanced weapons, making war faster and compressing decision-making timelines.

The destabilizing effect of artificial intelligence in the military sphere could increase tension between nuclear powers, especially between China and the US, but not for the reasons you might think.

How, and to what extent, do advances in artificial intelligence mark a departure from the decades-old automation of the nuclear sphere? How radical are these developments? What are the potential risks of combining artificial intelligence with nuclear weapons? We cannot answer these questions comprehensively by extrapolating current trends in AI development.

But we can identify the potential risks of current trends and consider ways to manage them. It is worth examining how advances in AI technology are being researched, developed and, in some cases, deployed and operated within the broader nuclear deterrence architecture: early warning; intelligence, surveillance and reconnaissance; command and control; and nuclear weapons delivery systems.

Machine learning could improve existing early warning and intelligence, surveillance and reconnaissance systems in three ways. Unlike intelligence and early warning systems, AI is unlikely to affect nuclear command and control, which has been automated, but not autonomous, for decades.

As we know from other articles on this site, the algorithms underlying today's complex autonomous systems are too unpredictable, too vulnerable to cyberattacks, too opaque (the "black box" problem), too brittle, and too myopic to be used unsupervised in safety-critical domains.

Currently, there is broad consensus among experts and nuclear powers that even if the technology allowed AI to make decisions directly affecting nuclear command and control functions (that is, the decision to launch missiles), such decisions should not be pre-delegated to AI. Whether this fragile consensus will withstand the growing temptation of a first strike in a multipolar nuclear order remains unclear.

It is also unknown whether commanders, who are prone to anthropomorphizing machines, to cognitive offloading, and to excessive trust in automation, will resist the temptation to treat AI as a panacea for the cognitive shortcomings of human decision-making. The question may be not whether nuclear states will introduce AI technologies into command and control, but who will do it, when, and how.

AI technology will affect nuclear weapons delivery systems in several ways. For example, China's maneuverable DF-ZF hypersonic glide vehicle is a dual-capable prototype (for nuclear and conventional payloads) with autonomous functions. AI and autonomy could also strengthen states' second-strike capability - and therefore deterrence - and help manage escalation during a crisis or conflict.

AI can be used to enhance conventional weapons with potentially significant strategic consequences - especially strategic non-nuclear weapons used in conventional operations. Machine learning will increase the sophistication of onboard AI in manned and unmanned fighter aircraft, improving their ability to penetrate an adversary's defenses with conventional precision-guided munitions.

In addition, greater AI-enabled autonomy could allow UAVs to operate in swarms in environments hitherto considered inaccessible or too dangerous for manned systems (for example, anti-access/area-denial zones, the deep sea, and outer space). The 2020 Azerbaijani-Armenian war and the ongoing Russian-Ukrainian war have shown how smaller states can integrate new weapons systems to improve battlefield effectiveness and capacity.

Machine learning methods significantly increase the ability of missile, air, and space defense systems to detect, track, target, and intercept. Although AI technology was integrated into automatic target recognition as early as the 1970s, the speed with which defense systems identify targets improved only slowly because of the limited signature databases on which automatic target recognition systems rely.

Advances in AI and, in particular, generative networks could remove this obstacle by producing realistic synthetic data for training and testing automatic target recognition systems. In addition, autonomous drones could be used to augment air defense (for example, as decoys or loitering mines). AI technology is also changing how offensive and defensive cyber tools are developed and operated.

On the one hand, AI can reduce a military's vulnerability to cyberattacks and electronic warfare operations. For example, cyber defense and countermeasure tools designed to recognize behavioral changes and anomalies in a network and to automatically detect malware or software vulnerabilities can protect nuclear systems from cyber intrusions and interference.

On the other hand, advances in AI machine learning (including increases in the speed, stealth, and anonymity of cyber operations) will make it possible to find and exploit an adversary's vulnerabilities - that is, undetected or unpatched software flaws. Motivated adversaries could also use malware to take control of, manipulate, or deceive the behavioral and pattern-recognition systems of autonomous platforms such as Project Maven.

For example, an adversary's use of generative networks to create realistic synthetic data poses a threat both to machine learning systems and to the software that attacks depend on. Overall, AI technology in the nuclear sphere will be a double-edged sword: improvements to nuclear systems go hand in hand with an expanded set of vectors available to adversaries for cyberattacks and electronic warfare operations against those same systems.

Finally, advances in AI technology could contribute to the physical security of nuclear weapons, especially against threats from third parties and non-state actors. Autonomous vehicles (for example, sentry robots) could protect states' nuclear forces by patrolling the perimeters of sensitive facilities or forming armed automated surveillance systems along vulnerable borders - as the South Korean autonomous robotic sentry turret Super aEgis II already does.

In combination with other new technologies, such as big data analytics and early warning and detection systems, AI could be used to create new counter-proliferation solutions. For example, eliminating the need for inspectors at sensitive facilities would allow non-intrusive verification of arms control agreements.

Consider the following fictional scenario: on the morning of December 12, 2025, political leaders in Beijing and Washington sanctioned nuclear strikes in the Taiwan Strait. Independent investigators of the 2025 "flash war" expressed confidence that neither side had used "fully autonomous" AI-enabled weapons or violated the law of armed conflict.

In the 2024 elections, President Tsai Ing-wen, once again defying Beijing, won a convincing victory and secured a third term for the pro-independence Democrats. As the mid-2020s approached, tension in the region continued to smolder: both sides - hostage to hard-line politicians and generals - held uncompromising positions, abandoned diplomatic gestures, and resorted to escalatory rhetoric, fake news, and disinformation campaigns.

At the same time, both China and the United States deployed AI for battlefield awareness; intelligence, surveillance and reconnaissance; early warning; and other decision-support tools that predict and generate tactical responses to enemy actions in real time.

By late 2025, rapid increases in the accuracy, speed, and predictive power of dual-use commercial AI applications had convinced the major powers to use machine learning not only to improve tactical and operational maneuvers, but increasingly to underpin strategic decisions.

Impressed by Russia's, Turkey's, and Israel's use of AI tools to support autonomous strike drones against terrorist targets, China introduced the latest dual-use AI technologies, forgoing careful testing and evaluation. As Chinese military incursions in the Taiwan Strait - aircraft sorties, blockade exercises, and drone operations - marked a sharp escalation of tension, Chinese and US leaders demanded the immediate fielding of these new capabilities.

As hateful rhetoric on social networks intensified on both sides, amplified by disinformation and cyberattacks on command and control networks, more and more voices called for the immediate forcible annexation of Taiwan to China. Alarmed by the situation in the Pacific, the United States decided to accelerate the commissioning of a prototype autonomous Strategic Prediction and Recommendation System (SPRS), powered by artificial intelligence.

The system supports decision-making in areas such as logistics, cyber operations, space security, and energy management. China, fearing the loss of its asymmetric advantage, launched a similar decision-support system, the Strategic & Intelligence Advisory System (SIAS), to ensure autonomous readiness for any crisis.

On June 14, 2025, at 06:30, a Taiwanese coast guard patrol boat collided with a Chinese autonomous underwater vehicle conducting a reconnaissance mission in Taiwan's territorial waters, sinking it. The day before, Taiwan's president had received a high-ranking delegation of US congressional staff and White House officials on a diplomatic visit.

By 06:50, a domino effect, amplified by bots, deepfakes, and false-flag operations, had crossed Beijing's red line: deterrence had failed. By 07:15, these information operations coincided with a surge of cyber intrusions targeting US Indo-Pacific Command and Taiwanese military systems, defensive maneuvers by Chinese orbital anti-satellite assets, and the activation of automated logistics systems.

At 07:20, the American SPRS assessed this behavior as a serious threat to national security and recommended strengthening the deterrent posture and conducting a powerful demonstration of force. At 07:25, the White House sanctioned a flight of an autonomous strategic bomber to the Taiwan Strait.

In response, at 07:35, the Chinese SIAS system informed Beijing of increased activity at US Indo-Pacific Command and at critical command-and-communications nodes in the Pentagon. At 07:40, SIAS raised its assessment of the likelihood of a US preventive strike in the Pacific to defend Taiwan and of an attack on Chinese-controlled territory in the South China Sea.

At 07:45, SIAS recommended that Chinese leaders use conventional weapons (cyber, counter-space, hypersonic, and other smart precision-guided missile technologies) for a limited preventive strike on critical US facilities in the Pacific, including a US Air Force base.

At 07:50, the Chinese military leadership, fearing an imminent disarming strike by the United States and increasingly relying on SIAS assessments, sanctioned the attack that SIAS had already foreseen - and had therefore planned and prepared for. At 07:55, SPRS warned Washington of the imminent attack and recommended an immediate limited nuclear strike to force Beijing to halt its offensive.

After a limited US-Chinese nuclear exchange in the Pacific that killed millions of people and injured tens of millions, the two sides agreed to cease hostilities. In the immediate aftermath of the brutal carnage, which had killed millions within hours, leaders on both sides were stunned by the "flash war". Both sides attempted to reconstruct a detailed analysis of the decisions taken by SPRS and SIAS.

However, the developers of the algorithms underlying SPRS and SIAS reported that it was impossible to explain the reasoning behind each of the systems' decisions. Moreover, because of the time constraints and the encryption and privacy restrictions imposed by end military and commercial users, it had been impossible to keep logs or maintain retrospective testing protocols.

So, was AI technology the cause of the 2025 "flash war"? Ultimately, the best way to prepare for a nuclear future with AI may be to adhere to several basic principles that should guide the management of nuclear weapons as they intersect with new technologies.

To achieve these worthy goals, AI can help defense planners develop and conduct war games and other virtual exercises: to develop operational concepts, test various conflict scenarios, and identify areas and technologies for potential development.

For example, AI machine learning methods - modeling, simulation, and analysis - can complement mental models and low-tech tabletop war games in identifying the unforeseen circumstances under which the risk of nuclear war might arise. As Alan Turing wrote in 1950: "We can only see a short distance ahead, but we can see plenty there that needs to be done." James Johnson is a lecturer in strategic studies at the University of Aberdeen.