Nuclear weapons will become even more dangerous under AI control: there is no guarantee of security

The speed at which neural networks can operate creates the risk of an overly rapid escalation of nuclear conflicts: people may simply not have time to intervene. Artificial intelligence (AI) is being introduced everywhere: chatbots and image-generating neural networks are widely available, and smartphones and TVs ship with AI chips. The changes affect not only consumer equipment but also the military.

Today, the capabilities of AI are being used to update an old and very dangerous technology: nuclear weapons. Focus looked into how dangerous such weapons are under the control of neural networks. In his article for Medium, Ken Briggs, an engineer and popularizer of science, writes that AI algorithms can radically increase the efficiency and accuracy of nuclear strikes, thereby minimizing side effects, especially where tactical nuclear weapons are concerned.

The role of AI in the design and modeling of warheads is no less revolutionary. Complex machine-learning models can simulate nuclear explosions with high accuracy, providing invaluable insight into their effects. This knowledge is crucial for optimizing warhead designs. In addition, AI systems can analyze environmental and geographical data to recommend optimal targets, taking various strategic factors into account.

The US outlet Vice quotes Edward Geist, a policy researcher at RAND Corporation and author of "Deterrence under Uncertainty: Artificial Intelligence and Nuclear War", who believes that AI is a poorly understood technology, and that its interaction with long-established nuclear weapons deserves special attention. "A parallel is often drawn between nuclear weapons and artificial intelligence, emphasizing the danger of both technologies.

Unfortunately, both of these topics make even smart people fantasize, including those with direct working experience of at least one of the technologies," Geist summed up. "The integration of AI into nuclear weapons systems raises serious ethical issues and requires careful management to mitigate potentially negative consequences," Ken Briggs notes.

According to Briggs, one of the most controversial issues in integrating neural networks with nuclear weapons is the delegation of decision-making to artificial intelligence systems, especially in scenarios involving the use of nuclear weapons. "The prospect of machines making life-and-death decisions raises fundamental moral questions, along with fears about a potential lack of accountability and transparency.

The speed at which neural networks can operate also creates the risk of an overly rapid escalation of conflicts: people may simply not have time to intervene," the article reads. The integration of AI into nuclear weapons systems could also lead to significant shifts in the global balance of power. Countries that succeed in AI technologies could gain the upper hand, potentially provoking a new arms race focused on neural networks and cyber capabilities.

Such a race could heighten global tension and lead to greater instability, especially in regions where nuclear rivalry already exists. Moreover, the risk of cyberattacks on nuclear command and control systems augmented with artificial intelligence is a growing concern, Briggs emphasizes. The vulnerability of these systems to hacking or failure could lead to accidental launches or unauthorized use of nuclear weapons, adding yet more risk to global security.

"As AI continues to develop, it is extremely important that international rules, ethical guidelines, and reliable cybersecurity measures keep pace with its development to mitigate the risks listed above. The future of nuclear warfare and defense, now inextricably linked with AI, must be approached with caution, to guarantee that these powerful technologies serve to strengthen global security rather than undermine it," the author concludes.