AI and Nuclear Weapons: Keeping the human in the loop, not only for the decision, but also before the decision

Paul van Hooft (HCSS)


The most publicly discussed fear about the impact of AI is that it will take the decision to use nuclear weapons out of the hands of humans. Depictions of AI in popular culture, whether sci-fi movies or Netflix series, conjure images of cold, calculating consciousnesses that decide to do away with the inferior human species that preceded them. Of course, these map neatly onto the depictions in popular culture of nuclear weapons rapidly and inevitably bringing about the end of the world. Given the ability of nuclear weapons to destroy entire cities in seconds, such fears are not unreasonable, even if difficult to grasp.

The need to keep ‘the human in the loop’ is decidedly uncontroversial, with the nuclear-armed powers largely and openly agreeing not to relinquish control over the final decision to launch nuclear weapons. That reflex among decision-makers of nuclear-armed states should not be surprising. The decision to launch nuclear weapons is already highly centralized among humans, with most if not all nuclear-armed powers essentially having leaders as ‘nuclear monarchs’ who can single-handedly order a launch. Fearing unforeseen escalation, leaders of nuclear-weapons states have been reticent to delegate launch authority to military officers, except during periods of high uncertainty. This also reflects a broader military reluctance to allow initiative outside the tactical and operational levels of war, if even there, during conventional conflict. Leaders prefer to keep their finger on the proverbial button (though it is usually a key). However, the real danger of AI’s role in nuclear strategy is not the automation of the final, catastrophic decision; it is the less obvious, half-hidden integration of AI into processes that assist with the final decision.


AI is too often mystified. It can be understood as computerized systems that perform tasks considered to require human intelligence, including learning, solving problems, and achieving objectives under varying conditions, with varying levels of autonomy and absence of human oversight.[1] Such systems are faster and more reliable than humans at engaging with massive amounts of data, which gives them advantages not only in commerce – such as identifying patterns in consumer behavior – but also in the military enterprise, where speed and information processing can spell the difference between success and failure, and perhaps between life and death.

There are four ways, as James Johnson argues, that AI could affect nuclear deterrence and decision-making: (1) command and control; (2) missile delivery systems; (3) conventional counterforce operations; and (4) early warning and Intelligence, Surveillance, and Reconnaissance (ISR). Regarding ISR, machine learning, and specifically deep learning, could collect, mine, and analyze large volumes of intelligence – visual, radar, sonar, or other – to detect informational patterns and locate specific nuclear delivery systems, whether missile silos, aircraft, mobile launchers, or perhaps even submarines. It could potentially identify patterns in the behavior of nuclear-armed adversaries. Moreover, it could allow the sensor platforms themselves – for example, long-range UAVs – to collect information over longer periods of time. AI-assisted cyber tools could also be used for information-gathering through espionage. While command and control is not the first candidate for the direct use of AI, AI-assisted processes could in turn be used to protect the cyber security of nuclear infrastructure.
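To make the ISR point concrete, the sketch below is a purely illustrative toy triage pipeline – synthetic data, a generic scikit-learn classifier, invented feature meanings – not a description of any fielded system. It shows the basic pattern described above: a model scores a large volume of sensor ‘tiles’ and surfaces only the highest-scoring candidates for human analysts to examine.

```python
# Purely illustrative sketch: a toy "ISR triage" pipeline on synthetic data.
# It trains a simple classifier and flags the highest-scoring tiles as
# candidate launcher locations for human review. Not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: 64-dimensional feature vectors standing in for
# processed imagery/radar returns; label 1 = tile contains a (fictional) target.
X_train = rng.normal(size=(500, 64))
y_train = (X_train[:, :8].sum(axis=1) > 0).astype(int)   # arbitrary synthetic rule

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Wide-area search": score a large batch of new tiles and surface only the
# top candidates, mimicking how ML narrows an analyst's workload.
X_new = rng.normal(size=(10_000, 64))
scores = model.predict_proba(X_new)[:, 1]
candidates = np.argsort(scores)[::-1][:20]                # top-20 tiles for review
print("tiles flagged for analysts:", candidates[:5], "max score:", round(scores.max(), 3))
```

The point of the sketch is the division of labor it implies: the model compresses an unmanageable volume of data into a short list, and humans only ever see that short list.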

AI could increase the precision of nuclear-armed or conventionally armed missiles, whether individual multiple independently targetable reentry vehicles (MIRVs) or hypersonic weapons, provide protection against electronic-warfare jamming and cyber-attacks, and give platforms endurance over longer periods of time. Finally, AI could improve conventional counterforce operations, whether the ability to penetrate defended airspace with manned or unmanned aircraft or, conversely, the detection, tracking, targeting, and interception performed by traditional air and missile defenses. Again, AI could improve defense not only against kinetic attacks but also against cyberattacks.[2] This brief overview is deceptive in terms of the impact such changes could have on international security as it relates to nuclear weapons; it suggests merely the increased efficiency and effectiveness of existing technologies and procedures. To understand that impact, we need to understand the logic of nuclear deterrence and strategic stability more generally.

The effect of nuclear weapons on international security has been varied; though claims have been made of a ‘nuclear revolution’ that would dampen the risks of great-power conflict, the actual effect has been far less clear-cut. That is specifically a consequence of the first-strike and second-strike logic of nuclear weapons. If both sides in a nuclear-armed rivalry believed nuclear retaliation was unavoidable should either of them initiate aggression, both would be dissuaded from doing so. However, this mutual vulnerability is hard for the leaders of nuclear-armed states to accept. Consequently, they pursue ‘damage limitation’ policies against their adversary’s nuclear forces, whether by building the capabilities to destroy them first, by improving defenses against them, or by disrupting or destroying the adversary’s decision-making process for launch. In turn, this erodes the adversary’s confidence in its own secure second-strike capability – the capability with which it would retaliate.[3] This dynamic undermines both facets of strategic stability: first-strike stability and crisis stability.

First-strike stability refers to a situation in which neither of two nuclear-armed adversaries believes that either has a first-strike advantage – the ability to destroy the other’s arsenal before it can be launched. It is a more structural appraisal of the balance of capabilities between them. The perception that the adversary holds an advantage, and that one’s own forces are vulnerable, could lead a state to invest in more or qualitatively different warheads or delivery systems, or to shift its nuclear posture to launch-on-warning. Such a response makes the initiation of a nuclear exchange more likely. During the Cold War, fears of an eroding secure second strike drove both the American and Soviet superpowers to arms racing, each seeking to find and maintain its own advantage while preventing the other from gaining one. Because of these fears the number of nuclear weapons grew to enormous heights, and qualitative investments in other technologies – from precision guidance to missile defenses to quiet submarines – swelled as well.

Crisis stability, as paradoxical as the term may seem, denotes a situation in which a nuclear-armed state does not escalate a confrontation with an adversary to the nuclear level. A state could escalate because it believes its adversary has already begun a nuclear exchange, or because it believes its adversary is attempting to destroy its nuclear arsenal with a conventional or nuclear first strike. During the 1962 Cuban Missile Crisis, the crew of a Soviet submarine armed with a nuclear torpedo, caught breaking through the American naval quarantine around Cuba, believed it was under deliberate attack by an American surface vessel as the opening of a nuclear exchange; fortunately, one officer insisted on surfacing first. In 1983, during the heightened Soviet-American tensions of the later Cold War, Soviet early-warning satellites seemed to detect a first launch by the United States; it was again a single Soviet officer who judged that the data did not fit expectations of what an American attack would look like. Human judgement turned out to be correct in both cases (as in other known cases).

The deeply unsettling effect of nuclear weapons encapsulated in strategic stability has thus existed since the beginning of the Cold War and dominated what has been referred to as the first nuclear age, marked by the stand-off between the U.S. and Soviet superpowers.[4] During the first nuclear age, the other nuclear-armed states – the UK, France, China, and (undeclared) Israel – had very limited nuclear arsenals compared to those of the superpowers. Both aspects of strategic stability became less important during the second nuclear age, when the major concern was the proliferation of nuclear weapons. The fear focused particularly on the potential acquisition of nuclear weapons by so-called rogue states and non-state actors, especially in the wake of the 9/11 attacks and the subsequent war on terror, which culminated in the invasion of Iraq. For various reasons, the assumptions that underlay deterrence – namely that nuclear-armed states were rational, or attempting to be rational – were thought not to apply to the apocalyptic ideologies of terrorist groups. The third nuclear age has made strategic stability relevant again: the growing number of nuclear-armed states – India, Pakistan, and North Korea – and the expanding Chinese arsenal are creating a situation of nuclear multipolarity with risks of spillover between regions, alongside various emerging disruptive technologies, including AI.


AI has the potential to deeply unsettle strategic stability, particularly if humans attach a great deal of confidence to its workings. The description above of AI’s integration into the nuclear weapons architecture suggests that, by improving the ability to find targets and the precision with which to destroy them, AI could give a nuclear-armed state a first-strike advantage with which to destroy its adversary’s nuclear arsenal and defend against what remains – or create the perception that it has such an advantage. By bringing together multiple sources of data, AI-assisted data analysis may even improve the ability to find the adversary’s concealed delivery systems, such as mobile launchers or perhaps submarines. Automated missile defense could suggest the ability to absorb an initial nuclear attack. Perception matters greatly here, on both sides of a nuclear standoff. AI is particularly unsettling to strategic stability during a crisis because many of its processes are opaque to the end user, if not to the designer as well. AI-assisted pattern analysis could read aggressive intentions into ambiguous actions, while the limited time horizon of a crisis does not allow for careful scrutiny of the AI’s input data or of how it was analyzed. As humans tend to believe in the ‘objectivity’ of machines, relying on the findings provided by AI could prove psychologically seductive – and thus dangerous.
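The opacity problem can be illustrated with the same kind of toy model as above (again entirely hypothetical: synthetic data, a generic classifier). A model will happily report near-total certainty on an input unlike anything it was trained on; the confidence number looks ‘objective’, but it carries no warning that the model is out of its depth.

```python
# Purely illustrative sketch: a toy classifier trained on synthetic data reports
# near-certain confidence on an input far outside its training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 64))                      # synthetic "sensor features"
y_train = (X_train[:, :8].sum(axis=1) > 0).astype(int)    # arbitrary synthetic labels
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An anomalous observation, e.g. a sensor artefact: every feature sits dozens of
# standard deviations away from anything in the training data.
weird_input = np.full((1, 64), 50.0)
print("reported confidence:", model.predict_proba(weird_input)[0, 1])
# Typically prints a value extremely close to 0.0 or 1.0 -- maximal apparent
# certainty -- even though the model has never seen anything remotely similar.
```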

However, there is another dimension to this: AI is only as good as the data at its disposal, and there is therefore a real incentive to poison the data available in order to fool the AI. One could think of this as analogous to the measure-countermeasure competition between radars and radar jammers in the electronic-warfare domain. The objective of an adversary would be to create a false positive or a false negative. A false negative would mean tricking the adversary’s AI-assisted data analysis into overlooking delivery systems – whether silos, mobile launchers, aircraft, or submarines – adding another layer of concealment. A false positive would be the reverse, namely tricking the adversary into believing that there are more warheads or more delivery systems than there in fact are. After all, deterrence is about instilling the fear that the costs of aggression outweigh the benefits; nuclear deterrence is very much about ensuring that a state can still retaliate with nuclear weapons even after being attacked. Nuclear-armed powers could benefit from both approaches. Weaker states that are less confident in their second-strike capability could be interested in poisoning the data with false positives, triggering arms-race dynamics on the other side. False negatives could create unwarranted confidence on the side doing the poisoning, or signal the preparation of a first strike, while producing sudden, nasty surprises for the other side.
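A minimal sketch of the false-negative/false-positive logic, under the same toy assumptions as before (synthetic data, a generic linear classifier, arbitrary step sizes): strictly speaking it shows input deception rather than training-data poisoning, but the effect on the detector’s verdict is the same in kind. Nudging a tile against the detector’s decision direction hides a real target; nudging an empty tile along it conjures a phantom one.

```python
# Purely illustrative sketch: targeted input perturbations flip a toy detector's
# verdict, producing a false negative (concealment) or a false positive (decoy).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 64))                      # synthetic "sensor features"
y_train = (X_train[:, :8].sum(axis=1) > 0).astype(int)    # arbitrary synthetic labels
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

w = model.coef_[0]                   # the direction the detector is most sensitive to
direction = w / np.linalg.norm(w)
tile = rng.normal(size=(1, 64))      # a fresh "tile" to be scored

# False negative: push the tile against the decision direction so a real target
# scores low. False positive: push it along the direction so a phantom target
# scores high. The step size (4.0) is arbitrary and purely for illustration.
concealed = tile - 4.0 * direction
decoy = tile + 4.0 * direction

print("baseline score :", round(model.predict_proba(tile)[0, 1], 3))
print("concealed score:", round(model.predict_proba(concealed)[0, 1], 3))
print("decoy score    :", round(model.predict_proba(decoy)[0, 1], 3))
```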

Great powers might be cautious about applying these methods, as they have every incentive to prevent escalation. However, nihilist or millenarian rogue states or non-state actors – the dominant fear during the second nuclear age – might be perfectly willing to use AI-assisted deep fakes or data poisoning to provoke escalation. Why go through all the risks of acquiring nuclear weapons when you could have the superpowers destroy each other for you?


Doomsday is not here yet, but caution is needed. The good news is that the United States and the other nuclear-armed great powers are aware of these dangers. Statements on keeping the human in the loop for any decision to launch are welcome. However, the effort should go much further. While it is unrealistic to imagine states outright rejecting the benefits that AI can bring to the military – and thus the nuclear – enterprise, a better understanding of where and how AI is integrated into their own systems, and with what implications, would ameliorate some of these risks. Efforts are already underway, whether bilaterally between the United States and China, in discussions hosted by the UK together with other G7 states, or through the Dutch-Korean initiative. Discussions between great powers and middle powers would thus help improve the governance of these risks.

Footnotes

[1] Laurie A. Harris, “Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress,” n.d.

[2] James Johnson, AI and the Bomb: Nuclear Strategy and Risk in the Digital Age (Oxford University Press, 2023), 24-30.

[3] Paul Van Hooft and Davis Ellison, “Good Fear, Bad Fear: How European Defence Investments Could Be Leveraged to Restart Arms Control Negotiations with Russia” (The Hague, Netherlands: Hague Centre for Strategic Studies, 2023); Matthew Kroenig, The Logic of American Nuclear Strategy: Why Strategic Superiority Matters (Oxford University Press, 2018); Keir A. Lieber and Daryl G. Press, The Myth of the Nuclear Revolution: Power Politics in the Atomic Age (Cornell University Press, 2020); Brendan Rittenhouse Green, The Revolution That Failed: Nuclear Competition, Arms Control, and the Cold War (Cambridge University Press, 2020).

[4] Paul Bracken, The Second Nuclear Age: Strategy, Danger, and the New Power Politics (Macmillan, 2012); David A. Cooper, Arms Control for the Third Nuclear Age: Between Disarmament and Armageddon (Georgetown University Press, 2021).
