This is a sample chapter from the forthcoming It Is Possible: A Future Without Nuclear Weapons, by Ward Hayes Wilson.
[Photo: H-bomb test]
Chapter 10
Deterrence Theory
Some nuclear weapons advocates admit that nuclear weapons are not very good weapons. And they admit that nuclear weapons are mostly symbols. But they argue that none of that actually matters. They might say, “The fact is that no one really wants to use nuclear weapons. Nuclear weapons’ only real purpose is to deter. That is why we need these weapons, why we have to keep them. Without nuclear weapons, we couldn’t deter the Russians or the Chinese. Deterrence is the essential thing. It buttresses our alliances, and those alliances maintain the world order.”
Some nuclear weapons advocates even admit that nuclear deterrence is built on very little factual evidence. But they argue that it doesn’t matter if you arrive at a theory by intuition or careful, step-by-step experimentation. “The proof,” they might say, “is not in the way you build a theory. The proof is in the way it works. The fact is, there hasn’t been a major war in seventy-five years. Nuclear deterrence has kept the peace in a truly remarkable way. And there is a case to be made that as long as we keep our nerve and maintain a certain level of nuclear weapons, deterrence can work for as long as we need it to.” And they argue this point with surprising conviction. They seem quite confident that nuclear deterrence does work—and works robustly.
So, the question this chapter tries to answer is this: Is that confidence justified? Is the case for nuclear deterrence so strong and so sound that it can make us believe that nuclear deterrence is safe, even though real evidence is almost entirely lacking? Sometimes a theory is so obviously true that we rely on it despite not having very much proof. Is it possible that nuclear deterrence theory is one of these theories?
* * *
Some nuclear weapons advocates claim that nuclear deterrence has never failed. There has been no nuclear war for more than seventy years, they point out, so questions about the reliability of nuclear deterrence are unfounded. From one perspective, the case that threatening punishment works to deter seems self-evident. One early exponent of deterrence wrote, “That the fear of punishment can deter is shown ... vividly by its efficacy in the training of animals.”[1] And that’s true. You can, for example, train a dog not to get up on the sofa by hitting it with a slipper. Hit the dog a few times and pretty soon just the sight of the slipper will likely make it slink away to its corner. Nuclear weapons advocates then say, “If you can deter a dog with a slipper, surely you can deter a human being with nuclear weapons. Human beings are far smarter than dogs. And nuclear weapons are far more frightening than a slipper.” It seems like a telling point. If it is that easy to make deterrence work, isn’t it likely to work well in most circumstances?
Nuclear weapons advocates are remarkably sure of themselves. The fact is, it’s a little unnerving how firmly they argue and how confident they seem. Could they possibly be wrong? Let’s take a closer look at the theory of deterrence and see if their confidence is justified.
Perfection
Before we begin, it’s worth noting that the bar for nuclear deterrence is quite high. Normally if a theory works in the great majority of cases, or even just in most cases, it counts as a useful explanation of the world. A theory about what causes brain disease that is right eighty-six percent of the time would probably count as a useful theory. But because the consequences of nuclear war are so severe, the demands on nuclear deterrence theory are much greater. Nuclear deterrence simply cannot fail—the consequences are too great for us to allow it to happen. Even one failure could be catastrophic. So nuclear deterrence has to be perfect; it has to work every time. This is a demanding requirement. And it turns out that when you compare nuclear deterrence to other kinds of deterrence, the comparison seems to suggest that nuclear deterrence is unlikely to get over the bar.
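The demand for perfection can be made concrete with a simple calculation. (The numbers here are purely illustrative assumptions, not estimates of any real probability.) Even a deterrent that nearly always works becomes likely to fail at least once when it must work over and over:

```latex
% Illustrative only: p and n are hypothetical values, not estimates.
% If deterrence succeeds independently with probability p in each crisis,
% the chance of at least one failure over n crises is
P(\text{at least one failure}) = 1 - p^{n}.
% For example, with p = 0.99 and n = 50 crises:
1 - 0.99^{50} \approx 1 - 0.605 = 0.395,
% roughly a 40 percent chance of at least one failure.
```

The point of the sketch is not the particular numbers but the shape of the arithmetic: any success rate short of one, compounded over enough crises, drives the cumulative chance of failure toward certainty.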
Remember that nuclear deterrence is only one kind of deterrence. There are many different kinds. Deterrence is a type of threat, and threats can be used in a variety of circumstances—for example, threatening to spank a child, threatening to take away someone’s driver’s license, threatening (as besiegers did in ancient times) to starve a city unless it surrendered, threatening to punish certain crimes with the death penalty, and so on. Each of the many different types of deterrence has different characteristics and a different likelihood of succeeding. But they all share one characteristic: None has a perfect record of success. Sometimes children do misbehave. Sometimes cities elect to starve. Sometimes people drive when they are drunk. Of all these different kinds of deterrence, none works perfectly. Even when the evidence shows that a particular kind of deterrence seems to work much of the time, there are still occasions when it fails. So, if nuclear deterrence is like these other forms of deterrence, if what holds for them also holds for threats with nuclear weapons, then it seems probable that nuclear deterrence will also fail some of the time.
Nuclear weapons advocates sometimes argue that nuclear deterrence will work better than these other types of deterrence because the penalty that you pay if nuclear deterrence fails is so much greater than the penalty with these other kinds of deterrence. And they have a point. The destruction of civilization and the loss of, say, two hundred million lives could be called the ultimate penalty. But this argument actually highlights a weakness, because for a murderer, the death penalty is also the ultimate penalty. There are real parallels between the severity of nuclear war for civilization and the severity of the death penalty for an individual. What greater penalty could there be for an individual than death? Does a murderer care if life goes on after his death or not? His life has ended. For him, the world has come to an end. The fact that the death penalty regularly fails to deter, therefore, is particularly troubling. The death penalty is not a perfect deterrent. It turns out that people regularly fail to be deterred by the threat of paying the ultimate penalty.
This tendency to fail in other kinds of deterrence casts a shadow over nuclear deterrence. If other kinds of deterrence cannot work every time, then how can we believe that nuclear deterrence will work every time? If deterrence using other means is not perfect, why should we expect deterrence with nuclear weapons to be different? No, no, no, no, say nuclear weapons advocates. Nuclear deterrence is different. It has specific characteristics that make it more likely to succeed. They put forward three arguments for why nuclear deterrence is different from other forms of deterrence: because it carries with it a unique power to hurt, because nuclear war is so clearly irrational that no one in their right mind would choose it, and because of the overpowering fear of nuclear war. These three arguments are distinct—they do not interlock or support one another. Different groups of nuclear weapons advocates rely on each of them. So, let us examine each one in turn.
The power to hurt
Thomas Schelling, a Nobel Prize–winning economist who is sometimes called the founder of nuclear deterrence theory, argued that the threat of enormous harm that nuclear weapons create is a powerful tool for influencing others. Schelling argued that nuclear weapons’ unprecedented ability to devastate and punish could be used to ensure that deterrence works reliably. In his Arms and Influence, Schelling writes eloquently and at length about how the “power to hurt” makes deterrence effective. “It is the power to hurt, not military strength in the traditional sense, that inheres in our most impressive military capabilities at the present time.”[2]
The difficulty with the power to hurt that Schelling talks about is that many of the leaders who make decisions about war and peace are like Ronald Niedermann. Ronald Niedermann is a character in the very popular trilogy by Stieg Larsson that begins with The Girl with the Dragon Tattoo. Niedermann is the half-brother of the heroine, Lisbeth Salander. He is abnormally large and strong, and he is a member of a gang that sells drugs—in other words, he is a very bad guy. He kidnaps Salander’s friend, and in a key scene of the second book, Niedermann faces off against a professional boxer, another friend of Salander’s, who has come to rescue the woman who has been kidnapped. The boxer, while he’s impressed with Niedermann’s size, is still confident. He’s competed at a very high level in his weight class and is sure he has the ability to fight any man. However, during the fight, a peculiar thing happens. Despite the boxer landing several wicked punches—blows that would have knocked out a lesser man—Niedermann seems unfazed. He shakes his head, grunts, and then continues fighting. It’s almost as if he can’t feel he is being hit.
And it turns out that is exactly the problem. Niedermann has a rare neurological condition called “congenital analgesia,” a condition in which a person cannot feel—and has never felt—pain. In the fight with Salander’s boxer friend, Niedermann doesn’t crumple under the blows because he literally cannot feel them. In any fight, a person with congenital analgesia is less likely to give in because he will not feel any pain from the blows that he receives.
Schelling’s contention that the power to hurt is power overlooks the fact that many national leaders are like Niedermann: They have congenital analgesia when it comes to civilian casualties in war. This is not to say that they did not start out life with the normal amount of human sympathy for the suffering of others. But in wartime, they feel a responsibility to the needs of the state that outweighs this sympathy. When war is fought and the necessity to preserve the state is challenged, leaders are not likely to put a high value on civilian suffering. As the examples already cited demonstrate, wartime leaders are capable of enormous callousness when it comes to civilian lives. Chiang Kai-shek drowning his own civilians, Joseph Stalin letting ordinary Russians starve in Leningrad, Winston Churchill allowing British cities to be bombed—the list of wartime leaders who found ways to live with the deaths of civilians is quite long.
It seems probable that most leaders in wartime relate to their citizens’ suffering much as Niedermann relates to pain. They have a detached, abstract understanding of what is going on; they understand that the body they are a part of is being injured; but they do not actually feel the pain in any visceral, immediate way—at least, any pain they feel doesn’t overwhelm their assessment of the strategic balance in the conflict.
Despite the popularity of Schelling’s work with people who call themselves realists, it’s not clear that Schelling’s description of the “power to hurt” is realistic. As the nuclear scholar Robert Jervis pointed out, “One could not have coerced Pol Pot by threatening to destroy his cities.”[3] Nuclear weapons mostly hurt civilians, but civilian deaths rarely, if ever, determine whether a leader continues a war.
Rationality
So, let us turn to the second way that nuclear weapons advocates claim that nuclear deterrence cannot fail: the assertion that no rational person could think about the consequences of a nuclear war and then choose to fight one.
For some reason, the assumption of rationality is deeply lodged in nuclear weapons thinking. From the earliest days of nuclear strategy, when people first began to think about how best to fight a nuclear war, there seems to have been a powerful inclination to assume that rationality would play a large role in decision-making. From the outset, when game theory and logic models dominated, to the more recent history of rational choice thinking, the lure of imagining that decisions would be made based on reason and cost-benefit analysis has been strong.[4]
This is surprising because there is simple and undeniable proof that human beings are not rational—proof that is widely available and easily accessible. It is so commonly available, in fact, that it is virtually everywhere that human beings now live.[5] One piece of this omnipresent evidence is probably in the room with you right now. Look around. See it? If you don’t, reach down and gently pat your stomach just above your belt line. Now, using your thumb and forefinger, grasp a little bit of the flesh above your hip on one side or the other of your torso. Did you just forget about nuclear weapons for a moment and think to yourself, “You know, I should probably lose five pounds”? If you did, you’re probably like the great majority of people in the developed world.[6] And that is entirely adequate proof that we are not completely rational.
Consider this: Is it rational to be overweight? Is there not a great deal of data showing that being overweight has negative consequences for health? Isn’t it the case that the people who seem to live longest are often quite trim and live on relatively low-calorie diets? Now think about some of the habits that contribute to your being a little overweight: smoking, too many sodas, sweet snacks, sitting lazily on the sofa watching videos instead of running and playing outside. I’m not lecturing you. I could really stand to lose about ten pounds myself (maybe fifteen, if I’m honest). What I’m asking is this: If human beings were really rational beings, and if our decisions were controlled by rational choices, would anyone be overweight? There’s a great deal of evidence that maintaining an ideal weight is really healthy for you. And that the lifestyle of activity and exercise that’s required for achieving such a weight has enormous benefits, too. So, why are so many people overweight? Is it rational? Is it a conscious choice to do things that are not the best for your health? Obviously not.
It is clear that our choices are sometimes controlled by what we think and what we consciously tell ourselves. It is possible for us to decide on a course of action rationally. But much more often, our choices—in eating, most obviously, or in love or in whether to drink alcohol, in fact, in many categories of behavior—are controlled by urges, instincts, desires, and emotions that are very difficult for our conscious mind to control. If a great deal of our behavior is in thrall to subconscious emotions and urges, is it such a stretch to imagine that our choices in wars might be propelled by emotions, too?
What is striking about the argument for rational choice and game theory is that it continues to appeal to nuclear weapons advocates, despite the obvious and commonsense evidence that rationality only sometimes rules our choices. This insistence on a theory of rationality has real-world consequences. One example of the way these assumptions of rationality played out in actual U.S. policy occurred during the administration of President Richard Nixon. In the 1960s, think-tank intellectuals had explored an approach to foreign policy bargaining they called “uncertainty strategy.” Schelling and others began to argue that it could be useful to seem to be a little crazy in a nuclear crisis. If you make a threat, according to the theory, and you seem to be a little crazy, the chances that the threat will work increase.
“Uncertainty strategy” eventually made its way into Nixon’s thinking, and he decided to use it to try to force North Vietnam to negotiate an end to the war in Vietnam. H. R. Haldeman, one of the president’s top aides, picks up the story in his memoir of Nixon’s presidency:
Nixon not only wanted to end the Vietnam War, he was absolutely convinced he would end it in his first year.... “I’m the one man in this country who can do it, Bob.” ...
He saw a parallel in the action President Eisenhower had taken to end [the Korean] war.... He [Eisenhower] secretly got word to the Chinese that he would drop nuclear bombs on North Korea unless a truce was signed immediately. In a few weeks, the Chinese called for a truce and the Korean War ended.
In the 1950’s Eisenhower’s military background had convinced the Chinese that he was sincere in his threat. Nixon didn’t have that background, but he believed his hardline anti-Communist rhetoric of twenty years would serve to convince the North Vietnamese equally as well that he really meant to do what he said....
The threat was the key.... “I call it the Madman Theory, Bob. I want the North Vietnamese to believe I’ve reached the point where I might do anything to stop the war. We’ll just slip the word to them that, ‘for God’s sake, you know Nixon is obsessed about Communism. We can’t restrain him when he’s angry ... and he has his hand on the nuclear button’—and Ho Chi Minh himself will be in Paris in two days begging for peace.[7]
The uncertainty strategy assumes that if you are crazy, it will change the cost-benefit calculation of your adversary. Your adversary will calculate that if you are crazy, there is a greater likelihood that you will actually do what you say and, therefore, the safest course is to capitulate. By acting as if you are irrational, so the theory goes, you create a significant advantage.
The problem with this theory is that it assumes that your acts of irrationality exist in a world of rationality. It assumes that while you are acting insane, your adversary is being reasonable. But human beings only act rationally some of the time. The assumption that when Nixon acted irrationally the North Vietnamese would respond by thinking things through and making a cost-benefit assessment might have been true. If your adversary started acting crazy, you might think harder about what your most rational response would be.
But it could also be that instead of being more reasonable when you act crazy, an adversary might respond by being emotional. Emotions tend to beget emotions. Crazy behavior on your part might well lead to crazy behavior on your adversary’s part. Rather than making your adversary more cautious, crazy behavior on your part might lead to an escalating spiral of emotion—anxiety leading to fear, fear leading to threats, threats leading to greater fear, and, in the end, one leader ordering a preemptive nuclear strike.[8]
In the event, the strategy did not work. Despite issuing a string of threats in 1969, Nixon was not able to force the North Vietnamese into significant concessions at the peace talks. Despite Nixon’s certainty that he could resolve the conflict, the war in Vietnam ground on for another six years. Thousands of U.S. soldiers, North Vietnamese soldiers, and civilians died during those years—a high price to pay for being wrong. But the costs could have been much higher. If Nixon had tried to rely on the uncertainty strategy in a nuclear crisis with another nuclear-armed state—with the Soviet Union, for example—the costs of being wrong could have been catastrophic.
The idea that the risk of nuclear war will call forth untapped reserves of rationality in leaders is a pleasant one. It is, to borrow Ernest Hemingway’s phrase, “pretty to think so.” But it seems unlikely to be true all the time or perhaps even most of the time. The nuclear weapons advocates who are relying on rationality to protect us from nuclear war are not being realistic.
As Freeman Dyson put it:
Assured destruction would be a stable strategy in a world of computers. In a world of human beings, it fails to bring stability because it lacks the essential ingredients which human beings require: respect for human feelings, tolerance for human inconsistency, adaptability to the unpredictable twists and turns of human history.[9]
A realistic approach to nuclear war and nuclear crisis, it seems to me, would assume that decision makers will be driven by instinct, buffeted by emotion, and susceptible to unreason. It would expect that a crisis involving nuclear weapons would be like other human crises where the stakes are high and the danger is very great—in other words, that the pressures of such a crisis would make people more vulnerable to mistakes, emotional outbursts, and irrational behavior.
Insisting on a rationality that does not exist is not realism. It is wishful thinking, and dangerous wishful thinking at that. If the assumption of rationality in a crisis turns out to be wrong, millions of innocent people could suddenly and irrevocably pay for it with their lives.
The power of fear
Finally, another explanation advanced by nuclear weapons advocates for why nuclear deterrence might be different from other forms of deterrence (and therefore less likely to fail) is the “power of fear” explanation. Nuclear deterrence cannot fail, they argue, because the image of nuclear war is so frightful, the danger so clear-cut, and the power of fear so imperative that it would be impossible for any leader to ignore the danger. Rationality might not restrain a leader, the possibility of pain inflicted might not work, but fear most certainly would, according to this argument.
This is, in some ways, a more plausible argument than either the pain or rationality arguments. It is certainly true that anyone who has seriously contemplated nuclear war has felt deeply afraid of it. And fear is a much more powerful and fundamental motivator of action than rationality. But even this stronger argument is not persuasive.
Is fear the most powerful emotion? Despite what nuclear weapons advocates might say, fear does not always dominate. Brave men—through training and loyalty to their small group—overcome fear all the time in war. During Roman times, the berserkers of some Germanic tribes were said, when they were in the grip of their frenzy, to feel no fear. Zulu warriors took hallucinogenic drugs before battle that suppressed their fear. Belief in an afterlife can overcome fear. A desire for revenge can overcome fear. Sometimes lust for killing can overcome fear. False optimism can overcome fear. Delusional people are not restrained by fear. Drunkards can forget to be afraid. Enraged people can ignore fear. Sometimes starvation and hunger overcome fear.[10] Sometimes a profound sense of humiliation overcomes fear. People motivated by intense love can act bravely for the sake of their beloved. And so on. There are a multitude of emotions that can overcome fear—some for only a short time, others for much longer. But the point is, fear is not the ruling emotion, master of all the others.
The problem with the notion that fear will always restrain aggression when the risk of nuclear war is at hand is that fear is not king. It cannot restrain all the other emotions all the time. Sometimes fear will be strongest, but sometimes other emotions or instinctual urges will be stronger. We know this is true from the experience of soldiers. We know it from history. We know it from psychology. Fear is not strong enough to command all the other emotions.
Nuclear weapons advocates might respond, “But nuclear war is different. No other fear includes the danger of vast destruction, perhaps even the end of civilization itself.” And it’s true: Nuclear war would include some elements—like widespread radiation poisoning—that human beings have never experienced before. But there is no evidence that this difference invalidates what we know about human nature. In fact, there is clear and incontrovertible evidence that a world leader could look the prospect of nuclear war squarely in the eye and still not be afraid.
Compelling evidence in this regard comes from the Cuban Missile Crisis. Fidel Castro, as leader of Cuba, was concerned as the crisis mounted. The likelihood of worldwide nuclear war was obvious to all observers, and there was overpowering dread and anxiety in the United States, in Europe, in Russia, and in other parts of the world as people waited fearfully to see what would happen. But that is not what was worrying Castro. Castro was worried that Soviet Premier Khrushchev might shrink back from nuclear war. In the early morning hours of October 27, 1962, the twelfth day of the crisis, Castro was awake, drafting a cable for Khrushchev.
Dear Comrade Khrushchev, Analyzing the situation and the information that is in our possession, I consider that an aggression in the next 24–72 hours is almost inevitable....
If the aggression takes the form of the second variant and the imperialists attack Cuba with the purpose of occupying it, the danger facing all of mankind ... would be so great that the Soviet Union must in no circumstances permit the creation of conditions that would allow the imperialists to carry out a first atomic strike against the USSR.
Castro, as befits a man who had been trained by Jesuits, often used intricate reasoning and complex wording. As he drafted this letter, talking with associates, some of them weren’t entirely sure what he was driving at. Finally, one of his aides “blurted out” the question: “Do you want to say that we should deliver a nuclear first strike against the enemy?” Castro replied that that would be too blunt:
No, I don’t want to say it directly. But under certain conditions, without waiting to experience the treachery of the imperialists and their first strike, we should be ahead of them and erase them from the face of the earth, in the event of their aggression against Cuba.[11]
Perhaps the most striking thing about Castro’s letter urging nuclear war is that he must have known that in a nuclear war, Cuba would be laid waste. Even if the Russians launched a first strike, and even if that first strike was devastatingly effective, some U.S. forces—both nuclear and conventional—would have survived. In the ensuing war, Cuba would have been a principal target. At the very least, the nuclear missile sites being built in Cuba would be attacked, and probably Cuba’s most important cities as well. Urging nuclear war meant urging the destruction of much of Cuba.
Castro looked the vast horror of nuclear war in the face and recommended to Khrushchev, in a roundabout way, that he launch a preemptive first strike with Russia’s nuclear forces if war came. He was aware of the stakes and understood that even if the Soviet Union launched a first strike, some U.S. nuclear forces would survive. And yet he urged his ally to launch such a war. How can Castro’s letter be explained? If the fear of nuclear war is so great that no leader could face it and not be cowed, how can Castro’s actions be possible?
The answer is that they can’t be explained if fear of nuclear war was controlling him. Castro’s letter appears to be direct confirmation that sometimes in a crisis, despite the fear of nuclear holocaust, a national leader could recommend launching a nuclear war. It is proof that fear does not always rule.
So, even though fear is a powerful emotion, and even though the costs of nuclear war are plain, the notion that leaders will always draw back from the brink because of fear is contradicted both by what we know about human nature and by this extraordinary episode from history. Insisting on the power of fear to restrain nuclear war, when you know that it is not the most powerful emotion, is not realism.
False certainty
There is an old joke that a Keynesian economist is like a blind man in a dark room trying to find a black cat; a Marxist economist is like a blind man in a dark room trying to find a black cat that isn’t there; and a supply-side economist is like a blind man in a dark room trying to find a black cat that isn’t there who shouts, “I found it!” False certainty—claiming to have found something you cannot possibly have found—makes the joke funny. But false certainty in foreign relations is both dangerous and foolish.
In 1940, the French high command claimed that no modern tank attack could be launched through the heavily wooded Ardennes region. They were certain of it. France lost the war because of that false certainty. In 1776, the leaders of Great Britain were certain they didn’t have to compromise with their American colonies. A collection of distant colonies would be no match for the greatest military power on earth, they assured themselves. Their false certainty resulted in the loss of one of the largest and richest parts of their empire. One of the most profound dangers when thinking about nuclear deterrence is adopting an attitude of false certainty.
Because there is no way to get inside someone’s head and shine a light on the workings of their decision-making during a nuclear crisis, because there is no way to measure the determination, credibility, estimates of arsenal size, and so on inside someone’s head, it is not possible to know anything definite about how nuclear deterrence works—or doesn’t work. Nuclear deterrence is not a science; it is an art.
I once sat with a very senior government official who had dealt with nuclear weapons at the highest level (including getting one of those phone calls in the middle of the night that a nuclear attack might be underway). He is a wise and considerate person and one whom I respect a great deal. However, I was surprised to hear him say, “Well, we know that deterrence doesn’t work [in situation x] but it will work [in situation y].”[12] The authority of his past experience and his former position at the highest levels of government gave his voice a weight and certainty that made me, for a moment, wonder if he knew exactly when nuclear deterrence would and wouldn’t work. But realism also consists in admitting what you don’t know. Despite my admiration for him, I am sure he was wrong. It is not possible to know when a nuclear deterrence threat will work and when it will not.
With nuclear deterrence, it is not possible to be certain. In any given situation, you cannot know whether nuclear deterrence will work or fail. The map of deterrence attempts is not divided into areas that are clearly and sharply marked—some labeled “will work,” others labeled “won’t work.” Nuclear deterrence lies in the empty places on the map where medieval mapmakers used to write “Here be dragons.” The entire map is shrouded in darkness.
Based on what we know of human nature, it seems plausible that nuclear deterrence will work—some of the time. And all the different situations where advocates claim it will work can indeed be successful use of deterrence—some of the time. Nuclear deterrence can prevent wars—some of the time. It can give you diplomatic leverage—some of the time. It will secure your treaties—some of the time. It is the ultimate guarantee of safety—some of the time.
But the claim that nuclear deterrence can work every time cannot be true, for two reasons. First, we just don’t know that much about how nuclear deterrence works. Nuclear deterrence happens inside the black box of the mind, and we have no tools that can get in there and measure accurately.
For example, imagine that leader A has said that if leader B annexes territory in country C, then leader A will launch a nuclear retaliation in response. Deterrence theory says that fear of nuclear war is a significant factor influencing the effectiveness of a deterrence threat. How do you measure the level of fear of nuclear war in leader B’s mind? What instruments do you use? What numbers can you write down? Nuclear deterrence theory says that the credibility of the person threatening to retaliate with nuclear weapons has an impact on whether the deterrence threat works. How do you measure B’s assessment of A’s credibility? Is there a “credibility assessing” portion of the brain that has been identified? Can we use scans or probes to measure the activity in that part of the brain? And if we could measure “activity,” how does one correlate activity in that part of the brain with a leader’s credibility? Is there a machine we can use where a reading of six is a clear indication of a low level of belief in an adversary’s credibility and a reading of twenty-eight means a high level of belief?
Some nuclear deterrence theorists work from past events, trying to create datasets of cases in which deterrence seems to have succeeded or failed. But, although it may be useful, this is an exercise in guessing and estimating. Action A occurs, and then a leader takes action B. Are the two connected? Did A cause B? We have no way of knowing with any certainty. Various people may feel certain that the two are connected, but that is not objective proof. That is no more than a hunch or an intuition. There is no calculation that can measure the causative effect of A on B. No scientific measurement has taken place. Are we really comfortable risking the lives of billions of people on hunches?
So, the first reason that we cannot be certain about nuclear deterrence is that it is a field with very few real facts. The second reason is, if anything, more compelling. Nuclear deterrence cannot be trusted because human beings are unpredictable. We are the variable in the equation. Our moods and emotions change. Optimistic and feeling unbeatable one day, we are downcast and overwhelmed the next. This means that the best we can say, in any given situation, is, “Nuclear deterrence has a better-than-average chance of working in this situation. But there are no guarantees.”
Imagine a crisis between a large nuclear-armed state and a smaller one. The small state has a fledgling nuclear arsenal. It has never, in fact, successfully test-fired one of the long-range missiles that are intended to deliver its nuclear warheads. The larger state has weapons that have been thoroughly and successfully tested for decades. The smaller state delivers a nuclear threat. Most nuclear deterrence theorists would argue that this threat ought to fail. The disparity in the sizes of the nuclear arsenals and the difference in the reliability of the means of delivery ought to make the smaller state’s threat certain to fail.
But the fact is, the small state’s threat could succeed. Leaders lose their nerve all the time for a multitude of reasons. We have already seen several cases of nuclear threats that ought to have worked—like the existence of an Israeli nuclear arsenal and its implicit threat during the 1973 Middle East War—that failed. In any crisis, nuclear deterrence can fail—even in the most unlikely of circumstances. And in any crisis, nuclear deterrence can also succeed—even when experts predict that it shouldn’t. With nuclear deterrence, there are never any guarantees.
In poker, there is no way to predict scientifically whether a bluff will work. There may be factors that often point toward a bluff succeeding, but there is no calculation that can tell you with certainty when it will work. Nuclear deterrence is not chess. It is not logic. It is not calculation. It is gambling. We can only know as much about nuclear deterrence as we know about betting in poker.
So, we have to treat nuclear deterrence carefully—like an explosive chemical compound that is unstable. Most times the compound will probably work. But we can’t rely on it always working. Human beings have emotions, and no matter how strongly we set our conscious minds to control our emotions, sometimes our emotions overwhelm our better judgment and spark actions we didn’t plan. Nuclear deterrence is not controlled or certain. It is always a gamble.
Conclusion
So, we have examined the theory of nuclear deterrence and the confident way that nuclear weapons advocates talk about it. And they do often speak as if they have found the black cat. But they haven’t. There is no way to get a measuring device inside the human brain to measure the many different factors that go into a nuclear deterrence decision, and no leader has ever had his or her brain measured in this way during a crisis. There is no reliable evidence about what makes nuclear deterrence work and what makes it fail.
Nuclear deterrence theory is flawed from end to end. Leaders do not feel the deaths of civilians sufficiently to make deterrence failure-proof. The contention that rationality will hold human beings back in time of crisis is contradicted by human nature. The hope that fear will prevent nuclear war is based on the false assumption that fear is always the strongest emotion. And every time you employ nuclear deterrence, even in the most tried and true circumstances, there is a real chance that it will fail.
Footnotes
Franklin E. Zimring and Gordon J. Hawkins, Deterrence: The Logical Threat in Crime Control (Chicago: The University of Chicago Press, 1973), pp. 1–2.
Thomas Schelling, Arms and Influence (New Haven, CT: Yale University Press, 1966), p. 7.
Robert Jervis, “Deterrence and Perception,” in Strategy and Nuclear Deterrence, ed. Steven E. Miller (Princeton, NJ: Princeton University Press, 1984), p. 59.
For more on rational choice, see Itzhak Gilboa, Rational Choice (Cambridge, MA: The MIT Press, 2012); Michael Allingham, Choice Theory: A Very Short Introduction (Oxford: Oxford University Press, 2002); and Jeffrey Friedman, ed., The Rational Choice Controversy: Economic Models of Politics Reconsidered (New Haven, CT: Yale University Press, 1996).
Except for a few situations in which human beings do not have adequate control over their circumstances, like South Sudan, for example.
H. R. Haldeman, The Ends of Power (New York: Times Books, 1978), pp. 82–83. It is worth noting that Nixon’s belief that nuclear threats were key to ending the Korean War is challenged by most historians and now considered doubtful.
Hannah Arendt’s comment on violent action applies as well, it seems to me, to violent threats. “Action is irreversible, and a return to the status quo in case of defeat is always unlikely. The practice of violence, like all action, changes the world, but the most probable change is to a more violent world.” Hannah Arendt, On Violence (New York: Harcourt, Brace & World, 1969), p. 80.
Freeman Dyson, Weapons and Hope (New York: Harper & Row Publishers, 1984), p. 245.
A sense of humiliation is an underappreciated cause of deterrence failure. There is good evidence for it playing a key role in the deterrence failures of the Middle East War of 1973, the Falkland Islands War, and the Cuban Missile Crisis. See works by Richard Ned Lebow and Janice Gross Stein, and especially Blema S. Steinberg, “Shame and Humiliation in the Cuban Missile Crisis,” pp. 653–690.
Dobbs, One Minute to Midnight, pp. 203–204.
This is a quote from a conversation with a widely respected, universally liked former government official who is considered one of the leading experts on nuclear weapons in the world today. It is a view I have also heard in conversation with other nuclear deterrence experts many times.
(c) Ward Hayes Wilson 2023 All rights reserved.