Averting Catastrophe: When the Leaders Are the Problem
This text was first published as an afterword to Flirting with Disaster: Why Accidents Are Rarely Accidental by Marc S. Gerstein and Michael Ellsberg.
Dr. Gerstein’s final chapter has given guidelines for leaders on how they might avert the kinds of catastrophes described in this book. It would be good for society (and all organizations) if more leaders exhibited this kind of concern and followed the suggestions he gives.
However, in my own experience in government, and in my study of national security policy catastrophes in the decades since, I have come to believe that the most dangerous practices in the national security realm reflect priorities set, in general, by top officials: getting reelected, avoiding condemnation for past actions, or other political or bureaucratic objectives. Those priorities generally take precedence over safety and the prevention of public harm.
The behavior of the people down below in the hierarchy is generally responsive to those priorities, because the way for them to keep their jobs and get ahead is, self-evidently, to conform to the priorities of their superiors, and especially of the top boss. It isn’t as though people lower in the organization themselves profit by adopting those priorities over others, such as safety. But they want to keep their jobs, and they keep them by delivering what their superiors want. And what those superiors often want is help in avoiding or concealing documentation of warnings or recommendations that might convict them, on later examination, of self-interest or recklessness in choosing or continuing policies that failed.
Many of the examples in this book involve leaders consciously gambling with other people’s lives, on a catastrophic scale. In the case of Challenger, there was only a single instance when the engineers from Morton Thiokol, who had to sign off on the launch, tried to stop it. It wasn’t as though they were Chicken Littles, always getting in the way and making trouble. Launches were routinely being postponed for a day, but not by the Thiokol engineers. So that was an unprecedented warning by them. Yet the decision-makers went ahead.
You don’t have to be especially sympathetic to the decision-makers in these cases to assume that they didn’t consciously desire the disastrous outcomes that arose. That’s generally obvious. But the public tends to accept as a corollary: “No reasonable, decent person could have consciously risked this outcome if they recognized it was a serious possibility.”
That is a very plausible assumption. It expresses our deeply ingrained sense of ourselves and other human beings. But it is wrong. It is a widely held misunderstanding of the way we ordinary humans act in organizational settings, either in positions of power and responsibility, or as subordinates. Officials who have a public responsibility to make responsible choices do take reckless, unreasonable risks, more often and on a greater scale than most outsiders can even imagine. That fact is unfamiliar because, to avoid accountability and blame, those same officials conceal it, and direct their subordinates to cover it up; and the subordinates do so, again for understandable (though not admirable) career motives, acting as bystanders while risky gambles are undertaken. Dr. Gerstein focuses especially on the latter behavior, that of subordinates. Let me add some reflections on the behavior of the leaders.
What Dr. Gerstein shows is that reasonable people, who are not malicious, and whose intent is not to kill or injure other people, will nonetheless risk killing vast numbers of people. And they will do it predictably, with awareness. The Merck officials knew they were risking vast numbers of lives with Vioxx. So did the decision-makers responsible for protecting New Orleans. They knew the risks from the beginning, at every stage. In these and other cases, the responsible decision-makers may have underrated the risks in their own minds, but they knowingly made great efforts to conceal the evidence, at the time and later, from those who might judge differently.
In most of the cases in this book—Challenger, Katrina, Vioxx, Columbia, Chernobyl, Andersen—the leaders chose, in the face of serious warnings, to consciously take chances that risked disaster. What are the circumstances under which leaders take these kinds of gambles? My own experience and research suggest, very often, the following answer: when the potentially disastrous gamble offers the possibility of avoiding loss altogether, coming out even or a little ahead; and when the alternative to taking the gamble assures the certainty of loss in the short run—a loss that impacts the leader personally.
The sure loss that is rejected may appear small or even trivial to an observer, compared with the much greater damage, perhaps societally catastrophic, that is risked and often subsequently experienced. The latter damage, however, may be to “other people,” outside the decision-maker’s organization or even nation, and inflicted in “the long run”: thus, less easily attributed to this decision-maker, who may well have moved on by the time of the disaster. In effect, the decision-maker acts as if a sure, short-term loss to his own position—a perceived failure, risking his job or reelection or his influence—were comparably disastrous to a possible social catastrophe that costs many lives: an avoidable war, a widely used drug that proves to have lethal side effects, a dangerous product, the explosion of a nuclear plant or space vehicle.
In the leader’s eyes, both of these outcomes are “disasters.” One of them, resulting from a particular course of action, is sure to occur. The other is uncertain, a possibility under another course of action, though perhaps very likely—and it is combined with the possibility, not available with the other course, of coming out even or perhaps ahead, winning or at least not losing. In choosing the latter option, he sees himself as accepting the possibility of a loss—in the hope of coming out even—rather than accepting the certainty of failure, of defeat. It seems—and it is so presented to him by some advisers—a simple, inescapable decision, “no real choice”: the possibility of winning, or at least of avoiding or postponing defeat, versus a “no-win” course of action or, worse, a sure loss in the short run. He and these advisers simply ignore the fact that the scale of the respective losses, and who it is that mainly suffers them, are vastly different in the two courses. (I observed this pattern over and over in the bureaucracy during the Vietnam War, and it is evident in current advocacy of occupying Iraq or attacking Iran.)
It was, in fact, the experimental work on choice by Kahneman and Tversky described by Dr. Gerstein that led me to recognize the frequency of the above choice-context, and of the resulting choice of a gamble involving possible catastrophe, as a common precursor to organizational or social disaster. In particular, these researchers’ discovery of the special salience given to “sure” outcomes, and of the greater strength of the impulse to avoid any loss—relative to some chosen benchmark—than to increase one’s gain, led me to understand in a new way otherwise baffling decisions that have led to major catastrophes in national security.
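To make that asymmetry concrete, here is a minimal illustrative sketch in Python. The valuation rule is a deliberately crude simplification in the spirit of Kahneman and Tversky’s estimates of loss aversion and diminishing sensitivity; the probabilities and costs are entirely hypothetical and drawn from no actual case in this book. The point is only structural: a sure, small loss to the leader’s own position can feel subjectively worse to him than a gamble that risks a far larger catastrophe, most of whose costs fall on others.

```python
# Purely illustrative: all probabilities and costs below are hypothetical.
# The valuation rule is a crude simplification inspired by Kahneman and
# Tversky's experimental estimates of loss aversion and diminishing
# sensitivity to losses; it is not a model of any case in this book.

LOSS_AVERSION = 2.25   # a loss is felt roughly twice as strongly as an equal gain
DIMINISHING = 0.88     # ever-larger losses hurt less than proportionally more

def value_to_leader(outcome):
    """Subjective value of an outcome, measured against the leader's own position."""
    if outcome >= 0:
        return outcome ** DIMINISHING
    return -LOSS_AVERSION * ((-outcome) ** DIMINISHING)

# Option A: accept a certain short-term personal setback (postpone the launch,
# forgo the escalation, admit the drug's risks). Cost to the leader: 10 units.
sure_loss = value_to_leader(-10)

# Option B: gamble. A 30% chance the leader "comes out even" (cost 0), and a
# 70% chance of disaster, whose cost falls mostly on others and on the long
# run, so the cost to the leader himself is only modestly larger: 14 units.
gamble = 0.3 * value_to_leader(0) + 0.7 * value_to_leader(-14)

print(f"Sure loss, as the leader feels it: {sure_loss:.1f}")  # about -17.1
print(f"Gamble, as the leader feels it:    {gamble:.1f}")     # about -16.1
# The gamble feels less bad to the leader, even though its expected harm to
# the public may be vastly greater; that harm never enters this calculation.
```

The two parameter values (2.25 and 0.88) are the commonly cited estimates from Tversky and Kahneman’s later experimental work; everything else is invented for the sake of the illustration.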
Applying hypotheses suggested by this research to decisions—including the escalation of the Vietnam War (in which I participated personally, at a staff level), the decision to invade and occupy Iraq, and serious, secret threats to initiate nuclear war in more than a dozen crises—I have been forced to the following unhappy conclusion (which applies, on a smaller but still tragic scale, to many of the examples in this book): Men in power are willing to risk any number of human lives to avoid an otherwise certain loss to themselves, a sure reversal of their own prospects in the short run.
That grim proposition sounds extreme, I would say, largely because of near-universal and effective efforts to conceal the organizational decision-making data on alternatives and prospects that would reveal such preferences. Left unconcealed, these data would point to culpability, recklessness, perhaps criminality on the part of specific decision-makers or a whole organizational team. The consequences could range from embarrassment and loss of prestige and influence to expulsion from job or office, the downfall of an administration, even a prison sentence. The cover-up to avoid such accountability is usually successful. Hence, specific disasters—when the gambles are lost—appear to the public as shocking, inexplicable surprises. (And the public mistakenly infers, as it is meant to, that the disaster appears the same way to the decision-makers involved.)
One lesson of this book is that you will not reduce those risks adequately by action within the firm or government agency. The organization has to be monitored by other organizations that are not under the same management, that don’t respond to the same boss. You can set up processes within the organization that make truth-telling, realistic assessments, and warnings of danger somewhat more likely. But that isn’t close to being an adequate solution. Subordinates who act like bystanders (to keep their jobs) are indeed part of the problem, as Gerstein argues, but the organization’s leaders themselves are the major part.
The most promising solution—in the case of government—is going back to the system that our founders set up. It obviously provided no guarantee, but it was an ingenious arrangement for confronting men of power with other men of power: checks and balances; the investigative powers of Congress, with subpoenas; investigators with some degree of independence from the president; an independent judiciary. All of these are things that you don’t have in a dictatorship. They are institutions that leaders such as Vice President Richard Cheney openly disdain.
These are not just luxuries that make us feel more free and privileged. They are vital safety mechanisms. Democratic, republican, constitutional government of the form invented here, revered at least in principle till recently, is less efficient and decisive than unrestrained executive power in what is effectively an absolute monarchy or dictatorship. Things move less fast, and there are constant complaints that nothing gets done, compared with a “unitary executive,” a presidency of unlimited “inherent powers” of the sort that Cheney and his special band of legal advisers prefer and proclaim. But the latter leads straight to a succession of Iraqs. As Tom Paine put it, most wars arise from “the pride of kings.”
Similar checks on unaccountable power and secrecy are needed, as Gerstein’s case studies show, in nongovernmental organizations and corporations. To mention a spectacular case not covered in this book, in which the cover-up was even more blatant than in most of Gerstein’s examples and the lethal effects even greater (comparable to the death tolls in major wars): tobacco executives didn’t need more truth-telling within their organizations to reduce vast dangers to the public. (For a recent account, see The Cigarette Century by Allan M. Brandt.) They were busily engaged in muffling every subordinate who brought up any warning, and in preventing or neutralizing any warning by outsiders. All the major tobacco CEOs perjured themselves when they testified under oath before Congress, in effect, that “We have no knowledge that our product is carcinogenic, or that we market it to minors, or that it is addictive.” That was clear-cut perjury in every case, quite apart from the arguable criminality and certain lethality of their practices. Yet not one of them has been brought up on criminal charges, or even charged with contempt of Congress.
Such indictments would be useful. It would save lives in the future if not only figures such as Jeffrey Skilling of Enron but also, more important, a lot of other leaders who take and conceal risks to the lives of others were to be indicted or impeached, subjected to criminal prosecution, and, if convicted, sent to prison.
But above all, we need more whistle-blowers from within. Their truth-telling to outside authorities and audiences is essential. And the only way to get it—since dangers to their own careers in their organizations cannot be eliminated—is to somehow encourage them to accept those risks, for the benefit of others.
Is that asking the impossible? Difficult, unusual, unlikely, yes: yet it is humanly possible, and essential. Humans have the capability for great concern, altruism, and even self-sacrifice in the interest of others outside their immediate families and teams, and they very often show it: only not often enough, indeed quite rarely, in their official roles within organizations. Unfortunately, as human beings, we also all have the capability of being selective in our concern, and of being manipulated in our selectivity of concern by our leaders and colleagues in our groups.
A major reason for the occurrence of disasters is that, as humans, we often choose keeping our job, protecting our reputation, getting promoted, maintaining our access to inside information, getting reelected, assuring a college education for our children, preserving our marriage, and holding on to our house in a nice neighborhood—all considerations that are neither trivial nor discreditable for any of us—over actions, including truth-telling to the public, that would risk some of these but could potentially save vast numbers of other people’s lives.
I would like readers to realize—and this book has great potential for alerting them—that there may come times when the amount of harm they could avert by speaking out could well outweigh the personal harm they might suffer by doing so, great though that might be.
When I released the Pentagon Papers in 1971, former senator Wayne Morse told me that if I had given him those documents at the time of the Tonkin Gulf Resolution in 1964 (when I had many of them in my office safe in the Pentagon), “The Resolution would never have gotten out of committee. And if it had been brought to a vote, it would never have passed.” That’s a heavy burden to bear. But scores of other officials, perhaps a hundred, could have given those documents to the Senate as well as I.
More recently, any one of a hundred people within the government could have averted the Iraq War by telling the public—with documents—what they knew about the lies the president was feeding the country. Yet no one did. A middle manager or even a lower-level person could have saved the Challenger, or sounded the alarm on Vioxx. Shouldn’t one of them have done it, or more than one? Every one of the stories Dr. Gerstein tells could have had a happier ending if his book had existed earlier and inspired one person in the respective organization—at the top, the bottom, or in between—to act with moral courage.
When confronted with looming catastrophes, people within large organizations often think, “Somebody else will take care of this. And surely the top people know more than I do. It’s their job to take care of it, and surely they will.” The truth is, there’s no likelihood at all that the leaders will take care of it. If readers who find themselves facing organizational disasters realize, perhaps from this book, “It’s up to me; if I don’t do it, it’s probably not going to get done. The others aren’t going to do it. Maybe I’m the one who needs to do it,” some of them may be more willing to take personal risks to avert catastrophes.
Thus, reading this book could change lives. From the examples given, a reader could recognize two things. First, in the words of a Chinese proverb my wife, Patricia, likes to quote: “If you don’t change course, you are likely to end up where you are heading.” If the course your team, your organization, or your nation is on looks to you as though it is going over a cliff, heading for a disaster, it may well be doing so.
Second, readers should realize: “Even if I see this, and lots of other people see it too, it does not follow that somebody else will take care of it. Disasters occur because leaders often choose crazy or dangerous courses and people like me don’t rock the boat.” You, the reader, can choose otherwise.
In the situations Dr. Gerstein describes, the leaders do not lack for subordinates giving warnings within the organizational chain of command. The problem is that the warnings are stifled or overridden; subsequently, those who see the dangers, and even see them happening, keep silent. My hope is that people reading this book might decide that averting catastrophe can be worth going outside the organization—warning the public, Congress, investigative bodies, and the media directly, with documents to back it up. Many individuals inside government and corporations, from low-level clerks to upper managers and cabinet members, have that power—at the risk of their careers, to be sure—to tell the truth, and perhaps to rescue their own organizations or countries from disaster, as well as other potential victims.
For the last six years, since the Iraq War first approached (and, more recently, as equally disastrous prospects of an attack on Iran have arisen), I have been urging patriotic and conscientious insiders who may be in the situation I once was in—holding secret, official knowledge of lies, crimes, and dangers of impending, wrongful, catastrophic wars or escalations—to do what I wish I had done in 1964 or early 1965, years earlier than I did: go to Congress and the press and reveal the truth, with documents. The personal risks are real, but a war’s worth of lives might be saved.