Title | The Failure of Risk Management |
---|---|
Author | Douglas W. Hubbard |
Genre | Securities, investments |
Series | |
Publisher | Securities, investments |
Year | 0 |
ISBN | 9781119522041 |
Flint, Michigan, water system contamination (starting 2014)
Samsung Galaxy Note 7 battery failures (2016)
Multiple large data breaches (Equifax, Anthem, Target, etc.)
Amtrak derailments/collisions (2018)
Events such as these and other natural, geopolitical, technological, and financial disasters at the beginning of the twenty-first century periodically accelerate (if perhaps only temporarily) interest in risk management among the public, businesses, and lawmakers. This interest continues to spur the development of several risk management methods.
The methods used to determine risks vary greatly among organizations. Some of these methods—used to assess and mitigate risks of all sorts and sizes—are recent additions in the history of risk management and are growing in popularity. Some are well established and highly regarded. Some take a very soft, qualitative approach and others are rigorously quantitative. If some of these methods are better than others, or if some are fundamentally flawed, we should want to know.
Actually, there is very convincing evidence about the effectiveness of different methods and this evidence is not just anecdotal. As we will see in this book, this evidence is based on detailed measurements in large controlled experiments. Some points about what works are even based on mathematical proofs. This will all be reviewed in much detail but, for now, I will skip ahead to the conclusion. Unfortunately, it is not good news.
I will make the case that most of the widely used methods are not based on any proven theories of risk analysis, and there is no real, scientific evidence that they result in a measurable improvement in decisions to manage risks. Where scientific data does exist, the data show that many of these methods fail to account for known sources of error in the analysis of risk or, worse yet, add error of their own.
Most managers would not know what they need to look for to evaluate a risk management method and, more likely than not, can be fooled by a kind of “analysis placebo effect” (more to come on that).1 Even under the best circumstances, where the effectiveness of the risk management method itself was tracked closely and measured objectively, adequate evidence may not be available for some time.
A more typical circumstance, however, is that the risk management method itself has no performance measures at all, even in the most diligent, metrics-oriented organizations. This widespread inability to make the sometimes-difficult differentiation between methods that work and methods that don't work means that ineffectual methods are likely to spread. Once certain methods are adopted, institutional inertia cements them in place with the assistance of standards and vendors that refer to them as “best practices.” Sometimes they are even codified into law. Like a dangerous virus with a long incubation period, methods are passed from company to company with no early indicators of ill effects until it's too late.
The consequences of flawed but widely adopted methods are inevitably severe for organizations making critical decisions. Decisions regarding not only the financial security of a business but also the entire economy and even human lives are supported in large part by our assessment and management of risks. The reader may already start to see the answer to the first question at the beginning of this chapter, “What is your biggest risk?”
A “COMMON MODE FAILURE”
The year 2017 was remarkable for safety in commercial air travel. There was not a single fatality worldwide from an accident. Air travel had already been the safest form of travel for decades. Even so, luck had some part to play in the 2017 record, and that luck would not last. That same year, a new variation of the Boeing 737 MAX series passenger aircraft was introduced: the 737 MAX 8. Within twelve months of the initial rollout, well over one hundred MAX 8s were in service.
In 2018 and 2019, two crashes of the MAX 8, totaling 339 fatalities, showed that a particular category of failure was still very possible in air travel. Although the details of the two 737 crashes were still emerging as this book was written, they appear to be an example of a common mode failure. In other words, the two crashes may be linked to the same cause. This term is familiar in systems risk analysis in some areas of engineering, where several failures can have a common cause. This would be like a weak link in a chain, but where the weak link was part of multiple chains.
I had an indirect connection to another common mode failure in air travel forty years before this book came out. In July 1989, I was the commander of the Army Reserve unit in Sioux City, Iowa. It was the first day of our two-week annual training and I had already left for Fort McCoy, Wisconsin with a small group of support staff. The convoy of the rest of the unit was going to leave that afternoon, about five hours behind us. But just before the main body was ready to leave for annual training, the rest of my unit was deployed for a major local emergency.
United Airlines flight 232 to Philadelphia was being redirected to the small Sioux City airport because of serious mechanical difficulties. It crashed, killing 111 passengers and crew. Fortunately, the large number of emergency workers available and the heroic airmanship of the crew helped make it possible to save 185 onboard. Most of my unit spent the first day of our annual training collecting the dead from the tarmac and the nearby cornfields.
During the flight, the DC-10's tail-mounted engine failed catastrophically, causing the fast-spinning turbine blades to fly out like shrapnel in all directions. The debris from the turbine managed to cut the lines to all three redundant hydraulic systems, making the aircraft nearly uncontrollable. Although the crew was able to guide the aircraft in the direction of the airport by varying the thrust to the two remaining wing-mounted engines, the lack of tail control made a normal landing impossible.
Aviation officials would refer to this as a “one-in-a-billion” event2 and the media repeated this claim. But because mathematical misconceptions are much more common than one in a billion, if someone tells you that something that had just occurred had merely a one-in-a-billion chance of occurrence, you should consider the possibility that they calculated the odds incorrectly.
This event, as may be the case with the recent 737 MAX 8 crashes, was an example of a common mode failure because a single source caused multiple failures. If the failures of the three hydraulic systems were entirely independent of each other, then the failure of all three hydraulic systems in the DC-10 would be extremely unlikely. But because all three hydraulic systems had lines near the tail engine, a single event could damage all of them. The common mode failure wiped out the benefits of redundancy. Likewise, a single software problem may lie behind multiple 737 crashes.
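The arithmetic behind this point can be sketched briefly. The failure probabilities below are hypothetical, chosen only to illustrate how a common cause swamps the benefit of redundancy; they are not taken from the NTSB report or from Boeing.

```python
# Illustrative sketch: why a common mode failure defeats redundancy.
# All probabilities here are assumed values for illustration only.

p_single = 1e-3  # assumed chance that any one hydraulic system fails on a flight

# If the three redundant systems fail independently, losing all three
# is the product of the individual probabilities:
p_all_independent = p_single ** 3
print(f"independent failures: {p_all_independent:.1e}")  # 1.0e-09, "one in a billion"

# But if a single event (e.g., an uncontained engine failure) can sever
# all three lines at once, the chance of losing all hydraulics is at
# least the chance of that common cause, regardless of redundancy:
p_common_cause = 1e-5  # assumed chance of the common-cause event
p_all = p_all_independent + p_common_cause * (1 - p_all_independent)
print(f"with common mode:     {p_all:.1e}")  # ~1.0e-05, ten thousand times likelier
```

The point of the sketch is that adding a fourth or fifth redundant system would shrink only the first term; the common-cause term stays put, and it dominates.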
Now consider that the cracks in the turbine blades of the DC-10 would have been detected except for what the National Transportation Safety Board (NTSB) called "inadequate consideration given to human factors" in the turbine blade inspection process. Is human error more likely than one in a billion? Absolutely. And human error in large, complex software systems like those used on the 737 MAX 8 is almost inevitable and takes significant quality control to avoid. In a way, human error was an even-more-common common mode failure in the system.
But the common mode failure hierarchy can be taken even further. Suppose that the risk management method itself was fundamentally flawed. If that were the case, then problems in design and inspection procedures, whether in hydraulics or software, would be very hard to discover and much more likely to materialize. In effect, a flawed risk management method is the ultimate common mode failure.
And suppose such methods are flawed not just in one airline but in most organizations. The effects of disasters like Katrina, the financial crisis of 2008/2009, Deepwater Horizon, Fukushima, or even the 737 MAX 8 could be inadequately planned for simply because the methods used to assess the risk were misguided. Ineffective risk management methods that somehow manage to become standard spread this vulnerability to everything they touch.
The ultimate common mode failure would be a failure of the risk management