Title | The Failure of Risk Management |
---|---|
Author | Douglas W. Hubbard |
Genre | Securities, investments |
ISBN | 9781119522041 |
Expert Intuition, Checklists, and Audits
The most basic of these is part of the “everything else” category in exhibit 2.2—expert intuition. This is a sort of baseline for risk management methods: pure gut feel, unencumbered by structured rating or evaluation systems of any kind. There are no points, probabilities, scales, or even standardized categories. There are shortcomings to this, but there is also a lot of value. Experts do know something, especially if we can adjust for various biases and common errors. For other methods to be of any value at all, they must show a measurable improvement over gut feel. (In fact, we will show later that unaided expert intuition isn't the worst of them.)
Other approaches that we lumped into the “everything else” category are various forms of audits and checklists. They don't do any structured prioritization of risks based on real measurements. They just make sure you don't forget something important and systematically search for problems. You definitely want your pilot and surgeon to use checklists; to guard against fraud or mistakes, you want your firm's books to be audited. I mention them here because it could be argued that checklists sometimes perform a pure assessment role in risk management. Most organizations will use audits and checklists of some sort even if they don't fall under the sort of issues risk managers may concern themselves with.
The Risk Matrix
The most common risk assessment method is some form of a risk matrix. A total of 41 percent of respondents in the HDR/KPMG survey say they use a risk matrix—14 percent use a risk matrix based on one of the major standards (e.g., NIST, ISO, or COSO) and 27 percent use an internally developed risk matrix. Internally developed risk matrices are most common in firms with revenue over $10 billion, where 39 percent say that is the method they use.
Risk matrices are among the simplest of the risk assessment methods, and this is one reason they are popular. Sometimes referred to as a heat map or risk map, they also provide the type of visual display often considered necessary for communication to upper management. See exhibit 2.3 for an example of a risk map using both verbal categories and numerical scores.
As the exhibit shows, a risk matrix has two dimensions, usually labeled as likelihood on one axis and impact on the other. Typically, likelihood and impact are then evaluated on a scale with verbal labels. For example, different levels of likelihood might be called likely, unlikely, extremely unlikely, and so on. Impact might be moderate or critical. Sometimes the scales are numbered, most commonly on a scale of 1 to 5, where 1 is the lowest value for likelihood or impact and 5 is the highest. Sometimes these scores are multiplied together to get a “risk score” between 1 and 25. The risk matrix is often further divided into zones where total risk, as a function of likelihood and impact, is classified as high-medium-low or red-yellow-green.
EXHIBIT 2.3 Does This Work? One Version of a Risk Map Using Either Numerical or Verbal Scales
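As a rough illustration of the arithmetic behind such a matrix, the sketch below multiplies 1-to-5 likelihood and impact ratings into a 1-to-25 risk score and buckets the result into zones. The zone cutoffs are hypothetical choices for illustration, not values from the book or from any particular standard.

```python
# Minimal sketch of a numerical risk matrix score, assuming 1-5 scales.
# The zone thresholds below are illustrative only.

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply two 1-5 ratings to get a 'risk score' between 1 and 25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def risk_zone(score: int) -> str:
    """Map a 1-25 score to a red/yellow/green zone (hypothetical cutoffs)."""
    if score >= 15:
        return "red (high)"
    if score >= 6:
        return "yellow (medium)"
    return "green (low)"

score = risk_score(likelihood=4, impact=5)
print(score, risk_zone(score))   # 20 red (high)
```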
There are many variations of risk matrices in many fields. They may differ in the verbal labels used, the point scale, whether the point scales are themselves defined quantitatively, and so on. Chapter 8 will have a lot more on this.
Other Qualitative Methods
The next most common risk assessment method is a qualitative approach other than the risk matrix. These include simply categorizing risks as high, medium, or low without even the step of first assessing likelihood and impact, as with the risk matrix. These also include more elaborate weighted scoring schemes in which the user scores several risk indicators in a situation, multiplies each by a weight, then adds them up. For example, in a safety risk assessment, users might score a particular task based on whether it involves dangerous substances, high temperatures, heavy weights, restricted movement, and so on. Each of these situations would be scored on some scale (e.g., 1 to 5) and multiplied by its weight. The result is a weighted risk score, which is further divided into risk categories (e.g., a total score of 20 to 30 is high and over 30 is critical). This sort of method can sometimes be informed by the previously mentioned checklists and audits.
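A minimal sketch of such a weighted scoring scheme follows, assuming a safety assessment like the one just described. The indicator names, weights, 1-to-5 ratings, and category cutoffs are all hypothetical illustrations rather than values taken from any real scheme.

```python
# Minimal sketch of a weighted risk-scoring scheme for a safety assessment.
# Indicator names, weights, and category cutoffs are hypothetical examples.

WEIGHTS = {
    "dangerous_substances": 3.0,
    "high_temperatures": 2.0,
    "heavy_weights": 1.5,
    "restricted_movement": 1.0,
}

def weighted_risk_score(ratings: dict[str, int]) -> float:
    """Multiply each 1-5 indicator rating by its weight and sum the results."""
    return sum(WEIGHTS[name] * rating for name, rating in ratings.items())

def risk_category(score: float) -> str:
    """Bucket the total score into categories (illustrative thresholds)."""
    if score > 30:
        return "critical"
    if score >= 20:
        return "high"
    return "moderate or lower"

task = {"dangerous_substances": 4, "high_temperatures": 3,
        "heavy_weights": 2, "restricted_movement": 1}
score = weighted_risk_score(task)
print(score, risk_category(score))  # 22.0 high
```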
Mathematical and Scientific Methods
The most sophisticated risk analysts will eventually use some form of probabilistic models in which the odds of various losses and their magnitudes are computed mathematically. This approach is the basis for modeling risk in the insurance industry and much of the financial industry. It has its own flaws, but just as Newton was a starting point for Einstein, it is the best opportunity for continued improvement. It can use subjective inputs, as do the other methods, but it is also well suited to accept historical data or the results of empirical measurements. This includes the probabilistic risk analysis used in engineering as well as the quantitative methods used in finance and insurance. In these methods, uncertainties are quantified as probability distributions. A probability distribution is a way of showing the probability of various possible outcomes. For example, there may be a 5 percent chance per year of a major data breach, and if the breach occurs, there is a 90 percent chance the impact is somewhere between $1 million and $20 million.
As the previous survey showed, quantitative methods usually involve Monte Carlo simulations. This is simply a way of doing calculations when the inputs themselves are uncertain—that is, expressed as probability distributions. Thousands of random samples are run on a computer to determine the probability distribution of an output (say, the total losses due to cyberattacks) from the inputs (the various possible individual types of cyberattacks and their impacts).
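Below is a minimal sketch of such a simulation for the data-breach example just given: a 5 percent annual chance of a breach whose impact, if it occurs, has a 90 percent chance of falling between $1 million and $20 million. Modeling that impact with a lognormal distribution, and the NumPy-based implementation, are assumptions made for illustration, not anything prescribed in the text.

```python
import numpy as np

# Minimal Monte Carlo sketch for the data-breach example in the text:
# a 5% chance per year of a major breach and, if it occurs, a 90% chance
# the impact falls between $1 million and $20 million. Treating that
# impact as lognormally distributed is an assumption for illustration.

rng = np.random.default_rng(seed=1)
trials = 100_000

p_breach = 0.05
lower, upper = 1e6, 20e6            # bounds of the 90% confidence interval
mu = (np.log(lower) + np.log(upper)) / 2
sigma = (np.log(upper) - np.log(lower)) / (2 * 1.645)   # 90% CI -> z ~ 1.645

breach_occurs = rng.random(trials) < p_breach
impact = rng.lognormal(mean=mu, sigma=sigma, size=trials)
annual_loss = np.where(breach_occurs, impact, 0.0)

print(f"Mean annual loss:         ${annual_loss.mean():,.0f}")
print(f"Chance of any loss:       {breach_occurs.mean():.1%}")
print(f"Chance loss exceeds $10M: {(annual_loss > 10e6).mean():.2%}")
```

The lognormal parameters are backed out of the stated 90 percent interval (z of about 1.645), a common convenience when only an interval estimate of the impact is available.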
These methods also include various types of statistical analysis of historical data. Although the lack of data is sometimes perceived as a problem in risk analysis (16 percent of HDR/KPMG survey respondents said this was a problem), statistical methods show you need less data than you think and, if you are resourceful, you have more data than you think.

There are a couple of categories of methods that are not strictly based on statistical methods or probabilities but may get lumped in with mathematical or scientific methods, at least by their proponents. One is deterministic financial analysis. By deterministic I mean that uncertainties are not explicitly stated as probabilities. Readers may be familiar with this as the conventional cost-benefit analysis in a spreadsheet. All the inputs, although they may be only estimates, are stated as exact numbers, but there are sometimes attempts to capture risk. For example, a discount rate is used to adjust future cash flows to reflect the lower value of risky investments. One might also work out best-case and worst-case scenarios for costs and benefits of various decisions.
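For concreteness, here is a small sketch of what such a deterministic cost-benefit analysis might look like, with risk handled only through the discount rate and best-case/worst-case scenarios. All of the cash flows and the 12 percent rate are hypothetical numbers chosen for illustration.

```python
# Minimal sketch of a deterministic cost-benefit analysis: inputs are
# stated as exact numbers, and risk enters only through the discount
# rate and best/worst-case scenarios. All figures are hypothetical.

def npv(cash_flows: list[float], discount_rate: float) -> float:
    """Discount each year's net cash flow to present value and sum them."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Year-0 investment followed by annual net benefits.
expected   = [-500_000, 180_000, 180_000, 180_000, 180_000]
best_case  = [-450_000, 220_000, 220_000, 220_000, 220_000]
worst_case = [-600_000, 120_000, 120_000, 120_000, 120_000]

# A higher discount rate is often used to penalize riskier investments.
rate = 0.12
for label, flows in [("expected", expected), ("best", best_case), ("worst", worst_case)]:
    print(f"{label:8s} NPV: {npv(flows, rate):>12,.0f}")
```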
One final approach that sometimes gets grouped with mathematical methods in risk management is expected utility theory, which gives us a way to mathematically make trade-offs between risk and return. These