How Expected Loss Can Be a Misleading Estimate of Risk


What is risk?

"Risk is the likelihood of loss times the amount of loss if the event occurs." 

We’ve heard it a million times. It is more a statement of principle than the formula it pretends to be.
It has two fatal flaws as a guide to action.

The first flaw, as this idea is usually implemented, is the use of qualitative categories such as “high”, “medium”, and “low” for both the likelihood and the amount (magnitude) of loss. The FAIR (Factor Analysis of Information Risk) taxonomy fixes this flaw: likelihood becomes an actual probability, and loss magnitude is quantified in money or some other measure.

A less-appreciated feature of a FAIR analysis is that it produces a whole spectrum of possible loss magnitudes and their associated probabilities. This is the probability distribution of potential loss magnitudes, sometimes called Value at Risk (VaR).

This is a critical contribution of the FAIR approach. The “risk” of a decision or a scenario is not a number but a distribution of possible values. The decision maker’s job is to choose among the distributions for the available options. This is a hard job, and there is no known “scientific” way of doing it.

Risk is a distribution, not a number

In this note, I’ll give a simple example that shows why risk is a distribution, not a number, and then show why the distinction matters.

Take two events from your personal world. Event 1 is that your house suffers costly damage from causes such as storms, floods, fire, or the failure of major appliances. Looking at your history of such expenses and the cost of homeowner’s insurance, you put the loss at $4,000 if something were to happen, and you figure an occurrence rate of once in ten years. Multiplying, you get a “risk” of $4,000 x 1/10 = $400 per year.

Event 2 is that you are in a severe car crash. You incur $1 million in uninsured medical expenses, are disabled, suffer a $5 million reduction in quality of life, and lose $2 million in lifetime earnings, for a total loss magnitude of $8 million. Considering the number of auto crashes in the US per year, the number of miles you drive, and your driving record, you estimate the probability of suffering such an accident at 1 in 20,000 per year. Multiplying $8 million by 1/20,000, you get a “risk” of $400 per year.
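The arithmetic is easy to spell out. Here is a minimal sketch in Python, using only the hypothetical figures from the two events above:

```python
# Expected annual loss = probability of occurrence x loss magnitude.
# All figures are the hypothetical ones from the example above.

house_loss = 4_000          # dollars per occurrence
house_prob = 1 / 10         # probability of occurrence per year

crash_loss = 8_000_000      # dollars per occurrence
crash_prob = 1 / 20_000     # probability of occurrence per year

print(house_loss * house_prob)   # 400.0 -> "$400 per year"
print(crash_loss * crash_prob)   # 400.0 -> "$400 per year"
```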

The implication of this simple math is that the “risk” of house damage is the same as the “risk” of a crippling auto accident. Do you believe it? Of course not, but why not?

This is the problem you get into when you try to compare low-cost, high-probability events directly with low-probability, catastrophic events using a single number. To model the risk properly, you need to look at the entire distribution of possible losses for each event. Rare, catastrophic losses are just not the same as frequent small ones, as any survivor of Hurricane Sandy can tell you.
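A quick Monte Carlo sketch makes the difference visible. For simplicity it treats each loss magnitude as a single fixed number, whereas a full FAIR analysis would put a distribution on the magnitude as well; the probabilities and amounts are the hypothetical ones from the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 1_000_000  # number of simulated years

# In each simulated year, the event either occurs (with the stated
# probability) or it doesn't; the loss is 0 or the full magnitude.
house_losses = 4_000 * rng.binomial(1, 1 / 10, n_years)
crash_losses = 8_000_000 * rng.binomial(1, 1 / 20_000, n_years)

for name, losses in [("house", house_losses), ("crash", crash_losses)]:
    print(f"{name}: mean ${losses.mean():,.0f}/year, "
          f"95th percentile ${np.percentile(losses, 95):,.0f}, "
          f"worst year ${losses.max():,.0f}")
```

Both distributions have a mean of about $400 per year, but they look nothing alike: the house scenario tops out at $4,000, while the crash scenario is almost always $0 and occasionally $8 million. The single expected-loss number erases exactly the difference that matters.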

This example reveals a split in the way organizations plan for risk. In a big company, the risk of lost laptops is so predictable that it is treated as just a cost: these are low-cost, high-frequency events, and the company literally budgets for them. That is why corporations tell their travelers never to opt for the extra insurance on rental cars (aside from the fact that it is actuarially a bad deal).

If the costs are bigger and the frequencies are smaller, companies may buy insurance or do other things, like encrypt databases, to reduce the probabilities or the loss magnitudes.

But there are events so bad that they could impair the mission of the organization or even threaten its very existence. The probabilities may be very small, but the consequences are catastrophic. This is called “tail risk”, referring to the tail or extreme part of the loss probability distribution. And this is where business continuity planning comes into play. That is why Congress mandated “living wills” for banks that are too big to fail. 
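Continuing the simulation sketched above, the tail-risk question is not what the loss averages but how often the annual loss exceeds some ruinous threshold. The $1 million threshold below is an arbitrary illustration:

```python
# Tail risk looks at the extreme right-hand side of the loss
# distribution, not at its mean. The threshold here is arbitrary.
threshold = 1_000_000  # a hypothetical "mission-impairing" loss level

for name, losses in [("house", house_losses), ("crash", crash_losses)]:
    p_tail = (losses >= threshold).mean()
    print(f"P(annual {name} loss >= ${threshold:,}) = {p_tail:.5f}")

# The house losses never reach the threshold; the crash losses do so
# with probability of roughly 1/20,000 -- tiny, but ruinous when it hits.
```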

The Verizon Data Breach Digest (retrieved March 7, 2016) gives the U.S. Army’s perspective on this split in its guidance on drafting operations orders:

Enemy courses of action (COAs)

Focusing on the "most prevalent" and to an extent the "most lethal" data breach scenarios is akin to the U.S. Army's approach for tactical field units preparing for combat. Within the "Five Paragraph" Operations Order, tactical units prepare for two possible enemy COAs: the "Most Likely COA" and the "Most Dangerous COA." (p.4)

Because a FAIR analysis produces the entire probability distribution of loss magnitudes, including highly improbable catastrophic events, it can identify scenarios where the best risk mitigation strategy is contingency planning. A good source for this perspective is Managing Extreme Financial Risk, by Karamjeet Paul.

In summary, risk is a distribution, not just a single number, and extremes merit contingency plans. FAIR helps the analyst in both ways.
