“Likelihood” is one of those words, like “risk” itself, that appears repeatedly in the risk management literature but casts as much shadow as light on the subject. In this article, we’ll see that likelihood is a probability, and why it is sometimes best expressed as an expected frequency of occurrence.

An essential part of risk assessment is to evaluate the “likelihood” of threats and losses. For example, NIST SP800-30 says that the purpose of risk assessment includes “inform[ing] decision makers … by identifying … the likelihood that harm will occur.” SP800-30 has a whole appendix (G) devoted to helping analysts map fuzzy English phrases, such as “the threat is highly likely to have adverse impacts,” onto “semi-quantitative values” on a scale from 0 to 10 or 100. The point of all this seems to be solely to help the analyst extract some useful-looking results from fuzzy thinking.

To clear this up, let’s get back to basics. The dictionary says that likelihood is “the probability or chance of something.” From these clear roots of likelihood as probability, most risk-assessment methodologies immediately wander off into a weed field of qualitative verbiage.

**The FAIR Approach**

FAIR takes the direct approach. Rather than being distracted by the qualitative weeds, we should just accept that *likelihood is a probability, and a probability is a number.* Probability ranges from 0 to 1, not from 1 to 10 and certainly not from “very low” to “very high.”

In some contexts, for example Threat Event Frequency, FAIR measures “likelihood” not as a probability directly but as the expected rate of occurrence in a standard unit of time, usually a year. There are three good reasons for this: psychological, financial, and technical.

- The **psychological** reason is that it is often easier to think in terms of frequencies than probabilities. It is more intuitive to think of a rare event as occurring, on average, once every 10 or 25 or 100 years than as having a 10%, 4%, or 1% probability of occurring in a given year. Government flood and weather agencies tend to express unusual events as expected annual frequencies. The Federal Emergency Management Agency publishes maps showing 100- and 500-year flood plains, those areas that can expect inundation once every 100 or 500 years on average. So probability expressed as expected frequency is a practice in good standing.
- The **financial** reason is that annual frequencies are needed for budgeting and planning. The basic time unit of management planning is one year. Budgets must be set, investment decisions made, and insurance policies bought, all on an annual basis. Investors and regulators require annual reporting, which implies annual planning. Consequently, the risk manager needs to know, for frequent threats, how many times to expect them to occur in a year. A large company may average 25 lost laptops per year; you can budget for that. But assessing the probability of one or more laptops being lost as 100% doesn’t help the budget much.
- The third reason is **technical**, and is illustrated by the two previous examples. For rare events, the average time between events is nearly the reciprocal of the annual probability: a 100-year flood is expected to occur in the coming year with a 1% probability. But what’s the reciprocal of 25 laptop losses per year? If we calculate as we would for floods, we’d say that 25 per year implies an average time between losses of 1/25th, or 0.04, of a year (analogous to an average time between floods of 100 years). The reciprocal of that would be a “probability” of 25, which makes no sense; a probability cannot exceed 1.
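The flood and laptop examples above can be reconciled with a standard conversion; this is my addition, not part of the article's argument. If events arrive independently at an average rate of λ per year (a Poisson-process assumption), the probability of at least one event in a year is 1 − e^(−λ). For rare events this is almost exactly λ, which is why frequency and probability are interchangeable for floods; for 25 laptops a year it simply saturates at 100%. A minimal sketch:

```python
import math

def annual_probability(annual_frequency: float) -> float:
    """Probability of at least one event in a year, assuming
    events arrive independently (Poisson process) at the given
    average annual rate."""
    return 1.0 - math.exp(-annual_frequency)

# A 100-year flood: frequency 1/100 per year.
print(round(annual_probability(1 / 100), 4))  # ~0.01, the article's 1% figure

# 25 lost laptops per year: probability of at least one loss.
print(round(annual_probability(25.0), 4))     # ~1.0, i.e. effectively 100%
```

Note that the function never returns a value above 1, which is exactly the property the naive reciprocal calculation lacks.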

Where it *does* make sense to use probabilities instead of annual frequencies is vulnerability. Vulnerability is the probability of a Threat Event becoming a Loss Event. Threat Event Frequency (times per year) times Vulnerability (probability) equals Loss Event Frequency (times per year), exactly what the manager needs to know for budgeting.
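The arithmetic is one line; the figures below are illustrative inventions, not numbers from the article:

```python
# Loss Event Frequency = Threat Event Frequency x Vulnerability.
tef = 40.0            # threat events per year (hypothetical figure)
vulnerability = 0.05  # probability a threat event becomes a loss event
lef = tef * vulnerability
print(lef)  # 2.0 loss events per year, on average
```

The units work out as the text says: events per year times a dimensionless probability gives events per year, a quantity a budget can absorb directly.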

These are the reasons that FAIR sometimes uses frequencies and sometimes probabilities. As usual, FAIR guides us well through what otherwise easily becomes a muddle. “FAIR is a framework for critical thinking,” writes Jack Jones, creator of FAIR.

**Room for Improvement in NIST SP800-30**

So why doesn’t NIST just say “probability”? I suspect NIST is, commendably, trying to make its publications accessible to a general readership, and I suspect the authors believe that readership is not a little math-averse, and even more probability-averse. But when cheap calculators have statistical buttons for mean and standard deviation, and elementary statistics is taught in high school, we may hope that the risk management profession will quickly mature beyond probability phobia, and that uttering the “p-word” will no longer draw icy stares in the executive suite.

Incidentally, statisticians have a specific technical definition of “likelihood”: the likelihood of a set of parameter values is the probability of the observed data given those parameters. This is another example of how more mature professions have given rigorous meaning to ordinary words, and gotten away with it. The risk management profession can do it too, if we are determined.

*"The believer is happy; the doubter is wise." Hungarian proverb*