Jack Jones: What Do Qualitative and Quantitative Risk Measurements Have in Common?
There are a lot of blog posts and conference presentations that discuss the differences between qualitative and quantitative risk analysis. Most of the time, those discussions focus on the challenges or perceived flaws in one or the other.
However, in truth there’s more similarity between qualitative and quantitative risk measurements than you might imagine, which is important to understand if we’re going to make meaningful comparisons and choose wisely between them.
This post is Part 1 in the series Jack Jones on Qualitative vs. Quantitative Risk Measurement
Jack Jones is Chairman of the FAIR Institute and creator of FAIR™ (Factor Analysis of Information Risk), the international standard for quantitative risk analysis, introduced in the book Measuring and Managing Information Risk.
Here are some of the most important similarities:
- They both have the same purpose — to inform decision-making by representing how much loss exposure exists from one or more loss event scenarios.
- They both have some “scope of measurement.” In other words, something specific is being measured. For example, how much additional risk a particular control deficiency represents.
- Both types of measurements use models.
- Both types of measurements rely on data.
The first two bullets seem pretty obvious, so let’s set them aside for the moment and focus on the last two.
Risk Analysis Models
A model is involved any time you’re measuring something that is derived from two or more parts. For example, speed is derived from distance and time. The model for deriving speed from those two parts is simply speed = distance / time. Likewise, risk is derived from multiple elements (FAIR refers to them as “factors”) which are combined in some fashion.
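To make the "risk is derived from multiple factors" point concrete, here is a minimal two-factor sketch in Python: annualized loss exposure estimated as loss event frequency times loss magnitude, using a simple Monte Carlo simulation. This is an illustration only, not FAIR's full factor tree; the function name, the triangular distributions, and all of the input values are made-up assumptions for the example.

```python
import random

def simulate_annual_loss(n_trials=10_000, seed=42):
    """Monte Carlo sketch of a two-factor risk model:
    annualized loss = loss event frequency x loss magnitude.
    All distribution parameters below are illustrative
    assumptions, not calibrated estimates."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # Loss event frequency per year: triangular(low, high, mode)
        lef = rng.triangular(0.5, 4.0, 1.5)
        # Loss magnitude per event, in dollars: triangular(low, high, mode)
        lm = rng.triangular(10_000, 250_000, 40_000)
        losses.append(lef * lm)
    losses.sort()
    return {
        "mean": sum(losses) / n_trials,
        "p90": losses[int(0.9 * n_trials)],  # 90th-percentile loss
    }

print(simulate_annual_loss())
```

The point of the sketch is simply that the measurement is a defined combination of parts, exactly like speed = distance / time, rather than a single number pulled from intuition.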
Risk models can vary from the undefined mental model of an individual risk analyst waving a wet finger in the air, to formally defined and vetted models like FAIR.
Regardless, all risk measurements involve the use of a model.
Data for Risk Measurement
Data are also part of any risk measurement. This is well understood when we think about quantitative risk measurement; however, data are almost never recognized as a factor in qualitative risk measurements.
But stop to think for a second. When someone proclaims something to be “high risk” rather than “medium risk”, what accounts for the difference? Assuming they didn’t make their choice randomly, their choice had to be based on data. The data may be nothing more than their own uncalibrated estimates regarding likelihood and impact, or the data may be a treasure trove of high-quality empirical evidence.
Regardless, all risk measurements involve the use of data.
Which Raises a Question…
If they’re so similar, is qualitative risk measurement just as good as quantitative? The answer to that depends on at least two things:
1. Good for what purpose — i.e., what problem is the analysis trying to solve?
2. How effectively a qualitative risk analysis is performed.
Regarding the first of these, qualitative analyses have some natural limitations. First, because they’re ordinal measurements, when we proclaim something to be “High Risk” for example, we only know that whatever is being measured has been put into that category. We don’t know whether it’s at the high end of the “High Risk” category or the low end. Consequently, any prioritization is going to be constrained by the inherent imprecision of ordinal scales. Qualitative risk measurements also can’t be used to determine the risk reduction ROI of improvements to controls, nor can you aggregate qualitative measurements.
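The ordinal-scale limitation can be shown in a few lines of Python. The scenarios, dollar figures, and bucket thresholds below are arbitrary assumptions invented for the example; the point is that two scenarios with very different loss exposure can land in the same "High" bucket, and that the labels themselves cannot be meaningfully added, while the underlying quantitative values can.

```python
# Illustrative (made-up) annualized loss exposures, in dollars.
scenarios = {
    "A": 120_000,
    "B": 950_000,   # ~8x scenario A, yet same label below
    "C": 30_000,
    "D": 45_000,
}

def bucket(loss):
    # An example ordinal scale; the thresholds are arbitrary assumptions.
    if loss >= 100_000:
        return "High"
    if loss >= 50_000:
        return "Medium"
    return "Low"

labels = {name: bucket(loss) for name, loss in scenarios.items()}
print(labels)  # A and B both read "High" despite an ~8x difference

# Quantitative values aggregate; ordinal labels do not:
total = sum(scenarios.values())  # a meaningful portfolio number
# "High" + "Low" has no defined meaning, so the labels alone
# cannot be rolled up into a portfolio-level measurement.
print(f"Aggregate exposure: ${total:,}")
```

Note how the labels also erase the information needed for ROI comparisons: a control that cut scenario B's exposure in half might still leave it labeled "High."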
Still, if all you’re looking for is very coarse prioritization, then qualitative risk measurement (when done well) can be a perfectly reasonable approach. This “when done well” caveat is crucial, though. It’s also a topic big enough to deserve its own blog post, which will be posted next week. Stay tuned!