However, in truth there’s more similarity between qualitative and quantitative risk measurement than you might imagine, and understanding that similarity is essential if we’re going to make meaningful comparisons and choose wisely between them.
This post is Part 1 in the series Jack Jones on Qualitative vs. Quantitative Risk Measurement
Jack Jones is Chairman of the FAIR Institute and creator of FAIR™ (Factor Analysis of Information Risk), the international standard for quantitative risk analysis, introduced in the book Measuring and Managing Information Risk.
Here are some of the most important similarities:
The first two bullets seem pretty obvious, so let’s set them aside for the moment and focus on the last two.
A model is involved any time you’re measuring something that is derived from two or more parts. For example, speed is derived from distance and time. The model for deriving speed from those two parts is simply speed = distance / time. Likewise, risk is derived from multiple elements (FAIR refers to them as “factors”) which are combined in some fashion.
Risk models can vary from the undefined mental model of an individual risk analyst waving a wet finger in the air, to formally defined and vetted models like FAIR.
Regardless, all risk measurements involve the use of a model.
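To make the "model" point concrete, here is a minimal sketch in Python. The speed example comes straight from the text; the risk function is loosely inspired by FAIR's top-level factors (loss event frequency and loss magnitude) but is a deliberately simplified, hypothetical illustration, not the FAIR standard itself, whose factoring is far more detailed.

```python
# A measurement model combines two or more parts into a derived quantity.
def speed(distance_km: float, time_hr: float) -> float:
    """Speed derived from distance and time: speed = distance / time."""
    return distance_km / time_hr

# A deliberately simplified risk model in the same spirit: annualized loss
# exposure derived from two factors (how often loss events occur per year,
# and how much each event costs). Real FAIR analyses decompose further
# and use ranges rather than single point values.
def annualized_loss_exposure(loss_event_frequency: float,
                             loss_magnitude: float) -> float:
    """Expected annual loss = events per year * loss per event."""
    return loss_event_frequency * loss_magnitude

print(speed(120, 2))                            # 60.0 km/h
print(annualized_loss_exposure(0.5, 200_000))   # 100000.0 per year
```

Whether the model is this explicit or lives only in an analyst's head, the structure is the same: parts combined by some rule into a measurement.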
Related: 3 Ways to Game the System with Qualitative Risk Analysis (But Don’t Do It)
But stop and think for a second: when someone proclaims something to be “high risk” rather than “medium risk,” what accounts for the difference? Assuming they didn’t make their choice randomly, their choice had to be based on data. That data may be nothing more than their own uncalibrated estimates of likelihood and impact, or it may be a treasure trove of high-quality empirical evidence.
Regardless, all risk measurements involve the use of data.
If they’re so similar, is qualitative risk measurement just as good as quantitative? The answer to that depends on at least two things:
1. What purpose the analysis is meant to serve — i.e., what problem is it trying to solve?
2. How effectively the qualitative risk analysis is performed.
Regarding the first of these, qualitative analyses have some natural limitations. Because they’re ordinal measurements, when we proclaim something to be “High Risk,” for example, we only know that whatever is being measured has been put into that category. We don’t know whether it sits at the high end of the “High Risk” category or the low end, so any prioritization is constrained by the inherent imprecision of ordinal scales. Qualitative risk measurements also can’t be used to determine the risk-reduction ROI of improvements to controls, nor can they be meaningfully aggregated.
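The aggregation limitation is easy to demonstrate. The scenario names and dollar figures below are hypothetical, invented purely for illustration:

```python
# Quantitative estimates aggregate naturally: dollars add up.
scenario_losses = {           # hypothetical expected annual losses
    "phishing": 250_000,
    "ransomware": 900_000,
    "insider_misuse": 120_000,
}
total_exposure = sum(scenario_losses.values())
print(total_exposure)  # 1270000 -- a meaningful aggregate in dollars

# The same scenarios rated ordinally have no defined arithmetic.
scenario_ratings = {
    "phishing": "Medium",
    "ransomware": "High",
    "insider_misuse": "Low",
}
# Any mapping of labels to numbers (e.g., Low=1, Medium=2, High=3) is
# arbitrary: the resulting "total" of 6 has no units and no
# interpretation, because ordinal scales encode only rank order.
```

This is why quantitative results can roll up into a portfolio-level view while a stack of High/Medium/Low ratings cannot.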
Still, if all you’re looking for is very coarse prioritization, then qualitative risk measurement (when done well) can be a perfectly reasonable approach. This “when done well” caveat is crucial, though. It’s also a big enough topic that it deserves a separate blog post, which will be published next week. Stay tuned!