Jack Jones: The Quality of Qualitative Risk Measurements
What makes for a high-quality qualitative risk measurement? The answer is simple. We just have to go back to the scope, model, and data elements mentioned in my last blog post (What Do Qualitative and Quantitative Risk Measurements Have in Common?). If you haven’t read that post, it would probably make sense to do so before finishing this one.
The first two elements (scope and model) are almost binary in how they make or break a risk analysis (whether qualitative or quantitative). If the scope of a measurement is unclear, then the results will be unreliable. Similarly, if the model being used is fundamentally flawed, the results will be unreliable. Unfortunately, the vast majority of qualitative risk measurements that I’ve seen are broken in both dimensions.
This post is Part 2 of the series Jack Jones on Qualitative vs Quantitative Risk Measurement.
Jack Jones is Chairman of the FAIR Institute and creator of FAIR™ (Factor Analysis of Information Risk), the international standard for quantitative risk analysis, introduced in the book Measuring and Managing Information Risk. Jack recently released FAIR-CAM™, the FAIR Controls Analytics Model™.
Scope problems in risk analysis
Ask a cybersecurity professional or auditor how much risk is represented by a weak password, a missing patch, excessive access privileges, a flat network architecture, poor backup/recovery capabilities, etc., and they will almost always give you a High, Medium, or Low answer.
But what have they just measured?
Which loss event scenarios are relevant to the control in question, and which other controls might compensate for the deficient one? Rarely will the person have taken the time to clearly define the scope of what they just measured, for themselves, let alone for anyone else.
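To make the point concrete, here is a minimal sketch of what "clearly defining the scope" could look like in practice. The structure and field names below are illustrative assumptions, not an official FAIR artifact; the idea is simply that a scoped measurement names the asset, the threat, the loss effect, and the compensating controls explicitly instead of leaving them implicit.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisScope:
    """An explicit scope statement for a single risk measurement.

    Every field here is something the "weak password = High risk"
    answer usually leaves undefined.
    """
    asset: str                    # what is at risk
    threat: str                   # who or what could act against it
    effect: str                   # e.g., confidentiality, availability
    deficient_control: str        # the control being evaluated
    compensating_controls: list = field(default_factory=list)

# A "weak password" finding, scoped to one concrete loss event scenario
# (all values below are hypothetical):
scope = AnalysisScope(
    asset="customer database",
    threat="external cybercriminal",
    effect="confidentiality",
    deficient_control="weak password policy on admin accounts",
    compensating_controls=["MFA on remote access", "account lockout"],
)
print(scope.deficient_control)
```

Two analysts who disagree about the "risk of weak passwords" are often, without realizing it, filling in these fields differently.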
Mental models in qualitative risk measurement
Let’s take another look at the typical High, Medium, or Low answer for risk.
What model did they use to arrive at that measurement?
It is rare to find someone who uses a clearly defined and vetted model when doing qualitative risk measurement. Almost always, it’s simply someone’s mental model at work, and nobody (including the person who provided the measurement) knows what mental calculation was used to combine the many factors to arrive at their answer.
And then there’s data for risk analysis...
High/Medium/Low risk, blah, blah, blah.
But what data did they use to arrive at that measurement?
Here again, almost without exception they won’t know or be able to tell you. Somewhere in their subconscious are data points — some valid, others not — which they used to arrive at their result.
The bottom line
Keep this in mind the next time you’re sitting across the table from a colleague, consultant, or other stakeholder and having what feels like a religious argument about whether something is High, Medium, or Low risk. In that circumstance, they probably aren’t the moron you think they are, and you’re probably not the moron they think you are. You’re just using different scopes, different mental models, and different data to arrive at your answers.
You can use FAIR to improve the scope and model elements of your risk measurements, and well-established methods (such as Douglas Hubbard’s calibrated estimation techniques) to improve the quality of data you use. This is true for both qualitative and quantitative measurements. Does this require more thought and effort than simply waving a wet finger in the air? Yes. Welcome to the world of responsible risk measurement.
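As a rough illustration of the quantitative path, here is a minimal Monte Carlo sketch in the spirit of FAIR: calibrated 90%-style ranges for loss event frequency and loss magnitude are sampled and multiplied into an annualized loss distribution. This is a simplification and an assumption on my part, not the FAIR standard itself (FAIR decomposes frequency and magnitude much further, and practitioners typically use PERT or lognormal distributions rather than the triangular one used here); all numbers are invented for the example.

```python
import random

random.seed(0)  # reproducible sketch

def sample_range(low: float, high: float) -> float:
    """Sample from a calibrated estimate of a 90% range.

    Simplifying assumption: a triangular distribution between the
    bounds stands in for a properly fitted distribution.
    """
    return random.triangular(low, high)

N = 10_000
losses = []
for _ in range(N):
    lef = sample_range(0.1, 2.0)                # loss events per year
    magnitude = sample_range(50_000, 500_000)   # dollars per event
    losses.append(lef * magnitude)

losses.sort()
print(f"Median annualized loss: ${losses[N // 2]:,.0f}")
print(f"90th percentile:        ${losses[int(N * 0.9)]:,.0f}")
```

Note that the inputs are exactly the same judgments a qualitative analyst makes implicitly; the only added work is writing the ranges down, which is the point of the paragraph above.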
And by the way… if you go to the trouble of clearly scoping an analysis, applying a clearly defined model, and smartly applying whatever data you have, then the difference in effort between qualitative and quantitative analysis virtually disappears. In which case, why would anyone want to accept the inherent limitations of qualitative risk measurements?