This last post in the series will focus on briefly summarizing and answering the thoughts/concerns posted by Martin Huddleston in his comments following Part 2. I felt this follow-up post was warranted because some readers seemed to misinterpret Martin’s comments as a claim that quantitative measurements of cyber risk aren’t meaningfully better than qualitative measurements, or that they hadn’t been proven better through empirical evidence. (Read Martin's comments.)
Following his comments to my last post, Martin and I had an excellent telephone conversation in which we were able to explore points of agreement and disagreement. Cutting to the chase — we didn’t disagree on anything. Martin’s many years of experience in security, risk management, and engineering (he currently heads the cyber group for APMG, the international accreditation and examination institute) have simply made him a bit jaundiced about common practices and beliefs in our profession.
During our conversation he emphasized that he:
- Strongly believes in and prefers evidence-based decision-making (as we all should)
- Hates arbitrary and ambiguous qualitative scales (as we all should)
- Believes subject matter experts can be deceived by important subtleties and complexity in the risk landscape, which can make their estimates less reliable (which is true)
- Believes it is critical to faithfully represent uncertainty in risk measurements (which is true)
- Believes that many of the security tools commonly in use today generate profoundly inaccurate reporting due to poor analytics (which is true)
- Resents money that is poorly and/or needlessly spent on security due to the problems listed above (as we all should)
We also agreed that, despite its imperfections, quantified cyber risk measurement using calibrated estimates and analytic methods like Monte Carlo simulation is always going to represent a meaningful improvement over qualitative measurements.
The bottom line is that risk measurement quality (and the decisions that result) can be represented as a continuum. At one end is the informal gut-driven qualitative approach that has driven (and continues to drive) most of the risk decision-making in cyber. At the other end of the continuum is the purely data-driven and highly refined approach that more mature sciences benefit from.
In between there is an evolutionary path that continually improves our ability to make better risk management decisions. That path is what we as a profession need to focus on, and that’s what FAIR and the methods that surround it (e.g., rigorous scenario modeling, calibrated estimation, Monte Carlo simulation, etc.) enable. Anyone who argues that common qualitative measurements, where little or no rigor is applied, are good enough simply hasn’t thought it through and doesn’t have a leg to stand on.
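To make the Monte Carlo step concrete: the idea is to take calibrated range estimates (minimum, most likely, maximum) for loss event frequency and loss magnitude, then simulate many possible years to produce a loss distribution rather than a single point value. The sketch below is a minimal illustration only — the distributions, ranges, and dollar figures are all invented for the example and are not prescribed by FAIR.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is repeatable

def simulate_annual_loss(trials=10_000):
    """Monte Carlo sketch: simulate many possible years of loss.

    Uses triangular distributions as a stand-in for calibrated
    (min, most likely, max) estimates. All numbers are hypothetical.
    """
    losses = []
    for _ in range(trials):
        # Calibrated estimate of loss events per year:
        # min 0.5, most likely 1.5, max 4.0 (hypothetical values).
        # Note: random.triangular takes (low, high, mode).
        frequency = random.triangular(0.5, 4.0, 1.5)
        events = int(round(frequency))
        # Calibrated estimate of per-event loss magnitude:
        # min $10k, most likely $50k, max $500k (hypothetical values).
        total = sum(
            random.triangular(10_000, 500_000, 50_000)
            for _ in range(events)
        )
        losses.append(total)
    return losses

losses = simulate_annual_loss()
print(f"Mean annual loss: ${statistics.mean(losses):,.0f}")
print(f"90th percentile:  ${statistics.quantiles(losses, n=10)[-1]:,.0f}")
```

The output is a distribution, so instead of reporting "risk is High," a decision-maker can see, for example, the expected annual loss alongside a 90th-percentile bad year — a faithful representation of uncertainty rather than a single arbitrary label.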
BTW — I ran this blog post by Martin before publishing to ensure that I didn’t misrepresent his thoughts.
Read more by Jack Jones, chairman of the FAIR Institute and creator of Factor Analysis of Information Risk (FAIR).