Following his comments on my last post, Martin and I had an excellent telephone conversation in which we were able to explore points of agreement and disagreement. Cutting to the chase: we didn't disagree on anything. Martin's many years of experience in security, risk management, and engineering (he currently heads the cyber group for APMG, the international accreditation and examination institute) have simply made him a bit jaundiced about common practices and beliefs in our profession.
During our conversation he emphasized that he:
We also agreed that, despite its imperfections, quantitative cyber risk measurement using calibrated estimates and analytic methods such as Monte Carlo simulation will always represent a meaningful improvement over qualitative measurement.
The bottom line is that risk measurement quality (and the quality of the decisions that result) can be represented as a continuum. At one end is the informal, gut-driven qualitative approach that has driven (and continues to drive) most risk decision-making in cyber. At the other end of the continuum is the purely data-driven, highly refined approach that more mature sciences benefit from.
In between there is an evolutionary path that continually improves our ability to make better risk management decisions. That path is what we as a profession need to focus on, and that's what FAIR and the methods that surround it (e.g., rigorous scenario modeling, calibrated estimation, Monte Carlo, etc.) enable. Anyone who argues that common qualitative measurements, where little or no rigor is applied, are good enough simply hasn't thought it through and doesn't have a leg to stand on.
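To make the contrast concrete, here is a minimal sketch of what Monte Carlo does with calibrated range estimates: instead of a single gut-feel "high/medium/low" rating, you get a distribution of possible annual losses. This is an illustration only, not the FAIR standard itself; the uniform sampling and all parameter values are simplifying assumptions (FAIR practice typically fits PERT/beta distributions to calibrated 90% confidence ranges).

```python
import random
import statistics

# Hypothetical calibrated estimates (illustrative, not from the post):
# 90% confident loss event frequency is 1-8 events/year,
# 90% confident loss magnitude is $50k-$400k per event.
FREQ_LOW, FREQ_HIGH = 1, 8
MAG_LOW, MAG_HIGH = 50_000, 400_000

def simulate_annual_loss(trials: int = 10_000, seed: int = 42) -> list[float]:
    """Monte Carlo: sample event frequency and per-event magnitude,
    then accumulate a total annual loss for each simulated year."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # Uniform sampling keeps the sketch simple; real FAIR analyses
        # use distributions shaped by calibrated estimation.
        events = round(rng.uniform(FREQ_LOW, FREQ_HIGH))
        losses.append(sum(rng.uniform(MAG_LOW, MAG_HIGH) for _ in range(events)))
    return losses

losses = simulate_annual_loss()
print(f"median annual loss: ${statistics.median(losses):,.0f}")
print(f"95th percentile:    ${sorted(losses)[int(0.95 * len(losses))]:,.0f}")
```

Even this toy version shows the point of the continuum: decision-makers see a range and a tail, not a color on a heat map.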
BTW: I ran this blog post by Martin before publishing to ensure that I didn't misrepresent his thoughts.
Read the rest of the series: Is Cyber Risk Measurement Just Guessing? Part 1 and Part 2
Read more by Jack Jones, chairman of the FAIR Institute and creator of Factor Analysis of Information Risk (FAIR).