Cutting to the heart of the problem, Jack said, “We exist as a profession to help our organizations manage the frequency and magnitude of loss event scenarios. Today’s common risk measurement practices do not support that objective” – specifically, the use of control frameworks like NIST CSF or maturity models like C2M2 as stand-ins for true risk measurement.
Done right, cyber risk analysis should deliver results that enable prioritization of cybersecurity projects based on cost-benefit analysis, as well as communicating risk in the business terms that the organization understands, Jack said. He outlined three requirements to hit that level.
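The cost-benefit prioritization Jack describes can be sketched very simply: rank candidate projects by estimated annual risk reduction per dollar of cost. The project names and figures below are entirely hypothetical, invented for illustration.

```python
# Hypothetical sketch of cost-benefit prioritization: rank candidate
# security projects by estimated annual risk reduction per dollar.
# All names and figures below are invented for illustration only.
projects = [
    {"name": "MFA rollout", "risk_reduction": 400_000, "cost": 50_000},
    {"name": "EDR upgrade", "risk_reduction": 250_000, "cost": 100_000},
    {"name": "SIEM tuning", "risk_reduction": 90_000, "cost": 60_000},
]

# Highest return per dollar first.
ranked = sorted(projects, key=lambda p: p["risk_reduction"] / p["cost"],
                reverse=True)
for p in ranked:
    print(f'{p["name"]}: {p["risk_reduction"] / p["cost"]:.1f}x return')
```

A real analysis would, of course, estimate risk reduction from the scenario analysis itself rather than assume it; the point is only that quantified results make this kind of ranking possible.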
1. Clearly scoped risk scenarios
“Without clear scoping, the odds of measuring risk accurately are much lower, regardless of whether you’re doing qualitative or quantitative measurement,” he said.
2. An accurate, open model for measurement
All risk measurement (that is, measurement of loss event frequency and magnitude) runs on models, and all models involve assumptions – but only open models such as FAIR or NIST 800-30 enable us to understand, challenge, and accept (or not) those assumptions, Jack said. The mental models behind subjective judgments made by risk analysts, or the closed, proprietary models inside risk analytics software, don’t meet that standard.
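At its top level, FAIR's openness is easy to see: risk is factored into loss event frequency and loss magnitude, and anyone can inspect that decomposition. A minimal sketch, using hypothetical point estimates:

```python
# Minimal illustration of an open model's top-level decomposition:
# FAIR factors risk into loss event frequency (events per year) and
# loss magnitude (loss per event). The figures passed in below are
# hypothetical point estimates for illustration only.

def expected_annual_loss(loss_event_frequency: float,
                         loss_magnitude: float) -> float:
    """Annualized loss exposure for one scenario, from point estimates."""
    return loss_event_frequency * loss_magnitude

# Example: a scenario expected roughly 0.5 times/year at ~$200k/event.
print(expected_annual_loss(0.5, 200_000))  # 100000.0
```

Because the factoring is explicit, a reviewer can challenge either input independently – exactly the property that closed, proprietary models lack.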
Download now: Understanding Cyber Risk Quantification: A Buyer’s Guide by Jack Jones (FAIR Institute Contributing Membership required).
3. Data, accounting for uncertainty
Beyond data, Jack acknowledged some other concerns about quantitative risk measurement: it requires training to break old habits and adjust to a new paradigm, and it increases the cost of measurement through the extra effort of gathering data and researching the business problems you are trying to solve. But those added measurement costs are recouped through fewer wasted resources and better focus, which also translates into a lower probability of “very bad days” for the organization.
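One common way to account for uncertainty in the data is to express frequency and magnitude as ranges (low, most likely, high) rather than point estimates, and simulate. The sketch below uses triangular distributions and Monte Carlo sampling; all input figures are hypothetical.

```python
# Hedged sketch of measurement that accounts for uncertainty: express
# loss event frequency and per-event magnitude as (low, most likely,
# high) estimates and run a Monte Carlo simulation over them.
# All input figures are hypothetical, for illustration only.
import random

def simulate_ale(freq=(0.1, 0.5, 2.0),
                 magnitude=(50_000, 200_000, 1_000_000),
                 trials=10_000, seed=1):
    """Estimate annualized loss exposure by sampling triangular
    distributions over (low, mode, high) frequency and magnitude."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        lef = rng.triangular(freq[0], freq[2], freq[1])  # events/year
        lm = rng.triangular(magnitude[0], magnitude[2], magnitude[1])
        losses.append(lef * lm)
    losses.sort()
    return {"mean": sum(losses) / trials,
            "median": losses[trials // 2],
            "p90": losses[int(trials * 0.9)]}

print(simulate_ale())
```

Reporting a distribution (median, 90th percentile) rather than a single number is what lets the results honestly carry the uncertainty in the underlying data.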
Is there an easy button? Jack concluded with thoughts on the “siren song of automation” for risk analytics. The good news: “Better loss magnitude data is becoming available.” The not-so-good news: “Publicly available loss data is incomplete, threat data is very easy to misinterpret/misapply, controls data is a mess.”
Jack is working on the controls problem with his FAIR Controls Analytics Model (FAIR-CAM™), which enables empirical measurement of the effectiveness of individual controls and systems of controls. But for now, “automated cyber risk measurement is incredibly easy to screw up and when it’s screwed up, all you’ve done is automate poor decision-making.”