The Risk of Mis-Quantifying Risk


“Cyber risk quantification” - Google the term and you’ll see that everybody says they are doing it. But are they, really?

As Jack Jones, creator of the FAIR standard model for cyber risk quantification, wrote in his CRQ Buyer’s Guide:

“There is a lot of confusion about cyber risk measurement methods, their inherent benefits and challenges, and what qualities create ‘good’ cyber risk measurement. As a result, it’s easy for organizations to leverage measurement methods and solutions that may not be trustworthy.”

And as a result, risk management organizations today face a new risk: mis-quantifying risk when they think they’re getting quantification right. 

Let’s get clear on the basics: risk is most effectively expressed as a scenario with some likelihood (a threat actor impacts an asset via an attack vector) and some impact (resulting in one or more forms of loss). Cyber risk quantification – to earn the title – directly measures the probability and material impact of a risk scenario. Assigning numbers to risks based on some other system may correlate indirectly with risk, but it does not measure risk, and it can lead to mis-quantification.
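To make “directly measures” concrete, here is a minimal Monte Carlo sketch in Python. It assumes a hypothetical scenario and uses triangular distributions as a stand-in for calibrated estimates; FAIR’s actual taxonomy decomposes frequency and magnitude further, and every name and number below is illustrative, not prescribed by the standard:

```python
import random

def simulate_annual_loss(freq_min, freq_likely, freq_max,
                         loss_min, loss_likely, loss_max,
                         trials=10_000, seed=42):
    """Sketch of direct risk measurement: sample how often the loss
    event occurs in a year and how much each occurrence costs, then
    total the simulated annual loss for each trial."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        # Number of loss events in this simulated year
        events = round(rng.triangular(freq_min, freq_max, freq_likely))
        # Sum a sampled loss magnitude for each event
        results.append(sum(rng.triangular(loss_min, loss_max, loss_likely)
                           for _ in range(events)))
    return results

# Hypothetical scenario: ransomware hits a billing system 0-4 times a
# year (most likely once), costing $50k-$2M per event (most likely $250k)
losses = simulate_annual_loss(0, 1, 4, 50_000, 250_000, 2_000_000)
```

The output is a distribution of probable annual losses, not a single score, which is exactly the shape of answer the rest of this article argues for.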

Cyber Risk Mis-quantification Warning Flags

Let’s look at some common methods that present themselves as risk quantification but can lead unwary analysts into mis-quantification.

Expert (Subjective) Risk Ratings

We are talking about assigning numbers to risks and positioning them on a 5 x 5 likelihood-versus-impact matrix, based on subjective ratings. Yes, we should value the opinions of veterans of cyber risk management, but these numbers could just as easily be expressed as colors: we don’t know whether a “2” represents twice as much loss exposure as a “1”. Ordinal numbering can work for a quick-and-dirty sorting of risks, but certainly don’t perform math on it.
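To see why math on ordinal ratings misleads, consider a hypothetical pair of risks; the ratings and dollar figures below are invented purely for illustration:

```python
# Two hypothetical risks, both rated "3" on a 1-5 ordinal impact scale,
# alongside the dollar exposures an analyst might actually have in mind.
ordinal_ratings = {"risk_a": 3, "risk_b": 3}
dollar_exposure = {"risk_a": 80_000, "risk_b": 4_000_000}  # illustrative

# The ordinal scale says the two risks are equal...
assert ordinal_ratings["risk_a"] == ordinal_ratings["risk_b"]

# ...while the underlying loss exposure differs by a factor of 50.
ratio = dollar_exposure["risk_b"] / dollar_exposure["risk_a"]
print(f"Same rating, {ratio:.0f}x difference in exposure")
```

Averaging, summing, or ranking on the ordinal column hides that 50x gap entirely, which is the core problem with treating matrix scores as measurements.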

Maturity Models or Controls-Focused Assessments

These assessments perform a valuable service for cyber risk managers: identifying gaps in cybersecurity controls against a framework such as the NIST CSF and assigning a numeric rating for compliance with the framework’s recommended best practices. But they measure control conditions, not risk. With the introduction of the FAIR Controls Analytics Model (FAIR-CAM™), risk analysts can take a more nuanced view of how controls relate to risk posture, not simply on/off but accounting for the conditions of all the related controls in the stack, to quantify loss exposure in FAIR terms.

Outside-in Vulnerability Scans

Another valuable technical service, but one open to mis-quantification. Scans do identify weak points in cyber defenses (such as missing patches), but they often go wrong when they apply the Common Vulnerability Scoring System (CVSS) to produce a score. A CVSS score does not measure risk and, moreover, offers only a partial view of an organization’s risk posture compared with the encompassing view from FAIR and FAIR-CAM.

Security Ratings Services/Credit-like Scores

Another outside-in view, this one considering a wider scope of inputs (email security, website security, brand reputation, etc.) and aggregating them into a single numeric score. It’s an appealing easy button, but it is not risk quantification in the terms that best support decision making: quantifying the probable occurrence and financial impact of loss events.

Hazards of Risk Mis-quantification

Can’t identify top risks without a reliable way to compare risks quantitatively for likelihood and impact.

Can’t account for uncertainty. FAIR risk assessments always display a range of probable outcomes, not single scores, recognizing the uncertainty inherent in all risk measurement and the reality of decision-making for any organization.

Can’t prioritize among security investments for ROI without a defensible way to quantify risk reduction in financial terms.

Can’t justify spending for a mitigation plan, or an entire budget, without a reliable way to demonstrate probable risk reduction.

Can’t answer “where did you get the numbers?” Your business partners will rightly question the integrity of all your analytics unless you base them on a proven, open model – FAIR and related standards – rather than a vendor’s proprietary algorithms. 
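The “range of probable outcomes, not single scores” point above can be sketched by reporting percentiles of a simulated annual-loss distribution. The lognormal sample here is a hypothetical stand-in for real model output, chosen only because loss distributions are typically skewed:

```python
import random
import statistics

# Hypothetical simulated annual-loss results (dollars); in practice
# these would come from a Monte Carlo run over a FAIR risk scenario.
rng = random.Random(7)
annual_losses = [rng.lognormvariate(12, 1.0) for _ in range(10_000)]

# Report a range of probable outcomes rather than a single score
deciles = statistics.quantiles(annual_losses, n=10)
p10 = deciles[0]
p50 = statistics.median(annual_losses)
p90 = deciles[8]
print(f"10th percentile annual loss: ${p10:,.0f}")
print(f"Median annual loss:          ${p50:,.0f}")
print(f"90th percentile annual loss: ${p90:,.0f}")
```

A decision-maker reading the 10th/50th/90th percentiles can see both the expected case and the plausible bad year, which a single averaged score erases.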

Mis-quantification at Scale

The trend in cybersecurity metrics is toward automation of cyber risk quantification – and that raises the stakes on getting quantification right. As Jack Jones wrote in his blog series on CRQ automation:

“The point is, an automated CRQ technology that spits out inaccurate results can do more harm than good by driving poor decisions. If, for example, it tells a CISO that the most important investment to make is X, when in fact Y is a much better investment, then the technology has failed in its purpose. At best it resulted in wasted resources. At worst, both wasted resources and greater risk.”

Jack’s message: Double back and check that your CRQ solution delivers on foundational practices of FAIR before risking mis-quantification at higher speed and lower accuracy. 

Learn How FAIR Can Help You Make Better Business Decisions

Order today