6 Red Flags when Evaluating a Cyber Risk Quantification Provider (CRQ Buyer’s Guide)

FAIR creator Jack Jones wrote a buyer’s guide to cyber risk quantification to “make the marketplace aware of the distinctions on what qualifies as CRQ and what qualifies as better vs. dangerous CRQ.” In the “red flag” section of the guide, Jack warns buyers away from some of the more dangerous misperceptions and misdirections you’ll run into while steering your way through what’s now a crowded marketplace for CRQ solutions. Here are six of them:


Download now: Understanding Cyber Risk Quantification: A Buyer’s Guide 

FAIR Institute Contributing Membership required to download. Join now!

Watch Jack Jones discuss the Buyer’s Guide in a webinar on demand (FAIR Institute Contributing Membership required). Watch now!


   

1.  “Risks” That Aren’t Loss Event Scenarios

Let’s start with first principles. Cyber risk can only be meaningfully quantified in terms of the probability of a loss event scenario occurring and the magnitude of loss should the event occur. Vendors will often present items from the risk landscape like “weak passwords” or “disgruntled insiders” as measurable risk. Wrong: those are contributing factors, not loss events.
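
To make the distinction concrete, here is a minimal sketch in Python. The field names and figures are illustrative, not an official FAIR schema: a measurable loss event scenario pairs a defined asset, threat actor, and effect with an event frequency and a loss magnitude range.

```python
from dataclasses import dataclass

@dataclass
class LossEventScenario:
    asset: str                   # well-defined asset at risk
    threat_actor: str            # who or what acts against the asset
    effect: str                  # the impact if the event occurs
    events_per_year: float       # estimated loss event frequency
    loss_per_event: tuple        # (low, high) loss magnitude range, in dollars

# "Weak passwords" is a condition, not a scenario: there is no event to attach
# a probability or a magnitude to. A measurable scenario looks more like this:
scenario = LossEventScenario(
    asset="customer database",
    threat_actor="external cyber criminal",
    effect="confidentiality breach (record disclosure)",
    events_per_year=0.1,                     # roughly once per decade
    loss_per_event=(250_000, 4_000_000),     # illustrative range
)

low, high = scenario.loss_per_event
print(f"Annualized loss estimate: ${scenario.events_per_year * low:,.0f} "
      f"to ${scenario.events_per_year * high:,.0f}")
```

If an item in a vendor’s “risk register” can’t be framed this way, it’s a contributing factor, not something whose probability and magnitude can be measured.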

2.  Scope Overlap

For meaningful measurement, each loss event scenario must be tightly focused on a well-defined asset at risk, a threat actor, and an impact. In poorly built solutions, scope creeps into ill-defined scenarios that overlap with one another, which makes any aggregated risk measurement unreliable; as the sketch below illustrates, overlapping scenarios get double-counted when their results are summed.
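
Here is a hypothetical illustration of that double-counting problem; the scenario names and dollar figures are invented purely to show the arithmetic.

```python
# Hypothetical scenarios with overlapping scope: scenario B (ransomware against
# the billing server) is a subset of scenario A (ransomware against any
# production server), so A's estimate already includes B's losses.
annualized_loss = {
    "A: ransomware, any production server": 500_000,
    "B: ransomware, billing server only":   200_000,
}

naive_total = sum(annualized_loss.values())
print(f"Naive aggregate:      ${naive_total:,}")   # $700,000 -- double-counted
print(f"Defensible aggregate: ${500_000:,}")       # B falls inside A's scope
```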

3.  Ordinal Measurements

If a solution uses ordinal values – one/two/three, high/medium/low, red/yellow/green – chances are those values are based on subjective ratings and should be avoided. If the solution purports to perform math on those values – back away quickly.
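
A quick, hypothetical illustration of why arithmetic on ordinal labels misleads; the dollar figures below are invented, and the point is simply that the intervals between “low,” “medium,” and “high” are undefined.

```python
# Map high/medium/low onto 3/2/1 and the labels look numeric, but the spacing
# between them is undefined, so averages and sums carry no real meaning.
ordinal = {"low": 1, "medium": 2, "high": 3}

ratings_a = ["high", "low", "low"]        # one severe issue, two minor ones
ratings_b = ["medium", "medium", "low"]   # nothing severe

avg_a = sum(ordinal[r] for r in ratings_a) / len(ratings_a)
avg_b = sum(ordinal[r] for r in ratings_b) / len(ratings_b)
print(round(avg_a, 2), round(avg_b, 2))   # 1.67 and 1.67 -- "equal" risk

# If the quantities hiding behind the labels differ by orders of magnitude
# (entirely plausible), that equality is meaningless:
dollars = {"low": 10_000, "medium": 100_000, "high": 5_000_000}
print(f"${sum(dollars[r] for r in ratings_a):,}")   # $5,020,000
print(f"${sum(dollars[r] for r in ratings_b):,}")   # $210,000
```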

4.  Precise Inputs and Outputs

“One of the most important criteria for realistic and useful risk measurement is that inputs and outputs reflect uncertainty,” Jack writes, and that means expressing results as ranges. A solution that produces a single, precise number creates a false sense of certainty.
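
Here is a minimal Monte Carlo sketch of what range-based output looks like, using only Python’s standard library. The distributions (Poisson event frequency, lognormal loss magnitude) and the parameters are illustrative placeholders, not calibrated estimates or any particular vendor’s method.

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Draw a Poisson-distributed event count (Knuth's method; fine for small lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(events_per_year=3.0, loss_mu=11.0, loss_sigma=1.0):
    """One simulated year: draw an event count, then a loss magnitude per event."""
    n_events = poisson(events_per_year)
    return sum(random.lognormvariate(loss_mu, loss_sigma) for _ in range(n_events))

# Run many simulated years and report a range, not one "precise" number.
losses = sorted(simulate_annual_loss() for _ in range(10_000))
p10, p50, p90 = losses[999], losses[4999], losses[8999]
print(f"Simulated annualized loss: ${p10:,.0f} (10th percentile), "
      f"${p50:,.0f} (median), ${p90:,.0f} (90th percentile)")
```

The point is the output format: a distribution you can summarize as percentiles or a loss exceedance curve, rather than a single figure that hides the uncertainty.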

5.  Use of Control Standards

Standards like the NIST CSF are valuable lists of best practices, but they are frequently misused to rate cybersecurity “maturity” with ordinal scores (see red flag #3 above) based on the presence or absence of controls. The problem is then compounded when math is performed on those scores and presented as risk analysis. Jack has written extensively on the shortcomings of using controls standards as proxies for risk measurement; see his FAIR Controls Analytics Model (FAIR-CAM™).

6.  Weighted Values

Some solutions purport to calculate risk from weighted values for controls, threat communities, or other factors in their risk calculations. Beware: those weights are likely to be subjective and to suffer from the same problems of ordinal measurement, false precision, and poor scope definition described above.



 

Learn How FAIR Can Help You Make Better Business Decisions

Order today