Why Cyber Risk Quantification (CRQ) Demos Aren't Enough

Imagine that you're looking for an encryption solution. There are many providers on the market, all of which use one of the well-vetted public encryption standards. But let's imagine there's a new player in the market — one that claims to have a vastly improved, but proprietary, solution.

They say they’ve found a way to make encryption stronger, faster, and simpler to use.  However, realizing that proprietary encryption is generally frowned upon in the market, the vendor invites you to see a demo and ask them questions about what they do.


Jack Jones is Chairman of the FAIR Institute and creator of the FAIR™ standard for quantitative analysis of cyber risk, as well as the new FAIR Controls Analytics Model (FAIR-CAM™). Join Jack at the 2022 FAIR Conference, Sept. 27-28.


When the day comes for the demo, the vendor shows you a very impressive dashboard that illustrates where encryption could be used in your environment, along with what appears to be a very slick key management feature set. Everything looks smooth and simple. They also demonstrate that when you feed plain text into their technology you get encrypted text (e.g., “TyXÄ[4cÉ^ÄP”)[1] out — which is, after all, the fundamental value proposition of any encryption product.

And indeed, that output does look encrypted.  But you’re a curious and cautious cybersecurity professional, so you ask them to explain a bit about their algorithm.  They talk for a few minutes using terms you recognize, like “symmetric algorithm” and “key length,” which sounds legitimate, but you’re not an encryption expert so you’re not able to judge.

So, what’s my point…

Similar to my encryption example, anybody can take numeric data — whether it's SME estimates or telemetry from tooling — and do some math to generate a value that will look legitimate on the surface. That's blindingly easy. Being able to defend those values is another problem altogether.
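
To make that concrete, here is a contrived sketch of how little it takes to turn ordinal control scores into an official-looking dollar figure. Every weight and input below is made up for illustration; it is not any vendor's actual method.

```python
# A contrived sketch: turning arbitrary ordinal scores into a "risk number".
# Every weight and input here is invented; the point is that the output
# looks authoritative while being completely indefensible.

control_scores = {"access_control": 3, "patching": 2, "logging": 4}  # 1-5 ordinal scores

SEVERITY_WEIGHT = 100_000  # arbitrary dollars per point of score shortfall

# "Risk" = sum of each control's gap from a perfect score, times a made-up weight.
risk_in_dollars = sum((5 - score) * SEVERITY_WEIGHT for score in control_scores.values())

print(f"Annualized cyber risk: ${risk_in_dollars:,}")  # Annualized cyber risk: $600,000
```

Nothing about that number is defensible, but on a dashboard it would look every bit as credible as a rigorously derived estimate.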

For example, I recently saw a demo by a CRQ provider whose approach to loss event probability was profoundly flawed. In their model, if a system is vulnerable then the probability of a loss event is 100%. No timeframe reference (this week, this year, this century), no accounting for whether the asset faces an active threat landscape versus an inactive one, no accounting for compensating controls. No. If it's vulnerable, then the maximum loss will occur (which is also nowhere near accurate).

Related: Attacking FAIR - A Reply by Jack Jones

Under this model the probability of Seattle being wiped off the map by a tsunami is 100% — again, with no timeframe reference. But their user interface was drop-dead gorgeous, and their solution ingests all sorts of telemetry (which is an example of combining potentially good data with a bad model). According to them, they can have you up and doing CRQ in a day.
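
For contrast, here is a minimal sketch of what a defensible loss event probability has to account for: a timeframe, a threat event frequency, and a vulnerability (the likelihood that a threat event becomes a loss event). This is a toy Monte Carlo in the spirit of FAIR, with placeholder numbers I made up for illustration, not the output of any actual analysis.

```python
import random

# A toy FAIR-style Monte Carlo. All parameter values below are
# hypothetical placeholders, not calibrated estimates.

TRIALS = 10_000
tef_min, tef_mode, tef_max = 0.5, 2.0, 6.0      # threat events per year (min / most likely / max)
vulnerability = 0.25                             # P(a threat event becomes a loss event)
loss_min, loss_mode, loss_max = 50e3, 250e3, 2e6 # loss magnitude per event, USD

annual_losses = []
for _ in range(TRIALS):
    # Sample threat event count for the year (triangular as a simple
    # stand-in for a calibrated PERT/beta distribution).
    threat_events = round(random.triangular(tef_min, tef_max, tef_mode))
    loss_events = sum(1 for _ in range(threat_events) if random.random() < vulnerability)
    annual_losses.append(sum(
        random.triangular(loss_min, loss_max, loss_mode) for _ in range(loss_events)
    ))

annual_losses.sort()
prob_any_loss = sum(1 for loss in annual_losses if loss > 0) / TRIALS
print(f"P(at least one loss event this year): {prob_any_loss:.0%}")
print(f"Median annualized loss:  ${annual_losses[TRIALS // 2]:,.0f}")
print(f"90th percentile loss:    ${annual_losses[int(TRIALS * 0.9)]:,.0f}")
```

Note that the probability is explicitly tied to a timeframe (one year), is almost never 100%, and the loss is a distribution rather than an automatic worst case. Even this toy version avoids the flaws in the demo I described.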

Along the same lines, someone I spoke to recently had seen a demo by another CRQ provider that was also very impressive from a user interface perspective. However, the inputs they were using (ordinal scores from common control frameworks) prompted this person to ask how the vendor arrived at their results. He said the tap dancing was top notch, but he never got an answer to his question.


Look, Factor Analysis of Information Risk (FAIR) isn’t the only approach to measuring risk, it’s not perfect (no model ever will be), and for all I know it may not be the best.  But for years now a lot of organizations have used it in one form or another to great advantage, in part because the community understands and trusts it.

The bottom line

Open standards for complex things like encryption exist for a reason: it's very difficult to get them right, and very easy to get them very wrong. And unfortunately, because cybersecurity risk measurement is complex and nuanced, those who aren't deeply familiar with all of the different ways it can go wrong won't be able to evaluate a solution well enough to know whether it can be trusted.

The reason this matters should be obvious – organizations are making critical decisions regarding their cybersecurity priorities and solutions using this information. And to paraphrase Douglas Hubbard, “The biggest threat to good risk management is poor risk measurement.”

With this in mind, my next blog post will focus on a related question we hear all the time — “How do I defend my risk analysis?”  More to come…


[1] What was the plain text message? I’ll leave it to the crypto nerds in the audience to figure it out.  The algorithm was “proprietary” (i.e., not one of the standards) and very simple.
