Jack Jones: Automating Cyber Risk Quantification (Part 5 of 5)

In the previous post, I provided examples of some controls-related data that can’t be used to support automated cyber risk quantification (CRQ).  But the news isn’t all bad.  There are some data that can be used to support CRQ.


Read the entire blog post series on automating cyber risk quantification.


Data for cyber risk quantification

Sources of data that can be used to support CRQ include, but aren’t limited to:

  • Anti-phishing training data (for understanding the efficacy of your employees as phishing resistance controls)
  • Anti-malware data (for understanding the efficacy of anti-malware protection)
  • Data from honeypots (for threat event frequency (TEF) and control efficacy data)
  • Vulnerability scan data (for data regarding the efficacy of patching efforts — but not the CVSS scores themselves)
  • Asset value/liability data (to inform loss magnitude values)

Clearly, this list is not comprehensive.  The point is that data are available to support CRQ — hypothetically even automated CRQ.  There is, however, a caveat.  The model needs to apply the data appropriately.  As I said earlier — especially with controls data — this is the thorniest problem of all.
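To make the "apply the data appropriately" point concrete, here is a minimal Monte Carlo sketch of how data like the above might feed a FAIR-style estimate.  This is an illustration only, not the FAIR or FAIR-CAM model itself: the function name, the use of triangular distributions, and all input values are assumptions made for the example.

```python
import random

def simulate_annual_loss(tef_min, tef_likely, tef_max,
                         vulnerability,
                         loss_min, loss_likely, loss_max,
                         iterations=10_000, seed=42):
    """Illustrative Monte Carlo: annualized loss exposure from
    threat event frequency (TEF), a control-efficacy-derived
    vulnerability (probability a threat event becomes a loss event),
    and per-event loss magnitude.  Triangular distributions stand in
    for whatever calibrated distributions a real model would use."""
    rng = random.Random(seed)
    results = []
    for _ in range(iterations):
        # TEF might be informed by honeypot data; vulnerability by
        # anti-phishing or anti-malware data; loss magnitude by
        # asset value/liability data.
        tef = rng.triangular(tef_min, tef_max, tef_likely)
        loss_events = tef * vulnerability
        loss_per_event = rng.triangular(loss_min, loss_max, loss_likely)
        results.append(loss_events * loss_per_event)
    results.sort()
    return {
        "mean": sum(results) / iterations,
        "p90": results[int(0.9 * iterations)],
    }

# Hypothetical inputs: 2-12 threat events/year (most likely 6),
# 30% of events become loss events, losses of $10k-$250k per event.
estimate = simulate_annual_loss(2, 6, 12, 0.3, 10_000, 50_000, 250_000)
```

Even this toy version shows where misapplied data does damage: plug a CVSS score in as "vulnerability," or treat a control metric as if it directly scaled loss magnitude, and the output is confidently wrong.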


Jack Jones is the creator of FAIR™ (Factor Analysis of Information Risk), the international standard for quantification of cyber, technology and operational risk, as well as the FAIR Controls Analytics Model (FAIR-CAM™) that quantifies how controls affect the frequency and magnitude of risk.


Key points on automation of cyber risk quantification

So, can CRQ be automated?  Yes, if:

  • Scenario scoping is done very carefully
  • The model(s) being used — particularly the controls analytics piece — accounts for the complexity and nuance of the problem space, and
  • Data are used appropriately

If you can’t check all three of those boxes, then analysis results will not be accurate.  At scale.  And don’t fall victim to the misperception that, “Well, the numbers are close enough.”  Sometimes they may be, but many times they won’t be — and you won’t know which is which.

With that in mind, one of the challenges potential buyers face is validating the numbers they see in a product demo.  After all, it’s very easy to cherry-pick analyses that demonstrate reasonable-looking results.  That is a topic I’ll address in a future blog post. 

In the meantime, everyone needs to keep in mind that CRQ is a new discipline and market.  Consequently, it’s easy for solution providers to dive into the deep end without understanding some of the important but subtle difficulties of the problem space.  It’s also easy for potential customers to be bedazzled by eye candy and good marketing stories, all the while trusting that vendors have done their homework.  But vendors have a responsibility to do no harm, and buyers have a responsibility to be very skeptical of “Easy Buttons” in new disciplines like CRQ that can lead to poor decisions — at scale.


Read Jack’s Buyer’s Guide for Cyber Risk Quantification

Learn How FAIR Can Help You Make Better Business Decisions

Order today