Attacking FAIR - A Reply by Jack Jones

[FAIR Model diagram: Risk - Threat Event Frequency - Vulnerability]

It was bound to happen.  For years, Factor Analysis of Information Risk (FAIR™) was, for all intents and purposes, the only Cyber Risk Quantification (CRQ) model out there.

However, as the interest in CRQ has begun to explode in the marketplace, vendors have responded to the call.  Some of those vendors have chosen to rely on FAIR as their analytic model, while others have chosen to create their own proprietary models.  Still others have sought a middle ground — using a modified version of FAIR with proprietary sub-models. 


Jack Jones is Chairman of the FAIR Institute and creator of the FAIR standard for quantitative analysis of cyber risk. Jack is also the creator of the new FAIR Controls Analytics Model (FAIR-CAM™).


The difficulty these proprietary solutions face is that FAIR has over a decade-long head start in terms of evolution, refinement, market awareness, user base, etc.  Furthermore, being an open standard through the Open Group, FAIR also has the advantage of having been thoroughly vetted by the community.  There’s even a book that describes FAIR in detail, which is especially helpful in a profession that tends to shy away from proprietary methods.  Combined, these advantages form a pretty tough competitive moat, so what do you do?

When competing against something that has these kinds of advantages, one of the options is to attack it — find things about it that can be portrayed as weaknesses and hammer on those points in your marketing.  It’s even better if you can coin a catchy phrase like “FAIR Fatigue” to attach to your message.  Add to your marketing message all the things your solution does that supposedly solve the incumbent’s “weaknesses” and you’re ready to compete.  This is what we’re beginning to see in the CRQ landscape.

3 Truths about 3 Objections to FAIR

So, in the interest of FAIR play (obviously, pun intended), let me make it easier for these folks by highlighting three things they can include in their marketing, as well as some associated facts (the rest of the story), which they probably won’t include:   

1. “FAIR is hard.”  The truth is that the cyber risk landscape is complex.  The clarity and open nature of the FAIR model simplify the process of understanding and measuring this landscape (prior to FAIR, the prevailing belief was that cybersecurity risk could not be quantified).  In addition, FAIR’s flexible nature – which allows for quick-and-dirty analyses, down-in-the-weeds analyses, or anywhere in between – enables organizations to apply it in whatever way best fits their decision-making needs and resource constraints.  FAIR also helps to avoid common measurement mistakes, such as poor scoping, and enables you to “show your homework” when an executive questions your numbers (as they invariably will from time to time when presented with CRQ results).
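
To make that flexibility concrete, here is a minimal sketch of what a quick-and-dirty FAIR-style analysis can look like.  The distributions and numbers are hypothetical assumptions for illustration, not values from the Open Group standard, and multiplying frequency by per-event magnitude is a simplification of a full per-event simulation:

```python
# A minimal, illustrative sketch of a quick-and-dirty FAIR-style Monte Carlo
# analysis.  All distributions and numbers are hypothetical assumptions, not
# values from the Open Group FAIR standard.
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # simulation runs

# Loss Event Frequency (events/year): SME-calibrated min / most likely / max,
# modeled here with a simple triangular distribution.
lef = rng.triangular(left=0.1, mode=0.5, right=2.0, size=N)

# Loss Magnitude per event ($): lognormal roughly spanning $50k to $2M.
lm = rng.lognormal(mean=np.log(250_000), sigma=1.0, size=N)

# Annualized loss exposure, reported as a range -- the "homework" you can show
# an executive who questions the numbers.
ale = lef * lm
p10, p50, p90 = np.percentile(ale, [10, 50, 90])
print(f"ALE: 10th=${p10:,.0f}  median=${p50:,.0f}  90th=${p90:,.0f}")
```

The same structure scales from a thirty-minute triage estimate to a deeply calibrated analysis; only the care put into the input ranges changes.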

2. “FAIR requires manual data entry.”  This claim demonstrates one or more of the following:  a lack of critical thinking, a lack of research, or willful misrepresentation.  FAIR is a model.  You can feed it data through whatever means you have available to you.  In some cases, data may be available via technology that can be accessed through an API.  In other cases, some data may be preloaded.  In still other cases, the best source of data is an SME entering data in a form.  Take your pick.
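
As a hedged illustration of that point: the model is just a function of its inputs, so where the inputs come from is an engineering choice.  Every name below (run_fair_analysis, the endpoint URL, the field names) is hypothetical:

```python
# Illustrative only: the analysis is a plain function; whether its inputs
# arrive via API, preloaded baseline, or SME form entry is irrelevant to FAIR.

def run_fair_analysis(lef_min, lef_mode, lef_max, lm_low, lm_high):
    """Placeholder for a simulation like the sketch in point 1."""
    ...

# Option 1: pull loss-event telemetry from a tool's API (hypothetical endpoint):
# import requests
# inputs = requests.get("https://siem.example.com/api/loss-event-stats").json()

# Option 2: a preloaded baseline shipped with a product:
inputs = {"lef_min": 0.1, "lef_mode": 0.5, "lef_max": 2.0,
          "lm_low": 50_000, "lm_high": 2_000_000}

# Option 3: the same dict populated from an SME's form entry -- same model.
run_fair_analysis(**inputs)
```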


3. “FAIR Fatigue is real.”  In other words, some organizations tried FAIR and didn’t find it to be a good fit.  In my experience, this virtually always boils down to one or more of the following:  

  • An organization tries to boil their cybersecurity ocean rather than identifying and nailing a specific risk management decision-support objective (e.g., ROI on big investments), and evolving the program from there.  I wrote a white paper about adopting FAIR a couple of years ago, where I discuss this and other common strategies for successfully adopting CRQ. 
  • An organization obsesses over finding perfect data for their analyses.  Look, nobody likes uncertainty when it comes to measuring important things like cybersecurity risk, but the simple fact is that there will always be uncertainty (and anyone who says otherwise doesn’t understand risk measurement).  Another fact is that there are diminishing returns for additional data; Douglas Hubbard discusses this in his work.  Organizations that don’t accept this fact are far more likely to burn themselves out when doing CRQ (see the short sketch after this list). 
  • An organization provides too much analytic detail in their executive reporting, or they just provide numbers and no explanation.  This can turn off the executive audience pretty quickly, which can diminish support for CRQ. 
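
On the diminishing-returns point, a small arithmetic sketch: the uncertainty around an estimated mean shrinks roughly with the square root of the sample size, so quadrupling your data only halves your uncertainty.  The spread value below is a made-up assumption for illustration:

```python
# Diminishing returns from more data: the half-width of a confidence interval
# around a mean shrinks ~ 1/sqrt(n), so each new data point buys less
# uncertainty reduction than the last.
import math

sigma = 100_000  # assumed spread (std dev) of observed per-event losses
for n in [5, 20, 80, 320]:
    half_width = 1.645 * sigma / math.sqrt(n)  # approx. half-width of a 90% CI
    print(f"n={n:>3}: 90% CI half-width ~ ${half_width:,.0f}")
```

Each fourfold increase in data cuts the interval in half, which is why chasing “perfect” data burns teams out long before it meaningfully improves the answer.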

Easy Button vs. Defensible Cyber Risk Analysis

The truth is, the existence of competing risk models is a good thing.  No model is perfect, so the market should be able to examine the available options and choose the one that best fits an organization’s needs.  But that’s the problem: the market can’t examine proprietary models to understand how they arrive at their results.  Too often, the focus will be on “easy button” claims and dashboard eye candy, without significant focus on whether a solution’s results are defensible.  I’ll have more to say about this in my next blog post.  Stay tuned…
