A Solution for Measuring Inherent Risk

If you search the FAIR Institute blog, you will find several posts about Inherent Risk, each highlighting fundamental problems with the standard definition of Inherent Risk and offering insights and advice regarding how to better define and use it.  To save you the trouble of finding and reading old posts, I’ll boil them down:

  • The typical definition for Inherent Risk is “The amount of risk that exists in the absence of any controls.” 
  • The problem is that there has never been an instance in the real world where no controls existed, so any measurement based on this definition would be fundamentally unreliable.  And if your Inherent Risk measurement is unreliable, the Residual Risk value you derive from it will also be unreliable.
  • Instead, these posts advise, you should think of Inherent Risk as “current risk” — i.e., the amount of risk that exists with current controls.  Alternatively, you could think of it as the amount of risk if only specific, bare minimum controls existed.

Jack Jones is the creator of Factor Analysis of Information Risk, the FAIR™ standard for quantitative risk analysis, and Chairman of the FAIR Institute.


These arguments and alternatives are on solid logical ground.  That said, the standard “no controls at all” definition of Inherent Risk is firmly ensconced in our profession’s psyche, standards, and processes, so alternative definitions — regardless of how sound they might be — are not likely to displace the old one any time soon. 

However, as I was putting together a university lecture on risk measurement a while back, I came up with an approach that enables us to use the standard definition and stand behind the results.  It also supports quantitative analysis and is pretty easy to understand and explain.  Let’s take a look…

A simple approach for Inherent Risk

To ground the discussion, let’s look at Inherent Risk in the context of a ransomware scenario — i.e., answering the question, “What’s the inherent risk associated with a ransomware event?”

The loss magnitude side of the equation

There are two options here, one that leverages FAIR™ and one that doesn’t.  We’ll look at the not-FAIR option first:

1. What would the losses be if your organization suffered a ransomware event that affected its key operating processes and that it could not recover from?  In other words, there are no backups of key data; not even a paper trail it could use to reconstruct key information, transactions, etc. (because anything that mitigates loss, like a paper trail, constitutes a “control”).  If your organization is commercial in nature (as opposed to, say, a government agency), it’s safe to assume that it goes out of business.  In other words, the worst-case loss magnitude is essentially the value of the business.

NOTE:  You may find that many scenarios analyzed using this approach will have worst-case loss magnitudes that equate to the value of the business.

2. Alternatively, if you’re using FAIR, you could simply sum the worst-case loss magnitude values from the different forms of loss within a FAIR analysis of a ransomware event and use that as the magnitude component of Inherent Risk.  This approach may not reflect “no controls” as purely as option 1, but it avoids the problem of almost every scenario’s loss magnitude equating to the collapse of the organization.  (A simple sketch of this option follows below.)

This approach may also be a better fit for non-commercial organizations.
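
To make option 2 concrete, here is a minimal sketch in Python.  The loss categories follow FAIR’s six forms of loss, but the dollar figures are purely hypothetical placeholders rather than output from an actual analysis; the point is simply that the worst-case magnitude for Inherent Risk is the sum of the worst-case values across the forms of loss.

```python
# Hypothetical worst-case loss magnitudes (USD) for each FAIR form of loss,
# taken from the high end of an existing ransomware analysis.
# These figures are illustrative placeholders only.
worst_case_losses = {
    "productivity": 40_000_000,
    "response": 5_000_000,
    "replacement": 10_000_000,
    "fines_and_judgments": 15_000_000,
    "competitive_advantage": 8_000_000,
    "reputation": 25_000_000,
}

# Option 2: the loss magnitude component of Inherent Risk is the sum of the
# worst-case values across the forms of loss.
inherent_loss_magnitude = sum(worst_case_losses.values())
print(f"Worst-case loss magnitude: ${inherent_loss_magnitude:,}")

# Option 1, by contrast, would simply use the value of the business itself
# as the worst-case loss magnitude.
```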




The probability side of the equation

This side of the equation boils down to the likelihood of the event happening within the next twelve months.  After all, any time you’re talking about the probability of some event, you have to establish a time frame context. 

With this baseline assumption, and given the frequency of ransomware events these days, as well as the absence of any preventative controls (like secure system configurations, URL filtering, anti-malware, anti-phishing training, access privilege restrictions, etc.), it may be reasonable to assume there’s a 100% probability of this event occurring in the next twelve months (from a FAIR analysis perspective, this is equivalent to setting Loss Event Frequency (LEF) to 1).  In fact, it may be reasonable to assume this 100% probability holds true for any cyber event. 

Of course, if we were evaluating non-cyber events where the “natural” frequency is lower (e.g., an earthquake, a large meteor strike, etc.), then our probability of occurrence in the next twelve months would be less than 100%.
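
One way to ground that reasoning is to convert an assumed annual event frequency into the probability of at least one occurrence within twelve months, using the standard relationship p = 1 − e^(−λ) for independently arriving events.  This is a sketch of that conversion, not part of the FAIR standard itself, and the frequencies shown are illustrative assumptions.

```python
import math

def probability_within_a_year(annual_frequency: float) -> float:
    """Probability of at least one event in the next twelve months, assuming
    events arrive independently at the given annual rate (a Poisson-style
    assumption, used here purely for illustration)."""
    return 1 - math.exp(-annual_frequency)

# Illustrative annual frequencies (events per year) in the absence of controls.
scenarios = {
    "ransomware (no preventative controls)": 10.0,  # assumed: many attempts, most succeed
    "large earthquake at this site": 0.01,          # assumed: roughly once per century
}

for name, freq in scenarios.items():
    p = probability_within_a_year(freq)
    print(f"{name}: {p:.0%} chance of occurring in the next twelve months")
```

With a high assumed frequency, the probability rounds to 100% (LEF of 1 from a FAIR perspective), while a rare natural event comes out at roughly 1%.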




And the results are…

Given the above, the Inherent Risk for ransomware is either 100% x the value of the company, or 100% x the sum of the worst-case loss magnitude values from a FAIR analysis, depending upon which loss magnitude option you use.
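
Putting the two sides together, the arithmetic is simple enough to show in a few lines.  The dollar amounts below are the same hypothetical placeholders used in the earlier sketch, and the 100% probability reflects the “no preventative controls” assumption.

```python
probability_of_event = 1.0  # 100% within twelve months, absent any controls

# Option 1: worst-case loss is the value of the business (hypothetical figure).
value_of_business = 500_000_000
inherent_risk_option_1 = probability_of_event * value_of_business

# Option 2: worst-case loss is the summed worst-case FAIR loss magnitudes
# (the hypothetical total from the earlier sketch).
summed_worst_case_losses = 103_000_000
inherent_risk_option_2 = probability_of_event * summed_worst_case_losses

print(f"Inherent Risk (option 1): ${inherent_risk_option_1:,.0f}")
print(f"Inherent Risk (option 2): ${inherent_risk_option_2:,.0f}")
```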

Either way, we now have a way to measure Inherent Risk that is defensible and at least mostly aligns with the “no controls” definition of Inherent Risk.  It also enables quantification and a more data-driven approach to both sides of the risk equation. 

With this more solid foundation, we can more confidently derive Residual Risk – if we can accurately measure the risk reduction effects of our controls.  Naturally, this sets us up for a future blog post where the FAIR Controls Analytics Model (FAIR-CAM) comes into play.  Stay tuned…
