I’m often asked, “How does FAIR account for, or deal with, inherent risk?” This is simultaneously one of my favorite questions and one of my least favorite, for different reasons. In this blog post, I’ll share why I’m so conflicted on this topic and explain how FAIR can be used to deal effectively with inherent risk.
Before we get into the meat of this though, we need to make certain that we’re all on the same page about what the term “inherent risk” means. The common definition of inherent risk is something along the lines of, “The amount of risk that exists in the absence of controls.”
The "bad news" about inherent risk
Inherent risk is generally used in two ways:
- To prioritize which business processes, applications, systems, etc. deserve the greatest amount of risk management attention. For example, a financial services organization might believe that the inherent risk associated with its treasury and cash management services is greater than the inherent risk associated with its headquarters cafeteria operation.
- To serve as the basis for deriving residual risk (the amount of risk that remains after controls are accounted for) and/or for articulating the value proposition of controls. For example, an organization might estimate the inherent risk to be “high” for a particular business process. When you take into account the condition of controls, the residual risk is believed to be “medium.”
Although these seem like logical and worthwhile objectives, there is a disconnect in practically applying inherent risk to meet these objectives.
Let’s look at that financial services firm that believes its treasury and cash management services represent greater inherent risk than the cafeteria. Now, keep in mind this part of the definition for inherent risk — “… in the absence of controls.” By definition, an absence of controls in the cafeteria means that sanitation, food preservation, food preparation and even hiring processes all go right out the window. I’m envisioning Hannibal Lecter as head chef and/or people dying from food poisoning, either of which is arguably as bad as, or worse than, cash being drained from the coffers.
In fact, as I’ve looked at various processes in various organizations, I have struggled to find any processes that, in the complete absence of controls, can’t make the claim of being “high risk.” This suggests that the first objective for inherent risk, high-level risk prioritization, is hard to achieve given the common definition for inherent risk.
My favorite question to ask of people who use inherent risk is to describe a “no controls” environment for the business process (or whatever) they’re rating. Without fail, they fail. The “no controls” environment they describe always still includes all kinds of controls — HR controls, environmental controls, administrative controls, governance controls, technology controls, etc. I have yet to encounter a useful and defensible description of a “no controls” environment, which casts a pretty big shadow on how people are determining inherent risk.
The second objective, deriving residual risk, is commonly achieved by subtracting the control score from the inherent risk score to arrive at a residual risk score (e.g., inherent risk of 8 minus control efficacy of 4 gives you a residual risk of 4). I’ve also seen multiplication used where, for example, an inherent risk of 8 is multiplied by a control efficacy value (expressed as a percentage, like 50%) to arrive at a residual risk of 4. Let’s set aside for a moment the rather substantial problem with performing math on ordinal scales. Instead, let’s focus on two things:
- Given the concerns I’ve described above (plus others I’ve left out for brevity’s sake), it’s pretty hard to logically defend using inherent risk as the basis for deriving residual risk.
- If the “no controls” environment isn’t really “no controls,” then the control efficacy rating is based on an ambiguous state.
Both of these points suggest that the second objective, deriving residual risk, is on flimsy ground as well.
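To make the ordinal-math problem concrete, here is a small sketch of the “inherent minus controls” arithmetic described above. The scenarios, scores, and dollar figures are invented for illustration — this is not drawn from any actual scoring standard:

```python
# Hypothetical illustration of the common (flawed) ordinal approach:
# residual risk = inherent risk score - control efficacy score.

def residual_ordinal(inherent: int, control_efficacy: int) -> int:
    """Derive a residual risk score by simple subtraction of ordinal scores."""
    return inherent - control_efficacy

# Two scenarios with vastly different real-world loss exposure can still
# receive identical ordinal scores, and therefore identical "residual risk".
scenarios = {
    "treasury wire fraud":   {"inherent": 8, "controls": 4, "exposure_usd": 50_000_000},
    "cafeteria food safety": {"inherent": 8, "controls": 4, "exposure_usd": 500_000},
}

for name, s in scenarios.items():
    print(name, "-> residual =", residual_ordinal(s["inherent"], s["controls"]))
```

Both scenarios come out with a residual score of 4, even though the assumed underlying exposures differ by 100x — the ordinal arithmetic discards exactly the information a decision-maker needs.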
After all of this, you might have the impression that I’m not a fan of inherent risk, but that’s not the case. I have actually learned (slowly) to appreciate its objectives and believe inherent risk can, in limited circumstances, be a useful concept. I just don’t believe the common approaches lend themselves to defensible results, which brings us to…
The "good news" about inherent risk
In order to make inherent risk useful and defensible, we have to adjust our perspective on it. First, rid yourself of the notion of “no controls.” It’s simply not practical, necessary, or useful. Second, let’s reformulate our definition of inherent risk to be something like, “The amount of risk that exists when key controls fail.” Although on the surface this definition doesn’t seem all that different, it actually forces us to make a pretty drastic change. Instead of deriving residual risk from inherent risk, we now flip that upside down and derive inherent risk from residual risk. Here’s how that works:
- Using FAIR, it’s straightforward to measure the current level of risk (the “residual” risk) for an asset, set of assets, or a business process.
- The next step is to identify which controls are most important in terms of managing the frequency and/or magnitude of loss within those scenarios.
- To derive “inherent risk” (by this new definition), you simply rerun the FAIR analysis based on the absence of those key controls. You can also be granular in this part of the analysis by removing one control at a time, or controls in groups, to identify which parts of the controls environment appear to have the greatest effect.
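The steps above can be sketched as a toy Monte Carlo simulation in the spirit of FAIR. To be clear, the distributions, frequencies, and loss ranges below are illustrative assumptions, not a FAIR implementation or calibrated data — real analyses use calibrated estimates and richer distributions:

```python
import math
import random

def poisson_sample(rng: random.Random, lam: float) -> int:
    """Knuth's algorithm for sampling a Poisson-distributed event count."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def annualized_loss(lef: float, loss_low: float, loss_high: float,
                    trials: int = 10_000, seed: int = 42) -> float:
    """Monte Carlo estimate of annualized loss exposure.

    Events per year ~ Poisson(lef); loss per event ~ Uniform(low, high).
    All parameters here are placeholders for calibrated estimates.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        for _ in range(poisson_sample(rng, lef)):
            total += rng.uniform(loss_low, loss_high)
    return total / trials

# Step 1: measure residual risk -- the current state, key control in place.
residual = annualized_loss(lef=0.5, loss_low=10_000, loss_high=250_000)

# Step 3: rerun with the key control removed. Here the assumed effect of the
# control's absence is a 10x increase in loss event frequency (an invented
# number for illustration only).
inherent = annualized_loss(lef=5.0, loss_low=10_000, loss_high=250_000)

print(f"residual ALE ~ ${residual:,.0f}; inherent ALE ~ ${inherent:,.0f}")
```

Rerunning the same model with different control assumptions, rather than guessing at a “no controls” world, is the whole point: the delta between the two results is a defensible statement of what that key control is worth.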
The primary advantages to this approach include:
- Your starting point (in this case, residual risk) is stronger because you’ve used a more rigorous analysis approach (with the traditional approach, I’ve never seen anyone estimate inherent risk using anything more than a wet finger in the air). This is critical for identifying where current risk mitigation efforts should be focused.
- By explicitly identifying and evaluating key controls you avoid the silliness of trying to deal with an ambiguous “no controls” state. Also, if you get granular in your controls analysis you benefit from understanding which controls are most important.
- You can actually defend your results.
Summing it up
Not everyone feels that formal inherent risk measurement is all that useful from a risk management perspective. Personally, I rarely take the time to measure it formally; instead, I apply the concept informally at a more granular level.
For example, consider two databases; one with a large amount of sensitive consumer information and another with none. In this case it's easy to conclude that, within the context of a confidentiality breach, the one with a lot of sensitive information has more inherent risk than the other. Of course the other database might serve some critical business purpose that poses even more inherent risk, so it's important to keep context in mind. Regardless, these kinds of informal assessments of inherent risk are part of the everyday triage that takes place in risk management.
Where the principle of inherent risk goes wrong is when people try to "measure" it with the hypothetical no controls assumption, and then try to derive residual risk from that. As I've described above, that just doesn't hold water. So, if you are going to try to formally measure inherent risk (or if someone says you have to formally measure it) you might as well do it well.