Before we get into the meat of this, though, we need to make certain that we’re all on the same page about what the term “inherent risk” means. The common definition of inherent risk is something along the lines of, “The amount of risk that exists in the absence of controls.”
Inherent risk is generally used in two ways: as a means of high-level risk prioritization, and as a basis for deriving residual risk.
Although these seem like logical and worthwhile objectives, there is a disconnect when it comes to practically applying inherent risk to achieve them.
Let’s look at that financial services firm that believes its treasury and cash management services represent greater inherent risk than the cafeteria. Now, keep in mind this part of the definition for inherent risk: “… in the absence of controls.” By definition, an absence of controls in the cafeteria means that sanitation, food preservation, food preparation, and even hiring processes all go right out the window. I’m envisioning Hannibal Lecter as head chef and/or people dying from food poisoning, either of which is arguably as bad as, or worse than, cash being drained from the coffers.
In fact, as I’ve looked at various processes across various organizations, I have struggled to find any that, in the complete absence of controls, couldn’t claim to be “high risk.” This suggests that the first objective for inherent risk, high-level risk prioritization, is hard to achieve given the common definition of inherent risk.
My favorite question for people who use inherent risk is to ask them to describe a “no controls” environment for the business process (or whatever) they’re rating. Without fail, they fail. The “no controls” environment they describe always still includes all kinds of controls: HR controls, environmental controls, administrative controls, governance controls, technology controls, etc. I have yet to encounter a useful and defensible description of a “no controls” environment, which casts a pretty big shadow on how people are determining inherent risk.
The second objective, deriving residual risk, is commonly achieved by subtracting the control score from the inherent risk score to arrive at a residual risk score (e.g., inherent risk of 8 minus control efficacy of 4 gives you a residual risk of 4). I’ve also seen multiplication used where, for example, an inherent risk of 8 is multiplied by a control efficacy value (using a percentage like 50%) to arrive at a residual risk of 4. Let’s set aside for a moment the rather substantial problem with performing math on ordinal scales. Instead, let’s focus on two things:
Both of these points suggest that the second objective, deriving residual risk, is on flimsy ground as well.
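For concreteness, here is a minimal sketch of those two common calculations using the example numbers from above. The scores and the 50% efficacy figure are purely illustrative, and this is the approach being critiqued, not one being recommended.

```python
# A sketch of the two common residual risk calculations described above.
# Note: these scores are ordinal ratings, which is exactly why doing
# arithmetic on them is questionable in the first place.

inherent_risk = 8        # ordinal "inherent risk" score
control_efficacy = 4     # ordinal "control efficacy" score
efficacy_pct = 0.50      # control efficacy expressed as a percentage

# Approach 1: subtraction (8 - 4 = 4)
residual_risk_subtraction = inherent_risk - control_efficacy

# Approach 2: multiplication by a percentage (8 * 0.5 = 4.0)
residual_risk_multiplication = inherent_risk * efficacy_pct

print(residual_risk_subtraction, residual_risk_multiplication)
```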
After all of this, you might have the impression that I’m not a fan of inherent risk, but that’s not the case. I have actually learned (slowly) to appreciate its objectives and believe inherent risk can, in limited circumstances, be a useful concept. I just don’t believe the common approaches lend themselves to defensible results, which brings us to…
The "good news" about inherent risk
In order to make inherent risk useful and defensible, we have to adjust our perspective on it. First, rid yourself of the notion of “no controls.” It’s simply not practical, necessary, or useful. Second, let’s reformulate our definition of inherent risk to be something like, “The amount of risk that exists when key controls fail.” Although on the surface this definition doesn’t seem all that different, it actually forces us to make a pretty drastic change. Instead of deriving residual risk from inherent risk, we now flip that upside down and derive inherent risk from residual risk. Here’s how that works: you start from residual risk (the risk that exists today, with key controls in place and operating), then ask how much risk would exist if one or more of those key controls failed. That second estimate is your inherent risk.
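To make the flip concrete, here is a rough sketch. It assumes a simple frequency-times-magnitude risk estimate and made-up numbers; the function name, scenario values, and the choice of a single failed control are all illustrative assumptions, not a prescribed calculation.

```python
# A toy illustration of deriving inherent risk FROM residual risk.
# Assumption: risk is estimated as loss event frequency x loss magnitude.
# All names and numbers below are illustrative, not prescribed values.

def annualized_risk(loss_event_frequency, loss_magnitude):
    """Expected annualized loss for a given scenario."""
    return loss_event_frequency * loss_magnitude

# Residual risk: the current state, with key controls in place and operating.
residual_risk = annualized_risk(
    loss_event_frequency=0.1,    # roughly one loss event per ten years
    loss_magnitude=250_000,      # expected loss per event
)

# Inherent risk (revised definition): the same scenario, re-estimated under
# the assumption that a key control has failed, so loss events become more
# frequent and/or more damaging.
inherent_risk = annualized_risk(
    loss_event_frequency=0.5,    # roughly one loss event every two years
    loss_magnitude=400_000,
)

print(f"Residual risk (key controls operating): ${residual_risk:,.0f}/yr")
print(f"Inherent risk (key control failed):     ${inherent_risk:,.0f}/yr")
```

The point of the flip is that inherent risk becomes a “what if this key control failed?” re-assessment of a scenario you’ve already analyzed, rather than an attempt to imagine a fantasy world with no controls at all.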
The primary advantages of this approach include:
Summing it up
Not everyone feels that formal inherent risk measurement is all that useful from a risk management perspective. Personally, I rarely take the time to measure it formally, but rather apply the concept informally at a more granular level.
For example, consider two databases: one with a large amount of sensitive consumer information and another with none. In this case it's easy to conclude that, within the context of a confidentiality breach, the one with a lot of sensitive information has more inherent risk than the other. Of course, the other database might serve some critical business purpose that poses even more inherent risk, so it's important to keep context in mind. Regardless, these kinds of informal assessments of inherent risk are part of the everyday triage that takes place in risk management.
Where the principle of inherent risk goes wrong is when people try to "measure" it using the hypothetical "no controls" assumption, and then try to derive residual risk from that. As I've described above, that just doesn't hold water. So, if you are going to try to formally measure inherent risk (or if someone says you have to), you might as well do it well.