As an advocate for FAIR, I spend a good deal of time preaching the benefits of quantitative risk analysis over the qualitative approach. Ranking risks 1-5 or red-yellow-green based on subjective judgments doesn't measure up (literally) to a standard model like FAIR, which produces consistent results expressed as probabilities.
But there’s also a hidden hazard to qualitative analysis: it can be easily gamed to skew results. Wait a minute, you say, can’t a quantitative analysis be gamed, too? Sure, but not easily, because the inputs to the analysis are hard numbers, there for everyone to see and challenge.
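To make that contrast concrete, here is a minimal sketch of what "hard numbers for everyone to see and challenge" looks like in practice: a tiny Monte Carlo simulation in the spirit of FAIR, turning explicit frequency and magnitude estimates into an annual loss distribution. All the numbers below are hypothetical placeholders, and the triangular distribution is a simple stand-in for the PERT-style estimates a real analysis would use.

```python
import random
import statistics

random.seed(42)

# Hypothetical, illustrative inputs -- not from any real analysis.
# Each estimate is an explicit number a reviewer can see and challenge,
# unlike a gut-feel "medium" rating.
MIN_EVENTS, LIKELY_EVENTS, MAX_EVENTS = 1, 4, 10            # loss events per year
MIN_LOSS, LIKELY_LOSS, MAX_LOSS = 50_000, 200_000, 1_500_000  # $ per event

def sample_estimate(low, mode, high):
    """Sample from a triangular distribution as a simple stand-in
    for a calibrated min/most-likely/max (PERT-style) estimate."""
    return random.triangular(low, high, mode)

def simulate_annual_loss(trials=10_000):
    """Run the simulation and return one simulated annual loss per trial."""
    losses = []
    for _ in range(trials):
        events = round(sample_estimate(MIN_EVENTS, LIKELY_EVENTS, MAX_EVENTS))
        losses.append(sum(sample_estimate(MIN_LOSS, LIKELY_LOSS, MAX_LOSS)
                          for _ in range(events)))
    return losses

losses = simulate_annual_loss()
print(f"median annual loss: ${statistics.median(losses):,.0f}")
print(f"95th percentile:    ${sorted(losses)[int(0.95 * len(losses))]:,.0f}")
```

If someone wants to argue the risk is "low," they have to argue with a specific input, say, that ten loss events a year is too pessimistic, rather than with a color.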
Here are some qualitative analyses gone wrong – don’t go there!
1. Underestimating risk to avoid mitigation
The whole reason we, as a profession, began assessing risk scenarios was to understand how detrimental risks may be to the organization and, as such, what should be done to mitigate them. With that in mind, the first example is how qualitative methods have been used to underestimate “risk” so that no action is required. Because qualitative risk analysis lacks the rigor and defensibility of a standards-based quantitative model (like FAIR), a risk can easily be labeled “low”.
By doing so, the risk goes unchallenged and continues to fly under the radar of management. Unfortunately, a phrase I’ve heard all too often is “Can we rate the risk as ‘low’ so that we don’t have to mitigate?” I think we could all agree this is not an effective way to manage risk within any organization.
2. Overstating risk to avoid condemnation
On the other end of the spectrum, we have the avoid-fault-at-all-costs analysis. Given that the extent of rigor in a qualitative analysis is often a wet finger waved in the air (“feels like medium!”), the analyst can overstate the amount of risk associated with the scenario.
Why would an analyst do such a thing? To avoid having his or her name associated with an event should the scenario under analysis come to fruition. If the event occurs after the analyst rated the risk too low, the organization would be unprepared, and the fear is that responsibility would fall on the analyst’s shoulders.
An example of a phrase that represents this second way to game the system is “Let’s call it high, because I don’t have enough information to call it low.”
3. Utilizing a risk rating to further an agenda
Saving the best for last, my personal least-favorite example of gaming the system is the use-the-risk-rating-to-further-my-own-agenda analysis.
At first glance, this may sound similar to the understating and overstating varieties, but it is more insidious. Qualitative analysis is easy to hide behind: it is difficult to interrogate the results of an analysis when the method used to derive them was simply the analyst’s opinion.
This means that, aside from “why the heck are we communicating loss to the organization in colors?” (a question I hope you are being asked), there is very little rigor for anyone to dig into to understand the analysis and its results. As such, when you’re vying for a fancy and expensive new control improvement, or trying to settle a debate within the organization, you can choose the risk rating that best suits your agenda.
This false representation of risk, in my opinion, is the most damaging. To justify a misguided analysis, I have heard phrases such as “It’s a good thing I suggested that control investment, since this risk is high.” Ultimately, this can divert valuable resources from other areas of the business where they would have earned a better ROI.