FAIR Institute Blog

Risk Analysis and Worst-Case Thinking

Apr 22, 2021 / by Osama Salah

The generally accepted model for risk is that it is a function of frequency (some refer to it as probability or likelihood, i.e., how often the loss event will probably occur in a given time frame) and magnitude (how bad the event will probably be, i.e., the consequences).

Every future loss event has a probable frequency of occurrence and a probable loss magnitude. We say "probable" because we cannot predict with absolute certainty when the event will occur, how often it will occur in a given time frame, or the exact loss magnitude. Based on the available information, evidence, and inference, we can make reasonable estimates about both variables, expressed as ranges or probability distributions.
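As a sketch of how those two variables combine, here is a minimal Monte Carlo simulation. It is illustrative only: the distribution choices (a Poisson event count for frequency, a lognormal for per-event magnitude) and every parameter value are assumptions for this example, not FAIR-prescribed settings.

```python
import math
import random
import statistics

random.seed(42)

def poisson_sample(lam):
    """Draw one Poisson-distributed event count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq=0.5, mag_mu=11.0, mag_sigma=1.0, trials=50_000):
    """Simulate total annual loss: a loss-event count per year, times a
    lognormal loss magnitude per event, repeated over many trial years."""
    totals = []
    for _ in range(trials):
        events = poisson_sample(freq)  # how many loss events this year
        totals.append(sum(random.lognormvariate(mag_mu, mag_sigma)
                          for _ in range(events)))
    return totals

losses = simulate_annual_loss()
print(f"mean annual loss: {statistics.mean(losses):,.0f}")
print(f"95th percentile:  {statistics.quantiles(losses, n=100)[94]:,.0f}")
```

The output is a distribution of annual losses rather than a single number, which is exactly what a range-based estimate asks for.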

I sometimes come across the point of view that we should ignore probabilities and instead focus only on quantifying the worst case. 

The general argument goes something like this: "Cybersecurity threats are constantly changing. The little historical data we have is useless for making predictions about future events. Why bother sharing something with management that is going to be 'wrong' anyhow? Better to do our planning only on worst-case scenarios. This way, we don't have to deal with probabilities and are always ready for the worst."


FAIR Community Expert Contributor

Osama Salah is an IT Security Specialist at the Abu Dhabi Department of Finance and the 2019 honoree of the FAIR Ambassador Award for his education work on behalf of FAIR in the United Arab Emirates.

Read the original version of this post on LinkedIn. 

Read more blog posts by Osama Salah.


Cybersecurity Threats Are Constantly Changing

The argument is that cybersecurity threats are changing too fast for us to deal with them. I usually use "threat" to mean "threat actor"; for this article's sake, let's use it to also incorporate the capabilities, tools, and techniques of the threat actor.

So the claim is that threat actors are unpredictable, as are their capabilities, tools, and attack techniques, which are supposedly constantly changing and continuously surprising us. From that perspective, it would be futile to make any reasonable estimates, as there is too much uncertainty.

We have all heard variations of this argument, more often than not as part of a FUD approach (intentional or unintentional) to justify funding for new controls. But does it reflect reality?

What threats have we seen in the last five years that were unimaginable in the five years prior? What took cybersecurity experts by surprise? I can't think of anything. Are threats changing so rapidly that we can't keep up, or are they evolving at a pace where we can recognize the change and the trend, but for one reason or another we don't take action, don't act fast enough, or don't take the right action?

For example, consider ransomware. It is always in the news and mentioned as a major cybersecurity "risk" on so many top 10 risk lists. Did ransomware suddenly sneak up on us and take us by surprise?


Learn FAIR™ and quantitative cyber risk analysis through the FAIR Institute - sign up for FAIR Fundamentals Training now.


The first known ransomware is the "AIDS Trojan" from 1989, so we have known about this type of threat for quite some time. If we consider ransomware "just" a specific type of malware, we have known about the broader category even longer. Ransomware surged in the early 2010s, enabled by bitcoin as a payment channel, and we have been suffering from it ever since. Just in case the COVID quarantine has messed up your sense of time: it's now 2021.

That does not sound like a threat that materialized out of nowhere. 

Another recent case is the SolarWinds/Sunburst attack, which is "just" another supply chain attack. No part of it is novel or surprising. It has been described as a Black Swan, which it isn't!

The current Exchange Server attacks are "just" another case of zero-day vulnerability exploitation. Hardly surprising.

The MITRE ATT&CK Matrix shows 14 tactics, similar to the phases of the cyber kill chain. Did this look different last year or the year before? It also lists about 178 techniques. Was that different last year or the year before? Have there been any surprising changes?

This argument of constant, unpredictable change does not hold up. I seriously doubt that our cybersecurity challenges stem from some supposedly rapid change in the threat landscape. I suspect they have more to do with the fact that cybersecurity is a complex problem, and that this complexity stems not from rapid, unpredictable change but from the large number of components that make up the system and the sometimes unpredictable interactions between them.

You Can't Get Probability Right Anyway, So Better to Avoid It

The other part of the argument is that estimating probability with any useful level of accuracy is impossible. Why bother talking to management about something we are uncertain about? Instead, we should define the worst-case scenario and plan accordingly.

First of all, this thinking seems to express probability in single-point estimates, like "We will have two ransomware loss events in the next two years." Single-point estimates fail to express uncertainty and are therefore usually wrong.

So yes, don't make single-point estimates. Instead of ignoring uncertainty, embrace and deal with it. 


Join the FAIR Institute and join the conversation on the future of risk management


Management understands that the future is by definition uncertain; they won't be shocked when someone actually expresses that uncertainty. When we estimate probabilities, we express them as a range, including the best, most likely, and worst cases, along with how confident we are in the estimate. We don't ignore worst cases; we draw a complete picture so that management has all the information needed to support decision-making. If management is strongly risk-averse, they might choose to plan for the worst case; that's their choice. If management is willing to take on more risk, we should enable them to do so in a rational, defensible way. That information should be shared with them, not kept from them.
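One way to turn a three-point estimate into such a range is to sample from a simple triangular distribution. The figures below are hypothetical, and in practice a calibrated distribution (FAIR tooling commonly uses PERT) would be fitted instead; this is only a sketch of the idea.

```python
import random
import statistics

random.seed(7)

# Hypothetical three-point estimate for a per-event loss (illustrative numbers).
best, most_likely, worst = 50_000, 250_000, 2_000_000

# random.triangular(low, high, mode) keeps the whole range in play
# instead of collapsing the estimate into one number.
samples = [random.triangular(best, worst, most_likely) for _ in range(100_000)]

print(f"median loss:     {statistics.median(samples):,.0f}")
print(f"90th percentile: {statistics.quantiles(samples, n=10)[8]:,.0f}")
```

Percentiles read off this distribution give management the best, most likely, and worst cases in one coherent picture.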

Not all worst-case scenarios are created equal. What do you do with two or more loss events that end up having the same worst-case outcome? Wouldn't it be beneficial to know that one is estimated to occur more often than the other? Shouldn't that be considered in the decision-making?
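To make that concrete, here is a toy comparison (all numbers hypothetical): two scenarios share the same worst-case loss, yet a crude expected-annual-loss figure, frequency times typical per-event loss, separates them by a factor of twenty.

```python
# Two hypothetical loss-event scenarios with the same worst-case outcome
# but very different estimated annual frequencies (illustrative numbers only).
scenarios = {
    "Scenario A": {"annual_frequency": 2.0, "worst_case_loss": 5_000_000, "typical_loss": 200_000},
    "Scenario B": {"annual_frequency": 0.1, "worst_case_loss": 5_000_000, "typical_loss": 200_000},
}

for name, s in scenarios.items():
    # A crude expected-annual-loss figure: frequency times typical per-event loss.
    eal = s["annual_frequency"] * s["typical_loss"]
    print(f"{name}: worst case {s['worst_case_loss']:,}, expected annual loss {eal:,.0f}")
```

A worst-case-only view treats these two as identical; a frequency-aware view clearly does not.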

What if the worst case doesn't happen, and instead we see multiple events of far lower impact play out? How do we explain that to management? Don't we risk being accused of withholding available information from decision-making, of pushing our own agenda to procure more tools, or of wasting resources the business could have invested where they were needed more?

We Don't Have Enough Data and Historical Data Is Useless

The argument is basically that to even consider estimating probabilities, we would need a lot of data that we don't have. And even if we have some data, it's "useless" historical data that is no longer relevant in a supposedly constantly changing world.

It's exactly because we have imperfect data that we need to measure, quantify, and deal with uncertainty rather than ignore it.

If you have read any of Douglas Hubbard's books on measurement, you know the assumptions he recommends we make:

  1. Your problem is not as unique as you think.

  2. You have more data than you think.

  3. You need less data than you think.

  4. There is a useful measurement that is much simpler than you think.

Historical data isn't useless by default; it becomes useless only when we use it wrong.

 

Risk Avoidance vs. Risk Taking

Risk management, as frequently practiced, focuses on avoiding risk rather than enabling the organization to take as much risk as is justifiable. Worst-case thinking allows this risk-avoidance culture to emerge and flourish, which can lead to unnecessary risk avoidance, wasted resources, and missed opportunities to support enterprise objectives. Businesses grow through risk-taking; unnecessary risk avoidance causes stagnation. The survival of organizations depends on taking the right risks.

 

What About Resilience?

A popular trend is to frame the discussion around "cyber resilience" rather than just "cybersecurity." This appears to be at odds with worst-case thinking. At the core of cyber resilience I see the need to be prepared to respond to unforeseeable events. Preparing for the worst is one way, but once a particular scenario has been defined, does it still qualify as "unforeseeable"? The point is that if you condition yourself to think in absolutes, you risk losing the ability to imagine what else could unfold and prepare for it.

 

Conclusion

To conclude, there is value in thinking about worst-case scenarios. We do it, for example, in business continuity planning, and it helps surface other issues that might have remained unaddressed. However, there is no value in ignoring uncertainty. Instead, we need to look at the different ways of dealing with it.



Written by Osama Salah
