What About "Positive Risk"? - Part 2

In the first post in this series, I said there were two belief systems that drive the notion of “positive risk” within our profession. In that post, I focused on why the first of those belief systems — the notion that some business-related risk doesn’t have an upside — is misguided. In this post, I’ll discuss the second belief system, which subscribes to a definition for risk that runs along the lines of “risk = uncertainty”.

BTW — There will be one more post in this series, which will focus specifically on the ISO 31000 definition for risk.

Risk = uncertainty (or variance)?

Some definitions for risk equate risk with the uncertainty of an outcome (or, similarly, with variance from an objective). Because outcomes can fall short of or exceed expectations/objectives, this gives rise to the notion that risk can be “positive.” On the surface this seems logical (though with a question mark that I'll return to further on). But being logically sound doesn’t automatically make a definition practical, or even particularly useful to us as risk professionals.

The baseline matters

A stakeholder’s “baseline” matters a lot when selecting a definition for risk. In other words, if a CEO or other senior stakeholder thinks of “risk” in terms of positive or negative variations from a defined objective, then “risk = uncertainty” may be the appropriate definition to use. If, however, a stakeholder thinks of “risk” in terms of adverse events, then the most appropriate definition is going to be something along the lines of “the frequency and magnitude of loss.” 

Recognizing which baseline your stakeholders operate from is crucial in order to know which risk definition to operate from. Speaking from my own experience, in almost 30 years as a risk management professional, the stakeholders I’ve served have never come at the question of risk with a baseline of “risk = uncertainty”. This explains why FAIR’s definition for risk is what it is — The probable frequency and probable magnitude of future loss — it fits the problems my stakeholders have expected me to help them manage. 

Actually applying the definition 

In numerous conversations with colleagues and senior executives over the years I have yet to encounter someone who operates from the “risk = uncertainty” baseline. A handful of information security professionals and other risk professionals I’ve spoken to profess to subscribe to that definition, but when pressed to give examples of how they apply it to their work, they have never been able to provide a practical example that aligns with the definition. Sometimes they’ll discuss how Monte Carlo simulation and distributions can be used with uncertain data to measure the potential for either gains or losses, or how outcomes can exceed or fall short of an objective, but that is not the same thing as putting the “risk = uncertainty” definition into practice. Their statements have simply been examples of concepts and methods that involve uncertainty and the potential for gain. I'm still waiting to see practical examples of the "risk = uncertainty" definition being used to answer the problems we're faced with in our day-to-day work. Until I do, it's simply a nice-sounding theoretical construct that some people have bought into but don't actually use.

How people behave

To at least some extent, the “risk = uncertainty” perspective stems from an economist’s point of view, which says that people/organizations make decisions and take actions in large part to manage the uncertainty of their future. A classic example is insurance. Most people prefer the certainty of premium costs and coverages that come with insurance over the uncertainty that exists without insurance. There are also utility functions economists use that illustrate the relationship between uncertainty about adverse events (or gains) and how people behave. In a perfect world, if we could reduce uncertainty about future events to zero, people could simply treat what will transpire in this perfectly known future as a cost. In a perfect world…

In defense of the economist perspective, uncertainty can be measured (or expressed) using confidence intervals and such. For example, you might express uncertainty regarding the probability of an adverse event (or a positive outcome) by giving a range that you’re 90% confident in — e.g., likelihood of occurrence is between 50% and 70%. Furthermore, greater or lesser certainty does affect how people make decisions, which leads to…  
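To make that concrete, here's a minimal sketch of what expressing uncertainty as a calibrated 90% range can look like in practice. The numbers are made up, and fitting the interval with a normal distribution is my own assumption for illustration, not a prescribed method:

```python
import random
import statistics

random.seed(42)

# Hypothetical calibrated estimate (illustrative numbers only): we are
# 90% confident the likelihood of the adverse event is between 50% and 70%.
LOW, HIGH = 0.50, 0.70

# One simple way to honor that interval: a normal distribution whose
# 5th/95th percentiles land on the stated bounds (90% of a normal
# falls within +/-1.645 standard deviations of its mean).
mid = (LOW + HIGH) / 2
sd = (HIGH - LOW) / (2 * 1.645)

# Sample likelihoods, clamping to the valid [0, 1] probability range
samples = [min(max(random.gauss(mid, sd), 0.0), 1.0) for _ in range(100_000)]

med = statistics.median(samples)
share_in_range = sum(LOW <= s <= HIGH for s in samples) / len(samples)
print(f"median likelihood: {med:.2f}")
print(f"share of samples inside the stated 90% range: {share_in_range:.2f}")
```

Note that the uncertainty here is a property of the estimate — a point I'll come back to below.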

…buckets of uncertainty

A construct often associated with this “risk = uncertainty” belief system divides the world into two buckets — those adverse events which are certain to occur, and those which may occur. The first bucket is sometimes called “Certainties” and the other bucket “Risks.” A key assumption underlying this approach is that those “certain” events are, in fact, certain — i.e., the probability is 100%. Of course, logically we know that there can be no perfect certainty about the future, but this two-bucket approach conceptually simplifies things and it aligns with the economist perspective about how people prefer to behave — i.e., that highly certain outcomes are treated as costs (or, I suppose, in the case of positive outcomes, certain gain). Oddly enough (and to my point about how people actually operationalize a risk definition in practice), I have yet to see these Certainties and Risks buckets described within anything but the context of negative outcomes. 

From a practical risk management perspective — at least in the cyber risk landscape — even if you gravitate to these buckets-o’-risk there aren’t that many things we have to wrestle with that fall into the Certainty bucket. This is especially true for the large impact events that almost invariably have lower frequencies. As a result, I haven’t found these conceptual buckets to be useful.

The important role of uncertainty

Despite my comments above, uncertainty does play a critical role in risk measurement and management. In fact, one of the biggest problems with qualitative (e.g., “Our risk is Medium”) and misleadingly precise (e.g., “Our risk is 3.4” or “Our risk is 587”) risk measurements is that they don’t convey the degree of uncertainty in the measurement. Does a “Medium risk” rating represent the best case, worst case, or most probable case? Is it “barely medium?” What is the probability that an issue that has been rated “Medium risk” is actually “High risk”? The same questions apply for precise risk ratings of “3.4” or “587”. This is especially important because of what economists have illustrated about uncertainty’s role in human behavior; it's also why tools and techniques like Monte Carlo simulation, calibrated estimation, Bayesian analysis, etc. are so important.
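As an illustration of why distributions beat point scores like “3.4”, here's a small Monte Carlo sketch in the FAIR spirit (frequency and magnitude of loss), using entirely hypothetical inputs. The distribution choices — normal fits for the calibrated ranges, Poisson for event counts, lognormal for magnitude — are my assumptions for the sketch, not a definitive implementation of any particular method:

```python
import math
import random
import statistics

random.seed(7)

N = 50_000  # simulated years

def sample_from_ci(low, high):
    """Sample a normal whose 5th/95th percentiles match the 90% CI [low, high]."""
    return random.gauss((low + high) / 2, (high - low) / (2 * 1.645))

def poisson(lam):
    """Knuth's method for sampling a Poisson-distributed event count."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Hypothetical calibrated inputs (illustrative only):
#   loss event frequency: 90% CI of 0.1 to 0.5 events per year
#   loss magnitude per event: 90% CI of $50k to $500k (fit in log space)
mu = (math.log(50_000) + math.log(500_000)) / 2
sigma = (math.log(500_000) - math.log(50_000)) / (2 * 1.645)

annual_losses = []
for _ in range(N):
    freq = max(sample_from_ci(0.1, 0.5), 0.0)  # frequency can't go negative
    events = poisson(freq)                     # loss events this simulated year
    annual_losses.append(sum(random.lognormvariate(mu, sigma) for _ in range(events)))

mean_loss = statistics.mean(annual_losses)
p90 = statistics.quantiles(annual_losses, n=10)[8]  # 90th percentile year
zero_years = sum(x == 0 for x in annual_losses) / N

print(f"mean annualized loss: ${mean_loss:,.0f}")
print(f"90th percentile year: ${p90:,.0f}")
print(f"years with no loss:   {zero_years:.0%}")
```

The output is a distribution, so the answer to “how much risk?” can be reported as a range with stated confidence rather than a single opaque number.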

Now – about that earlier question mark, regarding logical soundness...  

In order for risk management decisions to qualify as “well-informed”, decision-makers have to understand not only how much risk exists, but also how much uncertainty surrounds a risk measurement. But wait – right there. Did you catch that? I said "uncertainty surrounds a risk measurement." If you dissect that phrase it implies that uncertainty is distinct from the thing being measured – in this case, risk. In other words, uncertainty in a measurement (e.g., a variance of blah %) isn't the same thing as the thing being measured (e.g., the frequency and magnitude of loss). It is simply an important characteristic of the measurement. This, by itself, would seem to call into question the validity of "risk = uncertainty" as a definition because that definition leaves out the very thing there's uncertainty about. My gut is telling me this is a logical flaw with "risk = uncertainty" but I haven't had a chance to mull this over as thoroughly as I'd like yet, so I'll leave it as a question mark in my mind for now.

Summary for this belief system

If your stakeholder’s baseline is “variance from an intended outcome/objective”, then using “risk = uncertainty” as your risk definition might be exactly the way to go (although I'm still curious about how you reconcile the "what's being measured" question mark). And if this is the definition you're using because it represents your stakeholder's baseline, then yes, this definition does support the notion of “positive risk.” I'd love to see examples of how you use it in practice though – the problems you solve, and how you're measuring and communicating it to your stakeholders.

If, however, your stakeholder’s baseline is something along the lines of “managing adverse events”, then “risk = uncertainty” is not a useful definition for risk, which means that “positive risk” is not viable.  

At the end of the day…

…whatever risk definition or belief system you subscribe to should at least meet the following criteria:

  • It should stand up under logical scrutiny
  • It should enable you to measure risk clearly and pragmatically
  • It should help you cost-effectively manage the problems that your stakeholders expect you, as a risk management professional, to manage

I hope what I’m providing in this blog post series is shedding additional light on the ways in which some perspectives on/definitions for risk don’t appear to meet these criteria. If nothing else, I at least hope that you are in a better position to make an informed choice about how you define, analyze, measure, and communicate about risk.

 
