One of the questions I commonly encounter is "How do you take something like FAIR and apply it to a big problem, like measuring the aggregate risk within an entire organization?"
The question often arises, “How is FAIR different from (or better than) a framework like NIST’s Cybersecurity Framework (CSF)?” The simple answer is: FAIR isn’t inherently better or worse; it is fundamentally different and, in fact, complementary.
In part 2 of this series, I discussed the obstacles we most commonly encounter as organizations begin to adopt more mature risk measurement methods — quantification in particular.
In part 1 of this series, I said that quantification was the “easy” part of adopting a more mature approach to risk analysis, and that implementing organizational change was the hard part. In this post, I’ll share the most prominent obstacles I’ve witnessed.
In many of my conversations with organizations, I hear the same concern: “We are not sure that we are mature enough to do quantified risk analysis.”
In part 4 of this series, I shared an approach that can help an organization identify its most significant loss event scenarios. This is (or should be) the first step in gaining a handle on its risk landscape. In this post, I’ll discuss how organizations can more reliably identify the control deficiencies that are the largest contributors to risk in their environment.
Not infrequently, we’ll be asked whether FAIR “accounts for” scenarios that are unforeseen. The answer is yes – and no – depending on what’s meant by “unknown unknowns.” Allow me to explain, starting with the “no” answer and then the “yes.”
Minimizing unknown unknowns
The bottom line of this blog series is that in order to prioritize effectively, organizations must have a clear picture of their loss event landscape. Developing a clear and logical taxonomy helps in understanding such a complex landscape and minimizes the odds of overlooking something important but not obvious.
Begin with the end in mind
You may be familiar with Stephen Covey’s 2nd habit of highly effective people, “Begin with the end in mind.” I’m going to borrow that and refine it for our purposes. Specifically, we have to be clear on what the objective of any risk management program should be: to cost-effectively position the organization to experience an acceptable frequency and magnitude of loss events. If we can agree on that (and I’ve yet to hear a logical argument against it), then we can begin to approach prioritization effectively.
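That objective — an acceptable frequency and magnitude of loss events — is exactly what FAIR-style quantification estimates. As a minimal sketch of the idea, the toy Monte Carlo simulation below combines a loss event frequency range with a loss magnitude range to estimate annualized loss exposure. The scenario, ranges, and uniform distributions are illustrative assumptions only; real FAIR analyses use calibrated estimates and distributions such as PERT, not uniform.

```python
import random

def simulate_annual_loss(freq_range, mag_range, trials=10_000, seed=7):
    """Toy Monte Carlo estimate of mean annualized loss for one scenario.

    freq_range: (min, max) loss events per year.
    mag_range:  (min, max) loss per event, in dollars.
    Uniform distributions are used purely to keep this sketch short;
    they are NOT what a calibrated FAIR analysis would use.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        events = round(rng.uniform(*freq_range))  # events this simulated year
        totals.append(sum(rng.uniform(*mag_range) for _ in range(events)))
    return sum(totals) / trials  # mean annualized loss exposure

# Hypothetical scenario: 0-4 loss events/year, $50k-$500k per event.
ale = simulate_annual_loss((0, 4), (50_000, 500_000))
```

Running the same simulation for each scenario in a portfolio yields comparable dollar figures, which is what makes the stack-ranking discussed below possible.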
It all boils down to…
Risk. The common thread among the “risks” in any top ten list, and thus our normalizing measure, is that each contributes to how much risk the organization has. Consequently, all we need to do is measure each one’s contribution to the organization’s overall risk and stack-rank them. On the surface this seems mind-numbingly obvious, but there are two significant challenges: