It all boils down to…
Risk. The common thread, and thus our normalized measure for each of the “risks” in any top ten list, is that they all contribute to how much risk the organization carries. Consequently, all we need to do is measure each one’s contribution to the organization’s overall risk and stack rank them. On the surface this seems mind-numbingly obvious, but there are two significant challenges:
- Comparisons have to assume a large degree of independence between the elements being compared (or at least any relationships must be accounted for)
- Most organizations don’t measure information security or operational risk accurately or in a manner that is useful for ranking
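To make the stack-ranking idea concrete, here is a toy sketch in which each “risk” contributes an annualized loss exposure (frequency × magnitude); the names and figures are entirely invented for illustration:

```python
# Toy sketch: rank "risks" by their contribution to overall loss exposure.
# All frequencies and magnitudes below are invented, not real data.
risks = {
    "social engineering": {"annual_frequency": 4.0, "loss_per_event": 50_000},
    "web app vulnerabilities": {"annual_frequency": 1.5, "loss_per_event": 120_000},
    "insider threat": {"annual_frequency": 0.2, "loss_per_event": 800_000},
}

def annualized_loss_exposure(r):
    """Expected loss per year = how often it happens x how much each event costs."""
    return r["annual_frequency"] * r["loss_per_event"]

# Stack rank: largest contribution to overall risk first.
ranked = sorted(risks, key=lambda name: annualized_loss_exposure(risks[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: ${annualized_loss_exposure(risks[name]):,.0f}/yr")
```

Of course, this naive ranking only holds if the elements are independent of one another, which, as the next section shows, they are not.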
Risk dependencies and relationships
Let's refer to the list of “risks” from Part 1.
- Since social engineering is a common method used by various threat communities, it is strongly related to cyber criminals and state-sponsored hackers, and sometimes even to the insider threat.
- Furthermore, user awareness is typically considered a key determinant in the likelihood of successful compromise by social engineering.
- User awareness also often plays a critical role in data leakage, as well as the risk associated with mobile technologies, third parties, web application vulnerabilities, and cloud computing, each of which may potentially be leveraged by the various threat communities in the list.
So in the span of three bullets, I’ve established complex interrelationships among every single element in our list. As a result, any effort to stack rank these through some sort of wet-finger-in-the-air exercise is doomed.
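One way to see just how tangled the list is: encode the relationships in those bullets as a graph and check whether any element stands apart. The edge list below is a rough, hypothetical encoding of the bullets, not an exhaustive model:

```python
from collections import defaultdict, deque

# Rough, hypothetical encoding of the relationships described above
# as edges in an undirected graph.
edges = [
    ("social engineering", "cyber criminals"),
    ("social engineering", "state-sponsored hackers"),
    ("social engineering", "insider threat"),
    ("user awareness", "social engineering"),
    ("user awareness", "data leakage"),
    ("user awareness", "mobile technologies"),
    ("user awareness", "third parties"),
    ("user awareness", "web application vulnerabilities"),
    ("user awareness", "cloud computing"),
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Breadth-first search from one element; if it reaches every other
# element, the whole list is a single tangled component.
seen, queue = set(), deque(["social engineering"])
while queue:
    node = queue.popleft()
    if node in seen:
        continue
    seen.add(node)
    queue.extend(adj[node])

print(len(seen), "of", len(adj), "elements reachable from one node")
```

Every element is reachable from every other, so rating any one of them in isolation quietly ignores shared drivers like user awareness.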
Furthermore, we could replace things in this list with other elements of the risk landscape, like:
- Patch management
…and have the same relationship challenges and difficulty defending/explaining why something did or did not make the top ten list.
Rather than trying to pull apart and evaluate the complex relationships in lists like this, in a later part of this series I’ll introduce a different way to construct a list of top risk concerns.
When it comes to prioritizing “risks”, one considerable challenge is that risk is typically measured qualitatively on an ordinal scale (e.g., 1 through 5, low through high, etc.). Why is this a problem? First, there are inherent challenges in measuring risk qualitatively (more on this in a separate blog post). Then, in order for anything to crack the “top ten” list, or perhaps even a “top twenty” list, the stated risk would likely fall into the “high risk” bucket. And because there’s no reliable way of using ordinal scales to differentiate things that fall within the same risk level (especially if they have interrelationships like our list does), we still can’t reliably focus on the things that matter most.
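The bucketing problem is easy to demonstrate. Suppose, hypothetically, we knew each risk’s true annualized loss exposure and then mapped it onto a 1-through-5 ordinal scale (all dollar figures and bucket thresholds below are invented):

```python
# Hypothetical annualized loss exposures (dollars) for four "risks".
exposures = {
    "risk A": 20_000_000,
    "risk B": 4_000_000,
    "risk C": 1_600_000,
    "risk D": 40_000,
}

def ordinal_score(dollars):
    """Map a dollar figure onto a 1-5 ordinal scale (illustrative thresholds)."""
    thresholds = [50_000, 250_000, 750_000, 1_500_000]  # bucket boundaries
    return 1 + sum(dollars > t for t in thresholds)

scores = {name: ordinal_score(v) for name, v in exposures.items()}
print(scores)
```

Risk A’s exposure is more than twelve times risk C’s, yet both collapse into the same “5” bucket along with risk B, so the ordinal score cannot order a top ten within that bucket.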
Another challenge is the inconsistency in how people define and approach risk measurement. Some people think of risk as the worst-case potential impact (regardless of probability). Others think of it as the probability of an event (with inconsistent definitions of what constitutes an “event”). Still others think of control deficiencies as “risks” and rate them based on some degree of severity. And yet others approach it as a combination of the above. With this kind of inconsistency, any hope of measuring risk and prioritizing effectively is a pipe dream.
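To see why this inconsistency matters, score the same two invented scenarios under two of those definitions; each definition puts a different scenario on top:

```python
# Two invented scenarios, each with a probability and a worst-case impact.
scenarios = {
    "scenario X": {"probability": 0.90, "worst_case_impact": 100_000},
    "scenario Y": {"probability": 0.05, "worst_case_impact": 10_000_000},
}

# Definition 1: "risk" = worst-case potential impact, regardless of probability.
by_impact = sorted(scenarios, key=lambda s: scenarios[s]["worst_case_impact"],
                   reverse=True)

# Definition 2: "risk" = probability of the event, regardless of impact.
by_probability = sorted(scenarios, key=lambda s: scenarios[s]["probability"],
                        reverse=True)

print("Top risk by impact:     ", by_impact[0])
print("Top risk by probability:", by_probability[0])
```

Two analysts looking at identical scenarios would hand management two different “top” risks, and neither ranking is wrong by its own definition.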
Having made my argument for why organizations are not prioritizing effectively today, in Part 3 of this series I’ll begin to lay the foundation for effectively identifying your organization’s top risk concerns.