In part 4 of this series, I shared an approach that can help an organization identify its most significant loss event scenarios. This is (or should be) the first step in gaining a handle on its risk landscape. In this post, I’ll discuss how organizations can more reliably identify which control deficiencies in their environment are the largest contributors to that risk.
You’re already part of the way there
In the process of performing the analyses described in part 4, you will undoubtedly have identified insufficient controls. Maybe it’s the fact that access privileges for some systems or applications aren’t reliably adjusted when personnel leave the organization or change roles. Or maybe it’s a weak authentication protocol, occasional default passwords, unpatched systems, weak logging, or unreliable recovery processes.
Figuring out which deficiencies are most significant is simply a matter of re-analyzing the scenarios with control values that reflect an improved state (e.g., more reliable access privilege management). Comparing the before and after states makes the risk reduction from each improvement obvious. After comparing all of the deficient controls in the top loss event scenarios, it’s relatively simple to identify which deficiencies float to the top of the heap. Add a cost analysis and you can also determine which control improvements provide the best ROI.
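As a rough illustration of that before/after comparison, here is a minimal Monte Carlo sketch. Everything in it is assumed for the example: the scenario, the frequency and magnitude ranges, and the improvement cost are invented, and it uses flat uniform ranges rather than the calibrated distributions a real FAIR analysis would use.

```python
import random

def annualized_loss_exposure(freq_range, magnitude_range, trials=100_000, seed=42):
    """Crude Monte Carlo estimate of annualized loss exposure (ALE).

    freq_range: (min, max) loss events per year, sampled uniformly.
    magnitude_range: (min, max) loss per event, sampled uniformly.
    A real analysis would use calibrated distributions (e.g., PERT);
    uniform ranges keep this sketch dependency-free.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        freq = rng.uniform(*freq_range)
        magnitude = rng.uniform(*magnitude_range)
        total += freq * magnitude
    return total / trials

# Hypothetical scenario: stale access privileges after role changes.
before = annualized_loss_exposure(freq_range=(0.5, 2.0), magnitude_range=(50_000, 500_000))
# Re-analyze with improved control values (more reliable deprovisioning
# lowers loss event frequency; magnitude is unchanged).
after = annualized_loss_exposure(freq_range=(0.1, 0.5), magnitude_range=(50_000, 500_000))

risk_reduction = before - after
annual_cost_of_improvement = 60_000  # assumed cost, purely illustrative
roi = risk_reduction / annual_cost_of_improvement
print(f"ALE before: ${before:,.0f}  after: ${after:,.0f}  reduction: ${risk_reduction:,.0f}")
print(f"Rough ROI multiple: {roi:.1f}x")
```

Running the same comparison for each deficient control in the top scenarios, then sorting by risk reduction (or by the ROI multiple once costs are added), is what surfaces the deficiencies that float to the top of the heap.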
The control deficiencies identified through this process are, however, strongly focused on what we refer to as “asset-level controls” (a.k.a., “scenario-level controls”). In other words, controls that directly affect the frequency and/or magnitude of loss. This scenario analysis process usually does not directly identify risk management deficiencies in execution (e.g., policies and awareness) or decision-making (e.g., visibility, analysis, and reporting). For these deficiencies, we have to ask different questions.
The 2nd most important question
If “How much risk?” is the first most important question in risk management, the second most important question is, “Why?” Why does the organization have the types and levels of risk that it does? Those deficient controls that were identified during the scenario analysis phase didn’t magically appear out of thin air. Decisions were made (or not made) and actions taken (or not taken) that resulted in deficient asset-level control conditions. Unless the organization identifies and treats the root causes behind those deficiencies, it is doomed to relive them through what I refer to as “risk management groundhog day”.
At this point in the controls analysis process, organizations should perform root cause analyses to determine why their control deficiencies exist. When done well, this process will likely identify a small handful of problems (two or three, perhaps) that contribute strongly to most of the asset-level control deficiencies. These should clearly make the top 10 list because they enable strategic, systemic improvements rather than just the short-term loss exposure reduction you get from fixing deficient asset-level controls.
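To see why a small handful of root causes tends to dominate, you can tally how many asset-level deficiencies each candidate root cause contributes to. The mapping below is entirely invented for illustration; in practice it comes out of the root cause analyses themselves.

```python
from collections import Counter

# Hypothetical mapping from asset-level deficiencies to the root causes
# surfaced for each; every entry here is invented for illustration.
deficiency_root_causes = {
    "stale access privileges": ["unowned lifecycle processes", "poor change visibility"],
    "occasional default passwords": ["no enforced hardening standard"],
    "unpatched systems": ["no asset inventory", "unowned lifecycle processes"],
    "weak logging": ["poor change visibility", "no asset inventory"],
    "unreliable recovery": ["unowned lifecycle processes"],
}

# Count how often each root cause appears across the deficiencies.
tally = Counter(cause for causes in deficiency_root_causes.values() for cause in causes)
for cause, count in tally.most_common():
    print(f"{cause}: contributes to {count} of {len(deficiency_root_causes)} deficiencies")
```

Even in this toy example, one root cause touches three of the five deficiencies; fixing it yields a systemic improvement that no single asset-level fix can match.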
By the way, I have yet to encounter an organization that performs solid root cause analysis in the risk space. All I have seen to date are superficial proximate cause analyses. In a future post, I’ll share an approach for performing much more effective root cause analyses. If you don’t want to wait for that post, there’s an infographic on the RiskLens resources page that provides an overview. I also discuss it in the book, "Measuring and Managing Information Risk: A FAIR Approach".
The 3rd most important question
Among the things that I didn’t like about being a CISO (and there were several), surprises topped the list. All too often, I would learn after the fact about some aspect of the organization’s risk landscape that was materially important. To avoid surprises, an important controls-related question that has to be evaluated to flesh out the deficient controls list is, “What don’t we know?” Remember in my earlier posts, where I talked about “clueless unknowns”? Well, this is where we attack that problem. Does the organization have a good understanding of:
- Where its key assets exist
- What its key points of attack are (e.g., points of entry into the network)
- New technologies or business processes that are being implemented
- What’s happening in the threat landscape (e.g., evolving capabilities, methods, and changes in frequency of events)
- Changes in the controls environment
- Changes that affect impact (e.g., evolving regulations)
If the answer to any of these questions is sketchy, then the organization almost certainly should add it to the list, because it's very hard to make well-informed decisions about risk if visibility into the risk landscape is poor. (In case you're wondering why visibility belongs in a discussion of controls, it's because visibility is categorized as a decision-making control within FAIR.)
Stack-ranking control deficiencies
Remember my earlier posts, where I pointed out the importance of being able to stack-rank things in order to prioritize effectively? Well, some of you may be asking, “How do I compare control deficiencies identified through the ‘why’ and ‘what don’t we know’ processes against each other and against the asset-level control deficiencies?” Unfortunately, the process for doing that goes well beyond what I can convey in a series of blog posts.
Absent being able to compare these three types of deficiencies against one another in a single list, your best option is to create three “top deficient controls” lists:
- One for asset-level control deficiencies (which are easy to compare based on the evaluated benefits of improving them)
- One for the most significant root-causes (of which there will probably be just a handful), which can be compared against each other in terms of which ones are most relevant to the biggest asset-level control deficiencies
- One for the top visibility deficiencies, which can be roughly compared against one another based on the severity of the deficiency (e.g., the organization has marginal visibility into threats against a smaller subset of its environment, versus almost no visibility into the control conditions of a broader subset of its environment)
If you want to, you can then take the top three or four from each list to construct your top ten control deficiencies list. The items can’t be stack-ranked across categories, but at least they’re ranked within their own category.
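That assembly step can be sketched in a few lines. The deficiency names and the per-category ordering below are invented placeholders; in practice each list comes pre-ranked from the analyses described above.

```python
# Hypothetical ranked lists, each already ordered worst-first within
# its own category using that category's comparison criteria.
ranked_lists = {
    "asset-level": ["stale access privileges", "unpatched systems",
                    "weak authentication", "weak logging"],
    "root-cause": ["unowned lifecycle processes", "poor change visibility",
                   "no asset inventory"],
    "visibility": ["control-condition visibility", "threat landscape monitoring",
                   "key asset locations"],
}

def build_top_list(ranked_lists, take=(4, 3, 3)):
    """Take the top few entries from each category list. Items stay
    grouped by category because they can't be stack-ranked across
    categories, only within them."""
    combined = []
    for (category, items), n in zip(ranked_lists.items(), take):
        combined.extend((category, item) for item in items[:n])
    return combined

for rank, (category, item) in enumerate(build_top_list(ranked_lists), start=1):
    print(f"{rank:2d}. [{category}] {item}")
```

Taking four from the first list and three from each of the others yields exactly ten entries, grouped so each category's internal ranking is preserved.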
Regardless of whether you use one list for control deficiencies, or three, the simple fact is that the process of evaluating the controls landscape in this fashion will significantly improve an organization’s ability to focus on those deficiencies that matter most in its risk landscape.
But what about…?
A question I anticipate from readers is, “How can I leverage the ‘risk assessment’ data my organization already has?” Good question, and I wish I had a feel-good answer for you. Today, most of the “risk assessments” performed in the industry are based on controls checklist frameworks like NIST CSF, PCI, FFIEC CAT, ISO 2700x, homegrown lists, etc. Although these assessments can be useful in identifying control deficiencies, they don’t help with measuring risk or prioritizing the deficiencies in any real sense. My suggestion is to use them to:
- Satisfy the auditors, regulators, or other checklist-focused stakeholders
- Compare those results against the deficiencies identified through the processes that I have described. You never know. The checklists may identify something you’ve overlooked. That said, if you’ve followed the process I’ve described, it is unlikely that a deficiency identified through the checklist and not through the process is going to qualify as one of your top concerns.
Wrapping it up
This is the last blog post in this series. I hope you’ve found it interesting and worthwhile. As we continue to work with and learn from members of this forum, we’ll provide new insights and refinements so that everyone can benefit. In the meantime, please let me know if you have any questions or feedback via the comment box below.