Risk Models Matter

Let’s say you’re approaching an intersection and the traffic signal turns yellow. What do you do - slow down and stop, or hit the accelerator? The answer for most people, ultimately, is: “It depends.” How fast am I going? How far am I from the intersection? Is there someone close behind me? Is there a police cruiser at the intersection? What is the road condition? Am I in a hurry to get somewhere? These are just a few of the considerations that may flash through our minds in an instant - at least some of them subconsciously. We then make a decision and act on that decision.


In that instant when we see the signal change color, we take in and process a remarkable amount of data and analyze the scenario using whatever mental model we’ve developed through experience and education. We then instantly apply the results of that analysis against our own tolerance for the different forms of risk that are in play (health/safety vs. legal, etc.). If our data and model are reasonably complete and accurate, then we probably survive these events with an acceptable frequency and magnitude of loss.



That “mental model” is a construct that represents our understanding of how the different elements in the decision play together. If our mental model is missing key elements -- e.g., the effect of icy road conditions on our ability to stop -- then our decision is much more likely to have an undesirable outcome. The same is true if our model contains erroneous or inaccurate structural elements or relationships -- e.g., a belief that icy roads will improve our ability to stop.


When we’re faced with a decision regarding information security, we will also apply a model. The model might be an informal mental model or something more formal like FAIR, TARA, CVSS or a host of other candidates. So the question we want to ask ourselves is: “How accurately does the model we’re using represent the problem we’re trying to understand?”


Where models fit

It’s implied above, but the strategic role risk models play in an organization’s success is outlined below:


Effective management decisions are predicated on…
Effective comparisons between the issues/options that are in play, which are predicated on…
The ability to measure the issues/options in a meaningful way, which is predicated on…
An accurate model (understanding) of the problem and its elements


The above is true regardless of whether you’re using an informal mental model or something formally defined. Consequently, you can’t expect to consistently and effectively manage a complex problem space if the underlying model for measurement and comparison is badly broken.


An example


For the sake of brevity, this blog post will not include a blow-by-blow analysis -- I’ll save that for another time. I will cite an example, from when I was a CISO, of how a flawed model almost had a significant impact on my employer.


We’d brought in a big-4 consulting firm to perform an attack and penetration exercise against us. At one point in the exercise they came to the table claiming that they’d identified a number of “high risk” issues that needed to be addressed immediately. I took one look at those issues, applied a quick mental sniff-test, and told them they were wrong. I didn’t believe any of those issues represented a level of risk that warranted a high impact (to the business) response. They agreed to sit down and review the issues with me so that they could show me the error of my ways. However, after we broke down each of the issues in detail using FAIR, they conceded that none of the issues warranted an immediate, high-impact response.


Their original analysis (measurement) of the issues was shown to be based on an inaccurate model of the problem (risk). This flawed measurement led to an inaccurate comparison of these issues versus the other issues and priorities the business faced, which would have had a significant negative effect on the business if we hadn’t recognized the flawed analysis. (The flaws in their model involved how they treated threat event frequency and loss magnitude.)
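
To make that concrete, here’s a stripped-down sketch of the kind of decomposition FAIR walks you through. The numbers are purely illustrative -- they aren’t the figures from that engagement -- but they show how making threat event frequency and loss magnitude explicit can take the air out of a “high risk” label that was based on severity alone.

```python
# Illustrative only: made-up numbers, not the actual figures from the
# engagement described above. A simplified FAIR-style decomposition:
#   loss event frequency = threat event frequency x vulnerability
#   annualized loss exposure ~= loss event frequency x loss magnitude

threat_event_frequency = 0.5   # expected threat events per year (about one every two years)
vulnerability = 0.10           # probability a threat event becomes a loss event
loss_magnitude = 50_000        # expected loss per loss event, in dollars

loss_event_frequency = threat_event_frequency * vulnerability
annualized_loss_exposure = loss_event_frequency * loss_magnitude

print(f"Loss events per year: {loss_event_frequency:.3f}")            # 0.050
print(f"Annualized loss exposure: ${annualized_loss_exposure:,.0f}")  # $2,500
# Roughly $2,500/year of exposure rarely justifies an immediate,
# high-impact (to the business) response.
```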


Now, please don’t interpret this as an indictment of big-4 firms. They have a lot of very bright people who do marvelous work. And besides, I’m pretty confident the scenario would have played out similarly with almost any firm because the big-4 firm was using a very common assessment method. The point is, if your model is broken badly enough, the results can significantly affect your organization.


There are models and then there are “models”


As I see it, there are three types of “models” being used in our industry.


Checklist “models” (e.g., ISO, PCI, etc.)
Maturity models (e.g., SEI’s CMM for software development)
Analytic models (e.g., FAIR, TARA, CVSS, etc.)


It’s a matter of opinion (or religious debate) whether checklists qualify as models. It doesn’t matter to me what they’re called as long as we understand what they are. Checklists are simply a set of security and/or risk management elements somebody believes are relevant and important. Presumably, if you follow the checklist you’re better off than if you don’t follow the checklist, and for most of the checklists in our industry, I’d agree (up to a point). So if you’re looking for a quick and dirty “are we generally doing the kinds of things we should be doing” litmus test, then checklists are fine. They’re also fine for comparing one organization against another (which can be a lemming exercise), and showing progress against, for example, last year’s checklist results. The downside to checklists is that they tend to be one-size-fits-all, they don’t help us prioritize or compare our options, and they don’t help us understand why the different elements are important or how important they are.


The vast majority of maturity models focus on measuring process effectiveness and process improvement on a relative basis -- two very worthwhile objectives. What they don’t do is explain the practical effect of process improvements -- the “why” or “how much”. Consequently, similar to checklists, maturity models don’t help us prioritize or compare options.


Analytic models (some might call them scientific models) attempt to describe how things work. If these models are designed well and used well, they enable the practical and useful measurement of complex systems (systems in the scientific sense vs. the IT sense), explanation of cause and effect, and sophisticated what-if analyses. In other words -- if you want answers to questions like: 


“How much risk do we have?”
“How much less/more risk will we have if…?”
“Which of these issues is most significant and by how much?”
“Which of these mitigation options is likely to be most cost-effective?”


... then analytic models are the way to go. If, however, they’re designed badly or used poorly, then they can very easily lead to inaccurate conclusions and poor decisions.
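
To make that a bit more tangible, here’s a minimal Monte Carlo sketch of how an analytic model can start answering the first two of those questions. Everything in it is assumed for illustration -- the scenario, the min/most-likely/max estimates, and the frequency reduction credited to the control -- and simple triangular distributions stand in for the calibrated estimates and PERT-style distributions a real FAIR analysis would use.

```python
import random
import statistics

def annualized_loss(freq_est, loss_est, trials=50_000):
    """Monte Carlo estimate of annualized loss exposure.
    freq_est and loss_est are (min, most_likely, max) tuples for loss event
    frequency (events/year) and loss magnitude (dollars/event)."""
    results = []
    for _ in range(trials):
        freq = random.triangular(freq_est[0], freq_est[2], freq_est[1])  # low, high, mode
        loss = random.triangular(loss_est[0], loss_est[2], loss_est[1])
        results.append(freq * loss)
    return results

# "How much risk do we have?" -- current state (illustrative estimates)
current = annualized_loss(freq_est=(0.1, 0.5, 2.0),
                          loss_est=(10_000, 75_000, 400_000))

# "How much less risk will we have if...?" -- a control that, by assumption,
# roughly halves loss event frequency
mitigated = annualized_loss(freq_est=(0.05, 0.25, 1.0),
                            loss_est=(10_000, 75_000, 400_000))

for label, sim in (("Current", current), ("With control", mitigated)):
    print(f"{label}: median ${statistics.median(sim):,.0f}/yr, "
          f"mean ${statistics.fmean(sim):,.0f}/yr")

# Comparing the reduction in expected loss to the control's annual cost is
# the start of "Which mitigation option is likely to be most cost-effective?"
```

The output isn’t a single precise number; it’s a distribution you can compare against other issues, other options, and the cost of doing something about them.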


Bottom line


All three approaches have their benefits and limitations. For most organizations, some combination of them will be the best bet for effective risk management. Speaking of which...


Does organizational maturity play a role?


The short answer is “probably.”


More mature organizations generally have a more complete picture of their risk landscape -- i.e., they likely have better visibility into what their assets are, where their assets are, the control conditions surrounding their assets, the threat landscape, and the loss implications from events. This information should enable them to provide more precise data for analyses, with greater confidence, than less mature organizations can. That said, less mature organizations can still get accurate and useful results from analyses -- just often with less precision. And, having gone through an analysis, the less mature organization can come away with a very clear idea of where their information gaps are, the significance of those gaps, and what can be done to fill them.


Organizational maturity can also play a role in how much emphasis an organization will likely place on checklists vs. maturity models vs. analytic models. An organization that’s very immature may place primary emphasis on checklists, just to get things moving in the right general direction. They may use maturity models only to gauge where a few key processes are today and set preliminary goals for improvement. They may also initially limit the use of analytic models to key, high-impact decisions.


More mature organizations often are looking to become more cost-effective and/or need to make business cases for continued improvement. Good risk analyses can make that possible. Speaking from personal experience, once you get your security/risk organization to a point where management no longer views it as the brightest/hottest fire burning in their landscape, it can become very difficult to get their attention (unless that attention comes in the form of cutbacks...). Of course, even some “mature” organizations don’t have their security/risk ducks in a row and struggle to get management to care. For these organizations, being able to explain “why” and “how much” through good risk analyses can be very important. Conversely, taking lame risk analysis results to management can erode credibility and make future dialogue even tougher.
