Triaging Risk: A Year In The Life Of OpenFAIR

About a year ago, one guy came up with a great idea:

“What if we measured our customers against the Critical Security Controls (CSC)? Then we could have some consistency between contracts, and start to provide some benchmarking. It would help our customers measure their maturity, set priorities, and understand their risk exposure.”

Sounded okay to me, except that it was really just a bog-standard controls assessment. Where exactly were we going to get a measurement of risk from this?

A quick note about my employer: MNP is the fifth-largest accounting, tax and consulting firm in Canada. We provide many services to our customers, including information risk assessment services. When I speak about customers below, I am not referring to internal clients; I'm referring to our external clients.

So I voiced my skepticism. Their response was that the controls in the CSC were specifically selected for what would most likely help with breaches, whether preventing, detecting, or responding to them. I have to say I was impressed; most control frameworks are a bit like throwing everything, kitchen sink included, at the problem and hoping something works. Not exactly in line with my new school of information security thinking.

“Well, that’s good,” I said, “but what if we did an actual risk analysis? We could combine what you have with actual industry data from the Verizon DBIR and do a risk analysis based on OpenFAIR.” No one at my firm had yet heard of the FAIR risk standard. We were still stuck in the bad old days of qualitative methods and matrix “multiplication” to derive risk. You know: high x medium = extra medium (I guess).

So I walked them through the FAIR risk model and they bought into it. For our first iteration, we adopted a quasi-qualitative approach, much like the BRAG approach outlined in the FAIR book. But instead of selecting threat actors ourselves, we used the ones in the DBIR. Instead of calibrating people to make estimates, we used the incident and breach numbers from the DBIR. Instead of asking customers what controls they had in place and how effective they were, we based it all on the CSC and assigned our own effectiveness measures. Throw all of this into a spreadsheet to make the analysis a bit easier and voila: a high-level, repeatable risk triage solution.
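
For the curious, here is a minimal sketch of the kind of arithmetic that sits behind the spreadsheet: draw threat event frequency, vulnerability, and loss magnitude from simple triangular distributions, discount vulnerability by an aggregate control-effectiveness score, and run it all through a Monte Carlo loop. Every number below is an illustrative placeholder, not our actual DBIR-derived inputs or CSC effectiveness scores.

```python
# Illustrative Monte Carlo sketch of a FAIR-style triage calculation.
# All inputs are made-up placeholders, not the DBIR figures or CSC
# effectiveness scores used in the actual spreadsheet.
import random
import statistics

SIMULATIONS = 10_000

# Threat event frequency per year (min, most likely, max) -- in practice
# seeded from DBIR incident counts for the customer's industry.
TEF = (0.5, 2.0, 6.0)

# Probability a threat event becomes a loss event, before controls.
BASE_VULNERABILITY = (0.2, 0.4, 0.7)

# Aggregate effectiveness of the CSC controls in place (0 = useless,
# 1 = perfect), assigned per control by the assessor and rolled up.
CONTROL_EFFECTIVENESS = 0.55

# Loss magnitude per loss event, in dollars (min, most likely, max).
LOSS_MAGNITUDE = (50_000, 250_000, 2_000_000)

def draw(triple):
    """Sample from a triangular distribution given (min, most likely, max)."""
    low, mode, high = triple
    return random.triangular(low, high, mode)

losses = []
for _ in range(SIMULATIONS):
    tef = draw(TEF)
    vuln = draw(BASE_VULNERABILITY) * (1 - CONTROL_EFFECTIVENESS)
    lef = tef * vuln                            # loss events per year
    losses.append(lef * draw(LOSS_MAGNITUDE))   # annualized loss exposure

losses.sort()
print(f"Median annualized loss exposure: ${statistics.median(losses):,.0f}")
print(f"90th percentile:                 ${losses[int(0.9 * SIMULATIONS)]:,.0f}")
```

Rerunning the same loop with a higher control-effectiveness score is how you get a rough sense of which control investments buy the most risk reduction.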

And that is precisely how we pitch it: a high-level health check that helps you understand your security posture:

  • Do you want to benchmark? It gives you that. 
  • Do you want to know which threats are most likely to act against you? It tells you that. 
  • Do you want to know what your biggest risks are, based on actual industry data and your controls? It tells you that. 
  • Do you want to know what you should deploy to maximize your risk reduction? It tells you that.

Is this all you need to do? Of course not, but it's a very good start. It's a big step away from managing security with your finger in the wind, or by vendor pitches, or by whatever the media report as the vulnerability du jour. It's all built in Excel and can be delivered in roughly 5-7 days. FAIR doesn't have to be a full-blown, deep-dive, quantitative risk analysis to be effective; my firm is speaking from over a year's worth of experience.

Our customers, whether IT managers, CISOs, or board members, love this. But they have been asking for improvements: adding different control frameworks, incorporating their own data, and so on. No one has asked that the risk analysis be changed. FAIR for the win!

Until next time.

- J

 
