FAIR Institute Blog

Triaging Risk: A Year In The Life Of OpenFAIR - Part 2

Feb 28, 2017 8:15:00 AM / by Jason Murray


Last time on "A Year in the Life of OpenFAIR," we covered the internal risk triage tool that my firm developed. We went to market with the resulting analyses, and the response has been very good. As with anything done on a recurring basis, you start to learn what works, what doesn't, what can be kept, and what needs to be improved. Now, I am not saying that OpenFAIR needs to be improved, not at all. What I am saying is that our initial tool wasn't a pure implementation of OpenFAIR. We took some shortcuts and made some choices based on what we felt were the practicalities of delivering it in the field.

One of the first challenges we faced was how to incorporate secondary losses as distinct from primary losses. There was some difficulty in understanding what information we would need to collect and how we would incorporate it into our analysis. Keep in mind that we leveraged some very old FAIR papers and the concept wasn't quite as clear as it is in the later papers and the FAIR Book. Another consideration is that while we are very good at performing control assessments, incorporating loss into the picture was more difficult, let alone two types of loss that are connected. So, we punted.

What we did was collect secondary loss amounts during our interviews but calculate risk based only on the primary loss types. It was a kludge, and we immediately recognized the shortcoming. We found that the primary losses were not very significant in dollar terms; it was the secondary ones, particularly reputation, that were showing up in a large way. So we picked a "reasonable" secondary loss expectancy that was static across all our assessments. That struck the right balance, and we are still using it to this day. It's still a kludge, but it's workable. With more and more "cost of breach" data becoming available, we may eventually be able to jump straight to the final loss magnitude and skip all this messiness.
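To make the kludge concrete, here is a minimal sketch of how a static secondary loss expectancy can be folded into an annualized loss estimate. All names, ranges, and dollar figures are hypothetical, and I've used uniform draws for simplicity where a full OpenFAIR analysis would typically use a calibrated distribution such as PERT:

```python
import random

def simulate_annual_loss(lef_min, lef_max, primary_min, primary_max,
                         static_sle, trials=10_000):
    """Monte Carlo sketch: draw a loss event frequency (LEF) and a primary
    loss magnitude per event, then add a single static secondary loss
    expectancy to every event rather than modeling it separately."""
    results = []
    for _ in range(trials):
        events = random.uniform(lef_min, lef_max)          # events per year
        loss_per_event = random.uniform(primary_min, primary_max) + static_sle
        results.append(events * loss_per_event)            # annualized loss
    return sorted(results)

# Hypothetical scenario: 0.1-2 events/year, $10k-$50k primary loss,
# flat $250k secondary (reputation-dominated) loss per event.
losses = simulate_annual_loss(0.1, 2.0, 10_000, 50_000, static_sle=250_000)
print(f"median annualized loss: ${losses[len(losses) // 2]:,.0f}")
```

The point of the sketch is that the static secondary figure dominates the per-event loss, which matches what we saw in the field: reputation, not the primary loss types, drove the numbers.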

The next major challenge we faced was how to evaluate the strength of controls. This was complicated by the fact that we were prescriptive about which controls we would include. If you'll recall, we only evaluate our clients against the Critical Security Controls (CSC). They may have a perfectly good control that is in PCI DSS or ISO 27002, but they get no credit for it unless it is also in the CSC, and there is no guarantee of that since the overlap between control frameworks is imperfect. But, also recall, the reason we did this was so we could benchmark our clients and their industries against each other. A client's score was initially binary: you either had the control in place as written, or you didn't (note that we didn't perform an audit of controls; we took our client's word for it). What we did on our end was determine which controls were effective against which threat actors (from the DBIR) and how effective we thought they would be, on a scale from Very Low to Very High.

At the end of a long day of interviews with a client, we'd be left with a list of which controls were in place and how strong we thought each would be. But how do we determine the aggregate strength of all the controls acting in concert? Do we take the maximum? Median? Mode? There was no algorithmic approach that we could apply, so we had to rely on our judgment. If we have a lot of Highs, then the overall strength can safely be said to be High. If we have a lot of Mediums but only one Very High, and that Very High is a control with rather limited applicability, then the overall strength is likely Medium.
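One way to see why there is no single obvious algorithm is to compare candidate aggregation rules. The sketch below is purely illustrative, not our method: it encodes the ordinal scale and shows how a median resists the single-outlier problem described above, while a maximum does not:

```python
# Hypothetical ordinal scale for control strength ratings.
SCALE = ["Very Low", "Low", "Medium", "High", "Very High"]

def aggregate_median(ratings):
    """Median of the ordinal ratings: a lone Very High among many
    Mediums does not drag the aggregate upward."""
    indices = sorted(SCALE.index(r) for r in ratings)
    return SCALE[indices[len(indices) // 2]]

def aggregate_max(ratings):
    """Maximum rating: one strong-but-narrow control dominates,
    which is exactly the failure mode judgment has to correct for."""
    return SCALE[max(SCALE.index(r) for r in ratings)]

ratings = ["Medium", "Medium", "Medium", "Very High"]
print(aggregate_median(ratings))  # Medium
print(aggregate_max(ratings))     # Very High
```

Neither rule captures the "limited applicability" caveat in the post, which is why the aggregate ultimately stayed a judgment call.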

But this approach has problems. Notably: 

  • How do you incorporate other control frameworks? 
  • Yes/No leaves no room for partially implemented controls. 
  • Our best judgment is not calibrated.

We are working on each of these for our next iteration of the tool:

  • We are looking at merging a few frameworks into an uberframework. Right now, we are looking at ISO 27001/2 as the foundation, then adding CSC and PCI DSS. This will provide more coverage of the universe of possible controls without trying to be exhaustive. We are also contemplating incorporating some industry-specific frameworks for market segments we want to target. 
  • We have incorporated partial scores into the tool: No, 25%, 50%, 75%, and Yes. This provides some granularity without jumping completely to quantitative methods, which is something we want to avoid with this tool. 
  • When we add the plethora of other controls (bullet 1), we will have to come to an agreement within the team as to how effective we think they are. At that time, we can take everyone through a Hubbard-esque calibration exercise. 
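The partial scores in the second bullet can be thought of as a simple discount on a control's assessed effectiveness. This is a hypothetical illustration, with made-up effectiveness values, not the tool's actual scoring logic:

```python
# Map the tool's partial-implementation answers to a discount factor.
PARTIAL = {"No": 0.0, "25%": 0.25, "50%": 0.5, "75%": 0.75, "Yes": 1.0}

def effective_strength(base_effectiveness, implementation):
    """Scale a control's assessed effectiveness (0..1, hypothetical) by
    how fully the client says it is implemented."""
    return base_effectiveness * PARTIAL[implementation]

# A control we'd rate 0.8 effective, reported as half implemented.
print(effective_strength(0.8, "50%"))  # 0.4
```

Compared with the original Yes/No scoring, a half-implemented control now contributes half its weight instead of all or nothing.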

The "mark 2" of this internal tool should be much improved and even more defensible, yet still lightweight enough to be done relatively quickly, meeting the intent of providing a high-level "health check" for our customers. 


Topics: FAIR, Risk Management


Written by Jason Murray

Jason Murray has almost 20 years’ experience in information security. Jason brings extensive technical knowledge of computing and networking systems in a wide range of scales from single desktops to corporate environments. Jason has a deep understanding of information security, networking, and related information technologies allowing him to quickly and knowledgeably inspect system architectures, identify vulnerabilities, assess risks and recommend safeguards to reduce and mitigate risk to information assets. His understanding of security and privacy policies, processes, procedures, and other controls allows him to assess and make recommendations on the non-technical aspects of information security within an organization.

