This month’s FAIR Institute Data Utilization and Cyber Risk workgroup calls had excellent attendance and some great dialog. I’m always pleased/impressed with the quality of thinking people bring to these calls.
The Data Utilization call began with a discussion of the opportunities and challenges of leveraging data within GRC tools when doing risk analysis. The bottom line is that those tools have a ton of untapped potential, but they need some adjustments (and we need to be smarter in how we use them) before their potential can be realized. Quite a while ago I wrote a white paper on this (“Failure of GRC”), which is available on the FAIR Institute Members resources page.
We also discussed the opportunities and challenges associated with potential sources of industry data. Specifically, organizations would benefit from industry averages or other quantitative measures of things like the frequencies of different attack types (e.g., phishing, ransomware, DDoS, etc.). This type of data would strengthen their ability to do higher quality risk analysis. The problem, unfortunately, is that in order to have good data of that sort we first have to normalize the schema for collecting and analyzing the data. That’s a role the FAIR risk model could fill.
During the next Data Utilization workgroup call (sometime in April), we’ll walk through a risk analysis or two. The intent is to identify and discuss potential sources of data that organizations might have to support these analyses. In doing so, we’ll encounter and work through the challenges associated with security-related data. It should be pretty interesting/enlightening…
The Cyber Risk workgroup call began with a review of the Cyber Risk Management Maturity Model white paper I wrote for the FAIR Institute, which is available on the Members resources page. Here again, some great observations were made by participants. One person offered (not unkindly) that it seemed to be an immature maturity model. In a sense, he’s right — it is brand new and undoubtedly has room for refinement. I guess the question is, "Immature compared to what?" As it’s been applied to a number of organizations, it has consistently opened eyes and shed light on important aspects of risk management that are largely ignored. It also helps to explain why organizations struggle so much with prioritization, with communicating the value of security, and with reliving the same problems over and over. In a future blog post, I'll provide a comparison of this model versus the common ones in the industry – and why the differences are so important.
The maturity model white paper examines twelve elements that drive the two dimensions of risk management maturity that my research has found to be fundamental: well-informed decision-making, and reliable execution. Many of the elements underlying these dimensions are missing from the risk management program assessment frameworks and maturity models commonly used today. Because of these differences, organizations that score decently when measured against the common frameworks typically crash and burn when examined through this lens. None of these organizations, however, have disputed the results. In every case, they’ve agreed with both the results and their importance.
By summer, the FAIR Institute will apply this model via a survey of hundreds of organizations. The intent is to begin developing industry benchmarks and generate an annual report.
Topic(s) for the next Cyber Risk workgroup call (also taking place sometime in April) are yet to be determined. We’ll do a little crowdsourcing of the workgroup membership to identify and prioritize the options, so more to come…
BTW – the question was asked whether these workgroup meetings would qualify as CPEs. I'm quite certain they would.