FAIR standard creator Jack Jones spoke this week at the 2022 RSA Conference with the message that the future of risk measurement and management is (drum roll) artificial intelligence and automation. You might have heard the same in vendor booths on the show floor, but not like Jack told it: The industry won’t get there without a major shift left.
In the previous post, I provided examples of some controls-related data that can’t be used to support automated cyber risk quantification (CRQ). But the news isn’t all bad: some data can be used to support CRQ.
I covered a lot of ground in the previous posts, and rather than summarize them here I’ll assume you’ve read those posts already. So, let’s dive into the last analytic dimension…
In the previous two posts, I briefly discussed that:
- The CRQ market is rapidly growing, and there’s a strong desire to automate CRQ analysis...
In Part 1 of this series, I discussed that the market for cyber risk quantification (particularly automated CRQ) is growing rapidly, but that automation, done poorly, can do more harm than good. In this post, I’ll begin to discuss what it takes to automate CRQ responsibly.
Until recently, it’s mostly been organizations with visionary and early adopter tendencies that have embraced cyber risk quantification (CRQ). They understood the value and were willing to deal with the challenges.
In writing the FAIR-CAM™ white paper, I took a short detour from the complex landscape of cybersecurity to explain the new FAIR Controls Analytics Model™ with an analogy that almost anyone can relate to.
The Apache Log4j security vulnerability uncovered recently is every cybersecurity defender’s nightmare - a zero-day exploited in a practically ubiquitous software library. Because zero-day exploits aren’t going away anytime soon, it’s important for organizations to increase their resilience to this type of change in the risk landscape.
In my last blog post on qualitative risk measurement, I discussed three key aspects that often make the difference between good measurements and bad measurements — scope, model, and data. I also pointed out that these apply to both qualitative and quantitative risk measurement.
What makes for a high-quality qualitative risk measurement? The answer is simple: we just have to go back to the scope, model, and data elements.