FAIR-CAM is a model that:
>>Categorizes controls by type and function
>>Sets them in relation to each other, clarifying their interplay
>>Accounts for the direct and indirect effect of controls on risk
>>Assigns units of measurement for control performance, enabling a quantitative approach to reliable analysis of the effectiveness of controls and control systems.
See the FAIR-CAM documentation.
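As a rough sketch, FAIR-CAM's categorization of controls by type and function might be represented in code as follows. The three functional domain names are taken from the published FAIR-CAM model; the example controls, the direct/indirect labels, and the efficacy values are illustrative assumptions, not figures from the documentation.

```python
from dataclasses import dataclass

# FAIR-CAM's three functional domains (per the published model).
DOMAINS = (
    "Loss Event Control",          # directly affects loss event frequency/magnitude
    "Variance Management Control", # keeps other controls operating as intended
    "Decision Support Control",    # improves the quality of risk decisions
)

@dataclass
class Control:
    name: str
    domain: str     # which FAIR-CAM functional domain the control belongs to
    effect: str     # "direct" (acts on loss events) or "indirect" (acts via other controls)
    efficacy: float # 0.0-1.0 reduction of the quantity it targets (illustrative)

# Hypothetical example controls, one per domain.
controls = [
    Control("MFA", "Loss Event Control", "direct", 0.6),
    Control("Patch-management audit", "Variance Management Control", "indirect", 0.3),
    Control("Risk analysis training", "Decision Support Control", "indirect", 0.2),
]

for c in controls:
    assert c.domain in DOMAINS
    print(f"{c.name}: {c.domain} ({c.effect}, efficacy {c.efficacy:.0%})")
```

Tagging each control with a domain, an effect path, and a unit of performance is what makes the "quantitative approach" above possible: the numbers can then feed an analysis rather than a color choice.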
In 2022, Jack delivered more significant messages to the profession, in writing and in speeches, striking a cautionary but ultimately hopeful tone. Here are a few of Jack’s insights:
At too many organizations, risk “analysis” is open to anyone who wants to sit at the table and pick a color: red, yellow, or green. “Risk analysis and measurement should be considered a distinct discipline, just as forensics, penetration testing, DevSecOps and others are,” Jack said.
Jack laid out the three elements every risk measurement requires:
“1. A clear scope of what’s being measured — e.g., the asset(s) at risk, the relevant threat(s), the type of event (outage, data compromise, fraud, etc.).
2. An analytic model (e.g., FAIR), which identifies the parameters needed to perform the analysis, and how data are used to generate a result.
3. Data, which can (ideally) be empirical data, or simply subject matter expert estimates.
On the surface, these don’t sound too intimidating, but all three need to be done well to get accurate results. And that isn’t as easy as one might imagine.”
--Blog Post Series: Automating Cyber Risk Quantification
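The three elements above can be sketched as a minimal FAIR-style Monte Carlo simulation: a scope (element 1), a model that multiplies loss event frequency by loss magnitude (element 2), and data in the form of estimated ranges (element 3). All distributions and parameter values below are illustrative stand-ins for calibrated subject-matter-expert estimates, not published FAIR figures.

```python
import random

def simulate_ale(n_trials=10_000, seed=42):
    """Minimal FAIR-style sketch: annualized loss exposure is simulated as
    Loss Event Frequency (LEF) x Loss Magnitude (LM) per trial.
    All parameters are hypothetical estimates for illustration."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        # Element 1 (scope): one asset, one threat, one event type assumed.
        # Element 2 (model): the FAIR decomposition ALE = LEF x LM.
        # Element 3 (data): triangular(low, high, mode) distributions as
        # stand-ins for calibrated estimates.
        lef = rng.triangular(0.1, 4.0, 1.0)              # loss events per year
        lm = rng.triangular(50_000, 2_000_000, 250_000)  # dollars per event
        results.append(lef * lm)
    results.sort()
    return {
        "mean": sum(results) / n_trials,
        "p50": results[n_trials // 2],
        "p90": results[int(n_trials * 0.9)],
    }

print(simulate_ale())
```

Note Jack's point: the arithmetic is the easy part. The result is only as defensible as the scoping decisions and the estimates fed into it.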
“We need to get our act together now. One of the first steps is admitting that we haven’t been doing risk measurement well so far, and if we automate what we’ve been doing, what do we get? Wrong answers faster.
“All risk measurement requires making assumptions. An automated solution simply moves those assumptions and biases from the people doing the risk measurement to the automation solution designers, and if automation builds in wrong assumptions, then risk measurements are almost certain to be wrong.”
“Why is controls modeling problematic? After all, don’t controls boil down to reducing the frequency or magnitude of loss? Yes, but the devil is in the details. If we don’t understand and account for the mechanisms by which controls affect risk, then the analytic results won’t be accurate, and we won’t be able to make good decisions.”
--Blog Post Series: Automating Cyber Risk Quantification
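Jack's "devil is in the details" point can be illustrated with a toy calculation. In a naive expected-value view, a control that cuts event frequency by 60% looks identical to one that cuts event magnitude by 60%; but once you account for how reliably the control actually operates, the realized effect shrinks. All numbers and control names here are hypothetical, and the reliability adjustment is only a simplified nod to FAIR-CAM's treatment of control performance, not its actual math.

```python
def annualized_loss(lef, lm):
    """Expected annual loss = loss event frequency x loss magnitude."""
    return lef * lm

# Illustrative baseline: 2 loss events per year at $500k each.
baseline = annualized_loss(2.0, 500_000)

# A preventive control (e.g., MFA) reduces event *frequency* by 60%.
with_preventive = annualized_loss(2.0 * (1 - 0.6), 500_000)

# A responsive control (e.g., backups) reduces event *magnitude* by 60%.
with_responsive = annualized_loss(2.0, 500_000 * (1 - 0.6))

# In this linear expected-value view the two appear identical...
print(baseline, with_preventive, with_responsive)

# ...but if the preventive control only functions as intended 70% of
# the time, its advertised 60% reduction is realized as roughly 42%,
# and the analysis changes.
reliability = 0.7
realized = annualized_loss(2.0 * (1 - 0.6 * reliability), 500_000)
print(realized)
```

This is the mechanism problem in miniature: a model that ignores *how* and *how reliably* a control acts will overstate its effect, and the resulting decisions will be wrong.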
“We have to do our homework to ensure that our measurement methods stand up to scrutiny. It is easy to come up with numbers that will look reasonable to the uninitiated, but which can’t be defended. And unfortunately, there is no cost to the people who are measuring risk poorly now. The cost is all borne by the decision-makers and stakeholders who rely on those measurements.”
“Open standards for complex things like encryption exist for a reason; that reason being it’s very difficult to get it right, and very easy to get it very wrong. And unfortunately, because cybersecurity risk measurement is complex and nuanced, those who aren’t deeply familiar with all of the different ways it can go wrong won’t be able to evaluate a solution to know whether it can be trusted.”
--Why Cyber Risk Quantification (CRQ) Demos Aren't Enough
“There’s no reason for our profession to feel bad about being immature in its approach to risk measurement. Every profession evolves from lower levels of maturity to higher. There’s only cause for shame if we don’t look at this honestly and take the steps to correct it.
“In fact, it’s an opportunity. How often do people in a profession have an opportunity to make tremendous leaps in how that profession functions? It’s exceedingly rare.
“So, it’s a huge opportunity for us but it’s also a huge responsibility. We have to do our homework. We should embrace that and take it really seriously.”