This 5th post in the series comes to you courtesy of useful feedback I received from leaders within the NIST CSF program team.
Adding the “So What?”
It’s easy to understand that higher levels of maturity in various controls or risk management functions should equate to less risk. The challenge comes in measuring how much risk will be reduced by certain improvements.
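To make the measurement problem concrete, here is a minimal sketch of the kind of before/after comparison a FAIR-style quantitative analysis supports: a Monte Carlo estimate of annualized loss exposure, where a control improvement is modeled as a drop in vulnerability (the probability that a threat event becomes a loss event). Every number below, the event frequency, the vulnerability values, and the loss range, is a hypothetical placeholder rather than real calibration data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated years

def simulate_ale(tef_mean, vuln, loss_low, loss_high):
    """Annualized loss exposure for one scenario: threat event frequency
    x probability the event becomes a loss x per-event loss magnitude."""
    # Threat event frequency: Poisson draws around a calibrated mean
    threat_events = rng.poisson(tef_mean, N)
    # Each threat event becomes a loss event with probability `vuln`
    loss_events = rng.binomial(threat_events, vuln)
    # Per-event loss magnitude: lognormal fitted so roughly 95% of draws
    # fall inside the calibrated [loss_low, loss_high] range
    mu = (np.log(loss_low) + np.log(loss_high)) / 2
    sigma = (np.log(loss_high) - np.log(loss_low)) / 4
    magnitude = rng.lognormal(mu, sigma, N)
    # Simplification: one magnitude draw scales all of a year's events
    return loss_events * magnitude

# Hypothetical calibration: ~12 threat events/year, $50k-$500k per event.
# The control improvement is modeled as cutting vulnerability 30% -> 10%.
before = simulate_ale(tef_mean=12, vuln=0.30, loss_low=50_000, loss_high=500_000)
after = simulate_ale(tef_mean=12, vuln=0.10, loss_low=50_000, loss_high=500_000)

print(f"Median annualized loss exposure before: ${np.median(before):,.0f}")
print(f"Median annualized loss exposure after:  ${np.median(after):,.0f}")
print(f"Mean annualized risk reduction:         ${np.mean(before - after):,.0f}")
```

The point of the sketch isn't the specific distributions; it's that the value of an improvement comes out in dollars of reduced loss exposure, which a maturity-level bump can't communicate on its own.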
A round peg in a round hole
As I mentioned in Part 2 of this series, frameworks like NIST CSF (and PCI DSS, ISO 27xxx, FFIEC CAT, etc.) have inherent limitations in their ability to help organizations measure risk, prioritize their concerns, or communicate the true value proposition of cyber security improvements. The good news is that these missing capabilities are exactly where FAIR shines. That said, there are challenges…