Jack Jones…creator of the FAIR model (that’s Factor Analysis of Information Risk)…author of the FAIR book Measuring and Managing Information Risk: A FAIR Approach…chairman of the FAIR Institute…and the leading evangelist for effective risk measurement based on critical thinking. For a quick education on Jack’s thinking and the FAIR approach to risk, check out this reading list of Jack’s 10 most popular writings on the FAIR Institute blog.
Warning: If you’re a risk professional new to Jack and FAIR, prepare for business-as-usual to be challenged.
A must-read if your organization follows the popular National Institute of Standards and Technology’s Cybersecurity Framework. Jack is a fan of the Framework for its adaptability and its focus on an organization’s maturity level in implementing controls. But, as he shows, NIST CSF can’t tell an organization whether the costs of maturing to various levels would be offset by the reduction in risk—that’s where FAIR picks up the analysis, and Jack presents some detailed examples of how to make that work.
Jack leads you through one of the toughest conceptual problems in risk analysis: inherent vs. residual risk, or what risk looks like in the absence of controls. Jack comes up with a fresh, practical definition that also lends itself to FAIR analysis.
Jack tackles a lesson that FAIR newbies most often struggle to grasp. FAIR gives us a model to clearly define risk, but equally important is scoping the risk analysis, defining the elements that contribute to a potential loss event: the assets at risk, the threats, the controls, etc. Jack offers some tips on how to break down the complexity of scoping with “sub-models.”
A member of the FAIR Institute LinkedIn forum asked Jack for advice on this perennial dilemma for CISOs and risk analysts: Fill up a risk register with every concern or audit finding, or cut some entries and possibly miss something that turns into a large issue? Jack applies some FAIR-style critical thinking to help you separate what is and isn’t a risk when planning your register.
Jack applauds this guidance from NIST on conducting risk assessments as being FAIR-like in conception. But watch out: He also identifies a flaw that will often generate inaccurate results in your risk measurements.
It’s conventional wisdom in risk analysis: “You can’t measure reputation risk.” But you can measure reputation damage, Jack answers, and goes on to explain why FAIR is so effective at dealing with this very real outcome of a loss event, while risk professionals typically throw up their hands and walk away from the problem.
Risk management professionals use the word “risk” in different ways, a major challenge to consistent communication and measurement for the profession. In this short whitepaper, Jack lays out a simple, logical definition for risk that cuts through the confusion.
In a five-part series, Jack gives some solid advice on how to put together a meaningful “top 10 cyber risks list” for your organization, starting with a structured approach to identifying the assets at risk and the threat elements, working toward a formula to prioritize based on your most significant potential losses.
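To make the idea of prioritizing by potential loss concrete, here is a minimal sketch of that kind of formula: FAIR’s basic structure treats risk as the combination of loss event frequency and loss magnitude, so one simple ranking is by annualized loss exposure. The scenario names and figures below are invented for illustration, not taken from Jack’s series.

```python
# Hypothetical sketch: rank risk scenarios by annualized loss exposure,
# the product of estimated loss event frequency (events/year) and
# average loss magnitude per event (dollars). All names and numbers
# below are illustrative placeholders.

risks = [
    {"scenario": "Ransomware outage",        "freq_per_year": 0.5, "loss_per_event": 2_000_000},
    {"scenario": "Insider data theft",       "freq_per_year": 0.2, "loss_per_event": 5_000_000},
    {"scenario": "Phishing credential loss", "freq_per_year": 4.0, "loss_per_event": 50_000},
]

def annualized_loss(risk):
    """Expected annual loss for one scenario."""
    return risk["freq_per_year"] * risk["loss_per_event"]

# Sort highest exposure first to build a "top N" list.
top_risks = sorted(risks, key=annualized_loss, reverse=True)
for r in top_risks:
    print(f'{r["scenario"]}: ${annualized_loss(r):,.0f}/year')
```

A real FAIR analysis would, of course, estimate frequency and magnitude as ranges rather than single points, but the ranking logic is the same.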
It’s a common objection: We don’t have enough data to do quantitative analysis. In fact, as Jack writes, you have more data than you think and you need less data than you think. In this post, he introduces you to “calibrated estimation”, the art of measuring when measurement seems hopeless.
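As a rough illustration of where calibrated ranges lead, here is a minimal sketch of turning an analyst’s 90%-confidence estimates into a simulated annual loss distribution. The range values are invented, and the uniform draws are a simplification for clarity (a fuller treatment would fit a distribution such as a lognormal to the 5th/95th percentiles).

```python
import random
import statistics

# Hypothetical calibrated estimates: the analyst is 90% confident the
# true value lies inside each range. Numbers are illustrative only.
freq_low, freq_high = 0.1, 2.0             # loss events per year
loss_low, loss_high = 100_000, 1_500_000   # dollars per event

random.seed(42)  # repeatable results for the example

def simulate_annual_loss(trials=10_000):
    """Monte Carlo: sample frequency and magnitude, multiply, repeat."""
    results = []
    for _ in range(trials):
        freq = random.uniform(freq_low, freq_high)
        loss = random.uniform(loss_low, loss_high)
        results.append(freq * loss)
    return results

losses = simulate_annual_loss()
print(f"median annual loss exposure: ${statistics.median(losses):,.0f}")
```

The point of the exercise mirrors Jack’s: you don’t need precise data to start, only defensible ranges, and the simulation does the rest.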
Another must-read: The introduction to Jack’s eBook, An Executive’s Guide to Cyber Risk Economics, on how to apply economic principles and metrics to cyber-related decisions. If you’re a CISO, CIO, CEO or board member seeking an organization-level view of cyber, this is your starting point.