Douglas Hubbard, the risk analysis thought leader, author of How to Measure Anything: Finding the Value of Intangibles in Business, and major influence on the FAIR™ movement, leads a one-day seminar before the 2022 FAIR Conference to build a key skill for quantitative cyber risk analysis: calibrated estimation. Part 1 of the day features personal calibration exercises and Part 2 covers advanced calibration, including optimizing a risk analysis team.
Register now for the course Calibrated Probability Assessments for Cybersecurity with Doug Hubbard. You must first purchase a conference ticket, then select this course as an add-on to the conference fee. The conference runs Tuesday and Wednesday, September 27-28, at the Mandarin Oriental Hotel in Washington, DC. Hubbard’s course is Monday, September 26, from 9 AM to 5 PM at the conference site.
What Is Calibrated Estimation?
Business decision-makers frequently rely on stochastic models such as Monte Carlo simulation for risk analysis, using the subjective estimates of in-house experts as inputs. But studies have found that experts in nearly every field tend to be overconfident (and sometimes underconfident) in their estimates and, as a result, the models consistently misestimate risk.
These experts can be trained to hit the sweet spot between over- and underconfidence in estimation; such experts are said to be “calibrated.” For instance, a calibrated estimator will be right 90% of the time they say they are 90% confident of an estimate.
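That 90% test can be checked directly: score a batch of an estimator’s 90% confidence intervals against the values that actually occurred. A minimal sketch (the intervals and actuals below are made-up illustrative data, not Hubbard’s exercises):

```python
def hit_rate(intervals, actuals):
    """Fraction of (low, high) intervals that contain the actual value."""
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(intervals, actuals))
    return hits / len(actuals)

# Ten hypothetical 90%-confidence interval estimates and the true values
intervals = [(10, 50), (0, 5), (100, 300), (2, 8), (40, 90),
             (1, 4), (15, 25), (0, 100), (500, 900), (30, 60)]
actuals = [35, 7, 180, 3, 55, 2, 22, 60, 700, 45]

print(f"Hit rate: {hit_rate(intervals, actuals):.0%}")  # → Hit rate: 90%
```

A calibrated estimator’s hit rate lands near 90%; an overconfident one scores well below it, a sign their ranges are too narrow.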
How Is Calibrated Estimation Taught?
Hubbard has a bag full of techniques to overcome the biases and bad habits of subject-matter experts, such as the “equivalent bet” method: imagine you have made an estimate and are offered a choice between betting on your estimate and taking your chances spinning a big wheel of fortune that pays off 90% of the time. If you would rather spin the wheel, you are less than 90% confident in your estimate, so you widen your range. The game continues until you cannot choose between your estimate and the wheel, at which point you are 90% confident.
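The widen-until-indifferent loop can be sketched in a few lines. This is a toy illustration of the idea, not Hubbard’s actual exercise; `prefers_wheel` is a hypothetical stand-in for the expert’s judgment call each round:

```python
def calibrate_range(low, high, prefers_wheel, widen=0.10, max_rounds=50):
    """Widen an interval until the expert no longer prefers a wheel
    with a 90% win chance over betting that the truth is in the range."""
    for _ in range(max_rounds):
        if not prefers_wheel(low, high):
            return low, high          # indifferent: treat the range as a 90% interval
        span = high - low
        low, high = low - widen * span, high + widen * span  # widen symmetrically
    return low, high

# Toy "expert" who prefers the wheel while the range is narrower than 40 units
toy_expert = lambda low, high: (high - low) < 40
print(calibrate_range(20, 30, toy_expert))
```

The symmetric widening keeps the midpoint of the original estimate while stretching the bounds until the equivalent bet feels like a coin toss between the two options.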
How Is Calibrated Estimation Used in FAIR Risk Analysis?
Filling in the factors for Factor Analysis of Information Risk (FAIR) can rely heavily on the subjective opinions of experts, and in cyber risk, experts are often operating with little or no data. With calibrated estimation, as Hubbard famously says, you have more data than you think, and you need less data than you think. In that spirit, FAIR creator Jack Jones says that the goal of FAIR risk analysis is not to arrive at a precise estimate (which can be precisely wrong) but to reduce uncertainty by improving accuracy in risk analysis. FAIR analysis also uses Monte Carlo simulations to express risk as a range of probable outcomes, accounting for uncertainty in the underlying estimates.
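The Monte Carlo step can be sketched with two calibrated ranges: one for loss event frequency and one for loss magnitude. The figures and the uniform distributions below are illustrative assumptions only (real FAIR tooling typically uses PERT-style distributions fitted to calibrated min/most-likely/max estimates):

```python
import random

def simulate_ale(n_trials=10_000, seed=42):
    """Monte Carlo sketch of annualized loss exposure (ALE) from
    two calibrated expert ranges (illustrative numbers, uniform draws)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        frequency = rng.uniform(0.5, 4.0)         # loss events per year
        magnitude = rng.uniform(50_000, 500_000)  # loss per event, USD
        losses.append(frequency * magnitude)
    return sorted(losses)

losses = simulate_ale()
p10, p50, p90 = (losses[int(len(losses) * q)] for q in (0.10, 0.50, 0.90))
print(f"ALE 10th pct: ${p10:,.0f}  median: ${p50:,.0f}  90th pct: ${p90:,.0f}")
```

Reporting the 10th-to-90th percentile range, rather than a single number, is what lets the analysis express risk as a range of probable outcomes.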
See Douglas Hubbard in action in videos from past FAIR Conferences: