There are a lot of reasons why some people believe measuring cyber risk isn’t possible — from misperceptions about a lack of data, to fallacies about intelligent adversaries, to the inconsistency that commonly occurs when two different people measuring the same risk arrive at two different answers.
In the first two posts of this series, I discussed questions regarding how to make estimates when data is sparse or missing altogether, and how to account for the fact that historical data may not perfectly reflect the future. In this post, I’ll walk through an example risk analysis that is challenged in both of those respects.
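The kind of analysis described above can be sketched as a Monte Carlo simulation over calibrated ranges, which is the general FAIR-style approach of estimating with minimum, most-likely, and maximum values rather than point estimates. The specific ranges, function name, and distributions below are hypothetical placeholders for illustration, not figures from the example analysis:

```python
# A minimal sketch of range-based risk estimation via Monte Carlo
# simulation. All numeric ranges are hypothetical placeholders.
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_annual_loss(trials=10_000):
    """Draw loss-event frequency and per-event magnitude from
    (min, mode, max) ranges and return simulated annual losses.
    Multiplying frequency by magnitude is a simplification."""
    losses = []
    for _ in range(trials):
        # Loss event frequency: events per year (hypothetical range)
        frequency = random.triangular(0.1, 4.0, 0.5)
        # Loss magnitude per event, in dollars (hypothetical range)
        magnitude = random.triangular(50_000, 2_000_000, 250_000)
        losses.append(frequency * magnitude)
    return losses

losses = simulate_annual_loss()
print(f"median annual loss: ${statistics.median(losses):,.0f}")
print(f"90th percentile:    ${statistics.quantiles(losses, n=10)[-1]:,.0f}")
```

The output is a distribution of outcomes rather than a single number, which is what lets an analyst express sparse-data estimates honestly as ranges with stated confidence.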
In my previous post (No Data? No Problem) I discussed the question, “How do you make estimates when you have no data?” This post focuses on a related question – whether historical data can be relied upon to reflect the future.
A member of the FAIR Institute LinkedIn forum asked an important question the other day:
“I was wondering if there are any guidelines, rules-of-thumb, etc. on how to decide when something should end up in a risk register or should be handled differently.”
Jack Jones led the discussion at this month’s meeting of the FAIR Institute’s Data Utilization Work Group, including fielding this question from a FAIR Institute member about data breaches. Jack is the Institute’s Chairman and the co-author of Measuring and Managing Information Risk: A FAIR Approach.
This month’s FAIR Institute Data Utilization and Cyber Risk workgroup calls had excellent attendance and some great dialog. I’m always pleased and impressed with the quality of thinking people bring to these calls.
Well, the annual pilgrimage to San Francisco and the RSA conference is underway.
Last week we held the second Cyber Risk Workgroup call, with excellent attendance and active engagement. During the call, we discussed the white paper I wrote regarding “Clarifying Risks”.