I regularly read blog posts or encounter people in our profession who dismiss quantitative cyber risk measurement as “guessing” or “nothing more than feelings” (cue the Morris Albert song). Since this is such a common concern, I thought it would be worthwhile to examine what's subjective, what's objective, and what falls in between.
Some of you may recall a series of posts I wrote on this topic last year. In the third post of that series I said I’d write another post that lays the foundation for dealing with risk appetite more effectively. Well, here we are a year later and I’m finally going to fulfill that promise. Hopefully, you’ll find the wait worthwhile.
In a recent survey, information security professionals identified reputational damage as the most costly form of loss from cyber events. But is it really? In this first post of a series, I’ll lay some groundwork that should help us evaluate the potential impact of cyber event-related loss of reputation.
Recently, the Wall Street Journal (WSJ.com) published two charts from Juniper Research that paint a disheartening picture of the state of cybersecurity. One chart shows a projection of cybersecurity spending increasing (more or less linearly) over the coming five years, while the other chart projects a more exponential-looking growth in cybersecurity losses over that same timespan.
There are a lot of reasons why some people believe measuring cyber risk isn’t possible — from misperceptions about data shortage, to the fallacy about intelligent adversaries, to the inconsistencies that commonly occur when two people measure the same risk and arrive at different answers.
In the first two posts of this series, I discussed questions regarding how to make estimates when data is sparse or missing altogether, and how to account for the fact that historical data may not perfectly reflect the future. In this post, I’ll walk through an example risk analysis that is challenged in both of those respects.
In my previous post (No Data? No Problem) I discussed the question, “How do you make estimates when you have no data?” This post focuses on a related question – whether historical data can be relied upon to reflect the future.
A member of the FAIR Institute LinkedIn forum asked an important question the other day:
“I was wondering if there are any guidelines, rules-of-thumb, etc. on how to decide when something should end up in a risk register or should be handled differently.”
Jack Jones led the discussion at this month’s meeting of the FAIR Institute’s Data Utilization Work Group, including fielding this question from a FAIR Institute member about data breaches. Jack is the Institute’s Chairman and the co-author of Measuring and Managing Information Risk: A FAIR Approach.