In the first post of this series, I focused on answering a commonly expressed concern about the reliability of cyber risk measurement. At the end of that post, I mentioned that some readers might draw a distinction between an example I gave and the real world of cyber risk measurement.
The Wall Street Journal recently referenced a research report published by Ponemon Institute entitled The True Cost of Compliance With Data Protection Regulations. After reading the report, I’ve concluded that although the research objective was admirable, it completely missed the target.
When I was recently asked to write a blog post making cyber and technology risk predictions for 2018, I balked. If you’ve read (and you should read) Superforecasting: The Art and Science of Prediction (Philip Tetlock and Dan Gardner), you’ll understand why.
I regularly read blog posts or encounter people in our profession who dismiss quantitative cyber risk measurement as “guessing” or “nothing more than feelings” (cue the Morris Albert song). Since this is such a common concern, I thought it would be worthwhile to examine what’s subjective, what’s objective, and what falls in between.
Some of you may recall a series of posts I wrote on this topic last year. In the third post of that series, I said I’d write another post laying the foundation for dealing with risk appetite more effectively. Well, here we are a year later, and I’m finally going to fulfill that promise. Hopefully, you’ll find the wait worthwhile.
In a recent survey, information security professionals identified reputational damage as the most costly form of loss from cyber events. But is it really? In this first post in a series, I’ll lay some groundwork that should help us evaluate the potential impact of cyber-event-related loss of reputation.
Recently, the Wall Street Journal (WSJ.com) published two charts from Juniper Research that paint a disheartening picture of the state of cybersecurity. One chart shows cybersecurity spending increasing more or less linearly over the coming five years, while the other projects exponential-looking growth in cybersecurity losses over that same timespan.
There are a lot of reasons why some people believe measuring cyber risk isn’t possible, from misperceptions about data shortages, to the fallacy about intelligent adversaries, to the inconsistency that occurs when two people measuring the same risk arrive at different answers.
In the first two posts of this series, I discussed how to make estimates when data is sparse or missing altogether, and how to account for the fact that historical data may not perfectly reflect the future. In this post, I’ll walk through an example risk analysis that is challenged in both of those respects.
In my previous post (No Data? No Problem), I discussed the question, “How do you make estimates when you have no data?” This post focuses on a related question: whether historical data can be relied upon to reflect the future.