A few days ago I had the privilege of providing the opening keynote address at an IANS event in Dallas. If you’re not familiar with IANS (Institute for Applied Network Security), I encourage you to look into it, as I believe it serves a very useful purpose and is working hard to be forward-looking. Regardless, one of the questions discussed at this event was how much of a CISO’s focus should be on business versus technology.
This last post in the series will focus on briefly summarizing and answering the thoughts/concerns posted by Martin Huddleston in his comments following Part 2. I felt this follow-up post was warranted because some readers seemed to misinterpret Martin’s comments as an indictment
In the first post of this series, I focused on answering a commonly expressed concern about the reliability of cyber risk measurement. At the end of that post, I mentioned that some readers might draw a distinction between an example I gave and the real world of cyber risk measurement.
The Wall Street Journal recently referenced a research report published by Ponemon Institute entitled The True Cost of Compliance With Data Protection Regulations. After reading the report I’ve come to the conclusion that although the research objective was admirable, it completely missed the target.
When I was recently asked to write a blog post making cyber and technology risk predictions for 2018, I balked. If you’ve read (and you should read) Superforecasting: The Art and Science of Prediction (Philip Tetlock and Dan Gardner), you’ll understand why.
I regularly read blog posts or encounter people in our profession who dismiss quantitative cyber risk measurement as “guessing” or “nothing more than feelings” (cue the Morris Albert song). Since this is such a common concern, I thought it would be worthwhile to examine what’s subjective, what’s objective, and what falls in between.
Some of you may recall a series of posts I wrote on this topic last year. In the third post of that series I said I’d write another post that lays the foundation for dealing with risk appetite more effectively. Well, here we are a year later and I’m finally going to fulfill that promise. Hopefully, you’ll find the wait worthwhile.
In a recent survey, information security professionals identified reputational damage as the most costly form of loss from cyber events. But is it really? In this first post in a series, I’ll lay some groundwork that should help us evaluate the potential impact of cyber event-related loss of reputation.
Recently, the Wall Street Journal (WSJ.com) published two charts from Juniper Research that paint a disheartening picture of the state of cybersecurity. One chart shows a projection of cybersecurity spending increasing (more or less linearly) over the coming five years, while the other chart projects a more exponential-looking growth in cybersecurity losses over that same timespan.
There are a lot of reasons why some people believe measuring cyber risk isn’t possible: misperceptions about a shortage of data, the fallacy that intelligent adversaries make measurement impossible, and the inconsistencies that commonly occur when two different people measure the same risk and arrive at two different answers.