Until recently, it has mostly been organizations with visionary and early-adopter tendencies that have embraced cyber risk quantification (CRQ). They understood the value and were willing to deal with the challenges.
However, as the benefits of CRQ have become more apparent, more organizations have begun to investigate what it is and how to adopt it. These organizations are, however, often more pragmatically inclined, wanting to get maximum value for minimum effort. They'd prefer to let technology do the work rather than hire or train analysts, which is a perfectly logical objective. For that matter, even the visionaries and early adopters have begun to clamor for automation so they can scale up their programs and gain even greater value.
In this blog post series, I’ll describe at a high level what it takes to automate CRQ. At the same time, I’ll describe some of the many ways in which it can go wrong.
Jack Jones is the creator of FAIR™ (Factor Analysis of Information Risk), the international standard for quantification of cyber, technology and operational risk, as well as the FAIR Controls Analytics Model (FAIR-CAM™) that quantifies how controls affect the frequency and magnitude of risk.
First, do no harm
I’m going to assume we’re all in agreement that the only purpose for CRQ is to help our organizations make better-informed cybersecurity decisions, and in doing so, be better protected at lower cost. In order to accomplish this, CRQ results need to be accurate.
Not precise; accurate. For those who might be unclear about the distinction, think of precision as “exactness” and accuracy as “truthfulness.” You can have very exact answers that aren’t truthful. For example, I could claim to be exactly 6’3” tall, but in fact I’m 5’9” tall, so my claim would be inaccurate. Alternatively, I could claim to be between 5’6” and 6’0” tall, which would be accurate, but not highly precise.
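The distinction can be sketched in a few lines of Python. This is a toy illustration using the height claims above, not part of any CRQ tooling: an estimate is accurate if the true value falls within its stated range, and it is more precise the narrower that range is.

```python
def contains(low, high, truth):
    """Accuracy check: does the estimated range contain the true value?"""
    return low <= truth <= high

TRUE_HEIGHT_IN = 69  # 5'9" in inches, the true value from the example above

precise_claim = (75, 75)   # exactly 6'3": very precise
accurate_claim = (66, 72)  # between 5'6" and 6'0": less precise

print(contains(*precise_claim, TRUE_HEIGHT_IN))   # False: exact, but untruthful
print(contains(*accurate_claim, TRUE_HEIGHT_IN))  # True: wide, but truthful
```

A zero-width range that misses the truth is worthless, while a wider range that reliably contains it can still support a good decision.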
The point is, an automated CRQ technology that spits out inaccurate results can do more harm than good by driving poor decisions. If, for example, it tells a CISO that the most important investment to make is X, when in fact Y is a much better investment, then the technology has failed in its purpose. At best, it has resulted in wasted resources. At worst, both wasted resources and greater risk.
But wait a minute. It’s also true that analyses performed by people can be inaccurate, which is why it’s so important to have well-trained personnel doing analyses to help ensure accuracy. The difference is that with automation, inaccuracy is often replicated at scale, multiplying its negative effects. So, when we automate, we need to do so carefully, which is what the next post in this series is about. Stay tuned…