Our Addiction to "Zero Cost" Risk Measurement

One of the significant hurdles we have to overcome as a profession is our addiction to “zero cost” risk measurement.  Let me explain…

As organizations begin thinking about adopting FAIR quantitative risk analysis, it’s not uncommon to hear comments like “We don’t have time for that” or “We just want it to be done automatically using data from our security tools.”

What they’re reacting to is the realization that effective risk measurement involves analysis, which involves thought and effort.  To date, all they’ve had to do is have someone — pretty much anyone — wave a wet finger in the air and proclaim high/medium/low risk.  No muss, no fuss, and no cost in terms of effort.

A dose of data-related reality…

With all of the security-related telemetry being spit out by the tools we use as a profession, it’s easy to understand why people believe it should be possible to do real-time automated risk analysis.  And in fact you don’t have to look very hard to find technology providers who claim to do just that.  The good news is that there are pieces of our risk landscape that can, hypothetically, be analyzed in near real-time.  The bad news is two-fold:

  • Much of our problem space cannot be analyzed automagically because telemetry doesn’t exist for required data elements
  • It’s incredibly easy to screw up automated analyses because a lot of the telemetry data is not very good or isn’t specific enough, and because algorithms tend to be very sensitive to landscape dynamics

I discuss these in more detail in a white paper I wrote a year and a half ago (“Effectively Leveraging Data in FAIR Analyses”), which you can find in the Resource Library/White Papers section of LINK, the members-only resource of the FAIR Institute (and for those of you who don’t already know, membership is free).
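To make the second bullet above concrete, here is a minimal, made-up sketch of a simplified FAIR-style calculation (annualized loss exposure as loss event frequency times loss magnitude, via Monte Carlo).  It is not any vendor’s model or a full FAIR analysis; every rate and range in it is an assumption chosen purely for illustration.  The point is simply that the output inherits, almost dollar for dollar, whatever error is baked into the telemetry-derived frequency.

```python
# Minimal sketch, not a FAIR tool: annualized loss exposure (ALE) estimated as
# loss event frequency (LEF) x loss magnitude (LM), Monte Carlo style.
# All figures below are invented for illustration.

import math
import random

def poisson(lam):
    """Draw a Poisson-distributed event count (Knuth's method; fine for small lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def simulate_ale(lef_per_year, lm_low, lm_most_likely, lm_high, runs=50_000):
    """Rough ALE: Poisson event counts x triangular per-event losses, averaged over runs."""
    total = 0.0
    for _ in range(runs):
        events = poisson(lef_per_year)
        total += sum(random.triangular(lm_low, lm_high, lm_most_likely)
                     for _ in range(events))
    return total / runs

random.seed(7)

# Telemetry reports ~4 loss events/year; suppose the sensors miss or misclassify
# a third of them and the underlying rate is actually closer to 6/year.
reported = simulate_ale(4, lm_low=50_000, lm_most_likely=150_000, lm_high=600_000)
actual = simulate_ale(6, lm_low=50_000, lm_most_likely=150_000, lm_high=600_000)

print(f"ALE from telemetry as reported: ${reported:,.0f}")
print(f"ALE if telemetry undercounts:   ${actual:,.0f}")
print(f"Exposure understated by roughly {100 * (1 - reported / actual):.0f}%")
```

In other words, an automated pipeline doesn’t launder bad telemetry into good estimates; it just converts it into confident-looking numbers faster.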

Something else to think about when it comes to automated analysis is… well, to put it bluntly… snake oil.  Maybe that’s too harsh a description, though, at least in many cases.  The unfortunate fact is that with “risk measurement” getting so much press these days, it seems like every vendor does it.  But do they really?  In many cases the answer is either absolutely not, or it isn’t clear.  If tools are basing their risk measurements on CVSS results or NIST CSF, I’d be extremely skeptical.  And if the underlying models are black-box and proprietary, you have to ask yourself whether that’s really any different from relying on a proprietary encryption algorithm.

Pay now or pay later…

The actual costs of the typical “zero cost” risk measurement come later, in the form of an inability to prioritize effectively and an inability to get the best bang for the buck on risk management investments.  Inaccurate prioritization actually imposes two costs: resources wasted on low-risk issues, and delays in dealing with high-risk issues because we’re busy paying attention to the low-risk stuff.  The second is perhaps the scarier of the two.

The bottom line is that our problem space is complex and dynamic, and we have limited resources for dealing with it.  This means that being able to prioritize and invest well is crucial to an organization’s risk management success.  Both of those are inherently a function of measurement quality — and you get what you pay for…
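As a toy illustration of the prioritization point, here is a short sketch with invented scenarios and invented numbers, contrasting qualitative labels with even a coarse quantified ranking.  In it, the scenario someone rated “Medium” actually carries the largest rough annualized loss exposure, which is exactly the kind of mis-prioritization a label by itself can’t surface.

```python
# Illustrative only: hypothetical scenarios and invented figures.  Rank by a
# rough quantified annualized loss exposure (frequency x average magnitude)
# instead of by the qualitative label.

scenarios = [
    # (scenario, qualitative label, est. loss events/year, est. avg loss per event)
    ("Ransomware on file servers",           "High",   0.05, 2_000_000),
    ("Web app account takeover",             "High",   0.25,   200_000),
    ("Payment fraud via compromised vendor", "Medium", 2.00,   400_000),
]

ranked = sorted(scenarios, key=lambda s: s[2] * s[3], reverse=True)

print(f"{'Scenario':<40}{'Label':<10}{'Approx. ALE':>12}")
for name, label, lef, avg_loss in ranked:
    print(f"{name:<40}{label:<10}{'$' + format(lef * avg_loss, ',.0f'):>12}")
```

With only the labels to go on, the “Medium” item sits at the bottom of the queue; with even these rough numbers, it jumps to the top.  That’s the cost that never shows up on the invoice for “zero cost” measurement.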

To paraphrase Albert Einstein – “Make things as simple as possible and no simpler.”

Learn How FAIR Can Help You Make Better Business Decisions
