With large companies under near-constant attack from malware, phishing, and hacking attempts, estimating cybersecurity risk requires a clear understanding of how many of those many threats actually turn into losses.
In this short video, I explain how to factor in Loss Event Frequency (LEF) in a FAIR risk analysis, even if you don't have solid data to work with from your organization.
TRANSCRIPT for "Loss Event Frequency Explained"
If you remember from my previous post, we touched on what vulnerability is within a FAIR analysis.
This time we will discuss how Loss Event Frequency plays a role within the FAIR model.
First, let’s remember the FAIR definition of risk: the probable frequency and probable magnitude of future loss.
Loss Event Frequency, on the other hand, is defined as the probable frequency, within a given timeframe, that a threat action will result in loss. So for us to have risk, we need a frequency at which a threat agent acts and, when they do act, how much loss we might see. To drive this home, let’s take a look at an example.
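To make that definition concrete, here is a minimal Monte Carlo sketch of how an annualized LEF and a per-event loss magnitude combine into an average annual loss. All of the numbers are made up for illustration; FAIR tooling would use calibrated distributions rather than the simple uniform range assumed here.

```python
import random

def events_in_year(lef):
    """Sample the number of loss events in one year given an annualized LEF,
    assuming Poisson arrivals (exponential inter-arrival times)."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lef)
        if t > 1.0:
            return n
        n += 1

def simulate_annual_loss_exposure(lef, loss_low, loss_high, trials=50_000):
    """Average annual loss: sum per-event losses across many simulated years."""
    total = 0.0
    for _ in range(trials):
        total += sum(random.uniform(loss_low, loss_high)
                     for _ in range(events_in_year(lef)))
    return total / trials

random.seed(7)
# Hypothetical inputs: one loss event every 2 years, $50k-$250k per event.
ale = simulate_annual_loss_exposure(lef=0.5, loss_low=50_000, loss_high=250_000)
print(f"Simulated average annual loss: ${ale:,.0f}")  # roughly 0.5 x $150k = $75k
```

The point is simply that frequency and magnitude are separate inputs: halve the LEF and the annual exposure halves, even if each individual event is just as costly.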
You are worried about a SQL injection on your company’s main internet-facing application causing a data breach, but you realize this has never occurred before. You may worry that you don’t have enough information to work out Loss Event Frequency, but fear not — this doesn’t mean you can’t. I am sure you are wondering: how?
Before we get into the how, we need to remember that within a FAIR analysis, assumptions sometimes need to be made. This is especially apparent when an event has never occurred at your organization before. I was able to find a good blog post about this very thing on the RiskLens website (Assumptions Are a Powerful Thing). The two main takeaways are ensuring that the scope of your analysis is well vetted, and outlining what is and is not included in your analysis. Doing both of these will help shape your assumptions. I would encourage you to read it at your leisure. So how do you estimate something that has never happened before?
There are many ways to do this, but some approaches you can use to estimate your Loss Event Frequency are:
Use the Imperva Web Application Attack Report (WAAR) and correlate it with the Verizon DBIR report
Talk with the experts within your organization which could be your Threat Intel or similar team
Talk with your peers in other organizations
Another idea is to…
Look at your industry as a whole and your role within that industry — “Are you a big fish in a small pond, or vice versa?” You may not be as susceptible as you first think.
So what would a calibrated estimate look like? According to the Imperva report, an organization could see anywhere from 3 to 15 SQL injection attacks a year against a web-facing application, with the retail industry on the higher end of that spectrum. Our company is not a retailer. If we also leverage the Verizon DBIR report, it states that about 25-30% of breaches involved SQL injection — so only a fraction of those attacks will become actual loss events. Making an assumption based upon all of the data available, we could reasonably estimate that a breach would occur between once every 3 years and once per year. Within FAIR, frequency values are time-bound and given as an annualized value.
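The back-of-the-envelope arithmetic above can be scripted as a sanity check. The attack-frequency range follows the transcript; the per-attack breach probability is a purely hypothetical assumption chosen for illustration, not a figure from either report:

```python
# Sketch of deriving a calibrated LEF range from attack-frequency data.
# attempts_low/high follow the Imperva WAAR range quoted in the transcript;
# p_breach_per_attack is an illustrative assumption, not a published figure.
attempts_low, attempts_high = 3, 15      # SQL injection attacks per year
p_breach_per_attack = 0.08               # hypothetical chance an attack becomes a breach

lef_low = attempts_low * p_breach_per_attack    # 0.24 events/yr, ~1 every 4 years
lef_high = attempts_high * p_breach_per_attack  # 1.20 events/yr, ~1 per year

print(f"Estimated LEF: {lef_low:.2f} to {lef_high:.2f} loss events per year")
```

Expressing the estimate as an annualized range like this (rather than a single number) is exactly what FAIR expects as input, and it makes the underlying assumption — the per-attack breach probability — explicit and easy to revisit later.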
To recap, the information above will hopefully allow you to make a well-informed estimate of your Loss Event Frequency. This is especially beneficial if you have a time constraint to finish an analysis for a decision maker. Being able to show a baseline or preliminary result is better than no result at all. Keep in mind you can always gain precision later.
'Vulnerability' in Risk Analysis, Explained in 2 Minutes [Video]
A Crash Course on Capturing Loss Magnitude with the FAIR Model