“Quantifying the financial fallout from a data breach can help chief information security officers convey the importance of such an incident in language that executives and board members understand,” the Journal says.
Yes, indeed. And Factor Analysis of Information Risk (FAIR™), the international standard for cyber and technology risk quantification, is rapidly gaining traction among CISOs for reporting to executives and board members. Just look at the growth of the FAIR Institute, now heading toward 10,000 members.
But the Journal didn’t mention FAIR at all. Instead, it took a pessimistic turn into what FAIR practitioners will recognize as typical objections to quantification: that the data is too scarce, and that every attack is too different to predict a range of outcomes.
Let’s take the data issue first. There are good public sources of data (see FAIR Institute member Allison Seidel’s blog post on Shopping for Cyber Loss Data or the recent Cyentia Institute IRIS Report), but they could be a lot better and, as the Journal article points out, would be improved by large-scale data-aggregation efforts like the one underway at MIT’s Computer Science and Artificial Intelligence Laboratory.
But the most relevant data for your organization is your organization’s own data – if you have the FAIR model to guide your data collection. When they start using FAIR, many organizations discover that their subject matter experts (SMEs) in Operations, Business Continuity, SOCs, Legal, etc. have plenty of data on frequency of attacks, strength of controls and costs of impact.
And, as risk authority Douglas Hubbard explained in his speech at the 2019 FAIR Conference, How to Measure Risk with Limited and Messy Data: Overcoming the Myths, any data you collect reduces your uncertainty; the returns simply diminish, so the more data you already have, the less each additional data point reduces it. In other words, even sparse data is worth collecting. By applying calibration, as FAIR creator Jack Jones describes in his blog post No Data? No Problem, analysts can arrive at a useful degree of accuracy at any level of data.
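To make the calibration point concrete, here is a minimal Python sketch, ours rather than anything from Hubbard’s talk or Jones’s post: it treats an SME’s calibrated 90% confidence interval as a lognormal distribution (FAIR tooling commonly uses a BetaPERT instead) and runs a simple Monte Carlo to produce a range of annualized loss. All numbers are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_from_ci(low, high, size):
    """Turn a calibrated 90% confidence interval into lognormal samples.

    A calibrated SME says: "I'm 90% confident the true value falls
    between `low` and `high`."  For a lognormal, that interval spans
    roughly +/-1.645 standard deviations in log space.
    """
    mu = (np.log(low) + np.log(high)) / 2.0            # log-space mean
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645) # log-space std dev
    return rng.lognormal(mu, sigma, size)

trials = 100_000
# Hypothetical calibrated SME estimates for one loss scenario:
events_per_year = lognormal_from_ci(0.1, 2.0, trials)          # loss event frequency
loss_per_event = lognormal_from_ci(50_000, 5_000_000, trials)  # loss magnitude ($)

# Annualized loss exposure as a distribution, not a single number.
annualized_loss = events_per_year * loss_per_event
p10, p50, p90 = np.percentile(annualized_loss, [10, 50, 90])
print(f"Annualized loss exposure: P10 ${p10:,.0f} | P50 ${p50:,.0f} | P90 ${p90:,.0f}")
```

The point of the sketch is Hubbard’s: two honest ranges from calibrated experts are enough to produce a defensible distribution of outcomes, and every new data point only narrows it.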
How about the objection that it’s just too difficult to generate a range of probable outcomes for a loss event (as FAIR analysts do every day) because every attack and every technology infrastructure is different? In fact, the FAIR model, particularly on the Loss Event Frequency side, provides the critical-thinking tools to assess the capabilities of your organization’s probable attackers against the strength of your defenses, and from that, how often attempted attacks are likely to become loss events.
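As a rough illustration of that Loss Event Frequency reasoning, here is a short Python sketch, our own rather than the FAIR standard’s official tooling: in the FAIR ontology, Loss Event Frequency can be derived from Threat Event Frequency times Vulnerability, where Vulnerability reflects how often attacker capability exceeds the resistance strength of your controls. The distributions and numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
trials = 100_000

# Threat Event Frequency: how often attackers come at the asset per year
# (hypothetical calibrated range, median ~12 attempts/year).
tef = rng.lognormal(np.log(12), 0.5, trials)

# Threat Capability vs. Resistance (control) Strength, each sampled here
# from a triangular(min, mode, max) distribution on a 0-100 percentile scale.
threat_capability = rng.triangular(30, 60, 95, trials)
resistance_strength = rng.triangular(40, 70, 90, trials)

# Vulnerability: the fraction of attempts in which the attacker's
# capability exceeds what the controls can resist.
vulnerability = (threat_capability > resistance_strength).mean()

# Loss Event Frequency = TEF x Vulnerability (successful events/year).
lef = tef * vulnerability
print(f"Vulnerability per attempt: {vulnerability:.0%}")
print(f"Loss Event Frequency: median {np.median(lef):.2f}, "
      f"P90 {np.percentile(lef, 90):.2f} events/year")
```

Notice that nothing here requires predicting any single attack; the model only asks for defensible ranges on attacker strength and control strength, which is exactly the data SMEs tend to have.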
As FAIR book co-author Jack Freund said in a discussion of the cyber threat from Iran, the probability is low that attackers will hit your organization with “a secretive, new zero-day vulnerability” too difficult to predict. Much more likely, they will hit you with phishing or exploit a known, unpatched vulnerability. “If you are an average American business, most of those are attacks you have already seen.” In other words, back to the first objection: if you’re running FAIR analyses, you do have a useful amount of data to make reasonable predictions about probable ranges of outcomes.
So, here’s our invitation to the Wall Street Journal. Come to the virtual 2020 FAIR Conference, Oct 6-7, two days of sharing from organizations representing the thousands of FAIR practitioners who together are solving the challenges of risk quantification and lifting cyber risk management to a higher level.