FAIR stands for Factor Analysis of Information Risk. Simply stated, it is a model that describes what risk is, how it works and how to quantify it.
- FAIR is the only international standard Value at Risk (VaR) model for cybersecurity and operational risk.
- Unlike risk assessment standards that focus their output on qualitative color charts or numerical weighted scales, the FAIR model specializes in financially derived results tailored for enterprise risk management.
FAIR is an analytical risk model, whereas most information security risk methodologies in use today are Capability Maturity Models (CMM) or checklists. Analytic models attempt to describe how a problem space works by identifying the key elements that make up the environment and the relationships between those elements; Newton's laws, for example, describe how physical phenomena like gravity work. If a model is reasonably accurate (no model is perfect), then analyses performed using it should consistently align with our experience and observations. With those elements identified, measurements can be made that enable risk quantification and what-if analyses, neither of which can be performed with checklist or CMM analyses.
By comparison, the other common methodology types serve different purposes:
- Checklist methodologies (e.g., PCI, ISO, BITS, etc.) provide inventories of practices that an organization can use to evaluate and benchmark itself against. This can be useful for identifying gaps in controls and/or for comparison against other organizations. Checklists are not useful for determining how much risk exists or for understanding the effects of changes in the risk landscape (e.g., how much more or less risk will exist if…).
- CMM methodologies (e.g., SSE-CMM) provide an ordinal scale for rating the maturity of processes. This can be useful for evaluating the quality of processes, for setting goals, and for evaluating progress against those goals. CMM is not useful for quantifying risk or measuring the practical effect of changes in maturity.
FAIR provides the means to answer questions like:
- How much risk does X represent?
- How much risk do we have?
- How much more/less risk will we have if …?
- What are our most cost-effective options for managing risk?
Note that all three methodology types can be useful for most organizations, and should be treated as complementary.
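As a minimal sketch of what answering a what-if question can look like in practice (the scenario, ranges, control effect, and cost below are entirely hypothetical, not drawn from the FAIR standard):

```python
# Toy what-if comparison: annualized loss exposure before and after a control
# change. The model (frequency * magnitude) and all numbers are illustrative.

def annualized_loss(events_per_year, loss_per_event):
    """Annualized loss exposure under a simple frequency-times-magnitude model."""
    return events_per_year * loss_per_event

# Hypothetical baseline: 0.8 loss events per year at $300k per event.
baseline = annualized_loss(0.8, 300_000)

# What if a proposed control cuts event frequency in half?
with_control = annualized_loss(0.4, 300_000)

control_cost = 60_000  # hypothetical annual cost of the control
net_benefit = (baseline - with_control) - control_cost

print(f"Baseline exposure:     ${baseline:,.0f}")      # $240,000
print(f"Exposure with control: ${with_control:,.0f}")  # $120,000
print(f"Net annual benefit:    ${net_benefit:,.0f}")   # $60,000
```

Expressing the answer in money, rather than a color or an ordinal score, is what lets the comparison feed directly into a budgeting decision.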
FAIR is conceptually very straightforward. That said, many of the risk scenarios we face in our profession are not. As a result, analyzing a complex scenario with even a simple modeling structure like FAIR can feel difficult, especially at first, without the support of proper tooling.
The good news is that besides being conceptually simple, FAIR is highly flexible. This allows the user to operate in “quick-and-dirty” mode or “down in the weeds”, whichever is appropriate given time, resources, and the significance of the problem being analyzed. In fact, the vast majority of FAIR analyses fall into the quick-and-dirty category because that’s all that is required in most instances.
As with any new skill though, there is a learning curve in how to properly scope risk scenarios. Most of that curve is spent learning how to decompose scenarios so that they can be analyzed. Once a scenario is well-defined, the analysis itself is generally quite simple.
FAIR is widely used by organizations in a variety of sectors, including:
- High Tech
- Health Care
Organization size has ranged from SMB to Fortune 1.
Bottom line — understanding and measuring risk can be useful for organizations of any size in any industry.
FAIR has been vetted at various points in its development with people who are experts in risk and quantitative analysis.
- The Open Group has selected FAIR as its standard model for risk management.
- ISACA references FAIR in its Risk IT framework.
- NIST lists FAIR as a complementary standard for quantifying and prioritizing risk in its industry resources page.
- COSO references the FAIR model as a tool for “management to align the cyber security program to the business objectives and set targets”.
There are many analysis methods that use ordinal scales (e.g., 1 – 5, 1 – 10) to rate risk conditions. These frameworks are commonly mistaken for quantitative methods because numbers are involved; however, in each case the numeric scale could be replaced with colors or words (e.g., “High”, “Medium”, etc.) and the analysis would be identical. In addition, common mathematical operations like addition, subtraction, and multiplication can’t legitimately be performed on ordinal scales (e.g., you can’t multiply red by yellow).
FAIR analyses use quantitative values like frequencies, ratios, and monetary loss, which enables the use of true quantitative analysis.
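To illustrate what that enables, here is a minimal Monte Carlo sketch that combines an estimated loss event frequency range with a loss magnitude range to produce an annualized loss distribution. The ranges, the triangular distribution, and the randomized rounding of frequency are illustrative choices, not prescribed by the FAIR standard.

```python
import random
import statistics

random.seed(42)

# Hypothetical calibrated estimates (min, most likely, max) -- illustrative only.
LEF_MIN, LEF_ML, LEF_MAX = 0.1, 0.5, 2.0            # loss events per year
LM_MIN, LM_ML, LM_MAX = 50_000, 200_000, 1_000_000  # dollars per event

def triangular_sample(lo, mode, hi):
    """Sample from a triangular distribution, a simple stand-in for PERT."""
    return random.triangular(lo, hi, mode)

def simulate_annual_loss():
    """One Monte Carlo trial: draw a frequency, then sum per-event losses."""
    lef = triangular_sample(LEF_MIN, LEF_ML, LEF_MAX)
    # Randomized rounding so the expected event count equals the sampled rate.
    n_events = int(lef) + (1 if random.random() < lef - int(lef) else 0)
    return sum(triangular_sample(LM_MIN, LM_ML, LM_MAX) for _ in range(n_events))

trials = [simulate_annual_loss() for _ in range(10_000)]
print(f"Mean annualized loss: ${statistics.mean(trials):,.0f}")
print(f"95th percentile:      ${sorted(trials)[int(0.95 * len(trials))]:,.0f}")
```

Because the inputs are frequencies and dollars, the output is a loss distribution that can be summarized with means, percentiles, or any other standard statistic.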
Logically, the effects of damaged reputation have to materialize in some form of loss or else we wouldn’t care. These effects are tangible. For a commercial enterprise they materialize as reduced market share, decreased stock price (if publicly traded), and potentially an increased cost of capital. In the public sector, an organization's goals may be stated in terms of mission and services delivered rather than financial objectives. In these cases too, reputation damage can be assessed in financial terms through the use of subject matter estimates, expressed in ranges.
In our experience, organization executives have always been able to confidently estimate the effects of reputation damage. They understand their customers, competition, and other key business factors that would come into play from a reputation perspective. The key is to get these loss estimates from business or agency executives, as it is extremely uncommon for information security or risk analysts to estimate these effects accurately.
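One common way to turn such an executive estimate into an analyzable input is to treat the min / most-likely / max range as a distribution. The modified-PERT weighting shown below is a widely used convention rather than something mandated by FAIR, and the dollar figures are hypothetical.

```python
import random
import statistics

random.seed(7)

def pert_mean(lo, ml, hi, lam=4.0):
    """Modified-PERT mean: weights the most-likely value by lambda (typically 4)."""
    return (lo + lam * ml + hi) / (lam + 2)

# Hypothetical executive estimate of reputation-damage loss for one scenario.
lo, ml, hi = 250_000, 1_000_000, 5_000_000  # dollars, illustrative only

print(f"PERT mean estimate: ${pert_mean(lo, ml, hi):,.0f}")

# Sampling the range (triangular as a simple stand-in for Beta-PERT):
samples = [random.triangular(lo, hi, ml) for _ in range(10_000)]
print(f"Sampled mean: ${statistics.mean(samples):,.0f}")
```

Note that the estimate is deliberately a range: the executive only needs to be confident the true loss falls between the bounds, not to name a single number.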
Anywhere you have a need to know how much risk exists (or could exist if…). Examples include:
- Policy exception requests
- Audit findings
- Penetration test results
- Comparing risk issues. For example, “Does data leakage or web application security represent more risk to our organization?”
- Building a business case for new security measures or for defending existing security expenditures.
- Prioritizing risk mitigation options when the budget doesn’t allow for everything. For example, “Which is likely to be more cost-effective, training my web developers or implementing an application firewall?”
- Optimizing cyber insurance coverage
The Open Group standards are intended to provide an introduction to the concepts and methods within FAIR, but do not fully cover the body of knowledge around FAIR. You can read about the most recent developments around FAIR in the award-winning FAIR Book and the Resource section of the FAIR Institute.
The full body of knowledge includes:
- Deeper taxonomy levels
- Model for controls analysis (FAIR-CAM)
- Model for assessing materiality (FAIR-MAM)
- Models for analyzing an organization’s ability to manage risk over time
- The use of distributions rather than scales and matrices for describing variables
- The use of Monte Carlo functions to analyze highly uncertain data
- The use of sensitivity analysis to identify especially important risk factors in scenarios
- Calibration to improve the quality and utility of estimates where data are sparse
- Means of performing risk aggregation
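As a sketch of what sensitivity analysis looks like in this context (a one-at-a-time swing analysis over a toy frequency-times-magnitude model; the variable names and ranges are illustrative, not drawn from the published standards):

```python
# One-at-a-time sensitivity analysis: swing each input across its range while
# holding the others at their midpoints, and compare the effect on the output.
# The model and all ranges are illustrative only.

inputs = {
    "lef": (0.2, 1.0),          # loss event frequency, events per year
    "lm": (50_000, 1_000_000),  # loss magnitude, dollars per event
}

def midpoint(lo, hi):
    return (lo + hi) / 2

def risk(lef, lm):
    """Annualized loss exposure under the toy model."""
    return lef * lm

baseline = risk(**{k: midpoint(*r) for k, r in inputs.items()})
print(f"Baseline: ${baseline:,.0f}")

for name, (lo, hi) in inputs.items():
    others = {k: midpoint(*r) for k, r in inputs.items()}
    low_case = risk(**{**others, name: lo})
    high_case = risk(**{**others, name: hi})
    print(f"{name}: swing = ${abs(high_case - low_case):,.0f}")
```

The input with the largest swing is the one most worth refining with better data or addressing with controls, which is the practical payoff of sensitivity analysis.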
The FAIR standard will continue to be developed and more ancillary standards will be published over time.
The FAIR Institute was created with a mission to provide resources to learn more about FAIR and to create opportunities to develop and exchange best practices among FAIR practitioners.
We tend to treat controls as if they operate independently and in isolation. For example, when an audit or vulnerability scan finds that a patch is missing from a system, we tend to rate the severity of that condition as if it’s the only element in play. In fact, there can be many other controls in place that reduce or amplify the significance of that missing patch.
For that matter, even if a control is currently operating as intended, how reliable is it, and is it providing enough risk reduction value to warrant its cost?
None of the security assessment tools used in the industry today consider the many factors that affect a control deficiency’s significance, which makes their results inherently unreliable.
Existing control frameworks are lists of individual controls or control objectives. However, none of these frameworks formally define the many ways in which controls directly or indirectly affect risk.
FAIR-CAM™ provides a formal description of the system of control functions that directly or indirectly affect the frequency or magnitude of loss.
A useful analogy is the difference between the anatomy of a human body, and its physiology. Anatomy is a list of the parts (bones, muscles, nerves, organs, etc.), while physiology is a description of how those parts function both individually and as a system. Existing frameworks provide a useful “anatomy” for cybersecurity controls, and FAIR-CAM™ describes control physiology.
Most control assessment practices in use today simply express control conditions as ordinal scores (1 through 5; red, yellow, green; etc.). These ordinal values are abstract and subjective; i.e., they aren’t actual units of measurement like percentages, time, or units of money. As a result, control measurements tend to be less reliable, and it’s very difficult to translate control improvements into risk reduction.
FAIR-CAM™ will provide units of measurement (%, $, time, etc.) for each control function, which will mean that cybersecurity teams can empirically measure the efficacy of controls. And because FAIR-CAM™ overlays its control functions on top of the FAIR model, you’ll be able to determine how much less risk will exist as controls improve (or vice versa).
SolarWinds is similar to every other successful breach in the sense that the victim organization(s) had been making significant investments in security and yet they still got breached.
And, as with every breach, detailed analysis after-the-fact always shows that the organizations weren’t able to focus on and maintain the controls that matter most. They’re busy chasing compliance, or managing to a risk register that, ninety-five times out of a hundred, isn’t risk-focused.
FAIR-CAM™ combined with a well-defined controls “anatomy-like” framework (e.g., NIST 800-53) and a solid risk measurement model like FAIR will improve an organization’s ability to focus on the controls that matter most, and significantly reduce the odds of making the news due to a breach.