Many organizations look to NIST to help them construct their cybersecurity programs. Security frameworks such as the NIST CSF are popular because they help ensure you've identified a complete list of necessary controls, built a risk-based view of those controls, and are monitoring and reporting on them to ensure compliance with regulatory guidance and industry requirements.
These frameworks are complete in that they cover all the areas one would need (think broad); however, they do not delve into all the details necessary for implementation and maturity in every area (think deep).
This broad-but-not-deep approach to security and risk frameworks is both a benefit and a curse. On the one hand, it gives you a rough outline of what is needed to ensure that the organization is connected to its technology and that the technology is aligned with the products and services the organization's mission dictates. On the other, the lack of specificity often leaves gaps in the implementation steps that require supplemental approaches.
Jack Freund, PhD, is co-author with Jack Jones of Measuring and Managing Information Risk: A FAIR Approach.
To solve this problem, some organizations skip the broad frameworks and base their security program on resolving the operational problems addressed by tools like the CIS Top 20 and CVSS. This approach helps close the gaps most commonly associated with data breaches, but it misses the mark in connecting organizational goals, mission, and strategy to the technology that supports them.
CVSS Measures Vulnerabilities, Not Risk
These narrowly focused security solutions largely assume that risk management is wrapped around them or, worse, ignore it entirely. To their credit, their authors can also be very self-aware about how these tools are misused and applied in ways for which they were not intended. CVSS, for example, provides a measure of exploitability: how virulent or contagious a particular vulnerability may be. Despite widespread practice to the contrary, it does not purport to measure risk. In fact, the CVSS version 3.1 User Guide includes this very self-aware guidance in section 2.1:
CVSS Measures Severity, not Risk
The CVSS Specification Document has been updated to emphasize and clarify the fact that CVSS is designed to measure the severity of a vulnerability and should not be used alone to assess risk.
They could not be clearer: don't use CVSS to measure risk. The authors go on to say that CVSS is often used when a more comprehensive risk assessment would be more appropriate.
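The distinction matters in practice: two findings with very different CVSS scores can warrant the opposite priority once asset context is considered. The sketch below illustrates this with an invented, purely hypothetical weighting scheme (the CVE labels, exposure flag, and business-value scale are all made up for demonstration and are not part of CVSS or any standard).

```python
# Hypothetical illustration that CVSS severity alone does not rank risk.
# The weighting scheme and the findings themselves are invented for
# demonstration purposes only.

findings = [
    # (finding id, CVSS base score, internet-facing?, business value 1-5)
    ("VULN-A", 9.8, False, 1),  # critical score, but an isolated low-value host
    ("VULN-B", 7.5, True, 5),   # lower score, but an exposed crown-jewel system
]

def priority(score: float, exposed: bool, value: int) -> float:
    """Toy context multiplier: exposure doubles urgency; value scales linearly."""
    return score * (2.0 if exposed else 1.0) * value

ranked = sorted(findings, key=lambda f: priority(f[1], f[2], f[3]), reverse=True)
print([f[0] for f in ranked])  # VULN-B outranks VULN-A despite its lower CVSS score
```

A real risk assessment would replace the toy multiplier with a defensible analysis, but even this sketch shows why severity alone cannot drive prioritization.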
Indeed, an overarching, comprehensive risk assessment is a prerequisite to any security framework or methodology. Whether you are attempting to implement NIST guidance, prioritize vulnerabilities with CVSS, or plug holes in a kill chain using the CIS Top 20, the organizational and technological risk assessment forms the foundation upon which all else is built. But even if you are using a standard on how to conduct a risk assessment, you could be missing critical pieces for optimum maturity.
NIST 800-30 Only Gives a Generic Risk Model
Guidance such as NIST 800-30, which specifically spells out the process for conducting risk assessments, still maintains the broad-but-not-deep approach to this task. For instance, section 2.3.1, Risk Models, discusses the need to consider factors such as likelihood and impact, as well as threats, vulnerabilities, and predisposing conditions, and states that these factors should be combined to determine risk.
Nothing is said about how exactly to combine them, how to measure them, or how to determine what constitutes risk scenarios being high, medium, or low priority to the organization. The diagram included in this section of the standard specifies that it is a generic risk model, which necessarily means it needs tailoring and specification to be meaningful to your organization.
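To see what that tailoring involves, consider a minimal sketch of one way an organization might specify the generic model: a qualitative lookup matrix that combines a likelihood rating and an impact rating into a risk level. The scales and the matrix values below are illustrative assumptions, not part of NIST 800-30.

```python
# Illustrative tailoring of a generic risk model: combine qualitative
# likelihood and impact ratings via a lookup matrix. The three-level
# scales and matrix entries are invented assumptions for this sketch.

LEVELS = ("low", "moderate", "high")

RISK_MATRIX = [
    # impact:    low         moderate    high
    ["low",      "low",      "moderate"],  # likelihood: low
    ["low",      "moderate", "high"],      # likelihood: moderate
    ["moderate", "high",     "high"],      # likelihood: high
]

def assess(likelihood: str, impact: str) -> str:
    """Look up the combined risk level for a (likelihood, impact) pair."""
    return RISK_MATRIX[LEVELS.index(likelihood)][LEVELS.index(impact)]

print(assess("moderate", "high"))  # -> high
```

Every cell of that matrix is a judgment call the standard leaves to you, which is exactly the specification gap the generic model acknowledges.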
Fortunately, the NIST CSF includes a set of informative references to help round out the framework (and, by extension, the NIST 800-series guidance). These point users to specific implementation plans for their industries or geographies. They also give detailed instructions for how to combine risk factors in a meaningful way to better scope and analyze risk scenarios, such as the approach outlined in the FAIR™ standard.
Embedded in every risk management framework document is some reference to measuring risk with either qualitative or quantitative methodologies, but none of them go so far as to detail how exactly to manage the inherent flaws of directly selecting qualitative ratings, or how to implement quantitative methods.
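To make the qualitative/quantitative distinction concrete, here is a minimal Monte Carlo sketch in the spirit of FAIR's decomposition of risk into loss event frequency (events per year) and loss magnitude (dollars per event). All ranges and distributions below are invented for illustration; a real analysis would use calibrated estimates.

```python
import random
import statistics

# Minimal Monte Carlo sketch of quantitative risk analysis in the
# spirit of FAIR: simulate many hypothetical years, each with a drawn
# loss event frequency and a drawn loss magnitude per event. All
# parameter ranges are assumptions for illustration only.

random.seed(7)  # reproducible demo

def simulate_annual_losses(trials: int = 10_000) -> list:
    losses = []
    for _ in range(trials):
        # Loss event frequency: triangular(min, max, most likely) events/year.
        lef = random.triangular(0.1, 4.0, 1.0)
        events = int(round(lef))
        # Sum one loss magnitude draw per event in the simulated year.
        losses.append(sum(random.triangular(5_000, 500_000, 50_000)
                          for _ in range(events)))
    return losses

losses = simulate_annual_losses()
print(f"mean annualized loss exposure: ${statistics.mean(losses):,.0f}")
print(f"90th percentile loss:          ${statistics.quantiles(losses, n=10)[-1]:,.0f}")
```

The output is a distribution of annualized loss exposure rather than a single high/medium/low label, which is what lets quantitative methods support the budgeting and tradeoff decisions discussed below.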
FAIR Fills the Gaps in the Frameworks
FAIR provides a foundation for how to do that in the context of the overall risk management process and program outlined in such framework documents. Further, FAIR has a deep network of practitioners who are trained to use it and apply it at the enterprise level. FAIR can also be implemented through vendors such as RiskLens, which has operationalized its computational engine for use in enterprise environments where managing (and misplacing or overwriting) spreadsheets isn't an option.
Lastly, for those looking to bridge the gap and fully implement the quantitative methods outlined in those foundational documents, RiskLens' FAIR Enterprise Model (RF-EM) can be leveraged for organization-specific implementation of a full suite of quantitative methods and solutions.
In the end, there is nothing inherently bad about security and risk frameworks. Each has its role in crafting a security program that is complete and tailored to your organization. But doing that tailoring requires an acute understanding of business risk to help prioritize the business of security: budgeting, staffing, control selection and implementation, and understanding the tradeoffs related to each. Quantitative methods such as FAIR allow you to model those tradeoffs to maximize the security budget and minimize loss exposure.