FAIR Institute Blog

NIST CSF & FAIR - Part 2

Apr 4, 2016 11:13:39 AM / by Jack Jones


A Review of NIST CSF

Giving credit where credit is due

The people who designed and contributed to the NIST Cybersecurity Framework (CSF) clearly put a lot of thought into it, and this is demonstrated through some important positive aspects:

  • The NIST CSF does not fall victim to one of the most common and egregious problems of similar checklists — bloat. Rather than hundreds or potentially thousands of elements, the NIST CSF has boiled down its list of sub-categories (the elements that are “measured”) to under 100. The fact is, there are significant diminishing returns from asking more than a relative handful of questions when trying to characterize an organization's risk management capabilities. This makes the NIST CSF far more efficient and pragmatic than many other frameworks.
  • The NIST CSF is intended to be adaptable – i.e., organizations (and industries, for that matter) are expected to build profiles from the CSF that specifically address their cyber security needs.  This ability to tailor the framework should help avoid wasting time and effort on elements of the CSF that aren't meaningful.  It should also help organizations prioritize elements that are particularly important.
  • The NIST CSF’s focus on maturity is also important because, at the end of the day, maturity is what really matters. A more mature organization will be able to manage risk more effectively over time — i.e., handle the inevitable curve balls associated with cyber security — than a less mature organization. We should care much less about whether a particular technology or policy is in place than whether an organization is making well-informed risk decisions and executing reliably, which are the hallmarks of maturity. 
  • The NIST CSF’s use of a multi-level measurement scale for each of its sub-categories should also be a significant improvement over the common “yes/no” answer options many checklist frameworks adhere to. “Yes/no” answer options force an organization to decide whether to give themselves full credit or no credit for conditions that are only partially in place or that aren’t 100% functional. And let’s face it — what percentage of controls in most organizations are operating at 100%? This binary representation generally results in inaccurate and misleading assessment results.
  • The NIST CSF also places a lot of conceptual emphasis on risk rather than simply on controls. This is crucial because the value proposition for anything we do in our profession can and should be measured in its effect on the frequency and/or magnitude of loss an organization experiences.
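The scoring point above can be sketched in a few lines of code. This is an illustration only, using a hypothetical 0–4 maturity scale and invented control names (none of this comes from the CSF itself); it shows how a "yes/no" checklist erases partially implemented controls that a graded scale makes visible.

```python
# Hypothetical controls scored on an invented 0-4 maturity scale.
controls = {
    "access_control": 3,   # mostly in place
    "logging":        2,   # partially in place
    "patching":       1,   # barely in place
}

def binary_score(level, threshold=4):
    """Yes/no scoring: full credit only if the control is 100% functional."""
    return 1.0 if level >= threshold else 0.0

def graded_score(level, max_level=4):
    """Multi-level scoring: partial credit proportional to maturity."""
    return level / max_level

binary = sum(binary_score(v) for v in controls.values()) / len(controls)
graded = sum(graded_score(v) for v in controls.values()) / len(controls)

print(f"binary assessment: {binary:.2f}")   # 0.00 -- every partial control gets no credit
print(f"graded assessment: {graded:.2f}")   # 0.50 -- partial implementation is visible
```

Three controls that are each partly working score a flat zero under binary scoring, which is exactly the kind of misleading result the multi-level scale avoids.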

Inherent limitations

NOTE: I want to make clear that these limitations aren’t unique to the NIST CSF. All checklist frameworks have these same limitations. In the case of the NIST CSF, some of those limitations are by choice and the authors clearly indicate in the executive summary what the CSF is and what it isn't. 

As I mentioned in my first post, checklist frameworks have no analytic underpinning. In other words, they don’t — they can’t — help an organization understand which missing or less mature components are more important than others or how much less risk the organization will have if it improves one or more of the components. For example, does improving Maintenance by a single level of maturity gain an organization more than improving Access Control by two levels?  

The impact of this limitation boils down to two rather important things:  

  • The NIST CSF can only be used to report directional improvements to the overall risk profile of an organization — e.g., an organization that reduced the number of sub-categories at a baseline level by half can be assumed to be in a better risk position than before it made those improvements. The NIST CSF can’t, however, support reporting on how much better off the organization is, whether those improvements were the most important ones, or whether they represented the best cost/benefit. It also can’t help an organization understand where along the maturity continuum it should be, as this can only be determined by understanding how much risk the organization currently has versus how much less it might have at higher levels of maturity. 
  • Without an underlying analytic structure, frameworks like the NIST CSF are unable to represent the dependencies and relationships between their elements. For example, many sub-categories have a boolean AND or OR relationship with one another. Take the AND relationship between PR.PT-1 (logging) and DE.AE-2 (event analysis). An organization can have outstanding event analysis capabilities, but if it is seriously deficient in its ability to capture information in the first place, its overall detection capabilities are seriously hamstrung. Clearly, this analytic limitation can seriously affect an organization's ability to effectively prioritize improvements. It also can potentially affect the NIST CSF’s ability to accurately reflect even the general risk profile if “cornerstone” elements are immature but their roles aren’t accounted for.
  • The expectation that organizations will build a unique profile (or version) of the CSF for themselves makes reliable benchmarking difficult or impossible. 
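The AND-dependency point above can be made concrete with a small sketch. This is my own illustration, not anything defined in the CSF: the capability scores and the 0.0–1.0 scale are invented, and the two sub-categories (PR.PT-1 logging, DE.AE-2 event analysis) are used purely as an example pair.

```python
# Hypothetical 0.0-1.0 capability scores for two sub-categories
# that are related by a boolean AND (detection requires BOTH).
logging_capability  = 0.2   # seriously deficient log capture (PR.PT-1)
analysis_capability = 0.9   # outstanding event analysis (DE.AE-2)

# A flat checklist implicitly averages the two elements...
checklist_view = (logging_capability + analysis_capability) / 2

# ...but with an AND relationship, the weaker element caps the
# combined capability: the bottleneck dominates.
and_view = min(logging_capability, analysis_capability)

print(f"flat checklist view: {checklist_view:.2f}")  # 0.55 -- looks middling
print(f"AND-dependency view: {and_view:.2f}")        # 0.20 -- hamstrung by weak logging
```

The gap between the two numbers is the analytic blind spot: a framework that can't encode the dependency will overstate the organization's detection capability whenever a cornerstone element is weak.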

Opportunities for improvement

First, the obvious. No framework is or ever will be perfect, particularly when it first comes out of the gate. Maybe in a future post, I’ll share some glimpses into the earliest versions of FAIR, which would serve as a good, if somewhat embarrassing, example.  

That being the case, the following are some of the ways in which I believe the NIST CSF can be improved:

  • Many (including myself at first) who try to apply the NIST CSF framework have misinterpreted the intent of its measurement tiers.  Although on the surface it seems as though the tiers are intended to be used for measurement at the sub-category level, this isn't how NIST intended that they be used.  NIST's intent is that the tiers should be used at the organization level.  This makes sense given the tier parameters “Risk Management Process”, “Integrated Risk Management Program” and “External Participation”.  NIST also intends for each organization to define its own measurement scales at the sub-category level.  Although this helps make the framework more flexible, it also hinders its ability to be used in benchmarking.  In the 5th post of this series I'll describe a measurement scale that I've found to be practical, flexible, and useful.  Perhaps as the NIST CSF evolves, its maintainers will adopt it (or something similar) to provide the consistency necessary to support benchmarking.  
  • Although the Function/Category structure of NIST CSF is good overall, it does fall victim to an all-too-common failing that I see over and over again in the industry. For example, in one case, it puts RS.MI-3 (“Newly identified vulnerabilities are mitigated…”) into the same bucket as RS.MI-1 (“Incidents are contained”). These are very different animals, and I would suggest that remediating vulnerabilities is a better fit in the Protect function, as this is a key part of maintaining protection levels. Similarly, two fundamentally different sub-categories are combined in the Detect function. Specifically, both DE.CM-8 (“Vulnerability scans are performed”) and DE.CM-3 (“Personnel activity is monitored…”) are in the same category and function when one is detecting deficiencies in control conditions (vulnerabilities) and the other is detecting potential attacks. In this case, vulnerability scanning might be a better fit in the Identify function. Some people may feel this is nit-picking, but getting this type of thing right is important for at least two reasons:
    • If we hope to mature as a profession, we need to be able to understand and make these distinctions in the role a control element plays.
    • Distinctions like this can be very important in accurately measuring the importance of a control element.
  • At the sub-category level, there seems to be some “redundancy” in the things being evaluated. For example, I would assume that PR.DS-1 (“Data-at-rest is protected”) is to a large degree determined by things like PR.IP-1, PR.IP-2, and others, which are foundational components of protecting sensitive information being stored on systems. The same applies to PR.PT-4 (“Communications and control networks are protected”) and any other sub-categories that end in “…are protected.” The following is an analogy to hopefully make this clearer:
    1. Fresh organic vegetables are regularly eaten
    2. Seat belts are consistently worn in moving vehicles
    3. Vitamins are regularly taken
    4. Family members' health is protected

In this example, it's clear that the first three elements are the kinds of things you would do to protect your health. Therefore, it is redundant to ask whether the family members' health is protected, as this can be derived from the answers to the other elements. Likewise, sub-categories in the NIST CSF that end in “...are protected” feel like specific use-cases, so to speak, that for some reason were called out for explicit attention. While this isn’t necessarily a critical deficiency in the NIST CSF, for the sake of clarity and logical consistency, I’d suggest that if there are specific parts of the risk landscape, like data-at-rest or data-in-transit, that deserve explicit measurement, a separate part of the framework be developed for evaluating them rather than scattering them throughout the existing Functions.
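The redundancy argument above amounts to saying the composite answer is a derived value, not an independent measurement. A minimal sketch, using the health analogy with invented scores and an invented (simple average) derivation rule:

```python
# Hypothetical component scores on an invented 0.0-1.0 scale.
component_scores = {
    "fresh_vegetables_eaten": 0.8,
    "seat_belts_worn":        1.0,
    "vitamins_taken":         0.5,
}

# "Family members' health is protected" computed FROM the components
# rather than assessed as a separate checklist item -- asking it
# independently adds no new information.
health_protected = sum(component_scores.values()) / len(component_scores)

print(f"derived 'health is protected' score: {health_protected:.2f}")
```

The same logic would let a framework derive PR.DS-1-style “…are protected” answers from their underlying sub-categories instead of evaluating them twice.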

  • Although I do not suggest that the NIST CSF be revised to become an analytic model, I do believe it should include references to existing risk analysis models like FAIR and NIST 800-30, so that NIST CSF users can know they exist and where to learn more. This seems particularly logical given the risk focus NIST CSF espouses. 

As far as I can tell, these should be relatively easy changes to make. Then again, I wasn’t part of the original development effort for the NIST CSF, so I don’t know if any of these issues were already discussed and hotly debated, or what other considerations might be in play.

The bottom line is that given the quality of other NIST materials, I have high confidence that the NIST CSF can eventually become a rock-solid framework.

Next up…

In Part 3, I’ll flesh out some of the ways in which the FAIR analytics model can be used to supplement risk frameworks such as the NIST CSF. 

Topics: FAIR, Risk Management
