This article was originally published on InformationWeek's Dark Reading on January 3, 2017
By Jack Jones
Boards and CEOs can focus on these critical factors to provide better cyberrisk governance.
As with any other aspect of operating a business, effectively managing cyberrisk is predicated on making well-informed decisions and then executing reliably within the context of those decisions. With that in mind, boards and senior executives must ensure that their organizations accomplish both. For the reasons described below, many organizations today are unable to do either.
There are two primary components to making well-informed decisions regarding a complex and dynamic risk landscape — visibility and analysis — and executive management must examine both.
Visibility can be a problem because organizations often don't closely track the changes in technology, network connectivity, and sensitive data use that are made necessary by rapidly evolving business needs. This is because the effort required to maintain good visibility in these areas is usually viewed as unnecessary overhead that adds expense and slows business growth. However, without this information, an organization can't realistically claim to understand how much cyberrisk it has or where its cyberrisk priorities should lie.
Most organizations struggle with analysis, too. In fact, I have found that as many as 90% of an organization's high-risk issues are mislabeled regarding their significance, which means those organizations are unable to prioritize effectively. The most common challenges contributing to this problem include the following:
Nomenclature: Most people wouldn't be enthusiastic about going on a space shuttle mission if they knew that the engineers and scientists who planned the mission and designed the spacecraft couldn't agree on definitions for mass, weight, and velocity. The odds are good, however, that if senior executives asked six people within their risk management organization to define risk or list the organization's top 10 risks, they would get several different, perhaps very different, answers. Therefore, the odds are low that their organization will be able to consistently and reliably measure risk. This condition also introduces significant opportunities for miscommunication and confusion, which further reduce an organization's ability to manage risk well.
Today there is heavy reliance on the informal mental models (that is, informal ideas a person has about how something works) of personnel who evaluate cyberrisk. Consequently, very often the focus of a “risk rating” is strongly based on a control deficiency, perceived threats, and/or various cognitive biases rather than actual business risk. The most common result is significantly inflated risk ratings, which can strongly inhibit the ability to identify the risks that matter most.
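To make the inflation problem concrete, here is a minimal sketch (all names, scores, and dollar figures are hypothetical, not from the article) of how common heat-map arithmetic on ordinal ratings can rank issues differently than a rough expected-annual-loss estimate:

```python
# Hypothetical comparison: ordinal "risk ratings" vs. a rough quantitative view.
# Every name, score, and dollar figure below is illustrative only.

issues = {
    # name: (ordinal likelihood 1-5, ordinal impact 1-5,
    #        estimated loss events/year, estimated loss per event in $)
    "Unpatched internal server": (4, 4, 0.1, 50_000),
    "Fraud-prone payment flow":  (2, 5, 2.0, 400_000),
}

for name, (lik, imp, freq, loss) in issues.items():
    ordinal_score = lik * imp    # typical heat-map arithmetic
    expected_loss = freq * loss  # annualized loss exposure
    print(f"{name}: ordinal={ordinal_score}, "
          f"expected annual loss=${expected_loss:,.0f}")
```

In this contrived case, the ordinal score ranks the server issue higher (16 vs. 10), while the loss-exposure estimate points the other way ($5,000 vs. $800,000 per year), which is exactly the kind of misprioritization inflated ratings can produce.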
Even in the financial industry, which has begun instituting processes to validate analytic models, the focus has been limited to formal quantitative models. This leaves these three things unexamined:
- The mental models of risk professionals and whether their off-the-cuff risk estimates are accurate
- Homegrown qualitative and ordinal models
- Models embedded within cyberrisk tools
Yet these models, with their implicit assumptions and potential weaknesses, are responsible for driving critical decisions about how organizations manage cyberrisk.
Being an information security subject matter expert doesn't automatically qualify someone to reliably analyze and measure risk. Personnel who are charged with measuring the significance of cyberrisk concerns must be all of these:
- Strong critical thinkers
- Well-grounded in basic probability principles
- Trained in formal analysis methods
The fact that many of the personnel who rate cyberrisk within organizations are missing one or more of these characteristics further reduces the odds of accurate risk measurement.
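As a sketch of what grounding in basic probability principles can look like in practice, the following Monte Carlo simulation estimates annualized loss exposure from ranged inputs rather than a single-point "risk rating." All distributions and figures are assumptions chosen for illustration, not part of the article or any particular standard:

```python
import math
import random
import statistics

random.seed(7)  # reproducible illustration


def poisson(lam):
    """Draw a Poisson-distributed event count (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1


def simulate_annual_loss(trials=50_000):
    """Monte Carlo estimate of annualized loss exposure.

    Illustrative assumed inputs:
      - loss-event frequency: Poisson with a mean of 0.5 events/year
      - loss magnitude per event: triangular, $50k min / $200k mode / $2M max
    """
    totals = []
    for _ in range(trials):
        events = poisson(0.5)
        loss = sum(random.triangular(50_000, 2_000_000, 200_000)
                   for _ in range(events))
        totals.append(loss)
    return totals


losses = simulate_annual_loss()
print(f"mean annual loss: ${statistics.mean(losses):,.0f}")
print(f"90th percentile:  ${statistics.quantiles(losses, n=10)[-1]:,.0f}")
```

The point of a sketch like this is not the particular numbers but the discipline: inputs are expressed as calibrated ranges, the math is explicit and reviewable, and the output is a distribution of loss rather than an off-the-cuff label.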
Reliance on Checklists
Although "good practice" checklists and maturity models are plentiful for cyberrisk, they can't actually measure risk. Almost invariably, any deficiencies identified using such checklists are subject to the "risk rating" challenges described earlier. Many organizations fail to recognize this fact and assume the use of checklists and maturity models equates to managing risk well.
Even when an organization makes well-informed risk decisions, reliable execution against those decisions must occur in order to manage risk effectively. In this dimension of risk management, there are three areas where organizations often struggle.
- Awareness: Most organizations today have information security policies, and many even require personnel to read and acknowledge those policies annually. Often, however, policies are written by consultants or subject matter experts using verbiage that is complex and/or ambiguous. As a result, personnel may lightly read and acknowledge the policies but they may not have a clear understanding of what actually is expected of them.
- Capabilities: When budgets tighten, organizations often cut training. Given the rapid pace of change in cyberrisk, this can create serious skills gaps for IT and cyberrisk professionals. Staying abreast of cyberrisk changes should be an expectation that is both set and supported by senior management. A related problem under tight budgets is simple personnel shortage — that is, too few people trying to cover too much ground. In both cases, reliable execution is often a casualty.
- Motivation: Root cause analyses performed on cyberrisk deficiencies have found that personnel routinely choose not to comply with cyberrisk policies because management places greater emphasis on revenue, budgets, and/or deadlines. In part, this is influenced by the challenges noted earlier regarding risk-rating inaccuracies. It isn't unusual to find that overestimated risk ratings create a "boy who cried wolf" syndrome within organizations. The result is that organizations don't consistently or meaningfully provide incentives for executives to achieve cyberrisk management objectives because there is tacit recognition that much of what is claimed to be high risk really isn't. Another factor is that revenue, cost, and deadlines are measurable in the near-term, whereas many high-impact risk scenarios are less likely to materialize before they "become someone else's problem."
The bottom line is that prudent risk-taking is likely to occur only if executives are given accurate risk information and incentives that reflect the level of risk to which they subject the organization.
By ensuring their organizations have the foundation in place to make well-informed cyberrisk decisions and execute reliably, senior executives can have greater confidence that their cyberrisk management program cost-effectively minimizes the potential for painful surprises.
About Jack Jones
Jack Jones is one of the foremost authorities in the field of information risk management. As the Chairman of the FAIR Institute and Executive VP of Research and Development for RiskLens, he continues to lead the way in developing effective and pragmatic ways to manage and quantify information risk. As a three-time Chief Information Security Officer (CISO) with forward-thinking financial institutions such as Nationwide Insurance, Huntington Bank, and CBC Innovis, he received numerous recognitions for his work, including: the ISSA Excellence in the Field of Security Practices award in 2006; a finalist award for the Information Security Executive of the Year, Central US in 2007; and the CSO Compass Award in 2012, for advancing risk management within the profession. Prior to that, his career included assignments in the military, government intelligence, and consulting, as well as in the financial and insurance industries. Jack is the author of FAIR, the only international standard value-at-risk model for cybersecurity and enterprise technology. A sought-after thought leader, he recently published Measuring and Managing Information Risk: A FAIR Approach and is a regular speaker at industry conferences.