GARP: Signs of Acceptance and Maturity for the FAIR Model

Sep 19, 2017 2:15:18 PM / by Luke Bader

A standard framework for measuring and managing information and cyber risks is supported by a training institute and proving its mettle in the regulatory sphere.

Originally published September 15, 2017 by the Global Association of Risk Professionals (GARP)


By Jeffrey Kutler

Factor Analysis of Information Risk (FAIR), a framework for quantifying and managing information and cybersecurity risks, is attracting interest and gaining support not just from users in the private sector, but also from regulators, Jack Jones reports.

It’s a testament to the effectiveness of the model, authored by Jones, a veteran chief information security officer (CISO) who is currently executive vice president of R&D for the cyber risk management software company RiskLens, and to the FAIR Institute, of which Jones is chairman and RiskLens CEO Nick Sanna is president. The not-for-profit education and training institute, which addresses technology-related risk measurement and analytical challenges that many organizations have found daunting, was officially launched in early 2016 and is holding its second annual conference October 16-17 in Dallas.

Jones, who with Jack Freund of TIAA explained the FAIR approach and its principles in the book “Measuring and Managing Information Risk” (see the July 2016 GARP Risk Intelligence article A VaR Standard for Cyber and Operational Risk), says the response to FAIR has been “beyond what we could have imagined.” The conference sessions include case studies on the likes of Bank of America, MassMutual, and MUFG Union Bank, indicating the level of acceptance of FAIR.

Among regulators, FAIR “is being recognized as good practice – as a necessary or desirable step forward in maturity of risk management in an organization,” Jones notes. On top of positive feedback coming from financial institutions – “they are saying it is affecting their ability to manage risk, positively and in important ways,” Jones adds – some regulatory agencies “have put their people through FAIR training, with very good results. So they are getting exposed to it in at least those two dimensions and feel it is worth looking at for the future.”

Jones elaborated on FAIR’s progress and traction in a recent interview.

What does it take to get FAIR and its message across?

The biggest challenge we often face is inertia. The industry has a lot of practices and belief systems in place. Anytime you are trying to change or displace those in any meaningful fashion, you will run into the fact that a lot of people don’t like change. As soon as you start talking about real change, you will run into resistance.

There are also quite a few misperceptions about risk measurement, particularly in the cyber and technology space. It is not at all unusual to encounter someone who says that cyber risk can’t be measured. A common refrain is: “We are facing an intelligent adversary who can change tactics, timing and techniques at any time, which makes any risk analysis invalid.”

“Despite what [executives] may have heard in the past, measuring this risk quantitatively is not an intractable problem,” says Jack Jones of RiskLens and the FAIR Institute.

It is true that those things can change, which could change the probability or impact of various events. But my question is, “How are you measuring risk in order to prioritize?” Organizations don’t have unlimited funds, so they have to prioritize. For what you prioritize, you have to come up with cost-effective solutions. That requires measurement. They’ll respond, “the best that we can do is to do it qualitatively.” Once you do it qualitatively, the concern about an intelligent adversary goes away – and the problem is swept under the rug. With quantitative measurements, you can reflect uncertainty by using wider and potentially flatter ranges of distributions. To say that a threat is medium or high doesn’t begin to get at the range of possibilities.
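The quantitative approach Jones describes, expressing uncertainty as ranges and distributions rather than “medium” or “high” labels, can be illustrated with a small Monte Carlo sketch built on FAIR’s basic frequency-times-magnitude decomposition. The function, the triangular distribution, and all parameter values below are illustrative assumptions, not the FAIR standard itself or any RiskLens product:

```python
import random

def simulate_annual_loss(freq_min, freq_max, mag_min, mag_likely, mag_max,
                         trials=10_000, seed=42):
    """Illustrative Monte Carlo sketch of annual loss exposure.

    Loss event frequency is drawn from an estimated range, and loss
    magnitude per event from a triangular distribution over calibrated
    min / most-likely / max estimates. Not the official FAIR method.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # Number of loss events this year, drawn from the estimated range
        events = rng.randint(freq_min, freq_max)
        # Total loss = sum of per-event magnitudes for this simulated year
        total = sum(rng.triangular(mag_min, mag_max, mag_likely)
                    for _ in range(events))
        losses.append(total)
    losses.sort()
    return {
        "mean": sum(losses) / trials,          # average annualized loss
        "p90": losses[int(0.90 * trials)],     # 90th-percentile loss
    }

# Hypothetical scenario: 0-4 loss events per year, each costing
# between $50K and $2M, most likely around $200K.
result = simulate_annual_loss(freq_min=0, freq_max=4,
                              mag_min=50_000, mag_likely=200_000,
                              mag_max=2_000_000)
```

Wider frequency or magnitude ranges directly produce wider output distributions, which is how the analyst’s uncertainty about an adaptive adversary is carried through to the result rather than hidden behind a color label.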

If analysts or economists are able to calculate or project total cyber losses, why is it so difficult at the single-firm or micro level?

Based on my conversations with insurance people, at the micro level concerning companies they may be insuring, they rely on an incredibly rudimentary and superficial checklist: anywhere from a half dozen to a couple dozen questions, or, at the other end of the spectrum, hundreds. Regardless, they tend to be yes-or-no in nature, or what I would call lipstick on a pig. When I ask cyber risk underwriters how they are making use of this data in underwriting decisions, the universal answer to date is, “we don’t.” They put a wet finger in the air, and say this feels a little more risky than that. Their companies are making a lot of money anyway. It’s almost risk analysis theater, so to speak.

Some insurance companies are trying to be more scientific and rigorous in their approach, and that is good news. But as long as the preponderance of the cyber insurance world is that superficial or half-baked, the problem is going to seem intractable – at that level. Whereas economists, by the very nature of what they do, are used to dealing with complex, big-picture questions. They are good at taking available information that may be similar to, but not exactly within, the domain that they are analyzing and being able to draw reasonable correlations and extrapolations to arrive at answers that are ballpark-correct at a macro level. They are also good at expressing the uncertainty in their measurements.

How are senior managements and boards measuring up, so to speak?

More and more we are seeing CISOs and technology risk leaders being pressed by business executives and boards of directors for business-related measurements – loss exposure in economic terms. For example, how much less risk will we have if we spend these dollars that you are asking for? Among board members whom I have talked to, a common refrain is, “I am tired of heat maps. I don’t understand what the colors mean. I don’t understand why I keep having to throw significant dollars at this problem.” They understand that the problem is complex and evolving, but they are looking for more clarification, better business intelligence around the problem, its relevance, and the value proposition around the investment.

Do they recognize a need for technology to help bring this together?

It’s not at all uncommon to hear, “FAIR is not rocket science. We’ll just create a spreadsheet or web app internally rather than spend money on a commercial product.” The reasons for this may be economic, or some organizations are just wired to be do-it-yourselfers. But they discover pretty quickly that doing this at an enterprise level – aggregating risk across a lot of different scenarios, getting portfolio views, maybe employing sensitivity analysis, finding an interface that is not painful to use, all with the right level of security – is not a do-it-yourself kind of project. Or it is more expensive to do it yourself than with a commercial product.

If an organization wants to analyze one-off types of scenarios, then maybe they can get away with a spreadsheet, particularly if they aren’t very concerned about the security surrounding it. The minute they want to apply these principles and techniques at an enterprise level, the problem changes, and the solution changes.

What is the state of education in this area, and/or hunger for it?

It is important to recognize that today, in most organizations, people who are rating risk tend to be very competent in their area of expertise – be it security architecture, penetration testing, business continuity, etc. But being expert in a specific problem set does not make someone qualified to analyze and measure risk. Measuring risk is an analytical problem that requires analytical skills, critical thinking, comfort with basic probability principles, comfort with uncertainty. You’re dealing with some amount of ambiguity. People who aren’t wired for that won’t be good at it.

Organizations come to realize that they don’t necessarily have people with these skills, so where do they find them? That leads to a growing hunger for training. As an answer to that, we are putting together three things:

(1) Online training. To date, most of our training has been on-site, a half dozen to a couple of dozen people at a time. That doesn’t scale. We are developing a truly professional online training program, self-paced and reasonably priced.

(2) We are developing a free training tool – an online FAIR analytic tool that organizations can use to get a feel for how this works and how to become adept at it.

(3) Some universities have begun baking FAIR into various programs, and others say they would like to do the same but don’t know where to begin. We have developed a curriculum that universities can use as-is or adapt to their purposes. An example: San Jose State University includes FAIR in its economics curriculum, to analyze risk scenarios that are not technology- or cyber-related. They have analyzed how much risk is associated with allowing teachers to carry weapons in schools, and whether there is a risk-reduction benefit in doing so. Or how much less risk will result from a city improving its bicycle paths. SJSU has begun a program specifically for economics students who want to go into risk analysis in the cyber and technology field as a career. We are seeing interest in this elsewhere as well.

Is the training and talent deficit related to that of cybersecurity in general?

This is a subset or specialization within the broader field – an analytic specialization. It’s being able to say what it means in a business context. If we understand that we have these weaknesses, how much should we care about it? This specialization is essentially a translator of technological data and information into business intelligence.

What does that say about the type of people who might be attracted to this work?

There are everyday analytics – facing questions like “how much less risk will we have if we implement a certain technology” – questions pertaining to how risk is managed at a boots-on-the-ground level. There is also what I would characterize as risk research. That might be a better fit for the stereotypical data scientist to sift through gobs of data, figure out subtle nuances or key insights. There is a role for that, and it is important, but that is not the everyday sort of thing. Most organizations won’t need to go there or have the resources for it. Any organization other than a mom-and-pop type would benefit from having someone on staff who is trained in everyday analytics and problem-solving, translating technology challenges into business-friendly intelligence.

What is the objective of RiskLens’ e-book for executives?

“An Executive’s Guide to Cyber Risk Economics” is intended to let executives know, despite what they may have heard in the past, that measuring this risk quantitatively is not an intractable problem. And frankly, here are the very severe limitations of common practices in risk measurement. The goal is to give them information to ask better questions and set the bar higher for informing their decisions. It’s really a wake-up call and a light introduction to the more mature approaches by which they and their people can learn more.

Haven’t financial companies been on this journey before, and does that experience help?

What we would consider the more mature disciplines, such as credit risk management, certainly went through an evolutionary process. But for most organizations, that was long enough ago, and involved a different set of people, so that what was learned there was not carried over.

A lot of people in the cyber risk technology profession are convinced that theirs is a special skill type that can’t be measured quantitatively or using principles and methods that have proven effective elsewhere. The good news is that they are just flat wrong. The bad news is that they will fight tooth-and-nail to defend their position. Again, there tends to be a lot of inertia. It varies from organization to organization. The good news here is that as more and more people open their eyes to these more mature methods, it becomes riskier to rail against them, and those barriers are beginning to break down.

Are there conference highlights that you’d like to call attention to?

I am excited about the agenda we have. There are some really strong case studies. Large organizations have a lot of extra challenges when it comes to adopting change and making it effective. There is one marvelous success story of tackling all the cultural, procedural and other tough problems that had to be addressed. We have panels about communicating with the board – with CISOs and board members – and striking the right balance between compliance and risk management. We continue to listen very closely to the challenges that people are encountering in this space and to gear the agenda accordingly.

There are two other aspects of the conference that make it unique. Because it is based on the FAIR risk model and ontology, it tends to be better focused than other events that don’t have this common, underlying foundation or lens through which everybody can view the problems. Second, we are not the least bit afraid to challenge conventional wisdom. This is important because a lot of practices in cyber and technology risk measurement and management – and in other parts of the risk management world – feel like a throwback to a time of old beliefs. There is no reason we should expect bleeding to be a cure for disease, which is how medicine was once practiced. We are not afraid to call out the management of cyber, technology and operational risk in that way.


Topics: FAIR, Events, FAIR Institute, FAIR Conference 2017

Written by Luke Bader

Luke Bader is Director of Membership and Programs for the FAIR Institute.

