“I just think loss exposure is too low!” Many FAIR risk analysts have faced this response from a stakeholder at some point in their career. This rejection is often not a reflection of the analyst’s work but comes from something else altogether. So, why do stakeholders request a quantitative, objective analysis, but then subjectively reject the results?
This blog post will attempt to identify and define the primary factors that drive a subjective rejection of cyber risk quantification and help risk analysts develop an appropriate response. Understanding the elements behind the rejection that stakeholders often default to prepares analysts to engage in effective conversation.
The two main influencers of rejection of quantified cyber risk can be categorized as natural reactions and learned behaviors.
For this discussion, natural reaction is loosely defined as the outwardly expressed response to new information or stimulus, often the first or primary response based on that person’s feeling or intuition. It is the response that is not filtered through an objective framework or methodology. The natural reaction and subsequent rejection of risk exposure can be broken down into two elements.
1. Stakeholders may be unknowingly attaching emotional weight to the value of the asset or decision because the course of action has already been chosen.
Phil Venables, author of the article Organizational Politics, wrote “Most formal meetings, committees, councils or otherwise are ostensibly convened to make decisions. But in most cases, they are explicitly ceremonial to confirm a decision that has already been made...”
Unfortunately, this can be true of risk assessments as well. If a stakeholder requests the risk assessment too late, a decision has likely already been made, and the assessment unwittingly becomes a critique of that decision. The stakeholder may have already committed many hours of work to securing project funding, developing timelines, and building the business justification.
A risk assessment that fails to meet the stakeholder’s expectations may jeopardize the desired business outcome, and that disappointment often manifests itself as a rejection of the analytical work.
2. Kahneman, Sibony, and Sunstein, authors of Noise: A Flaw in Human Judgment, describe multiple controlled studies and polls in which senior leaders and executives gauged themselves to make the right decisions nearly 85% of the time. Unfortunately, these leaders had a predictive accuracy only marginally better than a coin flip. Philip Tetlock, author of Superforecasting and Expert Political Judgment, reached a similar conclusion, stated more colorfully: “The average expert is roughly as accurate as a dart-throwing chimpanzee.” Stakeholders may reject the results of an objective risk assessment because the results are counterintuitive. This stems from overconfidence in intuition instead of objectively applying a framework for determining suitable decision criteria.
Learned behavior is often a term associated with young children and even animals but can have a more general psychological application. The primary differentiator of learned behavior compared to innate behavior or instinct is the element of experience. A learned behavior is a pattern of thought or action that is attributed to one or more previously encountered events. Learned behavior that can lead to a rejection of cyber risk quantification can be broken down into two elements.
1. The first element of learned behavior can be summarized by an acronym, CYA. (if you’re unaware, or curious what CYA stands for, Google it!).
To put it frankly, many stakeholders and decision makers have developed “CYA behaviors”. These behaviors are a result of poor leadership and organizational distrust, and they cause overcaution and analysis paralysis. While the organization you are a part of might not exhibit these poor traits, employees have varied backgrounds, education, and career experience, and that experience may be contributing to negative behaviors or thought patterns. Simply put, stakeholders may be reluctant to accept a lower-than-expected risk exposure because doing so puts them in a perceived compromised position within their organization.
This learned behavior has proven social and economic impacts. Bill McEvily and a team of researchers conducted an experiment titled Whom do you distrust and how much does it cost? An experiment on the measurement of trust. In that study, distrust is defined as “an individual’s willingness to incur costs to mitigate vulnerability to [self] or others.” An organizational culture of distrust results in measurable costs through improper risk rating, planning fallacies, and negative bias. This is an obstacle risk analysts must be prepared to overcome when reporting risk.
2. Finally, stakeholders may reject quantified risk results because of a misuse of a learned model or industry best practice.
In the world of cyber security there is a model of thinking that assumes the organization has already been breached, or that a breach could occur at any moment with disastrous consequences. Essentially, “assume breach” is a mentality of complete distrust of physical, technical, and administrative controls. This mentality can be a good assumption for penetration testers, red teams, and threat hunters. A helpful analogy is that of a loyal, attentive guard on the castle wall who distrusts everything that approaches the gate. That guard is very good at maintaining the security posture but would be a poor contributor to the kingdom’s diplomatic efforts. Likewise, the assumptions that make a good incident responder would be completely inappropriate for an objective, data-driven risk assessment.
How Quantitative Risk Analysts Can Respond to Stakeholder Resistance
It is imperative that risk analysts understand the influences on decision making and what type of organizational and personal resistance can be expected. When initial rejection occurs, it will not fall neatly into one of the four buckets above, but will likely be a combination.
Here are three suggestions a risk analyst can use to effectively communicate and win over disagreeing stakeholders.
1. Expect that shortly after claiming the loss exposure is too low, the stakeholder will cite a completely fabricated worst-case loss event.
Instead of arguing against the missing data points and lack of context, a risk analyst can begin to align with the stakeholder by showing the loss exceedance curve and how the 90th percentile accounts for most Black Swan scenarios. This is the appropriate time to remind the audience of the important difference between what’s probable and what’s possible. Additionally, it is vital the audience understands how loss event frequency impacts annualized loss exposure. Oftentimes dissenting stakeholders fail to understand that a loss event without the context of time simply becomes an impact analysis.
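The relationship between loss event frequency, loss magnitude, and the loss exceedance curve can be sketched with a small Monte Carlo simulation. The calibration values below are hypothetical and purely illustrative, not drawn from any real assessment: a Poisson-distributed frequency averaging 0.5 events per year and a lognormal per-event magnitude with a median of roughly $100k.

```python
import math
import random

random.seed(7)

# Hypothetical calibration (illustrative only):
# - Loss Event Frequency (LEF): Poisson, mean 0.5 events/year
# - Loss Magnitude (LM): lognormal, median ~$100k per event
LEF_MEAN = 0.5
LM_MU = math.log(100_000)
LM_SIGMA = 1.0
TRIALS = 100_000

def sample_poisson(lam: float) -> int:
    """Knuth's method: count uniform draws until their product drops below e^-lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def annual_loss() -> float:
    # Annualized loss for one simulated year: sum of per-event magnitudes.
    n_events = sample_poisson(LEF_MEAN)
    return sum(random.lognormvariate(LM_MU, LM_SIGMA) for _ in range(n_events))

losses = sorted(annual_loss() for _ in range(TRIALS))
p50 = losses[TRIALS // 2]
p90 = losses[int(TRIALS * 0.90)]

def prob_exceeding(x: float) -> float:
    # One point on the loss exceedance curve: P(annual loss >= x).
    return sum(1 for loss in losses if loss >= x) / TRIALS

print(f"Median annual loss: ${p50:,.0f}")
print(f"90th percentile:    ${p90:,.0f}")
print(f"P(loss >= $1M):     {prob_exceeding(1_000_000):.2%}")
```

With these assumed inputs, most simulated years contain no loss event at all, so the median annual loss is $0 while the 90th percentile is substantial. That gap is exactly the frequency point above: a dramatic worst-case magnitude, stripped of how often it occurs, is an impact analysis rather than a risk statement.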
2. For some stakeholders, risk quantification is mysterious.
Whether that’s due to a lack of exposure, explanation, or desire to look behind the curtains, risk analysts must deal with an audience that may not fully understand the process they are using. Because of this, cyber risk quantification practitioners should limit the amount of conversation around statistical theory.
Instead, the risk management program should be built on transparency achieved by documenting rationale, processes, and procedures. A formal risk assessment methodology should be adopted by the risk management team, approved by senior security leaders, and made publicly accessible. The process of asking stakeholders to accept the results of an assessment should be governed by the risk management standard. This empowers the risk owner to act within the authority granted to them by an overarching document, which has been viewed, approved, and signed by senior leadership. Being transparent in process is a crucial step in disarming destructive CYA behavioral bias.
3. Directly addressing misapplied frameworks or educational models can quickly identify the faulty assumptions that drive stakeholder rejection.
The previous two suggestions are nuanced responses that do not directly call out a specific influencer of rejection. The suggestion in this paragraph is the only one that should directly and explicitly call out existing stakeholder assumptions. It is important that risk analysts do not attack stakeholder assumptions, but tactfully address them in a way that correctly establishes the context for objective risk assessment. One of the best ways to accomplish this is to acknowledge the value of the misused framework in its proper context. This aligns you with the audience, while still expertly defining the correct context for the analytical work.
Conclusion: Empathy and Cyber Risk Quantification
As the conversation around the risk assessment develops, it is important to understand the role a risk analyst plays. ‘Risk therapist’ is not an industry term yet, but maybe it should be. One of the most important character traits a risk analyst can exercise during these conversations is empathy. The understanding that the stakeholder is operating under a series of assumptions and inclinations against objective analysis should gently guide the conversation towards a resolution.
Zach Cossairt, a FAIR analyst and an emerging expert in Behavioral Economics, captures this thought in his blog post Human Nature In our FAIR Risk Programs: Work with It Not Against It: “The unevenness between our motives to avoid losses and achieve gains produces inertia and status quo bias… and any proposed change is evaluated as a concession, and therefore a loss. People. Hate. Losses.”
One of the best ways to work with human nature in our risk programs is to understand it and skillfully apply simple response strategies to ally with the stakeholder and influence change with a scalpel, not a hammer.