Using FAIR to Analyze Project-Related Risk - Part 1

We’ve recently gotten some questions about how to apply FAIR against project-related risk – e.g., “How much risk is associated with the potential for software testers to be unavailable for this project?” Because project-related risk is such fertile ground for FAIR analyses, I thought it would make sense for me to blog about it.

Three types of project-related problems

Whenever I’m doing an analysis related to project risk, I first remind myself that project-related problems manifest in three ways: delay, cost overruns, and/or quality shortfalls. When any of these occur, loss materializes in one or more of the six forms we use in FAIR. For example:

  • Delays can result in lost revenue if the project is related to revenue generation, continued higher operational costs if the project is related to improving efficiency, regulatory or legal exposure if the project is intended to mitigate liability, lost competitive position if the project is attempting to keep the organization ahead of or apace with its competition, and so on.
  • Cost overruns are pretty straightforward, and typically translate to unplanned monetary outlay (replacement loss in FAIR parlance).
  • Quality shortfalls can materialize as reputation damage, loss of competitive advantage, legal and/or regulatory liability, response or replacement costs associated with fixing the quality problem later, etc. (A quick sketch of how these outcomes map to FAIR’s loss forms follows this list.)
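
To make the mapping concrete, here’s a minimal sketch in Python – my own illustration, not anything prescribed by FAIR – of how the three problem types line up against FAIR’s six forms of loss:

```python
from enum import Enum

class LossForm(Enum):
    """The six forms of loss used in FAIR."""
    PRODUCTIVITY = "productivity"
    RESPONSE = "response"
    REPLACEMENT = "replacement"
    FINES_AND_JUDGMENTS = "fines and judgments"
    COMPETITIVE_ADVANTAGE = "competitive advantage"
    REPUTATION = "reputation"

# How each project problem type can fan out into FAIR loss forms,
# per the examples above. Illustrative only; a real analysis would
# tailor this mapping to the project at hand.
PROBLEM_TO_LOSS_FORMS = {
    "delay": [
        LossForm.PRODUCTIVITY,           # lost revenue, continued inefficiency
        LossForm.FINES_AND_JUDGMENTS,    # regulatory or legal exposure
        LossForm.COMPETITIVE_ADVANTAGE,  # lost competitive position
    ],
    "cost overrun": [
        LossForm.REPLACEMENT,            # unplanned monetary outlay
    ],
    "quality shortfall": [
        LossForm.REPUTATION,
        LossForm.COMPETITIVE_ADVANTAGE,
        LossForm.FINES_AND_JUDGMENTS,    # legal and/or regulatory liability
        LossForm.RESPONSE,               # fixing the quality problem later
        LossForm.REPLACEMENT,
    ],
}
```

Any particular project would warrant its own mapping; the point is simply that each problem type fans out into one or more forms of loss.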

Not surprisingly, an organization’s culture will often determine which of these problems is most likely. For example, many companies I’ve worked with are “deadline driven” – i.e., nothing short of an act of God will allow for a missed deadline. They’ll incur cost overruns and quality problems all day long, just never a missed deadline. Sound familiar?

Scoping the analysis

So, let’s say we’re faced with answering the question above regarding the availability of software testers – how do we analyze it? Well, as with any FAIR analysis, we first have to lay some quick groundwork. Specifically, we have to answer the following questions (a quick code sketch for capturing the answers follows the list):

  • What is the purpose of the analysis?
  • What are the assets at risk?
  • Who or what is the threat agent?
  • What is the relevant loss event?
  • How will the output be used?
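
For those who like to see the groundwork written down, here’s a minimal sketch of a scoping record in Python. The class and field names are hypothetical – chosen for this post, not drawn from the FAIR standard:

```python
from dataclasses import dataclass

@dataclass
class AnalysisScope:
    """Answers to the five scoping questions for a FAIR analysis.

    Class and field names are illustrative, not FAIR terminology.
    """
    purpose: str        # what the analysis is meant to tell us
    asset_at_risk: str  # what we stand to lose value from
    threat_agent: str   # who or what can act to cause loss
    loss_event: str     # the specific adverse event being analyzed
    output_use: str     # what decisions the results will inform
```

The sections below answer each question in turn.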

Purpose

We’ve kind of already answered this. The purpose is to understand how much risk (loss exposure) is associated with the potential for software testers to not be available for a particular project. (Depending upon our needs, we could of course broaden the scope to include more than one project, or even all of the software projects in our portfolio.)  

Assets at Risk

In this example, the asset at risk is the application. Specifically, we’re concerned about the integrity of the application, but we’ll explore this some more in the “Loss Event” section below.

Threat Agent

Any guesses about who the threat agent might be in this analysis? This may not be as obvious as you think. First, keep in mind that within a FAIR analysis a threat agent is the actor whose action can result in loss. Then let’s ask ourselves who or what within this scenario is in a position to negatively affect the application...  

If you said, “The programmer(s) who write potentially buggy code”, congratulations. Reason being, they are acting upon the asset and their actions have the potential for a negative outcome. They aren’t malicious (with extremely rare exceptions), but this doesn’t alter the fact that they are in a position to create loss through their actions.

Loss Event

In this example, the loss event is the introduction of a flawed application into production. But, you might ask, aren’t all applications flawed to some degree? Yes, certainly, so we need to refine our loss event description to be something like, “The introduction of a flawed application into production, where the flaw results in the loss of data integrity.” Why this specific concern? Well, at least for this example, data integrity is what our software testers would be evaluating. If they were testing something else, then we would define our loss event scope differently.

The point is, the definition of our loss event HAS to align with the purpose of the analysis. If we scope the loss event to include flaws outside of what our testers are looking for, then the analysis results would be misleading. Again – the scope can be anything we want it to be, as long as it aligns with the intended use of the analysis.

Output Use

The output from the analysis should guide decisions regarding the availability of software testers and perhaps even the software testing process itself.  
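
Pulling the answers together – and continuing the hypothetical AnalysisScope sketch from the scoping section – the scope for this analysis might be recorded like so:

```python
# Continues the hypothetical AnalysisScope sketch from the scoping section.
tester_availability_scope = AnalysisScope(
    purpose=("Understand how much loss exposure is associated with "
             "software testers being unavailable for this project"),
    asset_at_risk="The application, specifically its integrity",
    threat_agent="The programmer(s) who write potentially buggy code",
    loss_event=("The introduction of a flawed application into production, "
                "where the flaw results in the loss of data integrity"),
    output_use=("Guide decisions regarding tester availability and perhaps "
                "the software testing process itself"),
)
```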

With the above questions answered, we’re in a much better position to do the quantitative part of the analysis, which I’ll cover in a follow-up post. Stay tuned...
