A FAIR Artificial Intelligence (AI) Cyber Risk Playbook

The FAIR Institute presents FAIR-AIR, a FAIR-inspired approach to help you identify your AI-related loss exposure and make risk-based decisions about treating this new category of cyber risk – new, but a puzzle that can be solved with the same FAIR techniques for modeling and quantifying cyber risk that our community has validated for years.


Download now:

FAIR-AIR Approach Playbook

Using a FAIR-Based Risk Approach to Expedite AI Adoption at Your Organization


The Playbook, created by FAIR Institute Member Jacqueline Lebo of Safe Security (technical adviser to the Institute), breaks down the generative artificial intelligence (GenAI)/large language model (LLM) challenge into five steps:


1.  Contextualize

Recognize the five vectors of GenAI risk: Shadow GenAI, Creating Your Own Foundational LLM, Hosting on LLMs, Managed LLMs, and Active Cyber Attack.

2.  Identify Risk Scenarios

Think through the probable loss exposure for your organization from each of the five vectors, using the familiar threat/asset/effect structure of Factor Analysis of Information Risk (see the sketch below).
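As a quick illustration of the threat/asset/effect framing, here is a minimal Python sketch of how a risk scenario might be captured for later quantification. The field names and example wording are illustrative assumptions, not taken from the Playbook.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One FAIR-style risk scenario: a threat acting on an asset with an effect."""
    vector: str  # one of the five GenAI risk vectors
    threat: str  # who or what acts against the asset
    asset: str   # what is at risk
    effect: str  # confidentiality, integrity, or availability

# Illustrative scenario for the Shadow GenAI vector (hypothetical wording):
scenario = RiskScenario(
    vector="Shadow GenAI",
    threat="Employees pasting data into unapproved GenAI tools",
    asset="Company-sensitive information",
    effect="Confidentiality",
)
print(scenario)
```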

3.  Quantify Scenarios with FAIR

Using your internal data or industry data, apply FAIR analysis to produce results like this:

There is a 5% probability in the next year that employees will leak company-sensitive information via a public LLM (like ChatGPT), leading to $5 million in losses.
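To make the mechanics concrete, here is a minimal Monte Carlo sketch of how such a scenario might be quantified in Python. The 5% event probability echoes the example above; the lognormal loss-magnitude parameters are illustrative assumptions, not figures from the Playbook.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials, one per simulated year

# Loss Event Frequency: 5% chance per year of a leak via a public LLM (from the example)
p_loss_event = 0.05

# Loss Magnitude: lognormal with a median single-event loss of ~$5M (assumed shape)
loss_magnitude = rng.lognormal(mean=np.log(5e6), sigma=0.6, size=N)

# Each trial: the loss event either occurs (draw a loss) or it does not (zero loss)
event_occurs = rng.random(N) < p_loss_event
annual_loss = np.where(event_occurs, loss_magnitude, 0.0)

print(f"Probability of a loss event this year: {event_occurs.mean():.1%}")
print(f"Mean annualized loss exposure:         ${annual_loss.mean():,.0f}")
print(f"95th percentile annual loss:           ${np.percentile(annual_loss, 95):,.0f}")
```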

4.   Prioritize/Treat AI Risks

To clarify the path to decision-making, identify the key drivers behind the risk scenarios. Example: for the Active Cyber Attack vector, a risk driver could be the phishing click rate among employees with access to large amounts of sensitive data (a quick sensitivity sketch follows).
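A risk driver like phishing click rate can be tested directly in the same Monte Carlo setup. The sketch below is a hypothetical sensitivity check: all numbers (campaign count, click rates, loss-magnitude shape) are assumptions for illustration, not Playbook data.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

attacks_per_year = 12    # assumed phishing campaigns reaching high-access employees
median_event_loss = 3e6  # assumed median loss from one successful attack

for click_rate in (0.08, 0.04, 0.02):  # e.g., before and after awareness training
    # Probability that at least one campaign lands a click during the year
    p_event = 1 - (1 - click_rate) ** attacks_per_year
    losses = rng.lognormal(np.log(median_event_loss), 0.5, N)
    annual_loss = np.where(rng.random(N) < p_event, losses, 0.0)
    print(f"click rate {click_rate:.0%}: P(event) = {p_event:.0%}, "
          f"mean annual loss = ${annual_loss.mean():,.0f}")
```

Driving the click rate down from 8% to 2% shrinks the exposure far more than the raw percentages suggest, which is exactly the kind of insight that points treatment decisions at the right driver.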

5.   Decision Making

Bring it all together: compare treatment options, taking into account the quantitative values you uncovered, your controls, and the key risk drivers (a simple cost/benefit sketch follows).
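Once each treatment option has a residual annualized loss exposure (ALE) from the FAIR analysis, a simple cost/benefit comparison falls out. Every figure and option name below is a hypothetical placeholder, not Playbook data.

```python
# Hypothetical comparison of treatment options for one AI risk scenario.
baseline_ale = 320_000  # assumed mean annualized loss exposure before treatment

options = {
    "Block public LLMs at the proxy":        {"residual_ale": 40_000,  "cost": 25_000},
    "DLP controls + approved GenAI gateway": {"residual_ale": 90_000,  "cost": 60_000},
    "Awareness training only":               {"residual_ale": 220_000, "cost": 15_000},
}

for name, o in options.items():
    benefit = baseline_ale - o["residual_ale"]  # expected loss avoided per year
    roi = (benefit - o["cost"]) / o["cost"]     # simple return on control spend
    print(f"{name}: avoided ${benefit:,}, cost ${o['cost']:,}, ROI {roi:.1f}x")
```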

As the Playbook concludes: "The purpose of this approach is to meet the business needs, not create additional obstacles to AI deployment."

More Resources on Quantitative Risk Analysis for GenAI from the FAIR Institute

It’s early days for managing AI-related cyber risk, but the FAIR community is already developing solid advice for applying the rigor of FAIR thinking to this new frontier of risk analysis. The FAIR Institute has been selected as a participant in the National Institute of Standards and Technology’s US AI Safety Institute Consortium (AISIC).

Some of our recent content:

Webinar: Intro to the NIST AI RMF

The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a good starting place to get your arms around the new threat and risk landscape – and our webinar with NIST’s Martin Stanley is a good introduction to the framework. FAIR practitioners, used to thinking in terms of risk scenarios and probabilistic outcomes, will find NIST’s advice on managing artificial intelligence risk familiar ground.

The Good News on AI Risk – We Can Analyze It with FAIR

In this video from a session at the 2023 FAIR Conference, two veteran FAIR analysts, Taylor Maze and Tyler Britton, relate the steps they took to launch FAIR analysis at Dropbox, starting with a general look at the problem space with the NIST AI RMF and working their way down to analyzable FAIR scenarios. Their advice: the FAIR principle of analyzing what’s a probable – not merely potential – cause of harm to your organization is even more important when we don’t yet know the full effects of AI.

Also see:

How to Get Started with FAIR Analysis for GenAI Risk

FAIR Cyber Risk Analysis for AI Part 1: Insider Threat and ChatGPT

Join us in exploring the AI frontier of cyber risk management – become a FAIR Institute member now.
