
Understanding Generative AI Risk with FAIR™

Measure GenAI Risk

GenAI will transform cyber risk management, creating new opportunities for attackers and defenders alike. The good news is that, with Factor Analysis of Information Risk (FAIR), we have a flexible tool to confront whatever novel risk scenarios come at us, but we will need to pivot to meet the challenge.

Learn more in this blog post: What’s New about Generative AI Risk?


The FAIR Institute and the Challenge of AI Risk Management

As the leading research organization developing standards and best practices for cyber and operational risk management, the FAIR Institute is dedicated to educating the profession on AI risk, under the guidance of our GenAI Work Group. Get an on-demand recorded briefing on the Work Group’s activities in this webinar: The Future of AI Risk Management

The Institute was invited by the National Institute of Standards and Technology (NIST) to join the US AI Safety Institute Consortium (AISIC) (learn more).

A FAIR Artificial Intelligence Cyber Risk Playbook

The FAIR Institute presents FAIR-AIR™, a FAIR-inspired approach to help you identify your AI-related loss exposure and make risk-based decisions on treating this new category of cyber risk.

Read the Playbook to learn how to:

1. Recognize the five vectors of GenAI risk

2. Identify risk scenarios for analysis

3. Quantify the probable frequency and magnitude of AI-related cyber loss events

4. Prioritize and treat AI risks

5. Compare treatment options for a final decision

 

Example FAIR Analysis Output 

“There is a 5% probability in the next year that employees will leak company-sensitive information via an open-source LLM (like ChatGPT), which would lead to $5 million in losses.”
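A statement like the one above combines an event probability with a loss-magnitude estimate. As a minimal illustration of how such figures can be simulated, here is a Monte Carlo sketch in Python. All parameter values (the 5% event probability, the loss range, the 90% confidence interval) are assumptions chosen to mirror the example; this is not the FAIR standard's calibration method, just one common way to turn frequency and magnitude estimates into an expected annual loss.

```python
import math
import random
import statistics

# Illustrative Monte Carlo quantification of a single risk scenario.
# Assumed inputs (not FAIR Institute calibrations):
#   - 5% chance of at least one leak event in the next year
#   - loss magnitude, if the event occurs, modeled as lognormal with a
#     90% confidence interval of roughly $1M to $9M
random.seed(42)

TRIALS = 100_000
P_EVENT = 0.05
LOSS_LOW, LOSS_HIGH = 1e6, 9e6

# Derive lognormal parameters so ~90% of magnitude draws fall in the CI
# (1.645 is the z-score bounding the central 90% of a normal distribution).
mu = (math.log(LOSS_LOW) + math.log(LOSS_HIGH)) / 2
sigma = (math.log(LOSS_HIGH) - math.log(LOSS_LOW)) / (2 * 1.645)

losses = []
for _ in range(TRIALS):
    if random.random() < P_EVENT:  # does the event occur this year?
        losses.append(random.lognormvariate(mu, sigma))
    else:
        losses.append(0.0)

expected_annual_loss = statistics.fmean(losses)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
```

With these assumed inputs, the simulation lands near $190,000 of expected annual loss: the full $5M-scale impact is possible, but weighting it by its 5% probability yields a much smaller annualized figure, which is what makes scenarios comparable for prioritization.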

Download the Playbook

Where to Start with FAIR AI Risk Analysis

Watch the FAIRCON23 presentation: Taylor Maze and Tyler Britton of Dropbox on a FAIR Analysis of AI Risk.

More Resources for AI Risk Management

The NIST AI RMF is a good starting point to wrap your head around the AI risk landscape. If you’re a FAIR practitioner used to thinking in terms of risk scenarios and probabilistic outcomes, you’re already in tune with NIST’s advice on managing artificial intelligence risk.

Watch a FAIR Institute Video: Intro to the NIST AI RMF - with Martin Stanley of NIST and CISA

Together with our technical adviser Safe Security, the FAIR Institute created a GenAI Risk Platform that includes:

  • A GenAI Risk Scenario Library
  • GenAI Controls Library, based on the FAIR Controls Analytics Model (FAIR-CAM™)
  • A GenAI Index to rate SaaS players for security features