FAIR Institute Joins NIST’s US AI Safety Institute Consortium (AISIC)

The FAIR Institute, the leading center for research on quantitative cyber risk management, has been selected as a participant in the National Institute of Standards and Technology’s (NIST) US AI Safety Institute Consortium (AISIC).

The Consortium’s goal is “to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote development of trustworthy Artificial Intelligence (AI).”

The Consortium will advise on the continued development of the NIST Artificial Intelligence Risk Management Framework (AI RMF), released on January 26, 2023, with a companion AI RMF Playbook.

“We’re very much looking forward to helping NIST in this effort to get ahead of the risks and opportunities of AI,” said Pankaj Goyal, Director, Standards and Research, for the FAIR Institute. “We believe that the scientific, risk-based approach of FAIR offers a path forward to the risk management profession as it moves into the AI era.”

FAIR Analysis for AI-related Risk

FAIR™ is particularly well-suited for analyzing artificial intelligence-related risk scenarios. That was the takeaway from a recent FAIR Institute webinar with Martin Stanley, who leads the research and development program at the Cybersecurity and Infrastructure Security Agency (CISA/DHS) and is on loan to NIST to advance the AI RMF.

Watch the webinar: Intro to the NIST AI RMF (FAIR Followers Have a Head Start)

Two of Stanley’s key points:

  • “Trying to bucketize risks and measure them in a specific way in every context of use is probably more likely to cause you to miss impact than anything else.” 
  • “In cybersecurity, we’re used to fixed measurements and standard outputs. That’s out the window when it comes to AI systems that are probabilistic in nature…with different outputs on similar queries.” 

With its emphasis on 1) scenario-based analysis, a highly flexible approach to risk, and 2) expressing analysis results as probabilistic ranges, the FAIR model can readily be adapted to AI-related risk.
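To illustrate what probabilistic ranges look like in practice, here is a minimal Python sketch of a FAIR-style Monte Carlo estimate for a single hypothetical AI risk scenario. The scenario, the triangular distributions, and every parameter value are assumptions chosen for demonstration only; they are not FAIR Institute figures or part of the FAIR-AIR Playbook.

```python
import random

# Illustrative parameters for a hypothetical scenario (e.g., sensitive data
# exposed through prompts sent to a hosted LLM). These ranges are assumptions
# for demonstration only, not FAIR Institute estimates.
LEF_MIN, LEF_MODE, LEF_MAX = 0.1, 0.5, 2.0                # loss events per year
MAG_MIN, MAG_MODE, MAG_MAX = 50_000, 250_000, 2_000_000   # loss per event, USD

def simulate_annual_loss(trials: int = 100_000) -> list[float]:
    """Monte Carlo draws of annualized loss exposure.

    Simplified: frequency draw times per-event loss draw; a fuller FAIR model
    would draw a discrete event count for each simulated year.
    """
    losses = []
    for _ in range(trials):
        lef = random.triangular(LEF_MIN, LEF_MAX, LEF_MODE)        # frequency
        magnitude = random.triangular(MAG_MIN, MAG_MAX, MAG_MODE)  # impact
        losses.append(lef * magnitude)
    return losses

losses = sorted(simulate_annual_loss())
pct = lambda q: losses[int(q * len(losses)) - 1]
print(f"Annualized loss exposure: 10th pct ${pct(0.10):,.0f} | "
      f"median ${pct(0.50):,.0f} | 90th pct ${pct(0.90):,.0f}")
```

Reporting the result as a range of percentiles, rather than a single fixed score, is what lets the analysis accommodate the probabilistic behavior of AI systems that Stanley describes.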

The Institute recently released the FAIR-AIR Approach Playbook to “ensure AI adoption is handled in a risk-based, secure way by using the same language as the business, based on analysis with the proven FAIR methodology for cyber risk quantification.”

The FAIR-AIR Approach offers five steps to the safe deployment of generative AI (a minimal illustrative sketch of steps 3 and 4 follows the list):

  1. Contextualize – understand the vectors of AI risk (such as hosting a Large Language Model or defending against adversaries leveraging an LLM)
  2. Scope – identify risk scenarios for FAIR analysis
  3. Quantify – estimate risk in terms of loss event frequency and magnitude of impact
  4. Prioritize – rank risks based on probable frequency and magnitude
  5. Decide – select risk treatments based on the quantified results
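As a hedged illustration of steps 3 and 4, the sketch below quantifies a few GenAI risk scenarios with simple triangular ranges and then prioritizes them by simulated annualized loss exposure. The scenario names, distributions, and numbers are hypothetical, invented for demonstration, not content from the FAIR-AIR Playbook.

```python
import random

# Hypothetical GenAI risk scenarios with assumed FAIR-style ranges:
# loss event frequency (min, mode, max events/year) and
# loss magnitude (min, mode, max USD per event). Values are illustrative only.
SCENARIOS = {
    "Sensitive data exposed in LLM prompts":    ((0.5, 2.0, 6.0),  (10_000, 100_000, 1_000_000)),
    "Adversary uses LLM for targeted phishing": ((1.0, 4.0, 12.0), (5_000, 50_000, 500_000)),
    "Hallucinated output in a customer tool":   ((0.2, 1.0, 3.0),  (20_000, 150_000, 800_000)),
}

def mean_annual_loss(freq, mag, trials=50_000):
    """Average simulated annualized loss: frequency draw x magnitude draw."""
    total = 0.0
    for _ in range(trials):
        lef = random.triangular(freq[0], freq[2], freq[1])
        loss = random.triangular(mag[0], mag[2], mag[1])
        total += lef * loss
    return total / trials

# Step 3 (Quantify) then Step 4 (Prioritize): rank scenarios by exposure.
results = {name: mean_annual_loss(freq, mag) for name, (freq, mag) in SCENARIOS.items()}
for rank, (name, ale) in enumerate(sorted(results.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank}. {name}: ~${ale:,.0f}/year")
```

In practice, the ranked output would feed step 5, where treatment options are weighed against the quantified exposure for each scenario.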

Case Study: Applying FAIR and the NIST AI RMF at Dropbox 

At the 2023 FAIR Conference (FAIRCON23), FAIR Institute members Taylor Maze and Tyler Britton of Dropbox described how they implemented FAIR analysis for artificial intelligence risk scenarios, starting with the NIST AI RMF.

Watch a video of the Dropbox presentation: 

Quantifying Multi-Product Security and Privacy AI Risk with FAIR and NIST AI RMF

“AI is one of the buzzwords that sounds really scary,” Taylor said. “But what we learned along the way is, it’s really not that new or special.”

More Resources for AI Risk Management

Together with our technical adviser Safe Security, the FAIR Institute created a GenAI Risk Platform that includes:

  • A GenAI Risk Scenario Library

  • A FAIR-CAM-Based GenAI Control Library

  • A GenAI Index that rates SaaS providers on security features

Join the FAIR Community! Become a FAIR Institute member now.
