Webinar: FAIR Institute Kicks off Research on FAIR for AI Risk Management


The FAIR Institute’s AI Workgroup has launched an ambitious agenda to explore the young field of risk management for artificial intelligence, particularly generative AI (GenAI). Workgroup members reported to the FAIR community via a recent webinar:


The Future of AI Risk Management: A Deep Dive with the FAIR Institute AI Workgroup

Jacqueline Lebo, Risk Advisory Manager, Safe Security

Arun Pamulapati, Sr. Staff Security Field Engineer, Databricks

Brandon Sloane, Security Risk Management Lead, Meta

Watch the AI Risk Management Webinar on Demand Now


Goals of the AI Workgroup:

  1. To enable risk-informed decision making in GenAI with the right standards, frameworks, tools and expertise.
  2. To collaborate with industry bodies like NIST, MITRE, and the EU to complement their work with risk-driven thinking.
  3. To educate and support the FAIR community on this topic.

Workgroup members gave a preview of some of the issues facing FAIR practitioners who are working to bring quantitative analysis to an often strange new world. Arun noted that in the Databricks AI Security Framework (DASF), of 55 cataloged risks, 35 were novel, and the rest were variants of familiar cyber risks.
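To make the quantitative angle concrete, here is a minimal FAIR-style Monte Carlo sketch for a single hypothetical GenAI loss scenario. The scenario name and all distribution parameters are invented for illustration and are not figures from the webinar or the DASF; a real analysis would use calibrated estimates from your own scenario scoping.

```python
import numpy as np

# Minimal FAIR-style Monte Carlo for one hypothetical GenAI scenario:
# "sensitive data disclosure via an LLM-powered support chatbot".
# All parameters are illustrative assumptions, not webinar or DASF figures.
rng = np.random.default_rng(seed=7)
N = 100_000  # simulation trials

# Loss Event Frequency (events/year): lognormal with a median of ~2/year.
lef = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=N)

# Loss Magnitude per event (USD): lognormal with a median of ~$150k.
lm = rng.lognormal(mean=np.log(150_000), sigma=1.0, size=N)

# Annualized loss exposure per trial (simplified as frequency x magnitude).
annual_loss = lef * lm

for p in (50, 90, 95):
    print(f"P{p} annualized loss: ${np.percentile(annual_loss, p):,.0f}")
```

The mechanics are the same whether the scenario is one of the DASF's novel risks or a familiar cyber risk; only the scoping and the estimates change.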

Regulation is moving fast

Regulatory agencies have come down surprisingly hard in some artificial intelligence cases, and the webinar explores several examples. Drugstore chain Rite Aid was banned from using AI-based facial recognition for five years by the US Federal Trade Commission after its implementation was found to disproportionately flag people of color and women as shoplifters.

Your AI product can be a moving target

Unlike traditional software, a GenAI application can “drift”, essentially morphing as a result of its own feedback loop, so risk management must include ongoing audits of model performance.
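As one illustration of what such an audit might watch, the sketch below (an assumption of this post, not something presented in the webinar) uses a two-sample Kolmogorov-Smirnov test to flag when a recent window of a scalar output metric has drifted away from a baseline window. The metric, window sizes, and significance level are all placeholder choices.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(baseline: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag distribution drift between a baseline and a recent window of a
    scalar model-output metric (e.g. refusal rate or response length)."""
    stat, p_value = ks_2samp(baseline, recent)
    return p_value < alpha

# Illustrative data: the recent window has shifted relative to the baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.20, scale=0.05, size=5_000)
recent = rng.normal(loc=0.28, scale=0.05, size=1_000)

if drifted(baseline, recent):
    print("Drift detected: schedule a model performance audit.")
```

A production setup would track several metrics over rolling windows; the point is simply that drift is measurable and can trigger the ongoing audits described above.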

Materiality…It’s complicated

With increasing emphasis on material risk disclosure by regulators, you should think this one through. As Brandon points out, the risk could be:

Social / Societal - does the model exacerbate societal problems?

Privacy - does the model expose data it shouldn’t?

Security - does the model enable adversarial activities?

And the risk could flow in many directions: to the company building the model, the company hosting the model, the users of the model, or society at large.

As Jacqueline summed up the approach to take toward AI risk management, “It’s organization by organization depending on what you do or your product. Make sure your data governance team, privacy team and cybersecurity team are in the loop and working together to define acceptable uses and what the risk threshold definitions are for the use cases you are looking to deploy. There’s not a one-size fits all.”
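One lightweight way to act on that advice is to record, per use case, exactly what those teams agreed to. Below is a hypothetical record format (every name and threshold here is invented for illustration, not taken from the webinar) tying acceptable uses and a quantitative risk threshold to a deployment:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseRiskPolicy:
    """Hypothetical record of what data governance, privacy, and security
    agreed on for one GenAI use case. All fields are illustrative."""
    use_case: str
    acceptable_uses: list[str]
    max_annualized_loss_usd: float  # quantitative, FAIR-style threshold
    requires_human_review: bool = True
    reviewers: list[str] = field(
        default_factory=lambda: ["data-governance", "privacy", "security"]
    )

def within_appetite(policy: UseCaseRiskPolicy, p90_loss_usd: float) -> bool:
    # Compare a simulated P90 annualized loss against the agreed threshold.
    return p90_loss_usd <= policy.max_annualized_loss_usd

chatbot = UseCaseRiskPolicy(
    use_case="customer-support-chatbot",
    acceptable_uses=["order status", "FAQ answers"],
    max_annualized_loss_usd=250_000,
)
print(within_appetite(chatbot, p90_loss_usd=180_000))  # True
```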

More on AI from the FAIR Institute

A FAIR Artificial Intelligence (AI) Cyber Risk Playbook

Blog Post: What’s New about Generative AI Risk?

FAIRCON23 Session Video: Launching a FAIR Risk Management Program for AI at Dropbox

Learning Opportunity for FAIR AI Risk Management: Attend the 2024 FAIR Conference
