The FAIR Institute’s AI Workgroup has launched an ambitious agenda to explore the young field of risk management for artificial intelligence, particularly generative AI (GenAI). Workgroup members reported to the FAIR community via a recent webinar:
The Future of AI Risk Management: A Deep Dive with the FAIR Institute AI Workgroup
Jacqueline Lebo, Risk Advisory Manager, Safe Security
Arun Pamulapati, Sr. Staff Security Field Engineer, Databricks
Brandon Sloane, Security Risk Management Lead, Meta
Watch the AI Risk Management Webinar on Demand Now
Regulation is moving in fast
Unlike traditional software, a GenAI application can "drift," gradually changing behavior as a result of its own feedback loop. Risk management must therefore include ongoing audits of model performance.
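To make the audit idea concrete, here is a minimal sketch of one way to do it: re-score the model on a fixed evaluation set on a schedule, and flag it when quality degrades past a tolerance versus a recorded baseline. Everything here, the scorer, the eval set, and the 5-point tolerance, is an invented placeholder for illustration, not a method prescribed by the workgroup.

```python
"""Minimal drift-audit sketch (illustrative only).

Idea: keep a fixed evaluation set, re-score the model on a schedule,
and alert when quality drops past a tolerance versus the baseline.
"""
from statistics import mean

DRIFT_TOLERANCE = 0.05  # assumed: alert on a >5-point quality drop vs. baseline


def audit_for_drift(score_fn, eval_set, baseline):
    """Re-score a fixed eval set and flag degradation beyond tolerance.

    score_fn: callable mapping one eval item to a 0-1 quality score
              (in practice, a task-specific grader or human review).
    """
    current = mean(score_fn(item) for item in eval_set)
    return {
        "baseline": baseline,
        "current": round(current, 3),
        "drifted": (baseline - current) > DRIFT_TOLERANCE,
    }


if __name__ == "__main__":
    # Stand-in scorer and eval set, for demonstration purposes only.
    eval_set = [{"prompt": "q1", "expected": "a1"},
                {"prompt": "q2", "expected": "a2"}]
    todays_scorer = lambda item: 0.82  # pretend today's per-item quality score
    print(audit_for_drift(todays_scorer, eval_set, baseline=0.90))
    # -> {'baseline': 0.9, 'current': 0.82, 'drifted': True}
```

In practice the stand-in scorer would be replaced by a real grading pipeline, and a True `drifted` flag would feed the same risk workflow as any other control failure.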
Materiality…It’s complicated
With increasing emphasis on material risk disclosure by regulators, you should think this one through. As Brandon points out, the risk could be:
Social / Societal - does the model exacerbate societal problems?
Privacy - does the model expose data it shouldn’t?
Security - does the model enable adversarial activities?
And the risk could flow in many directions: to the company building the model, the company hosting it, the users of the model, or society at large.
As Jacqueline summed up the approach to take toward AI risk management: "It's organization by organization, depending on what you do or your product. Make sure your data governance team, privacy team, and cybersecurity team are in the loop and working together to define acceptable uses and what the risk threshold definitions are for the use cases you are looking to deploy. There's not a one-size-fits-all."
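For teams working out those risk thresholds quantitatively, the FAIR model frames risk as loss event frequency combined with loss magnitude, typically estimated with Monte Carlo simulation. The sketch below applies that framing to a single GenAI use case; the frequency and magnitude ranges are invented placeholders a team would replace with its own calibrated estimates.

```python
"""Minimal FAIR-style Monte Carlo sketch for one GenAI use case.

FAIR quantifies risk as Loss Event Frequency (events/year) combined
with Loss Magnitude (loss per event). All ranges below are invented
placeholders, not calibrated estimates.
"""
import random

SIMULATIONS = 10_000


def simulate_annual_loss(freq_range=(0.5, 4.0),
                         magnitude_range=(50_000, 500_000)):
    """One simulated year: draw an event count, then sum per-event losses."""
    events = round(random.uniform(*freq_range))
    return sum(random.uniform(*magnitude_range) for _ in range(events))


def annualized_loss_exposure(runs=SIMULATIONS):
    """Summarize the simulated distribution of annual losses."""
    losses = sorted(simulate_annual_loss() for _ in range(runs))
    return {
        "mean": sum(losses) / runs,
        "p90": losses[int(runs * 0.90)],  # 90th-percentile annual loss
    }


if __name__ == "__main__":
    results = annualized_loss_exposure()
    print(f"Mean annual loss exposure: ${results['mean']:,.0f}")
    print(f"90th percentile:           ${results['p90']:,.0f}")
    # Compare these figures against the organization's stated risk
    # threshold to decide whether the use case is within appetite.
```

Comparing the simulated annualized loss exposure to a stated threshold gives each use case a concrete within-appetite or out-of-appetite answer, rather than a qualitative guess.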
More on AI from the FAIR Institute
A FAIR Artificial Intelligence (AI) Cyber Risk Playbook
Blog Post: What’s New about Generative AI Risk?
FAIRCON23 Session Video: Launching a FAIR Risk Management Program for AI at Dropbox
Learning Opportunity for FAIR AI Risk Management: Attend the 2024 FAIR Conference