FAIRCON24: Learn Cyber Risk Management for AI


Join us at the 2024 FAIR Conference in Washington, DC (training September 29-30, sessions October 1-2), to explore the challenging new world of risk analysis for artificial intelligence.

The good news is we can apply our trusty techniques of FAIR (Factor Analysis of Information Risk) to do quantitative analysis for AI, just with new threat vectors, new risk scenarios, new loss categories – in other words, it’s the same but completely different.  

FAIR Conference 2024 Will Present these AI-Related Events:

 


Workshop: Mastering AI Governance and Risk Management

1:00-5:00 PM, Monday, September 30

Leaders from the FAIR Institute AI Workgroup:  

>>Jacqueline Lebo, Risk Advisory Manager, Safe Security

>>Arun Pamulapati, Sr. Staff Security Field Engineer, Databricks

Panel Discussion: Accelerating AI - Achieving the Right Balance Between Speed and Security

11:15 AM - 12:00 PM, Wednesday, October 2

Moderator:

>>Pankaj Goyal, Director, Research and Standards, FAIR Institute

Speakers:

>>Randy Herold, CISO, ManpowerGroup

>>Oki Mek, CISO, U.S. Federal Civilian Government, Microsoft

>>Michelle Griffith, VP, Security GRC, IHG

FAIR Inst. Workgroup Presentation: Navigating the Complexities of Assessing and Managing AI Risk

1:30-2:10 PM, Wednesday, October 2

Moderator:

>>Omar Khawaja, CISO, Databricks

Speakers:

>>Arun Pamulapati, Sr. Staff Security Field Engineer, Databricks

>>Jacqueline Lebo, Risk Advisory Manager, Safe Security

Register for the 2024 FAIR Conference!

FAIR Conference 2024 AI Workshop Preview

Consider starting your FAIRCON AI journey with the pre-conference workshop. Here’s a brief preview of just some of the topics the workshop will cover:

5 Steps to Quantification: Introducing FAIR-AIR

The FAIR Institute’s Artificial Intelligence Workgroup is doing truly pioneering work on AI-related risk, and has produced the FAIR-AIR Approach Playbook, “Using a FAIR-Based Risk Approach to Expedite AI Adoption at Your Organization,” a great starting point for understanding the risk dimensions of AI with the discipline of FAIR.

The workshop will introduce the playbook and walk you through its 5-step approach:

(Image: FAIR AI Playbook - Steps to Risk Analysis)

 

5 Threat Vectors for Artificial Intelligence

Here’s an example of how cyber risk analysis must adapt to the new architecture of large language models (LLMs) and generative AI (Gen AI): only the first vector, an external attack, will sound familiar; the others could be seen as new forms of insider or third-party risk.

  1. Active Cyber Attack: Your adversaries are using LLMs to attack you.
  2. Shadow Gen AI: You are using Generative AI, and you just don't know it.
  3. Foundational LLM: You are building LLM(s) for use cases.
  4. Hosting on LLMs: You are hosting an LLM and using it to develop use cases.
  5. Managed LLMs: You are using a third-party LLM to develop use cases.

5 New Gen AI Scenarios

Risk scenarios are the raw material of FAIR analyses. The workshop will go over these new forms of scenarios, and the challenges of quantification. For instance, what’s the bottom-line impact of a hallucination?

  1. Prompt injection: A direct prompt injection occurs when a user injects text that is intended to alter the behavior of a large language model (LLM). 
  2. Model theft: Stealing a system’s knowledge through direct observation of its inputs and outputs, akin to reverse engineering.
  3. Data leakage: AI models leak data used to train the model and/or disclose information meant to be kept confidential.
  4. Hallucinations: The model inadvertently generates incorrect, misleading or factually false outputs, or leaks sensitive data.
  5. Insider incidents related to Gen AI: For instance, insiders using unmanaged LLMs with sensitive data.
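To make the quantification challenge concrete, here is a minimal Monte Carlo sketch of the FAIR-style math for pricing one such scenario. It is an illustration only, not the workshop's material: the scenario choice (data leakage through an unmanaged LLM) and every parameter below are made-up assumptions, not calibrated estimates.

```python
import math
import random
import statistics

random.seed(42)  # reproducible runs

def poisson(lam):
    """Knuth's algorithm: sample an event count for a given annual rate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_losses(n_years=10_000):
    """Simulate annual losses for one hypothetical Gen AI scenario,
    e.g. data leakage through an unmanaged LLM.
    All distribution parameters are illustrative assumptions."""
    losses = []
    for _ in range(n_years):
        # Loss Event Frequency (events/year): min / max / most likely
        lef = random.triangular(0.1, 4.0, 1.0)
        # Loss Magnitude per event: lognormal with a ~$50k median
        annual = sum(random.lognormvariate(math.log(50_000), 1.0)
                     for _ in range(poisson(lef)))
        losses.append(annual)
    return losses

losses = simulate_annual_losses()
ale = statistics.mean(losses)                 # Annualized Loss Exposure
p90 = statistics.quantiles(losses, n=10)[-1]  # 90th percentile year
print(f"ALE: ${ale:,.0f}  90th-percentile annual loss: ${p90:,.0f}")
```

Replacing the toy frequency and magnitude numbers with calibrated estimates for a hallucination or prompt-injection scenario is exactly the hard part the workshop addresses.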

7 Questions to Ask before Onboarding an AI system

AI-powered applications are on their way to being commonplace, both customer-facing and for internal use; the workshop will discuss the pros and cons of various ways to implement.

  1. Will we implement this solution?
  2. Will we need additional cyber insurance?
  3. Will we need additional controls?
  4. Should we buy an enterprise Generative AI solution?
  5. Should we use a third-party solution, or should we use a homegrown solution?
  6. Should we launch this app now, or continue testing and tuning safety rates for inputs?
  7. Does this model meet our risk tolerance, or should we use another one?

To attend the workshop Mastering AI Governance and Risk Management, register now for the 2024 FAIR Conference!

 

Learn How FAIR Can Help You Make Better Business Decisions

Order today