FAIRCON25: Hands-on Training in AI Risk Management with FAIR


The 2025 FAIR Conference (FAIRCON25), the premier event for professionals in quantitative cyber risk management, takes a bold step into the future of AI risk management, with our theme “Resetting Cyber Risk in the Age of AI.”
Among the many AI-related sessions at the conference are three groundbreaking training opportunities designed to equip risk and security managers and CISOs with the tools and frameworks needed to navigate the complex landscape of AI security and governance.
FAIRCON25
Theme: Resetting Cyber Risk in the Age of AI
Training Days: Nov. 2-3
Conference Sessions: Nov. 4-5
Venue: The Glasshouse, 660 12th Ave., New York City
Why Does the 2025 FAIR Conference Stand Out in AI Risk Training?
While the cybersecurity world is overflowing with training courses on AI threats, FAIRCON25 offers a unique value: a foundation in rigorous cyber risk quantification using the FAIR standard (Factor Analysis of Information Risk). This approach ensures that participants not only understand the risks but also learn how to measure and manage them effectively.
Let’s explore the three standout courses that make this conference a must-attend for AI risk professionals.
1. AI Red Teaming and Risk Analysis
This comprehensive workshop is a hands-on exploration of the security challenges posed by AI systems, with a particular focus on Large Language Models (LLMs). Over two days, participants will delve into:
- AI security fundamentals and LLM attack techniques.
- Understanding AI supply chain vulnerabilities and securing AI in cloud environments.
- AI threat modeling and securing DevOps pipelines.
- AI governance, compliance, and real-world use cases.
Who Should Attend?
This workshop is tailored for security professionals, penetration testers, red teamers, and developers building AI-powered applications. Whether you’re a seasoned expert or new to AI, the course starts with foundational concepts, making it accessible to a wide audience.
What Will You Learn?
Participants will gain practical skills to:
- Identify and assess vulnerabilities in AI systems.
- Test and secure AI applications against real-world attack vectors.
- Bridge the gap between traditional red teaming and AI risk assessment.
With over 15 hands-on labs, attendees will leave with actionable insights and methodologies they can apply immediately in their organizations.
2. AI Risk Management for CISOs: A FAIR-Inspired Framework for Governing Responsible AI Adoption
As AI adoption accelerates, governance and security leaders often find themselves at odds with business and data teams. This workshop, exclusively for CISOs and led by Omar Khawaja, Field CISO at Databricks, introduces the Databricks AI Security Framework (DASF) to bridge this divide.
Who Should Attend?
This session is designed for CISOs and governance leaders who want to confidently manage AI risks while enabling their organizations to innovate responsibly. By the end of the workshop, participants will have a clear roadmap for aligning AI initiatives with robust security and governance practices.
What Will You Learn?
- Understand the 12 components of a modern AI system and their interactions.
- Identify 62 risks and threats across AI components and map them to 64 actionable controls.
- Learn how to mitigate both technical and organizational risks of AI adoption.
3. AI Governance Accelerated: Quantifying and Controlling Risk for Enterprises in a Generative AI World
Enterprises are grappling with fragmented governance, opaque AI systems, unquantified financial exposure, and a bewildering array of emerging regulations. Led by AI risk management pioneers from SAFE and Databricks, this intensive four-hour workshop provides a concrete, risk-based roadmap for confidently deploying and governing AI, with an emphasis on practical, quantifiable strategies that re-center your entire AI governance strategy.
Who Should Attend?
This workshop is designed for both strategic leaders and hands-on practitioners directly engaged with AI governance and risk, with a critical focus on risk management, cybersecurity, and data privacy. Chief Information Security Officers, Chief Risk Officers, Chief AI Officers, and legal/compliance leaders will gain the strategic, quantifiable roadmap needed to establish enterprise-wide controls and measure financial exposure. Risk analysts, compliance consultants, data scientists, internal audit professionals, and especially cybersecurity and privacy teams will receive the concrete, practical tools and methodologies required to translate policy into actionable, auditable governance and to protect sensitive data. Attend to move past fragmented strategies and gain a unified, risk-based framework for confident, compliant Generative AI deployment.
What Will You Learn?
- Critical components of a robust AI governance framework, clarifying roles, responsibilities, and decision-making processes across the entire AI lifecycle.
- How the FAIR methodology can be adapted to quantify AI-specific risks in financial terms (see the brief sketch after this list).
- How to create an end-to-end risk profile for your AI deployments and implement continuous monitoring and auditing of AI systems for performance, fairness, and security.
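For readers new to FAIR, here is a minimal, hypothetical sketch (not course material; every parameter range below is an assumption invented for illustration) of what quantifying an AI risk scenario in financial terms can look like: simulating the two top-level FAIR factors, Loss Event Frequency and Loss Magnitude, and reporting annualized loss exposure.

```python
# Hypothetical FAIR-style sketch for an AI loss scenario
# (e.g., sensitive data exposed through an LLM-powered assistant).
# All ranges are illustrative assumptions, not figures from the workshop.
import random

SIMULATIONS = 10_000

def simulate_annual_loss():
    # Loss Event Frequency (LEF): assumed 0.1 to 2 loss events per year.
    lef = random.uniform(0.1, 2.0)
    # Loss Magnitude (LM): assumed $50k to $1.5M per event, most likely ~$250k.
    lm = random.triangular(50_000, 1_500_000, 250_000)
    return lef * lm

losses = sorted(simulate_annual_loss() for _ in range(SIMULATIONS))

print(f"Mean annualized loss exposure: ${sum(losses) / SIMULATIONS:,.0f}")
print(f"90th percentile loss exposure: ${losses[int(0.9 * SIMULATIONS)]:,.0f}")
```

The point is the shape of the analysis rather than the numbers: FAIR forces explicit, defensible estimates of how often an AI-related loss event occurs and how much each event costs, so AI risk can be prioritized and communicated in dollars.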
Don’t Miss Out on FAIRCON25
FAIRCON25 isn’t just another conference—it’s a unique opportunity to gain practical, quantifiable insights into AI risk management. Whether you’re securing AI systems or governing their adoption, these training sessions provide the tools and frameworks you need to succeed.
When and Where?
- AI Red Teaming and Risk Analysis: Nov. 2–3, 8:00 AM–5:00 PM
- AI Risk Management for CISOs: Nov. 4, 1:30 PM–3:30 PM (by invitation only)
- AI Governance Accelerated: Nov. 3, 1:00 PM–5:00 PM
Learn More: FAIRCON25 Will Be the Biggest Event of the Year for AI Risk Management
Ready to take your AI risk management skills to the next level? Register today and secure your spot at FAIRCON25.