The FAIR Institute Blog

AI in Cybersecurity: Governance and Risk Quantification in the Boardroom

Written by Nicola (Nick) Sanna | Mar 18, 2025 4:19:37 PM

AI: A Friend and a Foe in Cybersecurity 

Artificial intelligence (AI) is both an opportunity and a challenge for modern cybersecurity. The new “AI in Cybersecurity” Special Supplement to the NACD/ISA Handbook on Cyber Risk Oversight, developed in partnership with the Internet Security Alliance (ISA) and the National Association of Corporate Directors (NACD), presents a comprehensive discussion of how AI is shaping the cybersecurity landscape.


Nicola (Nick) Sanna is Founder of the FAIR Institute


From the FAIR Institute’s perspective, this report underscores the need for quantitative risk management in AI-driven cybersecurity environments. AI can act as a force multiplier for cyber defense, but it also lowers the barrier for cybercriminals. Boards must approach AI risk with a measured, data-driven framework—one that prioritizes economic risk quantification, operational resilience, and governance best practices.  

Two members of the FAIR Institute’s board, Omar Khawaja and Nick Sanna, were among the contributors to this Special AI Supplement. Their expertise reinforces the importance of integrating quantitative cyber risk management principles—such as those promoted by the FAIR™ framework for cyber risk management—into AI oversight at the board level.  

Key Takeaways for Risk Quantification and Cyber Oversight  

1.  AI’s Dual Role in Cybersecurity: Risk and Defense  

AI is transforming cybersecurity in two fundamental ways:  

  • Cyber Defense Enhancement: AI can reduce false positives, detect threats earlier, and optimize security operations by analyzing vast amounts of data.  
  • Cyber Risk Amplification: AI empowers adversaries by automating sophisticated phishing attacks, generating deepfakes, and optimizing malware strategies.  

To navigate this landscape, boards must evaluate AI risk in economic terms—quantifying its potential financial impact rather than relying on vague, qualitative assessments. 
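Quantifying risk in economic terms, as described above, typically means estimating a distribution of annual loss rather than a single number. The following is a minimal Monte Carlo sketch in the spirit of FAIR-style analysis: it draws Loss Event Frequency (LEF) and Loss Magnitude (LM) from ranges and reports the mean and 90th-percentile annualized loss. The scenario, input ranges, and use of uniform distributions are illustrative assumptions only; real FAIR analyses use calibrated estimates and distributions such as PERT or lognormal.

```python
import random

def simulate_ale(lef_min, lef_max, lm_min, lm_max, trials=10_000, seed=42):
    """Monte Carlo sketch of annualized loss exposure (ALE).

    LEF = loss events per year; LM = dollars lost per event.
    Uniform ranges are a simplifying assumption for illustration;
    FAIR practitioners typically use calibrated PERT/lognormal inputs.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        lef = rng.uniform(lef_min, lef_max)   # events per year
        lm = rng.uniform(lm_min, lm_max)      # dollars per event
        losses.append(lef * lm)               # simulated annual loss
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "p90": losses[int(trials * 0.90)],    # 90th-percentile annual loss
    }

# Hypothetical AI-enabled phishing scenario: 0.5-4 events/year,
# $50k-$500k per event (illustrative numbers, not benchmarks).
result = simulate_ale(0.5, 4.0, 50_000, 500_000)
```

Reporting a mean alongside a tail percentile gives directors a loss-exceedance view ("we expect roughly $X per year, with a 10% chance of exceeding $Y"), which is more decision-useful than a qualitative "high/medium/low" rating.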

2.  AI and Cybersecurity Oversight: A Boardroom Priority  

Boards cannot afford to treat AI as just another technological trend. Instead, they must:  

  • Adopt a risk-based governance approach: AI risks should be measured using approaches like FAIR-AIR™ to assess economic exposure.  
  • Embed cybersecurity into enterprise risk management: AI should not be viewed as a standalone issue but rather as an integral part of business risk oversight.  
  • Require AI-specific risk disclosures: Just as cybersecurity risks are disclosed in regulatory filings, boards must ensure that AI-related risks are explicitly accounted for.  

3.  The Regulatory and Compliance Landscape

AI regulation is evolving rapidly. The handbook outlines:  

  • The patchwork of emerging AI regulations (e.g., the EU AI Act, NIST AI Risk Management Framework).  
  • Regulatory disclosure expectations—boards must proactively assess how AI risks impact shareholder value and compliance requirements.  
  • The importance of a structured AI governance program, ensuring transparency, accountability, and compliance with industry best practices.  

To avoid regulatory penalties, boards should apply structured risk models that integrate AI compliance into existing risk assessment frameworks.  

4.  AI Readiness: A Call for Board-Level Action 

The report urges boards to take a proactive approach to AI risk oversight. Key recommendations include:  

  • Assessing board AI expertise—do directors have the knowledge needed to oversee AI risks effectively?  
  • Establishing AI governance structure—should boards create dedicated AI committees or integrate AI oversight into existing risk committees?  
  • Aligning AI with corporate cybersecurity frameworks—how does AI fit within the broader cyber risk quantification strategy?  

Boards must engage third-party AI risk experts and conduct independent risk assessments to ensure their organizations are AI-ready.  

5.  Critical Boardroom Questions on AI and Cybersecurity 

To guide board discussions, the handbook provides a question framework for directors. From a FAIR Institute perspective, key questions include:  

  • What is our AI risk exposure, and how is it quantified in financial terms?
  • Are we considering AI risks in the context of our broader cybersecurity risk quantification strategy?  
  • How does AI adoption impact our cyber insurance coverage and liability?
  • Do we have an AI risk governance structure that aligns with regulatory expectations?  

Final Thoughts: The FAIR Approach to AI Cyber Risk 

AI is a strategic asset, but it also introduces new risk variables that demand rigorous, quantifiable oversight. The NACD/ISA AI in Cybersecurity Supplement provides a valuable blueprint for boards, but true AI risk management requires more than qualitative assessments.  

At the FAIR Institute, we advocate for an explicit, financially driven, quantitative approach to cyber risk management. By integrating AI into structured risk analysis frameworks such as FAIR, organizations can balance AI innovation with robust governance—ensuring AI acts as an enabler of security rather than a source of unchecked risk. 

Join the FAIR Institute (General Membership Is Free)