Artificial intelligence (AI) is both an opportunity and a challenge for modern cybersecurity. The new “AI in Cybersecurity” Special Supplement to the NACD/ISA Handbook on Cyber Risk Oversight, developed in partnership with the Internet Security Alliance (ISA) and the National Association of Corporate Directors (NACD), presents a comprehensive discussion of how AI is shaping the cybersecurity landscape.
Two members of the FAIR Institute’s board, Omar Khawaja and Nick Sanna, were among the contributors to this Special AI Supplement. Their involvement reinforces the importance of integrating quantitative cyber risk management principles, such as those codified in the FAIR™ framework, into AI oversight at the board level.
1. AI’s Dual Role in Cybersecurity: Risk and Defense
AI is transforming cybersecurity in two fundamental ways: it gives attackers new tools to scale and sharpen their campaigns, and it strengthens defenders’ ability to detect and respond to threats.
To navigate this landscape, boards must evaluate AI risk in economic terms—quantifying its potential financial impact rather than relying on vague, qualitative assessments.
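For illustration, this kind of economic evaluation can be as simple as a FAIR-style Monte Carlo simulation, in which a single AI-related loss scenario (for example, sensitive data exposed through an employee’s use of a public generative-AI tool) is expressed as a distribution of annualized loss exposure. The sketch below uses entirely hypothetical inputs; the scenario, frequency, and magnitude figures are assumptions for illustration, not estimates from the supplement.

```python
# Minimal FAIR-style quantification sketch (all figures hypothetical).
# Risk is modeled as Loss Event Frequency x Loss Magnitude and simulated with
# Monte Carlo to produce a distribution of annualized loss exposure for one
# AI-related scenario.

import numpy as np

rng = np.random.default_rng(seed=7)
N = 50_000  # number of simulated years

# Loss Event Frequency: how many loss events occur in a simulated year (hypothetical rate).
lef = rng.poisson(lam=2.0, size=N)

# Loss Magnitude per event: lognormal centered near $250K (hypothetical calibration).
mu, sigma = np.log(250_000), 0.9

# Annualized loss exposure = sum of per-event losses within each simulated year.
annual_loss = np.array([
    rng.lognormal(mean=mu, sigma=sigma, size=k).sum() if k else 0.0
    for k in lef
])

print(f"Mean annualized loss exposure: ${annual_loss.mean():>12,.0f}")
print(f"90th percentile loss:          ${np.percentile(annual_loss, 90):>12,.0f}")
print(f"Chance of exceeding $1M/year:  {np.mean(annual_loss > 1_000_000):.1%}")
```

Outputs like the 90th-percentile loss or the probability of exceeding a dollar threshold give directors something they can compare directly against risk appetite, which a “high/medium/low” rating cannot.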
2. AI and Cybersecurity Oversight: A Boardroom Priority
Boards cannot afford to treat AI as just another technological trend; instead, they must make AI oversight an explicit boardroom priority.
3. The Regulatory and Compliance Landscape
AI regulation is evolving rapidly, and the handbook outlines the regulatory and compliance landscape that boards must track as new requirements emerge.
To avoid regulatory penalties, boards should apply structured risk models that integrate AI compliance into existing risk assessment frameworks.
4. AI Readiness: A Call for Board-Level Action
The report urges boards to take a proactive approach to AI risk oversight. Among its key recommendations, boards should engage third-party AI risk experts and conduct independent risk assessments to ensure their organizations are AI-ready.
5. Critical Boardroom Questions on AI and Cybersecurity
To guide board discussions, the handbook provides a question framework for directors. From a FAIR Institute perspective, the most valuable of these questions press management to express AI risk in clear financial terms.
AI is a strategic asset, but it also introduces new risk variables that demand rigorous, quantifiable oversight. The NACD/ISA AI in Cybersecurity Supplement provides a valuable blueprint for boards, but true AI risk management requires more than qualitative assessments.
At the FAIR Institute, we advocate for an explicit, financially driven, quantitative approach to cyber risk management. By integrating AI risk scenarios into structured risk analysis frameworks such as FAIR, organizations can balance AI innovation with robust governance, ensuring AI acts as an enabler of security rather than a source of unchecked risk.
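To make that governance point concrete, the same simulation approach can be extended to weigh a proposed AI guardrail against the financial risk it removes. The sketch below is purely illustrative: the assumed reduction in loss event frequency and the control cost are hypothetical, not figures from the supplement.

```python
# Hedged follow-on sketch (all figures hypothetical): compare annualized loss
# exposure before and after a proposed AI guardrail, so its cost can be weighed
# against the risk it actually removes.

import numpy as np

def annualized_loss(lam, mu, sigma, n=50_000, seed=11):
    """FAIR-style simulation: loss event frequency (Poisson) x loss magnitude (lognormal)."""
    rng = np.random.default_rng(seed)
    events = rng.poisson(lam=lam, size=n)
    return np.array([rng.lognormal(mu, sigma, size=k).sum() if k else 0.0 for k in events])

mu, sigma = np.log(250_000), 0.9                            # hypothetical per-event loss calibration
baseline  = annualized_loss(lam=2.0, mu=mu, sigma=sigma)    # current state, no guardrail
with_ctrl = annualized_loss(lam=0.5, mu=mu, sigma=sigma)    # assumed frequency drop after the control

risk_reduction = baseline.mean() - with_ctrl.mean()
control_cost = 150_000                                      # hypothetical annual cost of the guardrail

print(f"Expected annual risk reduction: ${risk_reduction:,.0f}")
print(f"Annual cost of the control:     ${control_cost:,.0f}")
print(f"Net expected benefit:           ${risk_reduction - control_cost:,.0f}")
```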
Join the FAIR Institute (general membership is free)