Why I Failed the AIGP Exam - And You Should Too

Utopian Answers vs. Business Reality

An OpenFAIR-certified practitioner's call for quantitative risk methodology in AI governance certification

 

It's not often that a cybersecurity executive with over 20 years of experience, multiple certifications including CISSP and CCISO, and recognition as an ISSA Blazing Star award winner publicly admits to failing a professional certification exam. Twice. But sometimes the stakes are too high to let pride get in the way of progress.

Author Donna Gallaher, CISSP, C|CISO, CIPP/E, CIPM, is Co-Chair of the Atlanta FAIR Institute Chapter.

I'm sharing my AIGP (Artificial Intelligence Governance Professional) examination failures because they reveal a fundamental problem that could undermine the entire AI governance profession: the disconnect between what we're teaching and what the business world actually needs.

The Problem: "Utopian" Answers vs. Business Reality

When I first pursued the AIGP certification, I had every reason to expect success. I'd previously earned CIPP/E and CIPM certifications from the IAPP and found their materials challenging but straightforward. As co-chair of the Atlanta FAIR chapter and an OpenFAIR-certified professional, I bring years of quantitative risk analysis experience to organizations navigating complex AI implementations.

But the AIGP exam taught me something troubling: success required choosing "utopian" theoretical answers over practical business decisions.

Consider this scenario from my preparation: An AI system shows bias against a demographic group during testing, but remains within project acceptance parameters. The "correct" answer? Form a stakeholder committee to reassess the model before proceeding. The business reality? Organizations will roll out the project while investigating the bias—because that's what serves stakeholders best when acceptance criteria are met and individual harm is mitigated.

This disconnect isn't just academic. It's creating real business harm.

The Stakes: Why This Matters Beyond Certification

We're operating in the "wild west" of AI implementation. In my executive consulting work, I regularly sit in C-suite meetings where self-proclaimed "AI experts" make critical deployment decisions with an alarming lack of risk expertise. The AI governance profession should be the solution to this chaos—but current certification approaches may actually be making it worse.

Here's what happens when AI governance professionals follow examination-endorsed "best practices":

  • Organizational paralysis through endless committee formation and criteria reassessment
  • Resource misallocation to repetitive analysis instead of productive implementation
  • Erosion of business confidence in governance processes, leading to shadow AI deployments
  • Competitive disadvantage against organizations making pragmatic risk-informed decisions
  • Financial harm through foregone operational efficiency when acceptance criteria are already met

The irony is stark: subjective governance approaches designed to prevent AI risks actually create greater risks by encouraging avoidance of beneficial AI applications.

The Regulatory Reality Check

Meanwhile, the regulatory environment is demanding exactly the opposite of what AIGP certification teaches. The SEC now requires companies to disclose the material impact of cybersecurity incidents on their financial condition and results of operations, weighing both qualitative and quantitative factors. The National Association of Corporate Directors (NACD) has partnered with the FAIR Institute to integrate quantitative risk methodologies into its board education programs.

Yet our primary AI governance certification still teaches qualitative risk frameworks that lack the precision these requirements demand.

Board directors educated in FAIR-based risk analysis will expect similar rigor from AI governance professionals. When those professionals can only provide red-yellow-green heat maps instead of probabilistic financial projections, the disconnect undermines both individual careers and organizational decision-making.

The Solution: Quantitative Risk Analysis

The Factor Analysis of Information Risk (FAIR) methodology provides exactly what AI governance needs: objective, quantifiable risk assessment that enables informed decision-making rather than paralysis-inducing subjectivity.
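
To make that concrete, here is a minimal sketch of what a FAIR-style estimate looks like: Loss Event Frequency and Loss Magnitude are expressed as ranges, simulated, and reported as a distribution of annualized loss exposure rather than a color. The distribution choices, parameter values, and the simplified frequency-times-magnitude step below are illustrative assumptions for this article, not a prescribed FAIR implementation or figures from any real engagement.

```python
# A minimal FAIR-style loss exposure sketch. Distribution choices and
# parameter values are illustrative assumptions for this article, not
# calibrated estimates or an official FAIR tool.
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # simulated years

# Loss Event Frequency (events per year), expressed as a min / most
# likely / max range and sampled from a triangular distribution.
lef = rng.triangular(left=0.5, mode=2.0, right=6.0, size=N)

# Loss Magnitude per event (USD): lognormal with a ~$250k median and a
# heavy upper tail, a common shape for loss data (assumed here).
loss_magnitude = rng.lognormal(mean=np.log(250_000), sigma=0.9, size=N)

# Simplified Annualized Loss Exposure: frequency times one magnitude
# draw per simulated year (a fuller model would draw per event).
ale = lef * loss_magnitude

for p in (10, 50, 90):
    print(f"P{p} annual loss exposure: ${np.percentile(ale, p):,.0f}")
print(f"Mean annual loss exposure: ${ale.mean():,.0f}")
```

The output is a range a board can act on—a P10/P50/P90 loss exposure in dollars—rather than a cell on a heat map.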

In my consulting practice, I've seen the transformation when organizations move from qualitative to quantitative approaches:

  • Board presentations become strategic discussions rather than color-coded confusion
  • Resource allocation becomes objective rather than political
  • Executive engagement increases when risk data connects to business decisions
  • Regulatory conversations focus on demonstrable due diligence rather than checkbox compliance

Most importantly, quantitative analysis enables truly human-centric AI governance—balancing individual protection with beneficial deployment rather than defaulting to indefinite delays.
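
As a worked illustration of that balance, the sketch below puts hypothetical numbers on the bias scenario described earlier: deploying with mitigation and active investigation versus delaying for committee reassessment. Every figure is an assumption invented for illustration; in a real analysis each would come from a simulation like the one above.

```python
# Hypothetical numbers for the bias scenario above: deploy with mitigation
# and active investigation vs. delay for committee reassessment. Every
# figure below is an assumption invented for illustration.
monthly_benefit = 400_000        # operational efficiency gained per month of use
delay_months = 4                 # assumed length of a reassessment cycle
mitigation_cost = 150_000        # monitoring and remediation program (either path)
residual_harm_deploy = 600_000   # expected annual loss from residual bias if deployed now
residual_harm_delay = 450_000    # assumed reduction after a full reassessment

deploy_now = 12 * monthly_benefit - residual_harm_deploy - mitigation_cost
delay = (12 - delay_months) * monthly_benefit - residual_harm_delay - mitigation_cost

print(f"Deploy with mitigation, 12-month net benefit: ${deploy_now:,.0f}")
print(f"Delay for reassessment, 12-month net benefit: ${delay:,.0f}")
print(f"Quantified cost of defaulting to delay: ${deploy_now - delay:,.0f}")
```

Whether the numbers favor deployment or delay in any given case is beside the point; what matters is that the trade-off is made explicit, defensible, and open to challenge.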

The Irony: The IAPP Isn't Following Its Own Principles

Perhaps most troubling is how the AIGP examination contradicts the very governance principles it teaches:

  • Cross-functional collaboration: The exam appears biased toward legal perspectives despite teaching the importance of diverse expertise
  • Transparency: The certification process operates as a "black box" with no insight into incorrect answers
  • Human oversight: Fully automated decisions with no meaningful appeal process
  • Accountability: Concerns about systemic issues met with requirements for additional fees rather than investigation

An organization teaching AI governance best practices should demonstrate those practices in its own systems.

A Call for Professional Evolution

I've made the decision not to retake the AIGP exam until these methodological issues are addressed. The risk to organizations whose AI governance committees follow current examination-endorsed approaches is simply too great.

This isn't about personal vindication—it's about professional integrity. The AI governance profession stands at a crossroads: become an effective enabler of responsible AI deployment, or remain a bureaucratic impediment that organizations bypass entirely.

The stakes—organizational competitiveness, individual careers, and societal benefit from responsible AI—demand certification programs that prepare professionals with methodologies proportional to the decisions they'll make.

What You Can Do

The FAIR Institute and NACD have already demonstrated successful collaboration in cybersecurity education. Now they have the opportunity to extend quantitative methodologies into AI governance certification.

I encourage fellow risk management professionals to contact the IAPP at certification@iapp.org and urge them to practice what they preach: update their curriculum to reflect modern risk assessment methodologies and implement the transparency and accountability principles they teach.

The time for this evolution is now. The question is whether we'll lead it or be forced to follow.


Read the full whitepaper: "A Practitioner's Perspective: Evolving AI Governance Risk Assessment Beyond Subjective Methodologies" for detailed analysis, specific examples, and comprehensive recommendations for professional evolution in AI governance certification.

 
