The FAIR Institute Blog

Decoding the EU AI Act & DORA: A FAIR Perspective on Compliance

Written by Jacqueline Lebo and Abhi Arikapudi | Apr 22, 2025 3:45:54 PM

Let's get real about AI and the regulatory whirlwind hitting financial entities. You're not just dealing with the usual cybersecurity headaches anymore. Now we've got the EU, predictably, leading the charge with a whole new layer of complexity. 

We're talking about the EU AI Act, and let's not forget DORA, the Digital Operational Resilience Act, all landing on your plate at once. And here's the kicker: these aren't isolated issues. They're tangled up, interconnected, and they're going to fundamentally reshape how you manage AI risk. 

You can't afford to treat these as separate compliance exercises. You need to understand how they interact, how they amplify each other’s impact, and, crucially, how to quantify the financial risk they represent. 

That's where FAIR comes in. Forget the vague risk assessments; we need to talk about real numbers, real impact, and how to navigate this regulatory maze without losing your shirt.

Join us in London at the 2025 FAIR Institute European Summit

“Managing Risk at the Speed of the Business”

June 05, 2025, 9:00 AM - 5:00 PM (GMT)

Leonardo Royal Hotel

Register Now!


Deep Dive into the EU AI Act

Let's break down this AI Act mess. It's not a one-size-fits-all situation, and the EU, in its infinite wisdom, has decided to categorize AI systems. We're talking unacceptable risk, high-risk, limited risk, and minimal risk. Now, let's be blunt: you can pretty much ignore the minimal risk stuff for now. It's the high-risk category that's going to keep you up at night, especially when you're dealing with the financial sector's intersection with DORA.

And here's the core of it: they're pushing a risk-based approach. Which, in theory, sounds great. In practice? It means you're going to have to actually quantify the potential damage. No more of this "it's probably fine" nonsense. As Snape from Harry Potter would say, "There will be no foolish wand-waving, or silly incantations in this class." We’re dealing with real-world financial implications, not magical thinking. 

EU regulators are demanding transparency, accountability, and demonstrable risk mitigation. And let's be clear, when we're talking about high risk AI in finance, we are talking about systems that directly impact people's financial stability, and the stability of the markets.

About the authors: 

Abhi Arikapudi is Sr. Director, Security Engineering, at Databricks

Jacqueline Lebo is Risk Advisory Manager at SAFE

Now, those high-risk systems? Forget about cutting corners. We're talking mandatory conformity assessments, stringent data governance requirements, and human oversight that's not just a box-ticking exercise. Specifically, you're looking at things like:

>>Data Governance: They're going to want to see that your training, validation, and testing data is relevant, representative, and free from errors. That means you better have a handle on data quality and integrity, especially when you're using AI for credit scoring or algorithmic trading.

>>Technical Documentation: You better be ready to show them the inner workings of your AI. They're going to want to see detailed documentation on the system’s design, development, and intended use. No more black boxes.

>>Record Keeping: Every input, every output, every decision made by your AI system needs to be logged. They want an audit trail, and they want it detailed (see the sketch after this list for what one of those records might look like).

>>Transparency: If your AI is interacting with customers, you better be upfront about it. You can't hide behind algorithms. Transparency is not optional.

>>Human Oversight: You need humans in the loop, and they need to be able to override the AI when necessary. This isn't about automating everything; it's about responsible automation.

>>Accuracy, Robustness, and Cybersecurity: Your AI needs to be reliable, resilient, and secure. They're going to want to see proof that you've tested for vulnerabilities and that you have a plan for dealing with cyber threats.
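To make that record-keeping requirement concrete, here is a minimal sketch in Python of what a per-decision audit record can look like. The names (`log_decision`, the credit-scoring example, the field layout) are illustrative assumptions, not a prescribed schema; the point is that every input, output, model version, and human override ends up in a replayable trail.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical audit logger for an AI decision system. The tooling is interchangeable;
# the shape of the record is what matters: inputs, outputs, model version, overrides.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_decisions.jsonl"))

def log_decision(model_name, model_version, inputs, output, overridden_by=None):
    """Append one immutable, replayable record per AI decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,                  # what the system saw
        "output": output,                  # what the system decided
        "human_override": overridden_by,   # who intervened, if anyone
    }
    audit_log.info(json.dumps(record))
    return record["decision_id"]

# Example: a credit-scoring call, then a human override of that same decision.
decision_id = log_decision(
    "credit_scoring", "2.3.1",
    inputs={"applicant_id": "A-1042", "features_hash": "sha256:..."},
    output={"score": 612, "decision": "decline"},
)
log_decision(
    "credit_scoring", "2.3.1",
    inputs={"applicant_id": "A-1042", "refers_to": decision_id},
    output={"decision": "approve"},
    overridden_by="analyst_017",
)
```

Swap the file handler for whatever logging pipeline you already run; what the auditors will care about is that the record exists, is complete, and can be replayed.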

If you're using AI for credit scoring, or insurance pricing, or any other critical financial function, you're in the hot seat. And if you think you can slide by with vague assurances, think again. They're going to want to see the numbers. They're going to want to see the potential financial impact of a system failure, or a biased algorithm, or a data breach. And they're going to want to know you've got a plan to mitigate it. 

So, let's get real: Are you ready to prove you've got a handle on this? Because, to put it in Snape's terms, "foolish wand-waving" is no longer an option. This is about real, quantifiable risk, and they're going to hold you accountable.

Connecting with DORA

So we've dissected the AI Act's charming little categories. Now, let's throw DORA into the mix, because, you know, why make things simple? 

DORA's all about digital operational resilience. Essentially, it's saying, "Your fancy AI systems better not crash the whole damn bank." And rightly so. We're talking about the backbone of financial services here, not some experimental app. We're talking about critical infrastructure, and DORA's making it crystal clear: Resilience is non-negotiable.

Now, here's where things get interesting. These two regulations aren't playing in separate sandboxes. They're colliding, hard. If you're using AI for critical functions—and let's be honest, you probably are—DORA's resilience requirements are going to amplify the AI Act's obligations. 

Think about it: a high-risk AI system goes haywire, and suddenly you're not just dealing with regulatory fines from the AI Act, you're dealing with operational chaos, potential market disruption, and a whole lot of very unhappy customers because you failed DORA's mandates. That’s a financial hit that’s quantifiable, and that’s what regulators want to see. 

Specifically, DORA is forcing you to consider:

>>ICT Risk Management: You need to prove you have a framework to identify, protect, detect, respond to, and recover from ICT-related incidents. This isn't just about cybersecurity; it's about the resilience of your entire digital ecosystem, including your AI systems.

>>ICT-Related Incident Reporting: When things go wrong—and they will—you need to report it, and you need to report it fast. They're not interested in vague assurances; they want detailed, standardized reports.

>>Digital Operational Resilience Testing: You need to regularly test your systems, including your AI, to ensure they can withstand disruptions. This isn't a one-time thing; it's an ongoing process.

>>ICT Third-Party Risk Management: If you're relying on third-party AI providers, you're on the hook for their resilience too. You need to ensure they're meeting DORA's standards, or you're both going down.

And at the heart of all this? Governance. You can't have resilient AI without clean, reliable, and well-managed data. It's the fuel for these systems, and if your fuel is contaminated, your engine's going to sputter and die. 

DORA's pushing you to think about data integrity, security, and availability. The AI Act is pushing you to think about data bias, transparency, and accountability. 

These aren't separate issues. Mess up your data governance, and you're setting yourself up for a double whammy of regulatory headaches and operational nightmares. 

And again, this is about numbers. It's about quantifying the potential losses from data breaches, system failures, and biased AI outputs. No more wishful thinking, folks. Just hard, cold financial realities. You're not just complying with regulations, you are protecting the very foundation of financial stability.

Practical Guidance for Implementation: Suppliers and Enterprises 

So you're staring down the barrel of the EU AI Act and DORA, and you're thinking, "How do I prove I'm not just throwing darts at a compliance checklist?" That's where FAIR AIR (Artificial Intelligence Cyber Risk Playbook) comes in, and frankly, it's about time we stopped with the vague risk assessments and started speaking the regulators' language: money.

FAIR AIR lets you quantify the actual financial risk of your AI systems. We're not talking about subjective opinions here. We're talking about calculating the probable loss magnitude and the probable frequency of loss events. 
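Here is a minimal Monte Carlo sketch, in Python, of what that quantification looks like in practice. The distributions and parameters below are purely illustrative assumptions; in a real FAIR analysis you would calibrate loss event frequency and loss magnitude ranges with subject-matter experts and your own loss data.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated years

# Illustrative assumptions only -- calibrate these ranges with real data.
# Loss Event Frequency (LEF): how often a high-risk AI failure causes a loss event.
lef = rng.poisson(lam=0.4, size=N)  # on average 0.4 loss events per year

# Loss Magnitude (LM): cost per event, lognormal to capture heavy tails
# (fines, remediation, customer compensation, market disruption).
def simulate_year(events):
    if events == 0:
        return 0.0
    magnitudes = rng.lognormal(mean=13.0, sigma=1.0, size=events)  # ~EUR 440k median per event
    return magnitudes.sum()

annual_loss = np.array([simulate_year(e) for e in lef])

print(f"Probability of any loss in a year: {np.mean(annual_loss > 0):.1%}")
print(f"Mean annualized loss exposure:     EUR {annual_loss.mean():,.0f}")
print(f"95th percentile annual loss:       EUR {np.percentile(annual_loss, 95):,.0f}")
```

The outputs (the probability of any loss in a year, the mean annualized loss exposure, a tail percentile) are the kinds of numbers you can put in front of a regulator, or a CFO, instead of a heat map.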

You want to show compliance with the AI Act? Show them the numbers. Show them how you've assessed the potential financial impact of bias in your credit scoring algorithm. Show them how you've quantified the risk of a system failure disrupting critical financial operations. Show them, in cold, hard cash, that you're not just waving a wand and hoping for the best.   

And DORA? Don't even get us started on the data governance mess. You think "data integrity" is some abstract concept? Think again. 

FAIR AIR lets you quantify the financial risk of data breaches, data corruption, and data unavailability. You can show the regulators the potential financial hit from a system outage caused by poor data management. You can demonstrate how you've calculated the financial impact of biased data leading to discriminatory lending practices.   

The regulators aren't interested in your feelings. They want to see the numbers. They want to see that you've done your homework. They want to see that you've quantified the financial risks associated with your AI systems and your data governance practices. 

FAIR AIR gives you the tools to do just that. It's not about "feeling" compliant; it's about proving it, in terms they understand. And frankly, in terms your CFO understands too. Because, at the end of the day, it's about the bottom line. So, let's stop with the "trust us" and start showing them the money.

DORA and EU AI Act Use Case: Large Data Warehouse Provider

Let's ground this in a real-world nightmare: A large data warehouse provider, let's call them "DataVault," is identified as a critical third-party for a major bank. DataVault also uses AI to enhance its data platform, meaning they're in the crosshairs of both DORA and the EU AI Act. 

Now, the bank, in their DORA-induced panic, is demanding audit requirements that are, frankly, insane. They want full access to DataVault’s internal AI models, down to the code level, and they want 99.999% uptime, even if it means failing over to data centers in countries where the bank's customer data is legally prohibited from residing.

Here’s how DataVault negotiates, using FAIR AIR:

Quantify the Real Risks:

>>DataVault uses FAIR AIR to quantify the financial impact of potential AI bias in their data enhancement algorithms. They demonstrate that the risk of bias impacting the bank's data is minimal, due to the specific nature of the data and the algorithm’s design. 


>>They also quantify the financial risk of a data breach, considering the specific types of data they handle and the security measures they have in place. They show that the risk of a catastrophic breach is significantly lower than the cost of complying with the bank’s excessive audit demands.


>>They quantify downtime risk, and show that the cost to the bank of the platform being unavailable for a few hours is far less than the cost of standing up a failover site in a country where the bank's data cannot legally reside (a back-of-the-envelope version of that comparison is sketched below).
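That downtime comparison, with deliberately hypothetical figures that stand in for DataVault's real estimates, might look something like this:

```python
# Back-of-the-envelope comparison of the kind DataVault could put in front of the bank.
# All figures are hypothetical placeholders, not estimates for any real provider.

outage_frequency_per_year = 0.5         # expected outages per year
expected_outage_hours = 4               # typical outage duration
bank_loss_per_hour = 150_000            # EUR, bank's cost of the platform being down

cross_border_failover_cost = 2_500_000  # EUR/year to run the out-of-jurisdiction site
sovereignty_fine_exposure = 10_000_000  # EUR, potential penalty if data residency is breached

expected_downtime_loss = outage_frequency_per_year * expected_outage_hours * bank_loss_per_hour
print(f"Expected annual downtime loss:                 EUR {expected_downtime_loss:,.0f}")
print(f"Annual cost of prohibited failover site:       EUR {cross_border_failover_cost:,.0f}")
print(f"Added legal exposure if residency is breached: EUR {sovereignty_fine_exposure:,.0f}")
# 300,000 vs 2,500,000 (plus the legal exposure): the control costs far more than the risk it removes.
```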

Push Back with Data:

>>Instead of blindly complying, DataVault presents the bank with a FAIR AIR report showing the quantified risks. They demonstrate that the bank's audit demands are disproportionate to the actual risks.

>>They use FAIR AIR to demonstrate the financial impact of the data residency requirements. They show the bank that the potential legal and reputational damage from violating data sovereignty laws far outweighs the risk of a temporary system outage.

>>They present the bank with alternative failover plans that keep the data within legal parameters, and demonstrate the financial cost of each plan.

Negotiate Based on Actual Resilience:

>>DataVault proposes a risk-based approach to audits, focusing on the specific areas where the actual risks are highest. They offer to provide detailed documentation and regular security assessments, but they refuse to grant the bank full access to their proprietary AI models.

>>They propose a resilient system that keeps data within legal boundaries, and offer to improve redundancy within those boundaries.

>>They work with the bank to define recovery time and recovery point objectives (RTO and RPO) that are grounded in actual financial risk, as sketched below.
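As a sketch of what "RTO based on actual financial risk" can mean, the comparison below (again with hypothetical figures) picks the recovery time objective that minimizes expected downtime loss plus the annualized cost of the resilience investment needed to hit it:

```python
# Sketch: choose the RTO where faster recovery stops being worth what it costs.
# Figures are hypothetical placeholders, not real estimates.

bank_loss_per_hour = 150_000  # EUR, same downtime cost as in the earlier comparison
outages_per_year = 0.5

# Annualized cost of the resilience investment needed to hit each candidate RTO.
rto_options = {
    1:  1_200_000,   # hot standby within jurisdiction
    4:    400_000,   # warm standby
    24:    80_000,   # restore from backups
}

for rto_hours, annual_cost in sorted(rto_options.items()):
    expected_loss = outages_per_year * rto_hours * bank_loss_per_hour
    total = expected_loss + annual_cost
    print(f"RTO {rto_hours:>2}h -> expected downtime loss EUR {expected_loss:>9,.0f} "
          f"+ resilience cost EUR {annual_cost:>9,.0f} = EUR {total:>9,.0f}/year")

# The lowest total (the 4-hour option here) is the RTO you defend with numbers,
# not the one that sounds most impressive in a contract.
```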

Document Everything:

>>DataVault keeps meticulous records of their risk assessments, their negotiations with the bank, and their compliance efforts. This documentation is crucial for demonstrating compliance to regulators.

The key here is to shift the conversation from vague compliance demands to quantifiable risk management. DataVault isn't saying "trust us." They're saying, "Here's the data, here's the financial impact, and here's why our approach makes sense." This is how you navigate the regulatory minefield of DORA and the EU AI Act: with data, not fear.

Conclusion

You're facing a double whammy: the EU AI Act and DORA. They're demanding transparency, resilience, and a level of data governance that's going to make your head spin. The bottom line: they're not going away.

You can't afford to bury your head in the sand and hope this all blows over. You need a proactive approach. You need to stop with the vague risk assessments and start speaking the language of finance. 

That's where FAIR comes in. It's not just a methodology; it's your lifeline. It's the tool that lets you quantify the actual financial impact of these regulations. It's the weapon you need to cut through the regulatory noise and focus on what matters.

Stop wasting resources on compliance theater. Stop with the "trust us" and start showing them the numbers. Show them the potential financial losses from AI bias. Show them the financial impact of data breaches and system failures. Show them, in cold, hard financial terms, that you're taking this seriously.

This isn't about jumping through hoops. This is about protecting your organization's bottom line. This is about building a robust, data-driven compliance framework that actually works. And frankly, this is about taking back control from the regulators and showing them that you're not just going to roll over and accept whatever they throw at you.

So, here's the call to action: stop reacting and start acting. Start using FAIR. Quantify your risks, prioritize your efforts, and show them you're not playing games. Because, at the end of the day, it's not about compliance. It's about survival. And in this regulatory jungle, only the data-driven survive.