FAIR Inst Europe Summit: EU to Confront AI, Starting at Paris Olympics
You could almost feel the future closing in at the recent FAIR Institute Europe Summit, where a panel of cyber risk experts from different disciplines gave some very timely tips and warnings about the AI wave about to hit business and government.
As the panelists related, the first EU regulations on GenAI will land this year; French lawmakers are already limiting AI for surveillance at this year’s Olympics. And business decision-makers are already knee-deep in the legal, ethical and practical decisions required.
With cyber risk quantification, FAIR practitioners are well-equipped to face new challenges – but as panelists warned, it will still take a pivot to address the novel risk scenarios ahead.
Watch the video: GenAI Related Risk and Opportunities
Moderator: Pankaj Goyal, Director of Standards and Research, FAIR Institute
Gérôme Billois, Partner, Wavestone consultancy
Sabine Marcellin, Lawyer, Digital Law, Oxygen+; Professor, AI, KEDGE Business School
Jacqueline Lebo, Risk Advisory Manager in Security Services, Safe Security
Some takeaways from the session:
AI Risk Is Already Here
Gérôme told the story of a GenAI chatbot on the Air Canada website that gave wrong information about ticket discounts. A passenger sued and won – despite the airline’s argument that the AI was an “entity of its own.” “The serious question here is how do we build trust in AI,” Gérôme said. “If we have to check every time, it is of no use.”
You Must Build Risk Management into GenAI Systems from the Start
Jackie said that many organizations are at the “we sort of understand how our AI functions” stage. But GenAI is still able to create neural networks that not everyone is able to fully understand or map, she said, “so you need to look at security from the beginning, not the end.”
Government Regulation of AI Is Already Here in Europe
Sabine described the EU AI Act, likely to go into effect in June, as the first legislation to directly confront artificial intelligence; its effects are still too early to gauge. But the French government is already moving ahead with privacy legislation limiting facial recognition by camera surveillance for the 2024 Paris Olympics, she said; for instance, the AI may not legally operate without human supervision.
Cyber Risk Quantification for GenAI Is Possible – and Necessary
Jackie, who is the author of the FAIR Institute’s AI Risk Approach Playbook (FAIR AIR), described working with a very large client offering an AI-powered image generator. The client thought it was doing well with a 4% failure rate of generating an objectionable image. FAIR analysis showed that, at that failure rate and the high frequency of use of the image generator, the company was still facing huge losses.
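To see why a seemingly small failure rate can still add up, here is a minimal back-of-the-envelope sketch in the spirit of a FAIR-style loss-exposure calculation. Only the 4% failure rate comes from the session; the usage volume, the share of failures that become loss events, and the cost per event are hypothetical placeholders.

```python
# Illustrative sketch only: the 4% failure rate is from the talk;
# every other number below is an assumed placeholder.

images_per_year = 10_000_000      # assumed annual usage of the image generator
failure_rate = 0.04               # 4% of generations produce an objectionable image
loss_event_share = 0.001          # assumed fraction of failures that become a loss event
cost_per_loss_event = 25_000      # assumed average loss magnitude per event

objectionable_images = images_per_year * failure_rate
loss_events = objectionable_images * loss_event_share
annualized_loss_exposure = loss_events * cost_per_loss_event

print(f"Objectionable images per year: {objectionable_images:,.0f}")
print(f"Loss events per year:          {loss_events:,.0f}")
print(f"Annualized loss exposure:      {annualized_loss_exposure:,.0f}")
```

With these assumed inputs, a “mere” 4% failure rate at high usage volume still produces hundreds of loss events and an eight-figure annual exposure, which is the kind of result the FAIR analysis surfaced for the client.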
Scaling Is a Risk Management Challenge in Artificial Intelligence
To deploy GenAI or not – those questions are piling up in large organizations, Gérôme said. One of his clients now has 50 such decisions on the table. “The main issue now is how to scale…there is legal risk, cyber risk, ethics, resilience, safety.” He thinks that organizations need a framework to triage among use cases, sorting those that demand thorough risk quantification from the rest (for instance, because they are internal only); a minimal sketch of such a triage rule follows below.
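The sketch below is hypothetical: the panel only gave the internal-only example, so the attributes and the decision rule here are illustrative assumptions about what such a triage framework might look like, not Gérôme’s actual framework.

```python
# Hypothetical triage rule for GenAI use cases. The criteria below are
# illustrative assumptions; only the internal-only example comes from the panel.

from dataclasses import dataclass

@dataclass
class GenAIUseCase:
    name: str
    external_facing: bool        # e.g., customer chatbot vs. internal drafting aid
    handles_personal_data: bool
    acts_autonomously: bool      # no human in the loop

def triage(use_case: GenAIUseCase) -> str:
    """Route a use case to full FAIR quantification or a lighter review."""
    if (use_case.external_facing
            or use_case.handles_personal_data
            or use_case.acts_autonomously):
        return "full FAIR risk quantification"
    return "lightweight review (internal only, low exposure)"

print(triage(GenAIUseCase("customer support chatbot", True, True, False)))
print(triage(GenAIUseCase("internal meeting summarizer", False, False, False)))
```

The point of such a rule is not the specific criteria but the sorting itself: with dozens of deployment decisions on the table, only the use cases with real external, data, or autonomy exposure get the full quantification effort.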
Watch the video: GenAI Related Risk and Opportunities
FAIR Institute chapters bring members together around the world to share knowledge on the latest techniques in cyber and operational risk management. Join the FAIR Institute with a free General Membership now!