FAIR Institute Letter to Congress Supports Funding U.S. Artificial Intelligence Safety Institute
The FAIR Institute co-signed a letter to Congress from leading technology institutions in support of statutory funding (that is, funding mandated by Congress) for the U.S. Artificial Intelligence Safety Institute (AISI) within the National Institute of Standards and Technology (NIST). The FAIR Institute is a member of NIST’s U.S. AI Safety Institute Consortium (AISIC), advising on AI policies and standards.
As the letter frames the problem:
“Currently, businesses of all types recognize the potential of AI, but many have refrained from adoption, in part due to concerns regarding implementation risks.
“The AISI provides a venue to convene the leading experts across industry and government to contribute to the development of voluntary standards that ultimately assist in de-risking adoption of AI technologies.
“The AISI is particularly important for those enterprises not primarily engaged in technological activities and which do not possess the wherewithal to develop bespoke benchmarks and protocols to assess AI systems.
“NIST, which does not possess regulatory authority and has a long history of successfully engaging the private sector, accomplishes this within the AI space primarily through the AISI Consortium (AISIC).”
The Consortium advises on the development of the NIST Artificial Intelligence Risk Management Framework (AI RMF) and its companion AI RMF Playbook.
“We’re very much looking forward to helping NIST in this effort to get ahead of the risks and opportunities of AI,” said Pankaj Goyal, Director of Standards and Research at the FAIR Institute, of the Institute’s work on the AISI Consortium. “We believe that the scientific, risk-based approach of FAIR offers a path forward to the risk management profession as it moves into the AI era.”
As we learned in a webinar on the NIST AI RMF hosted by the FAIR Institute with AI RMF developer Martin Stanley, the new framework shares with FAIR an emphasis on analyzing risk in probabilistic terms. “In cybersecurity, we’re used to fixed measurements and standard outputs,” Stanley told us. “That’s out the window when it comes to AI systems that are probabilistic in nature…with different outputs on similar queries.”
The FAIR Institute’s GenAI Workgroup has developed a FAIR AI Approach Playbook to help organizations identify AI-related loss exposure and make probabilistic, risk-based decisions about treating this new category of cyber risk. The Playbook identifies five vectors of GenAI risk and gives direction on how to quantify the probable frequency and impact of AI-related cyber loss events.
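The Playbook’s specific model isn’t reproduced here, but the probabilistic approach it describes, quantifying probable event frequency and loss magnitude, can be sketched in a few lines. The following minimal Python Monte Carlo simulation is illustrative only: the Poisson frequency, the lognormal magnitude, and every parameter value are hypothetical placeholder choices, not figures from the FAIR AI Approach Playbook.

```python
import numpy as np

# Illustrative FAIR-style quantification: simulate annualized loss exposure
# as (loss event frequency) x (per-event loss magnitude).
# All distributions and parameter values below are hypothetical placeholders.

rng = np.random.default_rng(seed=42)
TRIALS = 100_000

# Loss Event Frequency (LEF): AI-related loss events per year, modeled
# here as Poisson with a hypothetical mean of 2 events per year.
event_counts = rng.poisson(lam=2.0, size=TRIALS)

# Loss Magnitude (LM): per-event loss in dollars, modeled as lognormal
# (median ~$50,000); sum one draw per event in each simulated year.
annual_losses = np.array([
    rng.lognormal(mean=np.log(50_000), sigma=1.0, size=n).sum()
    for n in event_counts
])

# Report the loss exposure distribution as percentiles, not a point estimate.
for p in (50, 90, 95):
    print(f"P{p} annualized loss exposure: ${np.percentile(annual_losses, p):,.0f}")
print(f"Mean annualized loss exposure: ${annual_losses.mean():,.0f}")
```

Reporting percentiles rather than a single number reflects the FAIR emphasis on expressing loss exposure as a probability distribution, the same probabilistic framing Stanley describes for AI systems.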
Dig into risk analysis and artificial intelligence with these resources:
Webinar: FAIR Institute Kicks off Research on FAIR for AI Risk Management
FAIRCON23 Session Video: Launching a FAIR Risk Management Program for AI at Dropbox
Join us for the 2024 FAIR Conference with multiple learning opportunities on FAIR analysis for AI-related risk.