New Case Study in AI Risk Quantification: FAIR-AIR Supporting Dissent in the FTC’s Rytr Case
In the recent high-profile US Federal Trade Commission (FTC) crackdown on deceptive AI claims, a significant dissent arose regarding the case against Rytr, an AI content-generation tool accused of enabling deceptive consumer reviews.
In their dissent, Commissioners Andrew Ferguson and Melissa Holyoak argued that the FTC’s allegations lacked proof of material harm, a key concept that is often difficult to quantify when assessing AI risks. This divergence of opinion underscores the pressing need for a framework that can accurately quantify AI risks and the harm they may or may not cause.
About the Authors
>>Jacqueline Lebo is Risk Advisory Manager at Safe Security and a member of the FAIR Institute’s GenAI Workgroup.
>>Denny Wan is a member of the FAIR Institute Standards Committee, a winner of the FAIR Ambassador Award, and host of the Reasonable Security podcast.
Enter FAIR-AIR, an innovative approach adapted from the Factor Analysis of Information Risk (FAIR) model, specifically designed to quantify AI-related risks.
In this case study, we’ll explore how applying the FAIR-AIR methodology could support the dissenting view in the Rytr case, providing a quantifiable way to analyze AI risk and mitigate ambiguity around material harm.
Setting the Stage: The FTC’s Case Against Rytr
As the FTC’s complaint stated, Rytr created and markets a package of more than 40 generative artificial intelligence (GenAI) tools with a variety of uses, from writing essays to creating poetry and song lyrics. One of these tools allowed users to generate consumer reviews based on prompts provided by the user.
The Commission accused Rytr of violating Section 5 of the Federal Trade Commission Act by furnishing its users with the “means and instrumentalities” to deceive consumers with AI-generated reviews.
The FTC argued that such reviews could mislead consumers, potentially resulting in material harm.
Commissioners Ferguson and Holyoak dissented, saying that “Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents and common sense. And it threatens to turn honest innovators into lawbreakers and risks strangling a potentially revolutionary technology in its cradle.” They noted that without clear metrics to assess harm, the case for punitive action was weak.
This dissent taps into a core issue surrounding AI risk: How do we measure the potential impact of AI tools accurately and objectively? The answer may lie in the FAIR-AIR approach, which can provide a structured way to break down, quantify, and ultimately support (or refute) claims of material harm from AI systems.
Applying FAIR-AIR to Support the Dissent’s Case: A Step-by-Step Approach
Using FAIR-AIR, the dissenting Commissioners could quantify the alleged risks and harms posed by Rytr’s tool. Here’s how the FAIR-AIR five-step methodology could support their position by offering measurable insights into the likelihood and impact of consumer harm.
1. Contextualize
FAIR-AIR starts by understanding what you are quantifying. The dissent highlighted that Rytr’s tool, while possibly overstating product details, may not actually expose users to financial or reputational damage that would justify regulatory intervention. Using FAIR-AIR, we can isolate risk factors such as:
>>Inaccuracy of AI-Generated Content: Does Rytr produce content that misleads users to their detriment? If so, does this create a new problem or merely exacerbate an existing one? We have all seen that one Amazon product with 10,000 five-star reviews and purchased it, only to find we had been duped.
>>Reputational Impact: If the generated content is of poor quality, is there measurable reputational damage to the users or businesses that rely on it?
2. Scope
FAIR-AIR requires framing risk scenarios within the chosen vector, in this case a third-party-provided AI solution. To scope the analysis, take the context established above and identify the specific risk scenarios you will review (a short sketch of how these scenarios might be captured follows the list below):
>>Inaccuracy of AI-Generated Content: Is the AI-generated content substantiated by credible research, references, and operational data?
>>Liability from Misleading Recommendations: Could the GenAI content-generation tool give rise to commercial liability if its output is taken literally as expert recommendation rather than crowd-sourced opinion and personal experience?
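To make the scoping step concrete, here is a minimal sketch, in Python, of how these scenarios might be recorded as simple data structures. This is not part of any official FAIR Institute tooling; the class, field names, and example values are illustrative assumptions, not findings from the Rytr case.

```python
from dataclasses import dataclass, field

@dataclass
class RiskScenario:
    """Illustrative FAIR-AIR style risk scenario scoped to a third-party GenAI vector."""
    name: str                 # short label for the scenario
    vector: str               # how the risk enters (e.g., a third-party GenAI tool)
    asset_at_risk: str        # what could be harmed (consumers, brand, revenue)
    loss_types: list[str] = field(default_factory=list)  # FAIR loss forms in scope

# Hypothetical scenarios mirroring the two bullets above.
scenarios = [
    RiskScenario(
        name="Inaccurate AI-generated review content",
        vector="Third-party GenAI review-writing tool",
        asset_at_risk="Consumer purchasing decisions",
        loss_types=["productivity", "reputation"],
    ),
    RiskScenario(
        name="Liability from content read as expert recommendation",
        vector="Third-party GenAI review-writing tool",
        asset_at_risk="Commercial and legal exposure",
        loss_types=["fines and judgments", "response costs"],
    ),
]

for s in scenarios:
    print(f"{s.name}: vector={s.vector}, losses={s.loss_types}")
```

Keeping scenarios in a structured form like this makes the later quantification steps repeatable: each scenario gets its own frequency and magnitude estimates rather than one vague, blended judgment.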
3. Quantify Frequency and Impact of Risks
FAIR-AIR applies quantifiable metrics to assess both the likelihood of these risks occurring and their potential impact. For Rytr’s case, the dissent could use FAIR-AIR to examine how often consumers would realistically encounter significant harm from use of the tool (a simple simulation sketch follows the two assessments below).
>>Frequency Assessment: Using data from consumer reviews, complaints, and performance analytics, FAIR-AIR could estimate the probability of users experiencing poor outcomes with Rytr. For example, if fewer than 1% of users report issues, that challenges the notion of material harm. Proving that the Rytr product is the proximate cause of those issues is also difficult, since users can already fabricate reviews and post them publicly without any AI tool; at most, the product makes those individuals somewhat more productive.
>>Impact Assessment: If harm does occur, what is the measurable cost? For instance, if misleading reviews steer more buyers toward a problematic product, genuine negative reviews of that product will likely accumulate at a much higher rate than the small percentage of individuals who choose to post false information. This allows for a comparison between perceived harm and actual monetary impact, potentially revealing that Rytr’s content does not justify FTC action.
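As a rough illustration of what such a quantification could look like, the sketch below runs a simple Monte Carlo simulation over assumed frequency and loss-magnitude ranges, in the spirit of a FAIR analysis. Every number in it (the user count, the sub-1% incident rate, the dollar ranges) is a made-up placeholder rather than data from the Rytr matter.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# --- Hypothetical calibrated estimates (placeholders, not case data) ---
USERS_PER_YEAR = 100_000              # assumed number of tool users per year
INCIDENT_RATE_RANGE = (0.001, 0.01)   # assumed share of users whose use leads to a harmful review
LOSS_PER_INCIDENT = (5, 50, 500)      # min / most-likely / max consumer loss per incident ($)

def sample_loss(low: float, mode: float, high: float) -> float:
    """Sample one loss magnitude from a triangular distribution,
    a simple stand-in for the PERT ranges often used in FAIR analyses."""
    return random.triangular(low, high, mode)

def simulate_year() -> float:
    """Simulate one year of total consumer loss attributable to the tool."""
    incident_rate = random.uniform(*INCIDENT_RATE_RANGE)
    incidents = int(USERS_PER_YEAR * incident_rate)
    return sum(sample_loss(*LOSS_PER_INCIDENT) for _ in range(incidents))

# Build a distribution of simulated annual losses.
annual_losses = sorted(simulate_year() for _ in range(2_000))

p50 = annual_losses[len(annual_losses) // 2]
p90 = annual_losses[int(len(annual_losses) * 0.9)]
print(f"Median simulated annual consumer loss: ${p50:,.0f}")
print(f"90th-percentile simulated annual loss: ${p90:,.0f}")
```

In a real FAIR-AIR analysis the ranges would come from calibrated estimates and the distributions would typically be PERT or lognormal, but even this toy version turns the dissent’s question into dollar figures that can be compared against a harm threshold.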
4. Evaluate Safeguard Sufficiency and Mitigation Measures
FAIR-AIR doesn’t stop at quantifying risk; it also assesses how well existing safeguards or disclaimers mitigate these risks. The dissenting opinion in Rytr’s case could leverage FAIR-AIR to argue that disclaimers or user guides provided by Rytr reduce the likelihood or severity of potential harm.
>>Safeguard Analysis: By analyzing whether Rytr has taken appropriate steps to educate users about the tool’s limitations, FAIR-AIR can quantify how much these safeguards reduce risk. For instance, if disclaimers are effective at informing users about those limitations, the estimated financial impact of any misleading output falls, further supporting the dissenting view. If the disclaimer makes clear that the tool is intended only for drafting reviews based on genuine experience, misuse, and therefore harm, becomes less likely. A minimal sketch of folding safeguard effectiveness into the estimate follows.
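Continuing with the purely illustrative numbers from the sketch above, the snippet below shows one simple way a safeguard such as an in-product disclaimer could be folded into the estimate: as an assumed reduction in how often misuse occurs. The 40% effectiveness figure is an arbitrary assumption for demonstration, not a measured value.

```python
# Hypothetical baseline figures, loosely carried over from the frequency/impact sketch.
baseline_annual_incidents = 550       # assumed misuse events per year before safeguards
avg_loss_per_incident = 185.0         # assumed average consumer loss per event ($)

# Assumed safeguard effectiveness: the share of would-be misuse deterred by
# clear disclaimers and in-product guidance (placeholder value only).
disclaimer_effectiveness = 0.40

residual_incidents = baseline_annual_incidents * (1 - disclaimer_effectiveness)
baseline_loss = baseline_annual_incidents * avg_loss_per_incident
residual_loss = residual_incidents * avg_loss_per_incident

print(f"Estimated annual loss before safeguards: ${baseline_loss:,.0f}")
print(f"Estimated annual loss after safeguards:  ${residual_loss:,.0f}")
print(f"Risk reduction attributed to safeguards: ${baseline_loss - residual_loss:,.0f}")
```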
5. Create an Evidence-Based Case for (or Against) Material Harm
Ultimately, FAIR-AIR would allow the dissent to build an objective, evidence-based quantification of material harm (or lack thereof) associated with Rytr’s tool. If the calculated financial impact of user harm remains low, it substantiates the dissent’s argument that the FTC’s case lacks a measurable basis for punitive action.
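To show how such a conclusion might be expressed, the fragment below compares a simulated loss estimate against a materiality threshold. Both figures are placeholders chosen for demonstration; in practice the threshold would come from legal and regulatory analysis, and the loss estimate from a full FAIR-AIR quantification like the one sketched in step 3.

```python
# Placeholder outputs of a FAIR-AIR quantification (illustrative only).
p90_annual_consumer_loss = 75_000     # 90th-percentile simulated annual loss ($)
materiality_threshold = 1_000_000     # assumed dollar threshold for "material harm"

if p90_annual_consumer_loss < materiality_threshold:
    print("Even at the 90th percentile, estimated harm falls below the assumed "
          "materiality threshold; the evidence does not support material harm.")
else:
    print("Estimated harm exceeds the assumed materiality threshold; "
          "further regulatory scrutiny may be warranted.")
```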
>>Using FAIR-AIR metrics, the dissent could argue: “Our analysis shows that the probability of meaningful financial harm to users or consumers is minimal and does not meet the threshold of material harm.” By grounding the dissent in quantifiable data, this methodology strengthens the case for a more measured regulatory approach.
Implications for the FTC and Future AI Risk Cases
The Rytr case highlights an important gap in regulatory approaches to AI: the need for objective, quantifiable measurements of harm. FAIR-AIR, by converting abstract concerns into concrete financial metrics, could become a foundational tool for both regulators and companies. Here’s why:
>>Consistency and Objectivity: FAIR-AIR provides a structured approach to assessing AI risk, enabling regulators to apply consistent metrics across cases. This helps avoid arbitrary judgments and supports fair, evidence-based decision-making.
>>Clarity in Regulatory Scope: Quantifiable risk assessment tools like FAIR-AIR could prevent overreach, helping regulators focus on cases where there’s demonstrable material harm. This is particularly critical in the rapidly evolving AI landscape, where cases of genuine harm must be distinguished from low-risk scenarios.
>>Balanced Accountability: FAIR-AIR empowers companies to proactively assess their AI risk and, where needed, provide quantifiable evidence to rebut regulatory claims. For organizations, this framework helps demonstrate that their AI products meet regulatory expectations without undue exposure to punitive action.
Conclusion: FAIR-AIR as a Path to Fairer AI Regulation
The FTC’s case against Rytr—and the accompanying dissent—demonstrate the value of a quantifiable approach to AI risk assessment. By applying FAIR-AIR, dissenting voices within regulatory bodies can offer measurable evidence that strengthens their arguments, helping to safeguard companies from undue penalties while still holding high-risk AI tools accountable.
FAIR-AIR isn’t just a tool for compliance—it’s a strategy for fostering responsible, measurable AI use. By quantifying AI risks in financial terms, FAIR-AIR allows both regulators and organizations to make informed, balanced decisions in a field where risks are as complex as they are consequential.