What Is Faster Patching Worth? Put a Dollar Value on Vulnerability Remediation (FAIRCON25 Video)


Security leaders have spent years telling teams to patch faster. But for many FAIR practitioners, the more valuable question is not whether faster remediation is good. It is: how much risk reduction does it actually buy, and when is it worth the investment?

At FAIR Conference 2025, Laura Voicu — Security Consultant and Independent Researcher at Apropos Security, and Co-Chair of the Switzerland Chapter of the FAIR Institute — tackled that question head on.

Watch the video on demand:

The $ Value of Faster Vuln. Remediation (How Much Risk Reduction Does Faster Patching Buy You?)

Laura’s presentation offered a practical framework for linking vulnerability management data to quantified cyber risk in dollar terms.

For practitioners trying to help their organizations reduce risk in the most cost-efficient way, Laura’s message was clear: better measurement leads to better business cases.

Stop trusting simplistic patching metrics

Laura began with a challenge many vulnerability management teams will recognize. Most organizations measure patching with metrics like mean time to patch or mean time to remediate. Those numbers are easy to calculate and easy to report. The problem is that they often do not reflect reality.

As Laura put it, “your mean time to patch or your mean time to remediate is probably wrong — or at least the way you’re calculating that is probably wrong.”

Why? First, averages hide the long tail. A team may close many routine issues quickly while a smaller number of vulnerabilities remain open for months or years. Second, traditional metrics often exclude the vulnerabilities that are still open because those issues do not yet have a final “time to patch.” From a risk standpoint, that is exactly backwards. The unpatched vulnerabilities are often the ones that matter most. Third, aggregate metrics blur important differences across environments, teams, and technology stacks.

In other words, a single patching number can look good while masking the places where exposure is quietly building.

Use survival analysis to measure what actually matters

To solve that problem, Laura turned to survival analysis, a statistical method commonly used in medical research to answer questions such as how long until recovery or relapse. She showed how the same approach can be applied to vulnerability remediation: how long until a vulnerability is patched, even when some vulnerabilities are still open at the time of analysis.

This matters because survival analysis captures the full distribution of remediation behavior, not just an average. It allows teams to compare patching performance across severity levels, environments, business units, and technical stacks. It also makes it easier to spot bottlenecks or breakdowns in process.

Laura described the value plainly: “survival analysis is allowing us to understand better patching behavior.”

That includes more than just a median remediation time. Teams can look at 90th or 95th percentiles, arrival rates of new vulnerabilities, and “escape rates” — the percentage of vulnerabilities that exceed a predefined containment point or service-level objective. For organizations with low risk tolerance or meaningful long-tail exposure, those metrics may be far more informative than a simple average.
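To make the idea concrete, here is a minimal sketch of the Kaplan-Meier survival estimator applied to remediation data. The records, the 90-day SLO, and the helper names are hypothetical; the point is that still-open vulnerabilities are treated as right-censored observations rather than dropped, which is exactly what mean-time-to-patch calculations get wrong.

```python
# Hypothetical remediation records: (days_open, patched).
# Still-open vulns are right-censored: we only know they have
# survived unpatched for at least this long.
records = [
    (3, True), (5, True), (7, True), (10, True), (14, True),
    (21, True), (30, True), (45, True), (120, False), (200, False),
]

def kaplan_meier(records):
    """Return (time, survival probability) steps. Censored records
    leave the risk set without counting as a remediation event."""
    at_risk = len(records)
    survival = 1.0
    curve = []
    for days, patched in sorted(records):
        if patched:
            survival *= (at_risk - 1) / at_risk
            curve.append((days, survival))
        at_risk -= 1
    return curve

curve = kaplan_meier(records)

def survival_at(curve, t):
    """Estimated probability a vulnerability is still open at day t."""
    s = 1.0
    for step_t, step_s in curve:
        if step_t <= t:
            s = step_s
    return s

# "Escape rate" past a hypothetical 90-day SLO: the estimated share
# of vulnerabilities still unpatched at day 90.
print(f"Still open at day 90: {survival_at(curve, 90):.0%}")
```

Note that a naive average over only the closed issues would report roughly a two-week patch time for this data, while the survival curve shows an estimated 20% of vulnerabilities escaping a 90-day SLO.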

Bridge operational metrics and FAIR with FAIR-CAM

The most useful part of Laura’s talk for FAIR practitioners was the bridge she built between operational security data and quantified loss exposure.

On one side is vulnerability management: patch logs, discovery dates, remediation dates, open issues, and observed remediation behavior. On the other side is a FAIR analysis with loss scenarios, susceptibility, control effectiveness, and annualized loss exposure. The bridge between them, Laura argued, is FAIR-CAM, the controls analytics model. 

Using FAIR-CAM, patching can be treated as a form of variance management control. Its effectiveness depends on two measurable factors: how often assets become vulnerable, and how long they remain vulnerable. Laura mapped these directly to metrics practitioners can calculate from survival analysis: arrival rate and remediation duration.
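One simple way to see why those two factors drive efficacy is to estimate the fraction of the year an asset spends in a vulnerable state. The formula below is a back-of-envelope assumption for illustration, not the FAIR-CAM calculation from the talk, and the input values are hypothetical.

```python
def patching_efficacy(arrivals_per_year: float, mean_days_open: float) -> float:
    """Sketch: the asset is 'vulnerable' for roughly arrivals x duration
    days per year (capped at a full year); efficacy is the remainder."""
    fraction_vulnerable = min(arrivals_per_year * mean_days_open / 365.0, 1.0)
    return 1.0 - fraction_vulnerable

# Hypothetical: six new vulnerabilities a year. Halving remediation
# duration halves the time spent vulnerable, until saturation.
print(round(patching_efficacy(6, 30), 2))  # slower patching
print(round(patching_efficacy(6, 15), 2))  # faster patching
```

Both levers matter: cutting either the arrival rate (fewer vulnerable states) or the duration (shorter vulnerable states) improves the control's operational efficacy.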

That is the key move. Instead of estimating control performance through expert judgment alone, you can begin with real operating data and use it to inform control reliability and operational efficacy.

As Laura said, “the bridge exists. We have been able to build it.”


A practical example: halving patch time reduced risk by $2.2 million

Laura then walked through a simple but powerful example. In a hypothetical cloud-service-provider scenario, she held most control factors constant and changed just one variable: patching duration.

Reducing time to patch from 30 days to 15 days improved patching operational efficacy from roughly 41% to nearly 63%. That, in turn, reduced susceptibility from 8% to 3%. In the FAIR model, the result was a $2.2 million reduction in annualized loss exposure.
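The last step of that chain, from susceptibility to dollars, follows the standard FAIR decomposition: loss event frequency is threat event frequency times susceptibility, and annualized loss exposure is frequency times magnitude. The sketch below reproduces the $2.2 million figure, but the threat event frequency and loss magnitude are hypothetical values chosen to make the arithmetic line up; they are not inputs from the presentation.

```python
def annualized_loss_exposure(tef: float, susceptibility: float,
                             loss_magnitude: float) -> float:
    """FAIR: loss event frequency = TEF x susceptibility;
    ALE = loss event frequency x average loss magnitude."""
    return tef * susceptibility * loss_magnitude

TEF = 4.0              # threat events per year (hypothetical)
LOSS_MAGNITUDE = 11e6  # average loss per event, USD (hypothetical)

ale_30_day = annualized_loss_exposure(TEF, 0.08, LOSS_MAGNITUDE)  # 30-day patching
ale_15_day = annualized_loss_exposure(TEF, 0.03, LOSS_MAGNITUDE)  # 15-day patching
print(f"Risk reduction: ${ale_30_day - ale_15_day:,.0f}")  # $2,200,000
```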

That example is important not because every company will produce the same number, but because it shows the logic of the method. Faster patching does not just sound better operationally. It can be expressed as a specific reduction in financial risk.

And once you can do that, you can have a much better investment conversation.

Laura made that point directly: “We can quantify control improvements… and build a data-driven business case for that risk reduction.” If it costs $500,000 to achieve a $2.2 million reduction in exposure, that may be a very compelling investment. But if pushing patching time even lower produces only marginal additional benefit, the organization may be hitting diminishing returns.

That is exactly the kind of analysis FAIR practitioners are trying to bring into security decision-making.

The bigger takeaway: optimize for risk reduction, not just speed

Laura closed by stressing that this approach is not limited to patching. The same logic can be applied anywhere time-series control data exists: configuration management, credential rotation, detection, response, and more. Some controls will influence susceptibility, others loss magnitude, but the principle remains the same.

Her presentation offered something valuable to FAIR practitioners: a way to connect day-to-day operational metrics to strategic, defensible financial decisions. It helps answer not just whether a control should improve, but how much that improvement is worth.

For teams looking to add more value to the business, that is the real payoff. Faster patching is not the goal by itself. The goal is to understand where faster remediation buys meaningful risk reduction — and where it does not — so resources can be spent where they matter most.

Read more about the 2025 FAIR Conference
