FAIR Institute Blog

The Problem with Ransomware Risk Data

Jul 21, 2017 8:00:00 AM / by Jeff B. Copeland

Hats off to (FAIR Institute Board Member) Wade Baker and partner Jay Jacobs of Cyentia Institute for plowing through all the available public data sources on ransomware and writing two blog posts that are essential reading for anyone serious about estimating ransomware risk from a solid foundation.

Spoiler alert: If you’re looking for some handy, rule-of-thumb numbers, the data aren’t that solid. 

In Measuring Ransomware, Part 1: Payment Rate, Baker shoots for a “meta-analysis of ransomware risk factors” that would combine all sources into a single rate for how frequently victims pay to bring their data back from captivity. 

He found that survey-based studies and empirical studies landed on widely divergent payment rates: surveys put the rate around 40%, while empirical data showed just 1.65%.

Baker’s conclusion: Each type of study is probably flawed in its own way.

In Measuring Ransomware, Part 2: Ransom Demands, Jacobs had a bit more success with his meta-analysis. One insight: The commonly reported numbers for ransomware demands are actually a base, per-system figure that gets multiplied by the number of systems infected to produce the total loss.
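That insight is simple arithmetic, but worth making explicit. A minimal sketch, with hypothetical numbers (the post does not give specific figures):

```python
# Hypothetical illustration: reported ransom figures are per-system,
# so total loss scales with the number of infected machines.
per_system_demand = 300   # assumed reported ransom demand, in dollars
systems_infected = 150    # assumed scope of the outbreak
total_loss = per_system_demand * systems_infected
print(total_loss)  # 45000
```

The same headline "$300 ransom" can therefore represent losses two orders of magnitude apart, depending on how far the infection spread.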

Jacobs generated a “loss exceedance curve,” a tool from insurance modeling that communicates the probability of losses exceeding a given amount. The curve shows, for instance, a 10 percent chance that the total ransom payment from a single ransomware attack exceeds $25,000.
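A loss exceedance curve is straightforward to build by Monte Carlo simulation. The sketch below is not Jacobs’s model; it assumes a hypothetical lognormal per-system demand and a hypothetical infection count purely to show the mechanics:

```python
import random

random.seed(42)

def simulate_event_loss():
    """One simulated ransomware event: per-system demand x systems hit.
    Both distributions are assumptions for illustration only."""
    per_system = random.lognormvariate(6.0, 1.0)  # assumed demand distribution
    systems = random.randint(1, 20)               # assumed infection count
    return per_system * systems

# Simulate many events, then read off exceedance probabilities.
losses = sorted(simulate_event_loss() for _ in range(10_000))

def exceedance_probability(threshold):
    """Fraction of simulated events whose total loss exceeds `threshold`."""
    return sum(1 for x in losses if x > threshold) / len(losses)

for t in (1_000, 10_000, 25_000, 100_000):
    print(f"P(loss > ${t:,}) = {exceedance_probability(t):.1%}")
```

Plotting exceedance probability against the loss threshold produces the curve; by construction it is monotonically decreasing, which is what makes it readable as "an X% chance of losing more than $Y."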

Related:

Ransomware Risk: Setting Up a FAIR Analysis

 

Topics: Risk Management

Written by Jeff B. Copeland

Jeff is the Content Marketing Manager for RiskLens.
