# Jack Jones on the Problem with “Risk Scores”: Numbers without Quantification

You’ve probably seen the claims from cybersecurity software vendors who give you “vulnerability ratings”, “asset criticality”, or “risk” scores using algorithms and “machine learning” to help you prioritize your cybersecurity program activities.

It’s a worthy objective, and it all sounds authoritatively math-y. But watch out. As Jack Jones, creator of FAIR, the recognized standard model for cyber risk quantification, says, these scoring systems are very often built on a “house of cards.”

Jack has long argued the case against the misuse of numbers in cyber risk management. For starters, “Any time someone says there’s a risk ‘score,’ almost invariably it’s an ordinal value rather than a quantitative value,” Jack says. “It might be numeric but it’s not quantitative because there is no unit of measurement.” On the surface that may seem pedantic, but it matters a lot when you start applying math.


Ordinal numbers (e.g., “1, 2, 3,” etc.) are just labels for sorting; they have no quantitative value and could be replaced by “A, B, C” or “high, medium, low.” “The premise of using something relatively simple like ordinal scales to rate things is fine,” Jack says, “but how you go about it can be either solid and defensible or not. And most of what I see out there is not.

“Whatever you do with these ordinal values, it should not include multiplication or most other mathematical operations… You simply can’t multiply an ordinal asset criticality times an ordinal vulnerability and expect to get a result that stands up.”
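Jack’s point can be demonstrated in a few lines. Because ordinal labels only convey order, any order-preserving relabeling should produce the same priorities; multiplication breaks that guarantee. This sketch uses hypothetical asset names and rating scales of our own invention, not any vendor’s actual model:

```python
# Hypothetical illustration: multiplying ordinal ratings is fragile.
# Both scales preserve the same order (low < medium < high), so an
# ordinal method should rank assets identically under either one.
scale_a = {"low": 1, "medium": 2, "high": 3}
scale_b = {"low": 1, "medium": 2, "high": 10}  # same order, different spacing

# (criticality, vulnerability) ratings for two made-up assets
assets = {
    "asset_x": ("high", "low"),
    "asset_y": ("medium", "medium"),
}

def rank(scale):
    """Rank assets by criticality x vulnerability, highest first."""
    scores = {name: scale[c] * scale[v] for name, (c, v) in assets.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(scale_a))  # asset_y first: 2*2=4 beats 3*1=3
print(rank(scale_b))  # asset_x first: 10*1=10 beats 2*2=4
```

The ranking flips depending on which arbitrary numbers stand in for “high,” even though nothing about the underlying ratings changed, which is exactly why the product of two ordinal values doesn’t stand up.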

Even if a vendor’s solution isn’t itself applying math on ordinal values, very often one of the key inputs is CVSS scores. The Common Vulnerability Scoring System is a model that relies on cybersecurity experts to rate newly discovered vulnerabilities on a list of characteristics that are then aggregated into an overall numeric severity score. These experts are well-respected and are performing an important service. But they are assigning ordinal values, which the CVSS model then applies a lot of math to – math that simply can’t generate good results because the inputs are ordinal.

“Ordinal values are given to each CVSS input parameter, then they multiply these things, apply a weighted value which is also ordinal, so you get a compounding effect of errors,” Jack says. “If CVSS output is a cornerstone of your scoring model, anything else you layer on top of that is kind of fruitless.”

What’s unfortunate is that because CVSS has been established for so long, it’s assumed to be valid – or at least “good enough.” On the contrary, Jack has argued: “When scoring models are significantly flawed they hinder organizations from being able to identify and focus on the vulnerabilities that matter most, resulting in wasted effort and cost, and potentially allowing truly important weaknesses to remain unmitigated for longer than they should have.”

And that leads to a key point: “Where vendor claims go wrong is in characterizing their scoring models as being quantitative,” Jack says. “Quantitative cyber risk measurement requires inputs and outputs to be quantifiable values like frequencies, percentages, and monetary units.”
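To make the contrast concrete, here is a minimal sketch of what measurement with real units looks like. This is not the FAIR model itself; the function name, the uniform distributions, and the input ranges are all simplifying assumptions for illustration:

```python
# Minimal sketch (not the full FAIR model): quantitative risk works on
# values with real units -- events per year and dollars -- so the
# arithmetic performed on them is meaningful.
import random

random.seed(1)  # fixed seed so the example is reproducible

def simulate_annual_loss(freq_min, freq_max, loss_min, loss_max, trials=10_000):
    """Monte Carlo estimate of mean annualized loss exposure in dollars.

    Uniform ranges stand in for the calibrated distributions a real
    analysis would use.
    """
    totals = []
    for _ in range(trials):
        events = round(random.uniform(freq_min, freq_max))  # events/year
        totals.append(sum(random.uniform(loss_min, loss_max) for _ in range(events)))
    return sum(totals) / trials

# Hypothetical inputs: 0-4 loss events/year, $50k-$500k per event
avg = simulate_annual_loss(0, 4, 50_000, 500_000)
print(f"mean annualized loss exposure: ${avg:,.0f}")
```

Because the inputs carry units, the output does too: a dollar figure a decision-maker can weigh against the cost of mitigation, something no ordinal score can provide.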