In the first part of this blog post, I focused on calculating the impact of a cybersecurity breach in relation to a company’s size and industry. In part two, I present an approach to better understand how often a company will experience security breaches.
Jack Whitsitt has been a FAIR practitioner since 2016. He built the quantitative risk analysis program at Bank of America and is now doing the same at Datto (the services provider to MSPs).
In September 2020, our IBM X-Force IRIS security analysis group began tracking strange phishing attacks targeting suppliers of HVAC equipment and services.
The generally accepted model for risk is that it is a function of frequency (some call it probability or likelihood, i.e., how often the loss event will probably occur in a given time frame) and magnitude (how bad the event will probably be, i.e., the consequences).
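To make the frequency-and-magnitude idea concrete, here is a minimal Monte Carlo sketch in Python. The event rate, loss-size parameters, and distribution choices are illustrative assumptions of mine, not figures from the post or from FAIR guidance.

```python
import math
import random

# Minimal Monte Carlo sketch of risk as a function of frequency and magnitude.
# All parameters are illustrative assumptions:
# event frequency ~ Poisson(0.5 events/year), per-event loss ~ lognormal
# with an assumed median of roughly $150k.
SIMULATIONS = 100_000
MEAN_EVENTS_PER_YEAR = 0.5          # assumed loss event frequency
MAGNITUDE_MU = math.log(150_000)    # assumed median loss per event
MAGNITUDE_SIGMA = 0.8               # assumed spread of loss sizes

def draw_poisson(lam: float) -> int:
    """Draw an event count from a Poisson distribution (Knuth's method)."""
    threshold, count, product = math.exp(-lam), 0, 1.0
    while True:
        product *= random.random()
        if product <= threshold:
            return count
        count += 1

def simulate_year() -> float:
    """Simulate one year: draw an event count, then sum per-event losses."""
    events = draw_poisson(MEAN_EVENTS_PER_YEAR)
    return sum(random.lognormvariate(MAGNITUDE_MU, MAGNITUDE_SIGMA)
               for _ in range(events))

annual_losses = sorted(simulate_year() for _ in range(SIMULATIONS))
print(f"Mean annual loss exposure:   ${sum(annual_losses) / SIMULATIONS:,.0f}")
print(f"95th-percentile annual loss: ${annual_losses[int(0.95 * SIMULATIONS)]:,.0f}")
```

Running the simulation many times turns the two inputs into a loss exceedance view (mean and tail percentiles of annual loss), which is the kind of output a quantified risk scenario typically produces.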
Quantifying risk scenarios with quantitative analyses helps us understand the exposure tied to specific risks; however, building a portfolio of quantified risks to understand and manage a company’s overall risk landscape comes with additional challenges.
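As a rough illustration of what a portfolio view might look like, the sketch below sums simulated annual losses across a few hypothetical scenarios. The scenario names, loss ranges, and triangular distributions are assumptions chosen for brevity, not a FAIR-conformant model (a real analysis would model frequency and magnitude separately for each scenario).

```python
import random

# Hedged sketch: rolling several quantified scenarios up into a portfolio view.
# Scenario names and (min, max) annual-loss ranges are illustrative assumptions;
# each scenario is modeled as a simple triangular distribution for brevity.
SCENARIOS = {
    "ransomware outage":   (50_000, 2_000_000),
    "third-party breach":  (20_000, 1_000_000),
    "insider data misuse": (10_000, 500_000),
}
TRIALS = 50_000

portfolio_losses = []
for _ in range(TRIALS):
    # Sum one simulated annual loss per scenario to get a portfolio-level trial.
    total = sum(random.triangular(low, high) for low, high in SCENARIOS.values())
    portfolio_losses.append(total)

portfolio_losses.sort()
print(f"Portfolio mean annual loss: ${sum(portfolio_losses) / TRIALS:,.0f}")
print(f"Portfolio 95th percentile:  ${portfolio_losses[int(0.95 * TRIALS)]:,.0f}")
```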
Strange, unusual, media-worthy vulnerabilities and cyberattacks… they seem to pop up every few months or so and send us risk managers into a fire drill. The inevitable questions follow:
The recent SolarWinds and Microsoft security issues remind us of the importance of Third-Party Risk Management (“TPRM”). If your organization is using a one-scorecard-fits-all approach to TPRM, you may be wasting resources.
Every few months or so, we hear about a widespread vulnerability or cyberattack that makes its way to mainstream news. Some get snappy nicknames and their very own logos.
In this blog post, I will share my thoughts on why cyber risk is considered a board-level fiduciary responsibility and on the need for a globally sourced set of board-level cybersecurity best practices.
I wear my ‘FAIR™ evangelist’ badge proudly. I have had the opportunity to present quantitative risk analysis to a variety of audiences.