Keith Weinbaum stood the standard FAIR adoption model on its head: He introduced FAIR and cyber risk quantification to executives at Quicken Loans by applying it to risk scenarios they already understood from the mortgage business, then, with that acceptance in hand, moved on to using FAIR in enterprise risk management, and finally in cyber. Keith gave a presentation at the 2019 FAIR Conference, "Case Study: Scoping Enterprise Risk Assessments." “At a very high-level concept, the probable frequency and the probable magnitude of the loss [from FAIR] are not just an information security thing, that’s in anything,” Keith explains. Hear about his unusual FAIR journey in this video or read the transcript below.
Meet more members on the leading edge of risk management: Join the FAIR Institute (it's free).
What’s the appeal to you of FAIR?
The appeal to me is to communicate in business language, the language that the highest decision makers at my company talk in, which is dollars. A forecast of the future loss that we are exposed to, so that they can make more informed decisions on how we could go ahead and address that.
What did you find in trying to explain FAIR and promulgate it in your company?
It was interesting. It didn’t go so hot at first. People saw these numbers, these dollar amounts, and they were like, ‘Where the heck did this come from?’ Which I understand is a somewhat regular reaction in the industry. So, it took some time to get them into it. It took analyzing some scenarios that our executive audience was very familiar with.
In our particular case it’s the mortgage industry. So instead of focusing on analyzing scenarios that had something to do with information security, which was my domain at the time, we instead focused on stuff that had to do with the mortgage industry. They understood that domain of knowledge. Then we could overlay risk-related numbers on top of that, which they also understood – the language of dollars. That helped them get on board with it.
Explain this a little more. Were these cyber scenarios or not?
Initially, they were not cyber scenarios. They were mortgage-related scenarios. They were scenarios like: if we mess up the underwriting of a loan, are we going to have to repurchase that loan later on, incurring a significant amount of loss?
That’s fascinating. Did they have other models they were using that you were competing against, so to speak?
Not that I’m aware of. They probably did their best to do some forecasting, but they weren’t using some of the fancier techniques that I enjoy in FAIR, such as Monte Carlo simulations.
From that start, how did you migrate it into cyber in a way that made sense to them?
It actually took a while. Even though I was heading up the information security team at the time and going down the FAIR track, it just so happened that the company was looking to set up an enterprise risk management function. So that was the genesis. I raised my hand to kick off that function, to build it, just like I did with information security about 10 years prior.
So, we actually leveraged the fact that we had to build an enterprise risk management function in the company in order to wiggle our way in from a FAIR perspective. And then eventually, probably about a year later, we finally started to focus on cybersecurity risk.
What an interesting evolution. It’s kind of like the reverse of what everybody else is looking for. So how did you make FAIR work with ERM?
I had a few great people I was lucky enough to work with on this particular journey, and one of those people trained me on FAIR, so we were able to ask, ‘FAIR originally focused on cybersecurity; how can we apply the methodology more generally?’ At a very high-level concept, the probable frequency and the probable magnitude of the loss is not just an information security thing, that’s in anything.
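Keith doesn't walk through the math in the interview, but the idea he describes – combining probable frequency and probable magnitude via Monte Carlo simulation – can be sketched roughly as follows. This is a minimal illustration, not the calibrated Open FAIR method; the triangular distributions and the loan-repurchase numbers are purely illustrative assumptions.

```python
import random

def simulate_annual_loss(freq_min, freq_likely, freq_max,
                         mag_min, mag_likely, mag_max,
                         trials=10_000, seed=42):
    """Monte Carlo sketch of a FAIR-style analysis:
    annual loss = (number of loss events) x (dollar magnitude per event).
    Triangular distributions stand in for calibrated expert estimates."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # Probable frequency: how many loss events occur this year
        events = round(rng.triangular(freq_min, freq_max, freq_likely))
        # Probable magnitude: dollar loss incurred per event
        total = sum(rng.triangular(mag_min, mag_max, mag_likely)
                    for _ in range(events))
        losses.append(total)
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "median": losses[trials // 2],
        "p90": losses[int(trials * 0.9)],  # 90th-percentile annual loss
    }

# Hypothetical loan-repurchase scenario: 1-10 events/year,
# $50k-$500k loss each (numbers invented for illustration)
result = simulate_annual_loss(1, 4, 10, 50_000, 120_000, 500_000)
```

The output is a distribution of forecast annual loss in dollars rather than a red/yellow/green rating, which is what lets executives weigh it like any other business forecast.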
So, where are you looking to go with FAIR next?
We’re looking at getting closer to an automated way of producing estimates for the risk factors that tie into the risk scenarios we’re ultimately measuring risk on.
What does that mean exactly?
For example, with vulnerability management, to be able to automatically calculate the risk results to help the folks in IT prioritize what are truly the vulnerabilities that we need to address first out of a sea of many different vulnerabilities.
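The prioritization Keith describes can be sketched in a few lines: rank vulnerabilities by their forecast annualized loss (frequency times magnitude) instead of by a severity score alone. The vulnerability IDs and the estimates below are hypothetical, invented only to show the ranking mechanic.

```python
# Hypothetical vulnerability records; frequency and magnitude
# estimates are illustrative, not calibrated data.
vulns = [
    {"id": "CVE-A", "exploit_freq_per_yr": 0.5, "loss_if_exploited": 2_000_000},
    {"id": "CVE-B", "exploit_freq_per_yr": 6.0, "loss_if_exploited": 50_000},
    {"id": "CVE-C", "exploit_freq_per_yr": 0.1, "loss_if_exploited": 12_000_000},
]

# Annualized loss exposure = probable frequency x probable magnitude
for v in vulns:
    v["ale"] = v["exploit_freq_per_yr"] * v["loss_if_exploited"]

# Address the highest forecast dollar loss first,
# not simply the highest severity score
prioritized = sorted(vulns, key=lambda v: v["ale"], reverse=True)
```

Here the rarely exploited but very expensive CVE-C outranks the frequently exploited but cheap CVE-B, which is exactly the reordering a pure severity-count view would miss.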