The FAIR Institute Blog

(Video) Meet a Member: Daniel Davis, Security Analyst at Lyft

Written by Luke Bader | Nov 6, 2019 3:08:57 PM

Daniel Davis, Security Analyst at Lyft in San Francisco, came to FAIR from an unusual, non-IT perspective: safety engineering. He first came to Lyft to work on safety for autonomous cars. “The way that FAIR defines risk as threat, asset and impact…is very similar to the way that safety engineering has treated hazards for years,” he says. “…So to see security done in a parallel way made a lot of sense to me.” Daniel came to FAIRCON19 to prepare to launch a FAIR information risk program at Lyft. His impression of the conference: “Everybody I’ve talked to is looking at the problem really the same way I am. And it seems everyone also thinks this is a viable solution that will give them a substantial ROI.”

Meet Daniel in this video or read the transcript below.

TRANSCRIPT

What brought you to the FAIR Conference? 

I’ve been searching for years for a way to quantitatively calculate risk. A friend of mine recently happened to have me read Doug Hubbard’s book, and the very next book I got turned on to was the FAIR book. It culminated in me realizing this actually can be done, and it can be done in a way that companies are adopting. It was really a eureka moment for me.

What was it about FAIR that had a particular appeal for you? 

The common taxonomy for risk definition was a big appeal for me. There are so many different ways you can talk about risk. Also, I have a background in safety engineering, and the way that FAIR defines risk as threat, asset and impact as a loss event is very similar to the way that safety engineering has treated hazards for years. I’m very familiar with that, and with the maturity of safety engineering, so to see security done in a parallel way made a lot of sense to me.

Interesting. I’d never heard that comparison made before. 

Actually, one reason I came to Lyft was that I was working on the concept of safety and security for autonomous vehicles. That was a goal, and it still may be a goal long-term, but autonomous vehicles are a different discussion.

Where are you at in your own FAIR education and evolution? 

Rapid acceleration. So, I started reading Doug Hubbard’s books last month, I suppose. Again, this is something I’ve been researching for many years, and finally I see it all come together. It makes so much sense to me. My goal is to rapidly accelerate and try to implement this through the end of this year.

Have you started at all yet? 

No, that’s when I get home on Friday. 

What have you learned at the conference? What are the highlights?

There’ve been some really great workshops that were eye-opening in going through the traditional qualitative methods, which I’m very familiar with from my past life doing security and safety work for the Air Force. I’m very familiar with the discussions of what’s high, what’s very high, what’s the difference. So, we went through a very reasonable exercise of two scenarios and came to reasonable conclusions, then actually did the same process using FAIR, and realized that the actual annualized loss exposure is dramatically different. And not only that, it tells you what is actually generating that exposure. Is it the magnitude, is it the frequency? That drives decision making. That drives “this is where we spend our money, this is how we mitigate this risk.” Or maybe we say this is such an infrequent event that we’re better off just trying to insure ourselves.
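For readers new to the calculation Daniel describes, here is a minimal, illustrative sketch of estimating annualized loss exposure (ALE) in the FAIR style: combining loss event frequency with loss magnitude in a Monte Carlo simulation. All of the inputs and distribution choices below are hypothetical assumptions for illustration, not figures from the workshop, and a full FAIR analysis decomposes these factors much further.

    import random

    # Hypothetical calibrated estimates for a single risk scenario.
    lef_min, lef_max = 0.1, 0.5                             # loss event frequency (events/year)
    lm_low, lm_mode, lm_high = 50_000, 200_000, 1_000_000   # loss magnitude ($ per event)

    trials = 100_000
    total = 0.0
    for _ in range(trials):
        lef = random.uniform(lef_min, lef_max)              # sample a frequency
        lm = random.triangular(lm_low, lm_high, lm_mode)    # sample a per-event loss
        total += lef * lm                                   # exposure for this trial

    ale = total / trials
    print(f"Estimated annualized loss exposure: ${ale:,.0f}")

Separating frequency from magnitude is what makes the result actionable in the way Daniel describes: an exposure dominated by magnitude argues for mitigation and controls, while a rare but costly event may argue for insurance.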

What kind of feeling do you get from the FAIR community?

It seems like everybody has the same problem. They’re trying to figure out how to justify their spend and how to mitigate their risk in a cost-effective way. Everybody I’ve talked to is looking at the problem really the same way I am. And it seems everyone also thinks this is a viable solution that will give them a substantial ROI.

Thank you very much.

Related: 

Meet more FAIR Institute Members

More on the 2019 FAIR Conference