Is the failure of risk management down to a lack of quantitative methods?

Kin Ly
3 min read
Sep 30, 2020
Doug Hubbard, author of the renowned book ‘The Failure of Risk Management’, joins a group of more than 40 risk managers from around the world to discuss this very question. Watch an excerpt from the meeting below.

Here’s a list of significant risk events that took place over the last decade: which would you say was the single biggest risk?

  1. Fukushima Daiichi nuclear disaster (2011)
  2. Deepwater Horizon offshore oil spill (2010)
  3. Flint, Michigan water crisis (2014 to present)
  4. Samsung Galaxy Note 7 (2016)
  5. Multiple large data breaches (Equifax, Anthem, Target)
  6. Amtrak derailments/collisions (2018)
  7. California utility PG&E wildfires (2018)
  8. COVID-19 (2020)

The answer is pretty obvious: COVID-19 surely tops the list. 

But there’s another risk that should be deemed ‘the single biggest risk’, according to Doug Hubbard, author of eight books on quantitative risk methods.

Here’s how he put it to a virtual room of senior risk managers from across the world in our most recent New Thinking meeting (you can watch an excerpt from our recording of the session below). 

“But the question would be, out of all of these things that have occurred [in reference to the above list], what is our single biggest risk? I think the answer should be the same in any industry, any profession, any region of the world.

“Your single biggest risk is how you measure risk.”

To many in the room, this was a bold assertion. But here’s how Hubbard explained his rationale: these eight risk events were not unpredictable. They were not black swan events. With the right risk assessment methods, most, if not all, of them could have been identified.

So what are these methods? Quantitative risk assessments, argues Hubbard.
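To make that concrete, here is a minimal sketch of one common form of quantitative risk assessment: a Monte Carlo simulation that turns per-event probabilities and loss ranges into an estimated chance of exceeding a loss threshold. The events, probabilities and dollar figures below are hypothetical placeholders, not numbers from Hubbard’s presentation.

```python
import random

def simulate_annual_loss(risks):
    """Sum the losses from every risk event that occurs in one simulated year."""
    total = 0.0
    for p_event, low, high in risks:
        if random.random() < p_event:
            # Deliberately simple: draw the loss uniformly between the low
            # and high estimates (a real model would fit a distribution).
            total += random.uniform(low, high)
    return total

# (annual probability, low loss, high loss) in $M -- all hypothetical figures
risks = [
    (0.05, 1.0, 20.0),  # e.g. a major data breach
    (0.10, 0.5, 5.0),   # e.g. an extended system outage
    (0.02, 5.0, 50.0),  # e.g. a regulatory fine
]

trials = 100_000
losses = [simulate_annual_loss(risks) for _ in range(trials)]

threshold = 10.0  # risk tolerance in $M
p_exceed = sum(loss > threshold for loss in losses) / trials
print(f"P(annual loss > ${threshold:.0f}M) = {p_exceed:.1%}")
```

Even a toy model like this produces an output you can check against reality and refine, which is exactly what a qualitative heat map cannot do.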

The argument for quantitative approaches

Drawing on academic research, he makes a compelling case. “We have to look at broad research on lots of individual parts of methodologies,” he says. 

He recommends stress-testing any risk methodology based on the following: 

  • How well do people estimate things subjectively? 
  • Does using ordinal scales or verbal qualitative labels improve estimates or dilute them? 
  • How do people use quantitative estimates? 
  • Where there are uncertainties, which methods measurably outperform others? 
  • Which subjective estimation methods measurably outperform other subjective estimation methods? 

So, what does the research tell us?

“There's quite a lot of research that shows that even relatively naïve statistical models still measurably improve on unaided human expertise,” he says, citing the following studies: 

  • Paul Meehl assessed 150 studies comparing experts to statistical models in many fields (sports, prognosis of liver disease, etc.). Of those studies, only six concluded that humans performed just as well or slightly better than the statistical models. 
  • Philip Tetlock tracked over 82,000 forecasts from 284 experts in a 20-year study covering politics, economics, war, technology trends and more. He concluded that it was impossible to find any domain in which humans clearly outperformed extrapolation algorithms, still less sophisticated statistical ones. 

So why don’t we use more quantitative methods?

Despite this evidence, the take-up of quantitative risk methods among the global risk community remains low. 

Having surveyed how risk managers conduct risk assessments in cyber security, ERM and project risk management, Hubbard found that probabilistic methods were in the minority: most were some form of qualitative risk assessment. 

The four common objections are:

  1. We don’t have sufficient data
  2. Risk management is too complex to model
  3. Each situation is too unique and complex to apply scientific analysis of historical data
  4. How do you know you have all the variables?

“The implied (and unjustified) conclusion from each of these is, ‘Therefore we are better off relying on our experience’,” he says. 

In fact, these were the very objections that our Members raised in an earlier virtual meeting on the pros and cons of quantitative methods.

It was these objections – and more – that Members unpicked in the Q&A session with Doug after his presentation. 

Members can watch the full session here.

The full recording covers:

  • Do scores and scales work? (A review of the most popular current method)
  • The analysis placebo (Confidence in decision-making methods is detached from performance)
  • Experts vs. algorithms (What the research says about statistical methods vs subject matter experts)
  • The method of measurement (Monte Carlo simulation: how to model uncertainty in decisions)
  • So why don’t we use more quantitative methods? (Commonly stated reasons for not using quantitative methods)
  • Irrational bias against algorithms (A double standard)
  • The method of measurement (The rule of succession; Bayesian methods – see the sketch after this list)
  • Dos and don'ts
  • Questions to ask about risk management in general
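One of those recording topics is worth a quick illustration here, because it speaks directly to the first objection above (“we don’t have sufficient data”). Laplace’s rule of succession estimates the probability of an event in the next period as (s + 1) / (n + 2), where s is the number of periods out of n in which it occurred – so even a short, event-free history still yields a usable, non-zero estimate. The figures below are hypothetical, and the full session may treat the topic differently; this is just a minimal sketch.

```python
def rule_of_succession(occurrences: int, periods: int) -> float:
    """Laplace's rule of succession: P(event next period) = (s + 1) / (n + 2)."""
    return (occurrences + 1) / (periods + 2)

# Hypothetical histories:
print(f"{rule_of_succession(0, 8):.1%}")  # no major outage in 8 years -> 10.0%
print(f"{rule_of_succession(2, 8):.1%}")  # two breaches in 8 years    -> 30.0%
```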

Are you an in-house risk manager who could benefit from access to a global network of risk leaders? Talk to us about becoming a Member today.
