
Over the past few weeks I have (among other things) read Doug Hubbard’s book “The Failure of Risk Management”. The title and description sounded promising, but I think the book has only limited value for practical safety work (even though it does at times mention safety issues). Still, thinking about risk is ‘core business’ for many Safety Professionals, so reading and thinking about risk in its various forms (also from other sectors, like finance) can provide learning points.

It quickly turned out that the author and I are on different planes (planets) philosophically, but that might make reading the book even more interesting, or so one would hope. As a whole, it has been a bit of a rollercoaster between sheer boredom, annoyance and interest. Regrettably there are also parts that really make me cringe. Let’s look at one of these occasions.

A Cringe Quote

Early on in the book, the author argues that virtually all of the risk analysis methods used in business and government are flawed in important ways and most of them are no better than astrology. And so he comes to a ‘profound’ conclusion:

“Obviously, if risks are not properly analysed, then they can’t be properly managed”.

Further on in the book he more or less repeats his claim:

“Most of the key problems with risk management are focused primarily on the problems with risk analysis. That is, if we only knew how to analyse risks better we would be better at managing them”.

Hmm, let’s think a bit about this. Can this really be true?

Shit In = Shit Out?

In a way, it seems to make sense. After all, the First Law of Quality Management says “Shit In = Shit Out”. So, feed a bad risk analysis into a system and you will get bad risk management. Common sense, end of story.

But wait, there is more to say!

Firstly, the above line of reasoning assumes that risk analysis is the only route to risk management. I would argue that there are more ways to get to risk management than through analysis alone. We will get back to this in a minute.

Secondly, the above line of reasoning also assumes that risk analysis is the direct (and only) input to risk management. There are, however, a couple of steps between analysis and action, one of them being the decision about what to do. You can do a perfect analysis and then decide to do something entirely ineffective, for example because you have incentives to do so, or because you lack the means to initiate successful actions. Even if you make a good decision based on your perfect assessment, it is entirely possible that your management is unsuccessful because external influences or unforeseen circumstances interfere with the results. To mention just some of the problems…

Decomposing a statement, phase 1

But let’s decompose the statement a little further. What does “if risks are not properly analysed, then they can’t be properly managed” actually say? The key points to look at are the terms ‘analysed’ and ‘properly’, and the connection between analysis and management.

Point one: what do we mean by analyse? Looking up ‘analysis’ tells us that it is “the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it” (Wikipedia) and “a careful study of something to learn about its parts, what they do, and how they are related to each other” (Merriam-Webster).

It is clear that this is some kind of formal and structured process, but according to the definitions above it does not necessarily involve numbers. Hubbard does (grudgingly?) agree on this part, because he acknowledges at one point that qualitative approaches are also “an attempt to analyse risk”.

Decomposing a statement, phase 2

Point two: what does ‘properly’ mean? I presume that I would define it somewhat differently than Hubbard would. Hubbard is rather clear that the only acceptable form of risk analysis involves numbers and probabilities. Everything else is “soft”. I, on the other hand, would argue that the automatic, qualitative and even rather subconscious response before crossing a street is a good example of a proper analysis (although assessment may be the better word in that case) without any calculations whatsoever. And many practical assessments of risk are made every day, just like that one.

When you ask me what a proper analysis is, or what proper management means, I would say: something that is effective and fit for purpose. Others, like Hubbard, however, hinge very much on quantitative and (semi-)scientific rigour. Now, I am very much pro-science, but we should be aware that more numerical or more statistical is not always the same as more scientific or better science. It only looks that way.

Nor does more numerical or more statistical analysis always lead to better decisions. And while we may be intuitively bad at dealing with probabilities, models do not necessarily improve our decisions. Research has even shown that under some circumstances (especially with great uncertainty and little knowledge) less information can actually be an advantage.
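To make that concrete, here is a minimal sketch in Python of the kind of quantitative, simulation-based estimate that advocates of purely numerical risk analysis tend to favour. The numbers are purely assumed for the sake of illustration; they are not taken from Hubbard or from any real assessment:

```python
import random

def expected_annual_loss(p_event, loss_low, loss_high, runs=100_000):
    """Toy Monte Carlo estimate of expected annual loss.

    Every input here is an assumption made up for this illustration:
    p_event is a guessed probability that the event occurs in a given
    year, and the loss (if it occurs) is drawn uniformly between
    loss_low and loss_high.
    """
    total = 0.0
    for _ in range(runs):
        if random.random() < p_event:  # does the event happen this simulated year?
            total += random.uniform(loss_low, loss_high)
    return total / runs

# Two sets of input assumptions, each of which could be argued for in a meeting:
print(f"{expected_annual_loss(0.01, 100_000, 1_000_000):,.0f}")  # roughly 5,500
print(f"{expected_annual_loss(0.05, 100_000, 1_000_000):,.0f}")  # roughly 27,500
```

The outputs look reassuringly precise, yet the factor-of-five difference between them comes entirely from the assumed event probability, which nobody has measured. The precise-looking figures lend an air of rigour to what is, underneath, still a judgement call about the inputs.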

Decomposing a statement, phase 3

Point three: what is the connection between analysis and management? We already touched upon this subject above, and I have come across examples where the right things were done in spite of the risk assessment that preceded them. Granted, the analysis/assessment was in most cases not the best; in some cases the risk management was even initiated without any analysis whatsoever.

I am not familiar with any research about the connection or correlation between the quality of risk assessments and the quality of the measures taken, or of safety management in general (let alone the level of safety). There is, of course, the problem of how to measure these things. We generally assume that good risk assessments contribute to good decisions and good safety management. Not least because a good risk assessment offers the opportunity for a systematic and documented review of possible problems (risks), instead of leaving it to coincidence which persons, which competence, which authority or which political agenda happened to be part of the decision process. On the other hand, organisations that are good at safety management are probably also doing better risk assessments and using them better, so causality is bound to be fuzzy.

Finally, I mentioned crossing a street as an example above. Many decisions about risk are actually based on perception, with no analysis at all. These decisions are not necessarily better or worse; they are different. The judgement about better or worse is only in the eye of the beholder. Many people do not want a nuclear power plant in their neighbourhood, based on their perception of the risks. Analysts may think this is crazy, because they can calculate a ridiculously low risk. Who can say what is right from a risk point of view?

Sorry…

Summing up, I am afraid there is little that supports the rather strong quote from the book. You Cannot Manage What You Haven’t Analysed? It sounds superficially okay, but if you really think about it, it is more of a smooth sales pitch from a consultant. It is also a good candidate for another Safety Myth.

 

Also published on Linkedin.