
If you set out to find Safety Myths and Misunderstandings, incidents and causes are a rich field. Quite a few from that area can be found in my book, and here is another illustration that I came across earlier this week.

There was this survey, intended to test some hypotheses about the human factor in safety and to collect data for a ‘scientific’ article about these hypotheses. Information was gathered through an online survey tool that asked safety professionals about the most recent incidents they had been involved in as investigators.

Participants were asked to what degree four factors (‘causes’) had contributed to the incident, assigning a score between 0 and 100% to each factor, with the four scores summing to 100%:

……% Technical

……% System

……% Culture

……% Behaviour

At this point I stopped reading and decided not to participate. I wondered whether I should ask the person behind the survey (of whom I have a rather high opinion, so I did engage in a discussion - results pending) to stop this nonsense, because it is wrong on so many levels. Just a few of the issues:

  • Obviously, this is an oversimplification. Why these four factors? What model are they based on? What about other factors, for example external ones such as forces of nature?
  • What level of analysis of the incidents are we talking about anyway? Behaviour is usually far downstream (close to the incident), while culture and system tend to be more upstream (at a greater distance from the incident).
  • How do you ensure that this is going to be (somewhat) representative research? How do you ensure the correct scope and applicability of the findings? How do you ensure that the incidents in the survey are (somewhat) representative? Is it okay to include complex cases and more trivial OHS incidents in the same survey? And how would you know? The only indication one can get is from the first questions, which differentiate between near misses and accidents and ask for the outcome of the incident. But, mind you: outcome says nothing (!!) about a case’s complexity.
  • How are you going to ensure interrater reliability, such that each participant answers more or less from the same frame of mind and with the same definitions? Besides, the definitions of some terms (especially culture) were rather questionable.
  • What is the quality of the investigations this survey is going to be based on? Typically, many investigations stop when they come across a convenient cause (often behaviour) and don’t look further.
  • What is the competence of the people who are going to answer? The survey was openly available on the web, so anyone could participate.
  • How are participants supposed to judge the contribution of a causal factor? And how would you know their method? Will they be counting how often a factor is mentioned (which is useless and arbitrary)? Will they assign a measure of importance? If so, does that mean that something at a higher organisational level should carry more weight? Or the reverse: should something closer to the accident carry more weight? What about underlying factors that affect more than one other factor? And so on. (The sketch after this list illustrates how much the outcome depends on the counting rule.)
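To make that last point concrete, here is a minimal sketch (in Python, with an entirely invented causal analysis - nothing here comes from the actual survey) of how three equally defensible scoring rules turn the same findings into three very different percentage splits over the four categories:

```python
# Hypothetical causal analysis of one incident. Each contributing factor
# is tagged with one of the survey's four categories and a "depth" in the
# causal chain (1 = closest to the incident, higher = more upstream).
factors = [
    ("operator skipped lockout step",  "Behaviour", 1),
    ("worn interlock switch",          "Technical", 1),
    ("no functioning permit system",   "System",    2),
    ("production pressure tolerated",  "Culture",   3),
    ("audits never checked permits",   "System",    3),
]

CATEGORIES = ["Technical", "System", "Culture", "Behaviour"]

def allocate(weight):
    """Turn per-factor weights into a percentage split over the categories."""
    totals = {c: 0.0 for c in CATEGORIES}
    for _, category, depth in factors:
        totals[category] += weight(depth)
    grand = sum(totals.values())
    return {c: round(100 * v / grand, 1) for c, v in totals.items()}

# Three equally "defensible" rules, three different answers:
print(allocate(lambda d: 1))      # count how often a category is mentioned
print(allocate(lambda d: 1 / d))  # weight proximity to the incident
print(allocate(lambda d: d))      # weight upstream/organisational factors
```

Counting mentions makes System the biggest contributor (40%); weighting by proximity to the incident pushes Behaviour and Technical to the top (about 32% each); weighting upstream factors puts System at 50%. Same incident, same findings, three different ‘answers’ - and the survey has no way of knowing which rule each participant used.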

This only scratches the surface of my concerns. Online newspapers run opinion polls at this level, and the quality that comes out of this survey will probably not be any better. So please, don’t come and call it scientific research.

Before even the first word of the report is written, I suspect that the results are going to be an extremely arbitrary opinion poll at best. More likely, they are going to be extremely questionable. It might have been better to take a generally “accepted” number with a more or less known bias (like Heinrich’s infamous 88%) than to come up with a shaky survey that will only lead to even more confusion - even if it would “find” the opposite of what Heinrich wrote many decades ago.

It is important to look into the role of human factors in safety and incidents. But please, let’s do it properly and not by means of some make-believe research. The First Law of Quality Assurance applies in full: shit in = shit out. Things like this will harm safety, firstly because of the confusion they may create, and secondly because they make it look like safety people can’t get basic science right.

--- --- ---

Having said that… of course, there is also a different possibility: the survey is not about incident investigation at all, but about the biases and gullibility of safety professionals… I hope it is the latter (even then some of my concerns apply), but fear it is the former.

--- --- ---

If this has tickled your interest in Safety Myths and Misconceptions, then make sure to check out the book Safety Myth 101. In the book, you can find 123 (and then some) Safety Myths, including explorations of their causes and suggestions for an alternative, better approach.

Among others, the following Myths are relevant to the discussion above: #20 (understanding research), #83 (behaviour and human error), #91 (investigation and stop rules) and, very much so, #100 (counting causes). Enjoy!

Also published on LinkedIn as a follow-up to the post Safety Science? If Only!