If you set out to find Safety Myths and Misunderstandings, incidents and causes are a rich field. Quite a few from that area can be found in my book, and here is another illustration that I came across earlier this week.
There was this survey, intended to test some hypotheses about the human factor in safety and collect data for a 'scientific' article about these hypotheses. Information was gathered through an online survey tool that asked safety professionals about the most recent incidents they had been involved in as investigators.
Participants were asked to what degree four factors (‘causes’) had contributed to the incident. Participants had to assign a score between 0 and 100% to each factor, with a total for the four factors of 100%.
At this point I stopped reading and decided not to participate. I wondered if I should ask the person behind the survey (of whom I had a rather high opinion, so I did engage in a discussion - results pending) to stop this nonsense, because it is wrong on so many levels. Just a few issues:
- Obviously, this is an oversimplification. Why these four factors? What model are these four factors based on? What about other factors, for example external factors, like forces of nature?
- What level of analysis of the incidents are we talking about anyway? Behaviour is usually very downstream (close to the incident) while culture and system tend to be more upstream (greater distance to incident).
- How do you ensure that this is going to be a (somewhat) representative piece of research? How do you ensure correct scope and applicability of the findings? How do you ensure that the incidents in the survey are (somewhat) representative? Is it okay to include complex cases and more trivial OHS incidents in the same survey? And how would you know? The only indication one can get comes from the first questions, which differentiate between near misses and accidents and ask for the outcome of the incident. But, mind you: outcome says nothing (!!) about a case's complexity.
- How are you going to ensure interrater reliability such that each participant answers more or less from the same frame of mind and definitions? Besides, the definition of some terms (especially culture) was rather questionable.
- What is the quality of the investigations this survey is going to be based on? Typically many investigations stop when they come across a convenient cause (often behaviour) and don’t look further.
- What is the competence of the people who are going to answer? The survey was open on the www, so anyone could participate.
- How are participants supposed to judge the contribution of a causal factor? And how would you know their method? Will they be counting how often a factor is mentioned? (which is useless and arbitrary) Will they do this by assigning a measure of importance? If so, does that mean that something on a higher organisational level should have more weight? Or the reverse, should something closer to the accident have more weight? What about underlying factors that affect more than one other factor? And so on.
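The interrater point above can be made concrete. Below is a minimal sketch (entirely invented data, two hypothetical raters) of Cohen's kappa, a standard measure of agreement beyond chance between raters classifying the same items. A survey that pools causal judgements from uncalibrated, anonymous respondents has, in effect, an unknown - and probably low - kappa:

```python
# Sketch: why interrater reliability matters before pooling survey answers.
# Two hypothetical investigators classify the same ten incidents by dominant
# factor; Cohen's kappa measures their agreement beyond chance.
# All data below is invented for illustration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters labelled at random,
    # each with their own marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

rater_a = ["behaviour", "culture", "behaviour", "system", "behaviour",
           "culture", "system", "behaviour", "behaviour", "culture"]
rater_b = ["behaviour", "behaviour", "behaviour", "system", "culture",
           "culture", "system", "behaviour", "culture", "behaviour"]

print(round(cohens_kappa(rater_a, rater_b), 2))  # -> 0.35 for this invented pair
```

These invented raters agree on 6 of 10 incidents, yet land at a kappa of only about 0,35 - generally read as weak agreement. Averaging percentages from respondents who agree this poorly mostly measures the raters, not the incidents.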
This only scratches the surface of my concerns. Online newspapers do opinion polls on this level, and the quality that comes out of this survey will probably not be any better. So please, don't come and call it scientific research.
Before even the first word of the report is written, I suspect that the results are going to be an extremely arbitrary opinion poll at best. Most likely, however, they are going to be extremely questionable. It might have been better to take a generally 'accepted' number with a more or less known bias (like Heinrich's infamous 88%) than to come up with a shaky survey that will only lead to even more confusion - even if it would 'find' the opposite of what Heinrich wrote many decades ago.
It is important to look into the role of human factors in safety and incidents. But please, let's do it properly and not by means of some make-believe research. The First Law of Quality Assurance applies in full: Shit in = Shit out. Things like this will do harm to safety, firstly because of the confusion they may create, and secondly because it looks like safety people can't get basic science right.
--- --- ---
Having said that… Of course, there is also a different possibility: the mentioned survey is not about incident investigation at all, but about the biases and gullibility of safety professionals… I hope it is the latter (and even then some of my concerns apply), but fear it is the former.
--- --- ---
If this has tickled your interest in Safety Myths and Misconceptions, then make sure to check out the book Safety Myth 101. In the book, you can find 123 (and then some) Safety Myths, including explorations of their causes and suggestions for an alternative, better approach.
Among others the following Myths are relevant to the discussion above: #20 (understanding research), #83 (behaviour and human error), #91 (investigation and stop rules) and very much #100 (counting causes). Enjoy!
Stian Antonsen, from SINTEF and NTNU, gave an interesting presentation at our HSE Seminar at Hell (which is a place near Trondheim, in case someone wonders) in May 2016. Here is a brief overview of his presentation, in which he discussed safety culture from a general, theoretical point of view with many examples from practice and from accidents.
Since the 1980s, culture has been seen as one way to 'excellence', and, looking at it from the other side, as a cause of accidents. This has led to a kind of parable: the easy explanation that things went wrong because of the culture.
The Chernobyl accident was the first where culture was pointed out as a contributing causal factor. The culture there was characterised by, among other things, the mission to produce electricity, no matter what. It was also difficult to question the decisions of a superior. Several accidents in the years that followed saw culture mentioned as a causal factor: Challenger, Clapham Junction and Piper Alpha. The report on the latter even chose to express its view on the culture through normative language.
In Norway the term HMS kultur (HSE culture) has even found its way into legislation for the oil and gas sector. §15 of the Rammeforskriften says that (my approximate translation):
A good culture for health, environment and safety, covering all phases and areas of the activity, shall be promoted through continuous work to reduce risk and improve health, environment and safety.
This can be problematic, because how can one enforce and comply with something as ‘vague’ as culture? (the regulator has published a guidance document on the subject that can be downloaded for free).
It’s organisational culture one works on, anyway. There is no such thing as safety culture - that is merely a sticker we stick on elements of organisational culture related to safety.
What is culture? One often used simplified definition says that culture is "the way things are done around here" which points towards some important elements of culture. There is a practice (‘the way things’), behaviour (‘are done’) and it happens within a group (‘around here’). It’s interesting that behaviour is both a cause and a consequence of culture. Experiences are important - they affect the practice that evolves.
Culture is something that is found between the lines and a very powerful social mechanism. Culture is a frame of reference, it’s like a pair of glasses that one sees through and that therefore affects how you look at things.
Some seem to think that culture and attitudes are almost synonymous. But while 'attitudes' are found in the heads of people, 'culture' is something that happens between people. Culture can get us to abandon our own attitudes… (see for example the Milgram experiments). Often one sees that so-called culture improvement actions are directed at attitudes through awareness campaigns and the like. Attitudes are important, but the possibility of affecting them through campaigns, and the effect of such campaigns, is very limited. One should rather spend the effort on other actions.
Culture and Power is a subject that is relatively little discussed. Perrow has not discussed culture explicitly in his work because he found that when you look into accidents you will usually find pressure connected to power in the causal chains (e.g. Challenger accident). The dynamics of power are important for culture(s), in a group and between groups.
Can culture be used as a tool for change? There are "culture optimists" around who actually think so. Stian is very sceptical about this. As he said: it's not possible to turn culture a bit up or down… Quick-fix management literature gives this impression, which often borders on brainwashing. Where does the line run between organisational development and manipulation? It's as if some organisations move the border from controlling behaviour to wanting to control "hearts and minds".
There are some serious limitations with culture as a tool for change:
- Organisations will include several different cultural units.
- Culture is a side-effect of social interaction. Culture changes through interaction and NOT through attitude campaigns, flaming speeches and noble visions. These can give short-lived positive feelings, but there is always a Monday…
- Culture is not something that can be managed. But one can affect culture's "conditions for growth".
- You cannot decide beforehand what culture should look like. The big One Size Fits All programs have only a small likelihood of creating real change. Rather use resources for many small drops close to people's everyday work over a longer period instead of one HUGE program. (And don't let consultants run the process either; do things yourself.) Culture changes are often set in motion Top Down, but in general these things work Bottom Up.
The objective should not be to create a common culture. There will always be different cultures, and the differences are actually necessary. An objective should rather be to facilitate a common language, communication and understanding between groups. And it's essential that what is said and what is done are aligned: Walk The Talk.
Stian has written a very fine book about safety culture: Safety Culture: Theory, Method and Improvement.
Culture is one of the most maligned words these days, in management speak in general, but in the safety world in particular, I think. When a word is mentioned so often, by so many people, and especially as the problem or solution for almost everything, your buzzword radar should go off, and the wise thing to do is adopt some healthy scepticism.
The other day, I read a new article from Safety Science by Sidney Dekker and Hugh Breakey about Just Culture. An interesting paper, about improving safety by achieving substantive, procedural and restorative justice. You should be able to download it from Science Direct (http://dx.doi.org/10.1016/j.ssci.2016.01.018).
A very interesting comment is ‘hidden’ in the footnotes of the paper (interesting comments often are - I know many people ignore them, but I tend to follow them up, because many worthwhile side-tracks are usually found in them):
“Whereas social science has gradually abandoned culture as a prime-moving mechanism of social life, safety science has embraced an almost nineteenth-century certainty about the importance of culture to the social and organizational order (Guldenmund, 2000; Myers et al., 2014). Safety science tends to follow the functionalist tradition of management science and organizational psychology, where culture is seen as something that an organization has – a modifiable or exchangeable possession or property which can be mapped with quantifiable data gathered through e.g., surveys.”
There are several interesting comments in this footnote. Let’s highlight some.
What struck me is that some sciences appear to have discarded the concept of culture as a driver, while in the safety world it has been embraced as a solid fact. Does this mean that culture is very much One Big Myth? Or at least that the influence of culture maybe isn't as big as we tend to assume? Interesting, and worth exploring at a later point in time. For now I don't want to abandon the concept entirely, because it does have a strong appeal, and I really need more information than just a mention in a footnote. But at least this should be a hint to handle the term a bit more carefully.
Whether culture-as-a-mover is a Myth or not, there is more than enough Culturebabble going around. As the authors correctly observe, the predominant view on culture within safety is a functionalist, normative one. Culture is generally seen as something that can be measured and managed, like other aspects of the business.
Just one random example, picked from many possible options, is the summary of a safety student's master thesis that I came across a month or two ago. The student had looked into the implementation of a Safety Program (note the capitals; of course there was a Bradley curve in the thesis, as well as a reference to the corporate goal of reaching a World Class Safety Level of Zero Accidents) at a chemical plant. The average score in a perception review had gone from 3,5 to 3,8. Without explaining what that meant or what the context was, this was seen as proof that the safety culture had improved, thanks to the Program.
This is not a stand-alone example. Alas, it is a rather common modus operandi in the safety world. And while I find this approach highly questionable, I do want to note that even these applications can have positive effects and drive some improvement. But they also carry risks, because positive results from such a simple survey can lead to complacency or a reduced sense of urgency, and eventually to more severe direct consequences.
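A bare shift from 3,5 to 3,8 proves nothing without sample size and spread, which is exactly what the summary left out. Here is a minimal sketch of that point: a rough normal-approximation confidence interval for the difference of two survey means. Only the 3,5 and 3,8 come from the thesis summary; every other number is invented for illustration.

```python
# Sketch: the same 3.5 -> 3.8 shift can be noise or signal depending on
# sample size and spread. All sd/n values below are invented.

import math

def diff_ci(mean_a, mean_b, sd, n, z=1.96):
    """Rough 95% CI for the difference of two survey means,
    assuming equal spread and sample size in both rounds."""
    se = sd * math.sqrt(2 / n)   # standard error of the difference
    d = mean_b - mean_a
    return d - z * se, d + z * se

# Small sample, wide spread: interval straddles zero -> shift may be noise.
print(diff_ci(3.5, 3.8, sd=1.0, n=20))
# Large sample, narrow spread: interval excludes zero -> shift looks real.
print(diff_ci(3.5, 3.8, sd=0.5, n=200))
```

Without the context to tell these two situations apart (and even statistical significance says nothing about practical relevance), the thesis's "proof" is just a number.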
The functionalist view also reigns supreme outside of safety, by the way. Only a few weeks later, I was in a meeting where a high-ranking HR Manager thought it possible (and perfectly alright) to draw conclusions about the organisational culture from a year-old employee satisfaction survey, without any additional work. She even suggested culture as a KPI, and it boggles my mind even today to think how this would be defined or managed.
Just a closing thought. As Sidney and Hugh write in that footnote, in Safety culture is often "seen as something that an organization has". Interestingly, most people use the (heavily simplified) definition "Culture is how we DO things around here". That definition rather fits the interpretive approach, which sees culture as an emergent property of an organisation - something the organisation does rather than something it has. This just goes to illustrate that most people really don't know what they are talking about and are only culturebabbling something that sounds interesting, trendy and somewhat sciencey.
My recently published book on Safety Myths contains a relatively short chapter on (Safety) Culture, featuring eight Myths ranging from “We Have Been Doing This for 30 Years”, culture and compliance, positive mind-sets and top-down culture changes to the certification of culture, responsibility and the question whether toilets tell about culture. And of course there is plenty of attention for buzzwords throughout the book!
Find more information about the book, including an overview of its contents and how to order on:
This blog was also posted on Linkedin.
A few weeks ago, I led a workshop about risk in the wonderful city of Bergen on the west coast of Norway. On the toilet mirror, I noticed these stickers issued by the city council. The stickers pictured a hand on which a text was written with ‘permanent’ marker, saying “100% Clean Hand. 37 degrees”, and below the encouragement “Wash your hands!”.
Well-intended as this sticker surely is, there is a lot of nonsense going on here.
Firstly, my mother wouldn't agree at all. If I had shown up at supper with permanent marker all over my hands, she would definitely NOT have judged them clean, not even 50% clean. I would have been sent to the bathroom to give them another good scrub, and there would have been no questioning that command.
Secondly, if I recall correctly, there are a lot of bacteria that thrive very well at 37 degrees Celsius. If you can believe Wikipedia, this is even the optimal growth temperature for E. coli, a bacterium commonly found in the lower intestine of warm-blooded organisms. Which is, well, the area you have probably been in contact with on the toilet.
What we see here, are phenomena at work that we regularly encounter in safety (and health, quality, etc.). In this case, they are based on at least four Safety Myths:
- Believing that words don’t matter.
- Believing that firm language, based on absolutes, is good.
- Believing that perfection is good and can be achieved.
- Making information as simple as possible.
Let’s look at them one by one.
Myth 1: In this example words get a different meaning than we attach to them in everyday life. Clean doesn’t really mean clean, but okay, we might forgive that as a white lie because the permanent marker is just a gimmick to attract attention. But we most certainly don’t mean 100%; rather “clean enough to safely eat your lunch” or something similar. More about that below.
On a practical note, how would you know that the water is 37 degrees Celsius? That is the average body temperature, but your outer extremities are probably colder than that. So the water should feel warm, but how much warmer than your hands? Do they actually mean 37 degrees, or something else?
Myth 2: Firm statements based on absolutes ooze a sense of certainty. Just check these:
100%. No Doubt. Definitely. Read My Lips.
Wonderful, aren't they? Many perceive this as a better form of communication than more nuanced and somewhat ambiguous statements like "as clean as possible" or "good enough". Often, however, such firm communication with absolutes lacks the basis for it. Cursory critical poking makes the arguments crumble. Washing your hands at 37 degrees Celsius will neither remove all dirt nor kill all bacteria.
Interestingly, the mentioned temperature is also packed in a precise and firm statement, but the operational part (the temperature of the water you actually wash your hands with) will be approximate at best. Unless we are required to bring thermometers to the bathroom.
Myth 3: Striving for perfection is unrealistic and actually counterproductive. Just imagine going for 100% clean hands. This would require either temperatures that would in fact hurt you, or a level of scrubbing that damages your skin (and leaves everything bloody, which isn't really clean either). Additionally, you would also remove the 'good germs' on your skin that you need.
Besides, you should not forget that some dirt may actually be good for you - it helps you build resistance (a process called hormesis).
Myth 4: Of course information should be easy to understand, but simplification is often taken so far that it ends up dumbing down a complex subject into a soundbite. The result of the cleaning process is a combination of temperature, the amount of water, time, chemicals (soap, alcohol) and mechanical technique (e.g. what parts to pay special attention to and the amount of scrubbing). Feel free to experiment with those factors the next time you do the dishes; it may make the process much more fun too.
Stripping it down to just a temperature is severely misguided, as one may be led to believe that this is all that matters while it is not.
As my dad used to say: you shouldn’t believe everything you read. This sticker falls squarely into that category. Luckily these were stickers only intended for ‘common’ people and not occupations with higher risk of contamination and infection through contact, like health care (find better alternatives here and here). Still, it’s a questionable practice.
Safety Myth 101
You won’t find this exact example, but a great variety of these and other Safety Myths in the book Safety Myth 101 that has just been published through Mind The Risk. Available from a.o. Amazon - find links at:
Also published on Linkedin
Within the Safety community there is some discussion going on about the ‘new’ and ‘old’ view on certain things. Some people seem to think that things are black and white and that it’s either one or the other. It must be stressed over and over again, however, that both views can be complementary and both have value - depending upon what element you want to focus on. A nice illustration of this occurred during the Learning Lab on Critical Thinking In Safety at Lund University in January 2016.
The Learning Lab took place at the Pufendorf Institute in Lund. The Institute is housed in a wonderful historical building. During a past upgrade, some genius designer apparently thought it fit to put power outlets in the floor right behind the entrances of the big room where the Learning Lab was held (and liked the idea so much that he/she did this for all three entrances!). Of course people needed power to feed their smartphones and iPads, so most of the time one or more charging cables were lying on the floor right behind the door.
As said, this example illustrates very nicely both the old view and the new view of safety - and that they go well together.
This situation provided a perfect empirical case for the (by some) much loathed safety pyramid. People had to get in and out of the room and stepped over the cables (which provided a clear tripping hazard that interestingly none of the safety folks in the room acted upon - including yours truly, because I was observing this experiment…) continuously. It was only the course leader, JB, who tripped once over the half-opened power outlet (but did not fall or hurt himself).
In traditional ‘Heinrichian’ language we can say that we observed many unsafe acts and one near-miss. I have to admit that I didn’t count the ratio, but then, the numbers are irrelevant and context dependent; it’s the idea that counts.
What we have here is a nice little example of more or less normal (they were safety professionals, after all) people making do with what there is. We all respected the fact that those unfortunate enough to sit too far from the wall-mounted power outlets also needed to charge their gadgets (ETTOing, in a way) and figured we were careful enough to deal with the variability. When JB tripped, he was resilient enough to catch himself.
So, who’s right?
Who really cares? One could even argue that we normalised deviance or operated clearly at the margin, or one could make a case for situational awareness or some kind of safety culture. It all depends upon the narrative you choose - which was one of the main themes of the Learning Lab... It's not so much a case of being right, but of what analytical choices you make - and, in the end, of what you want to use it for (hopefully improvement, or possibly inspiration for a blog).
Also published on Linkedin