
A few weeks ago, I was at an international meeting where I (among other things) presented some learning points from recent work that we have done with regard to the use of a particular breed of dogs in the police force. This was one of the most fascinating projects I have ever worked on, and one that I am rather proud of, because we managed to turn the initial reaction along the lines of “remove the bad apple” into a more system- and context-oriented approach.

After the meeting, a colleague approached me. He told me another dog story that contains some good learning points as well. This true story (somewhat altered below to make it more anonymous) also illustrates perfectly our initial desire for justice or retribution versus a more systemic view that makes one wonder if the concept of cause and effect applies here.

The Story

This colleague and his family had visited a cousin. This cousin and her husband love animals, and they had taken in a former stray dog from a shelter. While this dog had seen a lot of neglect and abuse earlier in its life, it had become rather well adjusted to family life and being around people. It had not caused any real problems so far. Still, because of its experiences, the dog would feel threatened and react accordingly when it was surprised, especially when approached from behind.

Therefore, the hosts told their visitors to be careful around Max, the dog. It was fine to pet him and play with him, but they were not to frighten or scare Max. They were told to approach the dog from the front as much as possible, to avoid surprises.

Everything went fine. The family had a good time together and the colleague’s kid, aged 5, played a lot with the dog. The kid and the dog got along very well and seemed to enjoy each other’s company.

The next day something terrible happened. The kid went to play with the dog again. Since they had played all day the day before and she and Max had become best friends, she dropped all caution. And so it happened that the child approached the dog from behind. The dog was startled, reacted instinctively and bit the child in the face. The kid was seriously injured and had to be rushed to the hospital for stitches and treatment. Luckily, the damage was not permanent. The outcome could have been fatal, however. Had the dog bitten about 10 centimetres lower, it could have hit the main arteries or other vital parts.

We MUST have accountability!

So, what was the cause of this unquestionably unwanted event?

  • The dog, for biting the child, injuring her badly and potentially killing her, had it bitten 10 centimetres lower?
  • The child, approaching the dog from behind, even though she had been told not to do this?
  • The hosts, for allowing the dog (who was a known potential problem after all) inside the house with guests and children present? Or for taking in an unstable pet in the first place?
  • The parents, for not watching out more carefully, and reminding the child to take care?

Whom or what are we going to hold accountable? Whom do we blame? When something terrible happens we must have someone who is held accountable, after all.

Or do we?

None of the choices feels fully right. Consider just some of the factors:

  • Max, the dog, was minding his own business until he was provoked. The dog reacted aggressively, sure, but an instinctive defensive reaction is perfectly normal, given the sudden stimulus and his history.
  • The child acted out of her expected pattern. Even though she had been warned, successful practice had made the warning look overly cautious. She had played with the dog the entire day before and everything was fine. Besides, children are often forgetful when they play.
  • The hosts love animals and they figured they were doing society and the animal a favour by taking Max into their home, instead of having him put down at the shelter. They were used to having the dog inside the house and they had never before experienced trouble when they had visitors.
  • The parents had warned the child, had seen that everything worked fine the day before, and besides, they cannot follow a child around all day long.

It is hard to argue that we have an incident caused by broken parts or bad apples. The various parts functioned fine in isolation and had functioned fine together the day before. Was this then a case of normal people (and animals) doing normal things? I would say it definitely was. This is a case where an unlucky combination of several rather normal factors led to a serious outcome. Surely, it could have been worse, which is very easy to imagine given the injuries to the child’s face. But the outcome could of course also have been better, had the dog missed the child or only snapped a warning. In that case, I would probably be sitting here writing another story. The kid would have been frightened for a moment, calmed the dog and most likely carried on playing. At worst, she would have run to her parents and probably ignored the dog for the remainder of their stay.

No action was taken in this case, and while the colleague in question said that this was rationally the correct decision, he felt so unsatisfied. His child was hurt and could have been disfigured for life, or even dead, so someone must be accountable. He would not have lost any sleep over putting Max to death for reacting the way he did. After all, it is only a dog. I think many of us can sympathize with that view, emotionally. Reason and emotion are two different things and they are not always in sync.

Know any Maxes near you?

We often see that the easiest way to deal with problems is to pick on Max (the weakest party). It is also a rather satisfying solution (to a certain degree) for many, because the party who did the harm receives punishment (satisfying our “eye for an eye” desire) and there is some quick and decisive action. But would it be fair? Or logical? And would it really solve the problem?

Now translate this story to a workplace accident near you and reflect.

(And do read Ron Gantt’s fine article on where failure comes from.)

 

Also published on Linkedin.

The past few weeks I have (among other things) read Doug Hubbard’s book “The Failure of Risk Management”. The title and description sounded promising, but I think the book has only limited value for practical safety work (even though it does at times mention safety issues). Still, thinking about risk is ‘core business’ for many Safety Professionals and so reading and thinking about risk in its various forms (also from other sectors, like finance) could provide learning points.

It quickly turned out that the author and I are on different planes (planets) philosophically, but that might make reading the book even more interesting, or so one would hope. As a whole, it has been a bit of a rollercoaster between sheer boredom, annoyance and interest. Regrettably there are also parts that really make me cringe. Let’s look at one of these occasions.

A Cringe Quote

Early on in the book, the author argues that virtually all of the risk analysis methods used in business and government are flawed in important ways and most of them are no better than astrology. And so he comes to a ‘profound’ conclusion:

“Obviously, if risks are not properly analysed, then they can’t be properly managed”.

Further on in the book he more or less repeats his claim:

“Most of the key problems with risk management are focused primarily on the problems with risk analysis. That is, if we only knew how to analyse risks better we would be better at managing them”.

Hmm, let’s think a bit about this. Can this really be true?

Shit In = Shit Out?

In a way, it seems to make sense. After all, the First Law of Quality Management says “Shit In = Shit Out”. So, feed a bad risk analysis into a system and you will get bad risk management. Common sense, end of story.

But wait, there is more to say!

Firstly, the above line of reasoning assumes that risk analysis is the only way to risk management. I would say that there are more routes to risk management than through analysis alone. We will get back to this in a minute.

Secondly, the above line of reasoning also assumes that risk analysis is the direct (and only) input to risk management. There are, however, a couple of steps between analysis and action, one of them being deciding what to do. You can do a perfect analysis and then decide to do something entirely ineffective, for example because you have incentives to do so, or because you do not have the means to initiate successful actions. Even if you make a good decision based on your perfect assessment, it is fully possible that your management is unsuccessful when external influences or unforeseen circumstances interfere with the results. To mention just some problems…

Decomposing a statement, phase 1

But let’s decompose the statement a little further. What does “if risks are not properly analysed, then they can’t be properly managed” actually say? The key points to look at would be the terms ‘analysed’, ‘properly’ and the connection between analysis and management.

Point one: what do we mean by analyse? Looking up ‘analysis’ tells us that it is “the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it” (Wikipedia) and “a careful study of something to learn about its parts, what they do, and how they are related to each other” (Merriam-Webster).

It is clear that this is some kind of formal and structured process, but according to the definitions above not necessarily one involving numbers. Hubbard does (grudgingly?) agree on this part, because he acknowledges at one point that qualitative approaches, too, are “an attempt to analyse risk”.

Decomposing a statement, phase 2

Point two: what does ‘properly’ mean? I presume that I would define it somewhat differently than Hubbard would. Hubbard is rather clear that the only acceptable form of risk analysis involves numbers and probabilities. Everything else is “soft”. I, on the other hand, would argue that the automatic, qualitative and even rather subconscious response before crossing a street is a good example of a proper analysis (although assessment may be the better word in that case) without any calculations whatsoever. And many practical assessments with regard to risk are made every day, just like that one.

When you ask me what a proper analysis is, or what proper management means, then I would say something that is effective and fit for purpose. Others, like Hubbard, however, hinge very much on quantitative and (semi-)scientific rigour. Now, I am very much pro-science, but I think we should be aware that more numerical or more statistical is not always the same as more scientific or better science. It only looks that way.

Neither does more numerical or more statistical always lead to better decisions. And while we may be intuitively bad at dealing with probabilities, models don’t necessarily improve our decisions. Research has even shown that under some circumstances (especially with great uncertainty and little knowledge) less information can actually be an advantage.

Decomposing a statement, phase 3

Point three: what is the connection between analysis and management? We already touched upon this subject before, and I have come across examples where the right things were done in spite of the risk assessment that preceded them. Granted, in most of those cases the analysis/assessment was not the best; in some, the risk management was initiated without any analysis whatsoever.

I am not familiar with any research about the connection or correlation between the quality of risk assessments and the quality of measures taken, or safety management in general (let alone the level of safety). There is a problem of how to measure these things, of course. We generally assume that good risk assessments contribute to good decisions and good safety management. Not least because a good risk assessment gives opportunities for a systematic and documented review of possible problems (risks), instead of leaving it to coincidence which person, competence, authority or political agenda was part of a decision process. On the other hand, organisations that are good at safety management are probably doing better risk assessments and using them better, so causality is bound to be fuzzy.

Finally, I mentioned crossing a street as an example above. Many decisions about risk are actually based on perception, with no analysis at all. These decisions are not necessarily better or worse; they are different. The judgement about better or worse is only in the eye of the beholder. Many people do not want a nuclear power plant in their neighbourhood, based on their perception of the risks. Analysts may think this is crazy, because they can calculate a ridiculously low risk. Who can say what is right from a risk point of view?

Sorry…

Summing up, I am afraid that there is little which supports the rather strong quote from the book. You Cannot Manage What You Haven’t Analysed? It sounds superficially okay, but if you really think about it, it is more of a smooth sales pitch from a consultant. It is also a good candidate for another Safety Myth.

 

Also published on Linkedin.

Safety-itis

A few years ago, I encountered the below sign before entering a construction site. For now, let’s not dwell on the text that appears to value administrative action above anything else (and probably will be incomprehensible to the often foreign, predominantly Eastern European, workers) and instead look at the pictograms on the bottom that inform us about some safety requirements on that location.

Taking the information on that sign literally, one was obliged to wear an impressive collection of gear: safety gloves, fall protection, a hard hat, safety boots, gas masks, safety goggles and hearing protection.

It looks like they wanted to make sure that they didn’t forget anything, covering their asses to the max. Interestingly, they screwed up on that part, because this was work at the rail tracks, where high-visibility reflective clothing is prescribed by regulations. And of course, there was no working at height at that location.

In my book I call this safety-itis, a serious condition that seems to have befallen many a company. Just drown people in a huge number of obligations.

Safety Dancing

Things can also be done quite differently. The other day, a colleague of mine had a little excursion to the renowned Norwegian glass factory at Hadeland. Visitors to this facility can observe glassblowers performing their fascinating trade. Being a safety nerd, like many of us, my colleague noticed the sign below.

While the first part is interesting, I’d like to concentrate on the second part of the text right now.

Glassblowers clearly deal with high risks and a lot of energy. Hazards include extremely high temperature and heavy and sharp objects. Still, many of these people perform their jobs in t-shirts and shorts (check for example this dude).

If you observe a glassblower at work, it looks at times as if they are dancing, moving in intricate patterns around their blowing tube with the hot piece of glass on its tip. And while I’m no dancer myself, I presume it’s not easy to do so in protective full body gear…

A True Story

This reminds me of an anecdote from my own professional past at the workshop in Haarlem. Most people never have the chance to look under train carriages. Hidden for most of us, you will find an impressive amount of wires, pipes, appendages and what not. Many uneven and sharp (or blunt) objects where you can bump and harm your head while you are working under the train, fixing stuff or just doing ordinary maintenance.

In my time, engineers would often wear just a cap (which would prevent grease, rust, dust and other stuff from getting on their heads and in their hair). Later we also had safety caps (like a baseball cap with a Kevlar inner) for those who wanted them. There were barely ever head injuries (I recall none in seven years’ time). Like the glassblowers, engineers performed a kind of ‘dance’ when they walked in the small space under the train. They knew exactly when to duck and when to bend to avoid bumping into appendages and pipes.

When I moved on to another job, my position was filled by a hold-the-railing type of safety officer. One of his first actions was to bully the management of the departments about the ‘intolerable’ lack of head protection. After all, there were sharp objects under the trains, and people could get cuts and infections from the dirt and rust. And so on. Within no time there were signs all over the place, pointing out the mandatory wearing of hard hats, followed by some heavy enforcement.

The result: a spike in neck injuries. Because of the focus on enforcement people complied, wore hard hats and kept bumping into all kinds of stuff under the trains. After all, they moved like they always had done. The problem, however, was that they had “grown” by a few centimetres. While they did not get any cuts or infections, the energy was transferred to their necks, leading to some injuries. Lost time injuries even, which hadn’t been there before.

Conclusion

Looking back at the construction site above, I clearly prefer the Hadeland solution. I do of course appreciate the difference between a construction site and a glassblowing workshop, with highly skilled professionals in a relatively stable environment with stable activities. Still, an overkill of useless mandatory gear is not helpful at all and rather creates cynicism and ridicule with regard to safety rules.

The message today: sometimes it should be obligatory to wear safety gear, without leaving a choice. I fully support the mandatory wearing of PPE in cases where this makes sense. But just as often one should leave room for people to use their heads and skills.

Do not interfere with skilful dancers; they may actually hurt themselves while you are trying to protect them!

 

Many thanks (once again) to my friend and colleague Beate Karlsen for spontaneously providing the inspiration for a little blog.

 

Also published on Linkedin.

Feedback is a necessary element of learning. According to Wikipedia, feedback “occurs when outputs of a system are routed back as inputs as part of a chain of cause-and-effect that forms a circuit or loop”. The form of the information fed back to the control mechanism or initiator often determines further action.

In the simplest terms: if you get positive feedback (the result was as wanted), you will do more of what you did before; if you get negative feedback (the result was not as wanted), you will probably change what you did before in the hope of getting a positive result, or you will stop what you did altogether.
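For the programmatically inclined, this simple rule can be sketched in a few lines of toy Python. The “actions” and the wanted result below are invented purely for illustration; real feedback loops are of course messier:

```python
def learn_by_feedback(actions, wanted):
    """Trial and error driven purely by feedback: try actions in turn,
    keep the first one that produces the wanted result, drop the rest."""
    for action in actions:
        if action == wanted:   # positive feedback: result as wanted...
            return action      # ...so do more of the same
        # negative feedback: result not as wanted, change behaviour
    return None                # ran out of options: stop altogether

# The toddler from the example below eventually settles on the
# behaviour that feedback rewards:
lesson = learn_by_feedback(["touch the stove", "listen to mommy"],
                           wanted="listen to mommy")
```

The loop is the whole point: the output of each attempt is routed back as input to the decision about the next attempt.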

Much of the work on behavioural change relies on feedback (whether this is the optimal way to go is a question not to be discussed here). Carrots and sticks, and the C of ABC, are very much about feedback. The Plan-Do-Check/Study-Act cycle and the scientific method are also based on (among other things) feedback.

Trial - error - learning

Feedback is also what trial and error is all about. Mommy says to toddler: “Do not touch the stove. It is hot”. Toddler follows human nature and goes to explore for himself anyway. Toddler burns his fingers and learns the lesson through a harsh and direct form of feedback. Toddler might even have some double-loop learning (“Must listen more often to mommy, she knows stuff”), although that may go overboard when the teenage years arrive. But I digress…

Feedback teaches us about our actions and the system we are in, and this contributes to our learning. We have to realise, of course, that some things are complicated or complex, and in some cases what we perceive as feedback from a cause-and-effect relationship is not necessarily that. We may get back to that another time; for now we stick to another thing related to feedback.

Feedback in practice: A silly example

Feedback must be specific and understandable to be useful.

The other week I accidentally ended up on the Amazon page of my book and saw that some people had left reviews. Until then I had not realised that this was a possibility, so this was a nice surprise. I had gotten feedback on the book through various channels, and here was yet another one.

Obviously, I was very pleased to see some extremely positive reviews. The interesting thing is, however, that one often learns most (at least with regard to things that should be changed) from critical or even negative feedback. So far, there have been two of those.

One reviewer complained about the small print. Point taken. I had received similar feedback from two or three other readers previously, and while I still stand by the decision to keep the number of pages below 300 and not delay the publishing date by redoing the entire layout (ETTO!), I see that a larger font would have been better. This is good feedback. Lesson learned. For the next book, I will do the print differently. I will also be able to fill a book more quickly, so it is kind of a win-win situation.

The other reviewer, however, managed to leave the comment “Not what I expected”.

Okay… What to do with that kind of feedback? It could in fact have been useful if he had specified what these expectations were. Then I could have decided if his expectations matched with my objectives when I wrote the book. Did he expect something in full colour? A couple of how-to checklists? Long, thorough discussions of safety subjects? All valid expectations for him (or her), but none of these were included in my plans this time. I might consider them for another book, however, if I was convinced that they would fit the purpose.

Since the feedback says nothing about the book, but only something about the reviewer, there will alas be no learning from this kind of feedback. Well, apart from the fact that it served as a learning example here. So at least we have that, and it wasn’t all useless. And maybe I should work on my mind-reading powers...

Feedback? Yes, Please!

I am here to learn, to explore, to discuss and to improve. Professionally and personally. In that respect I think it is valuable and great to get (and give) feedback - positive AND negative, as long as it is constructive. So please keep it coming, to me and to others, so that we can improve. 

 

Also published on Linkedin.

At the moment, I am working through Todd Conklin’s books. A worthwhile exercise because they get me thinking about learning, how we involve people and how we do assessments and investigations.

I started with his latest book about asking better questions; a subject that very much appeals to me. One thing that stuck out right from the start, and also made me wonder, is that he favours asking “How” over asking “Why”. Going back, I noticed that he even dedicated his previous book “Pre-Accident Investigations” (that I have yet to read) to people who asked “How”.

Therefore, in line with the latest book, I wondered: is “How” really a better question than “Why”?

The “From Why to How” Podcast

Before we continue with my musings, you might want to hear Todd himself on the subject in a rather short podcast. Here he argues that “How” is less inflammatory than “Why”. It is less blaming. It is a better way to start a conversation and avoid defensiveness than “Why”. The biggest pay-off for Todd is that it is much more operationally informative. “Why” usually ends up, on some level, with an incentive, which ends up with a choice (and had that choice not been made, then the accident would not have happened). According to Todd, “How” forces you to look at context and understand conditions. “How” takes you away from the human element and puts you into the system/organisational activity. Complex systems do not have individual moments (causes). They have motions and conditions that should be understood.

Todd also says not to overthink it, but just to try it. Language changes, thinking changes, and it may give you better opportunities for improvement.

Overthinking “How” and “Why”

I am sorry, because maybe I am going to overthink it. However, since Todd makes the point so forcefully, I just have to reflect a bit on it. I agree strongly that the choice of language and the way you use it affect how you think and what you will find. The problem is that for me personally “How” does not add that special extra, nor does it lead me more towards context than asking “Why” does.

It may be a language thing (having a mother tongue other than English), or my background (which includes both engineering and law school), but for me “How” asks for a mechanism. Granted, this may leave some room for context and conditions, but it mainly describes the event and not much of the surroundings and background. For me, “How” is a rather technical question that often stops with the direct cause(s). More or less. Feel free to disagree and experience this differently.

The driver for safety investigations should be curiosity, not accountability. I cannot see that “How” is a better expression of curiosity than “Why”. It is a different expression of curiosity, for sure. Still, “Why” asks for (a) reason(s), and suggests clear-cut causal connection(s). On that point, I do agree with Todd that “Why” may (mis)lead you to disregard complexity, while “How” leaves room for complexity even if it doesn’t ask for it explicitly. Still, both “How” and “Why” can do so if you ask them often enough, in many different directions.

For me personally, “Why” would also be the better guide for developing ideas around local rationality. However, this also connects to the major drawback of the word. As Todd justly says: “Why” can easily lead you to decisions and choices that have been made, and to a major focus on the human element. The step towards judgement and blame is then quickly made. That may very well be the biggest issue with “Why”: the way it has been practiced in the past, especially in the RCA context of 5-Why linearity. “Why” is pretty much connected to the ‘old view’ of safety, where errors are seen as causes and problems instead of as symptoms. Therefore, using “How” instead of “Why” may be of symbolic significance more than it is really superior.

Is “How” Really Better Than “Why”?

Trying to conclude: is “How” a better question? I do not know (yet). It is a different question, however, and that alone enriches your view when you use it.

A word of caution, however. I have not asked the man himself, but I am rather sure that Todd agrees: Asking “How” is not a silver bullet or magic wand. Asking “How” alone does not make you a better investigator. And the last thing we want or need is a ‘New View LEAN Consultant’ selling us a ‘5 How Model’. 

That is why I would like you to actually think about the question for yourself. A simple instruction like “Ask how, not why” without any deeper understanding may well backfire. I am afraid that without the proper mind-set and training, any simple question (especially when asked repeatedly) can or even will lead you astray. It may even be more important what you want to achieve with a question than what question you actually ask (although some questions are very unwise).

Besides, why think in binary terms anyway? We should not think “either/or”, but rather “both/and”. Maybe a good way would be to start with “How” (I fully agree that it is a better opener for a discussion), and move to “Why” later on? Maybe it is even better to ask a longer question along the lines of: “What conditions were present and may have contributed?”. It is not as short and snappy as “How” or “Why”, but then, should we not be reluctant to simplify?

Many thanks to Timothy van Goethem, Vincent Steinmetz and Beate Karlsen who helped to shape these reflections. Still, I have to stress that the above opinions and views are my responsibility and not necessarily the truth. I would love to hear yours, discuss, explore and learn!

 

Todd Conklin bibliography:

Simple Revolutionary Acts - Ideas To Revitalize Yourself And Your Workplace (2004, ISBN 978-0595320653)

Pre-Accident Investigations - An Introduction To Organizational Safety (2012, ISBN 978-1409447825)

Pre-Accident Investigations: Better Questions (2016, ISBN 978-1472486134)

 

Also published on Linkedin.