The other day a friend of mine forwarded an invitation to an event on safe behaviour. The invitation advertised (apparently successfully, because the event was fully booked in no time) with, among other things, the following quote:

“Experts will give you tools that enable you to have employees working safely by themselves” 

(my translation, which alas doesn’t perfectly capture all nuances)

This text looks perfectly innocent, well-intended even. After all it appears to aim at improvement of safety! But let’s look closer at what they are actually saying.

Decomposing the quote from the end, “have employees working safely by themselves” in fact says that people aren’t able to work safely without someone prescribing how to do it. Even more: not only are they incompetent or ignorant when it comes to safety, apparently they aren’t particularly motivated either. They cannot, or will not, work safely without an intervention.

But despair not. In the first part of the quote lies Salvation! Luckily there will be experts at this event that teach you (the safety practitioner) how to deal with these impossible workers.

Has anyone actually thought about what kind of message and attitude towards the majority of the population this tiny quote conveys? People are dumb, lazy, unmotivated and need experts and safety practitioners to do their jobs properly.

As far as I know, nobody ever comes to work with the objective of doing a bad job; nobody gets up in the morning thinking “today I’m going to have me a nice little accident”. If this event isn’t going to be a prime example of doing safety TO people instead of doing it WITH them, then I wonder what will.

I’m not particularly surprised that people have a certain perception of health and safety. If we approach them in such a condescending way, we shouldn’t be surprised that they find us arrogant and remote from reality. So my advice:

Watch your language!


Also published on Linkedin

It’s often heard that only our lack of imagination limits our hazard identifications or risk assessments. What we cannot imagine we cannot assess and there lies a problem because we might miss something important. Indeed, missing a hazard may cause serious problems afterwards, but I do not subscribe to the view that it’s a lack of imagination that limits our hazard identifications (if it is, I pity my colleagues who suffer from this condition).

In most cases I think it’s rather a sense of potential embarrassment that keeps us from coming up with certain scenarios. We don’t dare to mention them because people might think we’re not taking our job seriously (e.g. when we come with a scenario of UFOs landing on top of the building) or that others don’t take the scenario as a serious possibility (e.g. a plane crashing in the building). And smiling while working on safety seems to be a criminal offense - according to some anyway.

Film makers typically don’t have these restrictions, and I’ve found myself on various occasions discussing scenarios from movies in a safety context with similarly open-minded colleagues. Take the opening sequence from Skyfall, with the excavator ripping open a train. I have literally participated in hundreds of risk assessments that dealt with trains and excavators, often even both at the same time, but this scenario never came up! (Close enough, though: excavators coming within the profile of the rail tracks tend to be a point of concern on almost any occasion, and rightly so.)

The new 007 movie, SPECTRE, has also provided some good material for discussion. This time it wasn’t the action or the stunts (so far - I have to watch that snow chase a few more times, however), but the organizational/managerial/political aspect; more specifically the role of C.

I don’t want to give away too much of the plot (mild Spoiler Alert), but there have been major discussions around me about whether Andrew Scott’s character is really one of the bad guys or just a misguided (and somewhat incompetent, MBA-type) bureaucrat who gets lured in the wrong direction in his idealistic quest for certainty and making the world a safer place.

Pending repeated viewings of the movie and the assembling of evidence, consensus for now leans toward the second option. My friend Alan Quilley commented that the C character reminded him a lot of certain safety guys who are doing all the wrong things for the right reasons. Examples of that are all around us… Plenty of Safety Programs build on them. Traditions and Established Beliefs are important drivers, as is a lack of Critical Thinking. Safety Slogans and Safety Absolutes like Zero Accidents and Safety First are clear examples of the Wrong Things for the Right Reasons.

Having 007 around with his licence to kill, solutions appear to be easy. In a movie at least. Solutions of that kind are not really an option in our work as Safety Professionals, of course. But it would be a good start to ask ourselves whether we are really doing the Right Things. And then maybe reconsider because the Right Reasons alone may be not enough…

Since the end of the year is nearing we might consider this as a Professional New Year’s Resolution… And while you’re at it: if applicable (you decide, but be honest) do something about your imagination, embarrassment and sense of humour!

Also published on Linkedin

The safety profession does have a bad reputation in some cases. One of the general perceptions is that safety work takes a lot of resources and time without giving a proper return on investment. While safety professionals will have a much more nuanced (and sometimes naïve and idealistic) view on the matter, one has to admit that doing a good job can be demanding and time consuming. Take risk assessments, for example: ideally these strive to be as complete as possible. After that it’s an almost equally big job to get them documented in a proper, usable and understandable way.

Of course there are quite a few risk assessments (and other ‘safety’ activities) that are done purely as an administrative exercise, as window dressing, or to satisfy some requirement from regulations or internal procedures. Those should be cut out immediately to stop an unnecessary waste of time and energy. But that’s not what we’re talking about here. The question is whether, for the risk assessments that do have to be done, we can majorly increase effectiveness by working smartly, freeing up resources for actual improvement.

There is little literature on the subject so far (although I’m sure that many companies practice some form of recycling and reuse), but my friend Rune Winther has done some work towards a systematic method. Results so far are encouraging: for a series of similar assessments, an 80% reduction in the use of resources was indicated from the first to the third assessment.

Winther’s paper describes that there is often a significant overlap of hazards between similar projects, and that the hazards and failure modes unique to a specific project may in fact represent only a minor part of the total. There is a serious possibility that time is ‘wasted’ on identifying hazards and failure modes that are already well known and often adequately mitigated. Many safety professionals can probably confirm this finding from their own experience. The aim should be to spend as little effort as possible on the identification process while still being complete. Systematic reuse is one way to achieve this.
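The core idea of systematic reuse can be made concrete with a small sketch. This is purely illustrative and not taken from Winther’s paper: the hazard names, and the split into a generic subsystem list versus project-specific candidates, are my own invented assumptions. The point it shows is simply that carrying over a generic hazard list leaves only the genuinely new part to be identified for each similar project.

```python
# Illustrative sketch of reusing a generic hazard list across similar projects.
# All hazard names are invented for the example.

GENERIC_SUBSYSTEM_HAZARDS = {
    "excavator breaches track profile",
    "loss of traction power",
    "worker struck by passing train",
}

def hazards_to_identify(project_candidates):
    """Return only the hazards not already covered by the generic list,
    i.e. the part of the identification work that is truly new."""
    return set(project_candidates) - GENERIC_SUBSYSTEM_HAZARDS

# For a new, similar project most candidates overlap with the generic list:
new_project = {
    "excavator breaches track profile",      # already known generically
    "loss of traction power",                # already known generically
    "temporary signal cabling across path",  # unique to this project
}

print(hazards_to_identify(new_project))
# → {'temporary signal cabling across path'}
```

Only the project-unique hazard remains to be assessed from scratch; the generic part still needs a relevance check against the new context, as discussed below.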

However, one must watch out that reuse and recycling of material doesn’t just end up as a simple copy-and-paste job. I remember all too well an episode sometime in the early 2000s when we were contacted by the Health and Safety Inspectorate, which had visited a contractor and checked a couple of safety plans and other required documentation. They wondered why working on the rail tracks (which the project they had visited was all about) at one place included the painting of a railway bridge about 200 kilometers away.

Luckily there was no real harm done (if I recall correctly, the inspector actually thought it was funny), but of course it sends the wrong message; it doesn’t really give the impression that safety management is being taken seriously (the safety plans had, after all, been approved by our project manager). Worse: other ‘copy-and-paste errors’ are conceivable that might even have led to dangerous, maybe even life-threatening situations. Another danger that lurks in cases like these is of course the possibility of systematically copying errors, which may spread an isolated problem to other situations.

Also, if one wants to reuse information, it’s important to check whether the situation or system that was assessed previously is indeed similar to the one at hand. One must watch out not to overlook things that are unique to the current situation compared to previous assessments. Important keywords are the context (which may differ considerably from situation to situation, even if they seem similar) and interfaces and interactions (for example as a consequence of the integration of a sub-system into the system as a whole).

To tackle these and other problems, Winther argues that reuse should be based on hazards defined on a subsystem level, because these hazards will be more generic than hazards defined on a system level. It’s important to use clear generic subsystem definitions because these are an aid to evaluate the relevance and completeness of generic lists of hazards and failure modes. Good definitions help to identify the differences between a generic case and each specific case.

As often with research, more is needed, but as many of us will suspect from our own experience, there is good potential here to improve and do things in a smarter way. It’s important to go forward systematically and with caution, to accept some trial and error, and to do a good deal of tinkering. The results may be very rewarding indeed. I for one am looking forward to hearing about your experiences!


A Pragmatic Approach To The Reuse Of Qualitative Risk And Reliability Analyses - Experiences From Analyses Of Railway Traction Substations (Rune Winther, 2015, presented at this year’s ESREL conference).

Get the full article through Researchgate


Also posted on Linkedin

In the beginning

When I started as a young HSE advisor way back in the early 1990s, it was quite common, when I got a question about working conditions and occupational safety issues, for me to turn to the really good guidance documents that the Dutch Health and Safety Inspectorate had issued. More often than not I would find a good answer or recommendation there.

This was before the renewed Health and Safety at Work Act came with its paragraph about risk assessment and evaluation. Risk had been at the core of the safety profession for a long time, of course, but I think it wasn’t until the mid-1990s that the majority (in The Netherlands, at least) came to recognize the importance of systematically studying hazards and risks. And so my approach was initially just like what I had seen around me: more rule based than risk based.

Referring to the guidance from the Inspectorate had the added bonus that it would give power and backing to the words of a rather junior advisor. Or so I thought. Having grown in the profession (and otherwise) I’ve come to adopt another approach and another view on the matter.

Fast forward two decades

A while ago I attended a conference where one of the HSE professionals presented the results of one of his recent projects. The project dealt with a relatively novel way of working that, among other things, allows for greater flexibility and more efficient handling of cases on location. The new method also has some clear drawbacks with regard to ergonomics and working conditions.

We expected to hear how to deal with the latter, but the answer was quite sobering. The presenter started waving some regulations around, mentioned the regulator, and revealed that he had found out that employees were allowed to do this activity for a maximum of two hours a day. This posed a particular problem because the call centre distributing the work would need a registration system in order to avoid giving work to people who had “used up their quota”. I was simultaneously baffled and fuming, and spoke up accordingly.

Episodes like these annoy me in particular for three reasons:

  1. It approaches the problem from the wrong angle.
  2. It communicates a false sense of certainty and oversimplifies a relatively complex issue by reducing it to a single number.
  3. Things like these give Safety/HSE a bad name.

Let’s look at these three statements in more detail.

1: Approaching from the wrong angle

In the whole presentation the concept of risk was not addressed once, except implicitly in the sense that it was ‘forbidden to work longer’ because this could cause harm. Having matured from my early-1990s practices, I would say that as a rule of thumb most HSE questions should start with some kind of risk assessment - not necessarily a formal one, mind you.

I find the risk approach much more useful than looking for ‘a rule’, especially in communication with the people you’re supposed to help. Instead of giving a “because the rule says so” answer that may be totally incomprehensible to people, it discusses the problem from a logical point of view - not least their point of view. More importantly, it is also the approach that will lead you to a good solution that addresses the problem in a proper way.

What a lot of people apparently don’t realize is that regulations are almost by definition compromises, often driven by political and other agendas. Staying below a threshold value does not necessarily mean that you’re ‘safe’. The only really safe value for exposure to asbestos fibres would be zero, for example, but in many areas this is not realistic due to background exposure. Regulations often take these kinds of considerations into account.

To illustrate the point further, I’d like to mention John Adams’s discussion of legal limits for drinking (one of his many thought-provoking observations) in his fabulous book “Risk”. Firstly, the way the human body is affected by alcohol depends on many things and is very different for each person. If the legal limit is x per mille, this will be an average (or even arbitrary) value and not the ‘real’ safe level.

Secondly, a legal limit may give the illusion that it’s okay to drink and drive, as long as you are below this limit. As said, alcohol affects different people in different ways, and some people may be affected a lot even when they are below the legal limit. The legal limit is therefore often a crude rule of thumb, and again the only really safe level would be zero.

Then there is another aspect that is often misunderstood. I really doubt that the rule mentioned by the presenter is actually a proper rule; I suspect it is rather guidance given by the regulator. There’s a subtle difference between regulation (“you shall”) and guidance (“it would be wise if you”), but many HSE professionals fail to appreciate it. Many think that everything said by the regulator is law. It’s not.

2: Reducing a complex problem to a number

As one can see from the extremely simplified case description above, there are many sides to the problem. On the negative side, the novel working method is ergonomically far from optimal, but it has strong benefits like greater flexibility and efficiency, and there are also safety and health gains, for example because it reduces the need to travel back to base or to involve other crews.

The “but only two hours a day” approach singles out the (possible) negative effects of the novel way of working without any context and without looking at other effects, positive and negative. A correct way of looking at the problem would be to look holistically at the difference between the old and new situations (which inherently includes the context) and then see what the net decrease or increase in risk is. Risk management should be a constant trade-off between various objectives, not a mindless striving for zero risk with total disregard for other factors.

Besides, I’m not quite sure how this would work. Is it 2 hours per 24 hours, 2 hours per working day, or 2 hours per calendar day? The latter would mean that you could actually do the activity for 4 hours on a night shift, if you take the rule in its most literal sense. The rule gives a number as if it were a certainty; as if 1 hour and 59 minutes is fine, while 2 hours and 3 minutes means that you’re in serious trouble with permanent disablement looming in your future. Sorry to disappoint some, but safety isn’t an exact science, so it usually doesn’t work that way.
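The calendar-day loophole is easy to make concrete. The sketch below is a toy of my own making (the shift times and the bookkeeping are invented, not from any regulation): under a strict per-calendar-day reading, a night shift can book two hours before midnight and two after, staying formally within quota while doing four hours straight.

```python
# Toy illustration of the "maximum two hours per day" ambiguity.
# Intervals are (calendar_day, start_hour, end_hour) tuples; all invented.

def hours_per_calendar_day(intervals):
    """Sum activity hours per calendar day."""
    totals = {}
    for day, start, end in intervals:
        totals[day] = totals.get(day, 0) + (end - start)
    return totals

# One continuous 4-hour stint from 22:00 to 02:00, crossing midnight:
night_shift = [("Mon", 22, 24), ("Tue", 0, 2)]

print(hours_per_calendar_day(night_shift))
# → {'Mon': 2, 'Tue': 2}
```

Per calendar day the quota is respected, yet the worker was exposed for four hours in a row - exactly the kind of false certainty a single number communicates.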

Actually, I doubt that the activity in question will ever be performed for two hours at a stretch. The job these people do in between involves a lot of other activities that will probably relieve the physical stress built up and help to level out the negative effects of ergonomically suboptimal working conditions. But then, take into consideration that I’m no ergonomist or physiotherapist, so I may be mistaken (another reason for the holistic approach, with risk assessment as the basis).

3: Giving Safety a bad name

It’s bad enough if we hear things like this during a conference where only other HSE professionals are present, who hopefully speak up critically, as I did. But it’s really bad if things like this are uttered to people (managers or operational personnel ‘out there’) who may have a critical stance towards all things HSE in the first place.

Concluding that the employer had to keep track of what his employees were doing so that he could manage the exposure means adding a new layer of bureaucracy. As if we haven’t enough of those in the HSE world, most of them adding little or no safety. In this case HSE did exactly what some people expect them to do: come up with yet another procedure and make work even more difficult without any good reason. That’s what it means to be doing safety TO people instead of WITH them.

It also blows the problem out of proportion, as if this were the most important activity to be monitored. Well, I’m sure it’s not. These people are involved in much riskier activities on a daily basis. Just driving to and from their jobs is probably an activity with higher risk, without any special monitoring or preventive measures.

Finally, looking up “what is allowed” is a very, very, very lazy way of doing safety. The guidance documents from the Dutch inspectorate that I mentioned earlier included a disclaimer that said something along the lines of: “This is how we interpret the law; if you follow these guidelines you are compliant, but of course you are welcome to find other, equally safe, or even better ways”. Safety guidance is not meant as a barrier to innovation. The catch, of course, is that the burden of proof rests on you. It requires some hard work to take the risk-based approach and look at the problem from the bigger picture, with proper expertise involved, not least the people performing the job! But it’s usually worth it.

Let’s face it. Advice like “only for two hours a day” is what gives the Safety and HSE profession a bad name.

Earlier during the conference someone asked why HSE professionals often aren’t invited to take part in projects from operations, and why there seem to be two separate tracks: the operational track and the HSE track. Well, it’s exactly answers like “You are only allowed to do this two times per day” that make sure HSE professionals are NOT invited to think about new solutions!

Please, let’s not do things that way!


Disclaimer: I don’t want to talk trash about the colleague in question, because I’m pretty sure that he sincerely wants to do a good job in the best interest of the employees whose health and wellbeing he is trying to protect. This was probably not his finest moment, and we should use the opportunity to learn from it and avoid these pitfalls. After all, it should be our aim to help people do a better and safer job, not to be a straitjacket that imposes so many constraints that they would rather not ask HSE professionals for help. We should also help fellow HSE professionals get out of the rule-based routine, and teach young and beginning HSE professionals to approach things in the right way in the first place. That would have saved me a couple of years of finding out for myself…


Also published on Linkedin.

After my decidedly critical and somewhat negative blog a couple of weeks ago (check this link), which was sparked by the rhetoric from a risk assessment tool vendor, I thought it might be nice to follow up with a piece with a more positive orientation.

Here are 10 Do’s for Risk Assessments, taken from my and my colleagues’ extensive experience over the past decade. In no particular order and without claiming completeness:

1: Added value

If done well, risk assessments will give you added value, and keeping these ten tips in mind will help. In order to achieve this added value it’s important to start in the right way. Make sure that the people who are going to work on the risk assessment know why we are doing it and what objective it is intended to achieve. If you do a risk assessment with compliance with some regulation as the goal, then chances are that you will achieve just that, but little more. Aim higher, for example at finding possibilities to improve safe production, and you can steer towards that goal.

There’s also the rule of thumb that says that the earlier in the process you start thinking about and doing risk assessment, the better, more effective and cheaper possible actions will be. Including an action as an integral part of the design of an operation or piece of equipment is definitely to be preferred over an add-on or paste-on solution afterwards.

2: Keep it simple!

Want to involve others? Then make things as accessible as possible for them. Some regard risk assessments as something complex and difficult; well, usually there’s no need for that! For most applications the method doesn’t need to be difficult. As long as you go through things systematically and with a critical mind (e.g. thinking “what if…”), you’ve achieved a lot and kept everybody on board. In fact, they will probably have done the main part of the job, because they often possess the real knowledge about the job to be done or the system to be designed/built.

Basic risk assessment is not difficult… We do it all the time in our everyday lives (usually rather unconsciously and most of the time more or less successfully) when crossing the street, checking if the milk smells funny or deciding what coat to wear. If you transfer these skills to a work situation and do this in a structured and systematic way, a lot is won.

3: Competence and involvement

Another thing that affects the quality of a risk assessment is involving the right people. And do get this right: it is both about involving and about the right people. Participants in your risk assessment must have proper knowledge of the system or activity you are going to assess. You will want, and need, the people who know about the “job-as-done” (and not so much those who deal with the “job-as-imagined”). Don’t tell the participants about their risks: involve them and have them discuss and discover for themselves. On some occasions this process has even more value than the eventual product (the risk assessment report), as this discovery and discussion will lead to greater understanding among the people involved.

And of course the one leading the risk assessment process has to be competent, but more about that below.

4: Shit in = Shit out

I tend to say that this is the First Law of Quality Management. If you put the wrong stuff into a process, you shouldn’t be surprised to get substandard results. Good preparation of the assessment always pays off. Make sure you have a good description of the system, activity or change that you are going to assess, and have this ready (for at least 80 to 90%) before you start. Sometimes you can do this “on the fly”, but on most occasions that will lead to a major waste of time, because people will start discussing their different views on the subject and use up valuable time that was actually planned for the assessment. Bad preparation is thus a source of confusion and frustration that can easily be avoided.

Part of this good description is a clear scope. What is it we are going to assess, and with what objective? Where are the boundaries: how far do we go, what are the interfaces at those boundaries, and what is the impact of these interfaces on risk? There’s a major difference between assessing the design of a piece of machinery, the actual production of this machinery in our own workshop, or outsourcing the production to China and then transporting the machinery to our factory.

Also: don’t bite off more than you can chew… it’s wise to keep the assessment’s scope within a manageable size. If things are too large you will probably have a hard time. Rather split up the assessment into more practical parts - as long as you remember to check the various interfaces!

5: Rich information

Remember that the risk matrix is a tool (and a tricky one too, which may be the subject of a future article) and not a goal in itself. Some seem to think that risk matrices are an easy way to communicate risk (e.g. by showing that “We have x hazards in the Red Area and y in Green”). In fact this is a very superficial, weak and poor way, because it ignores essential rich information, like the assumptions that often determine your risk. It also doesn’t give you a clue what to do - at best it indicates an area where action is necessary.
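Why a cell count throws away the rich information can be shown with a minimal sketch. The 1-to-5 scales, the colour thresholds and the two example hazards below are all my own invented assumptions, not from any standard matrix: the point is only that two very different risks can land in the same cell.

```python
# Minimal sketch: a risk matrix collapses different risks into one cell.
# The scales and thresholds here are invented for illustration.

def matrix_cell(probability, severity):
    """Map 1-5 probability and severity scores to a colour band."""
    score = probability * severity
    if score >= 15:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"

# A frequent minor injury and a rare catastrophic failure:
frequent_minor = matrix_cell(probability=5, severity=3)     # score 15
rare_catastrophic = matrix_cell(probability=3, severity=5)  # score 15

print(frequent_minor, rare_catastrophic)
# → red red
```

Same cell, same colour, yet the scenarios, the underlying assumptions and the sensible actions differ entirely - which is exactly the information a “x in red, y in green” summary discards.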

Neither should you fall into the trap of being too brief. Yes, keep your assessment as short and concise as possible, because this will increase the chances of it being read (and hopefully understood). But don’t fill an assessment form with keywords that have little meaning for people who were not involved in the assessment. And even for the people who were involved, these keywords will lose meaning over time without the proper context. Rather resort to storytelling, with short descriptions in prose of the scenarios, consequences, probabilities and the conditions these depend upon.

6: Communicate clearly

Proper communication has everything to do with giving rich information. Make sure that you do it in a language the decision makers (and others) understand - so try to avoid numbers that may lead to mechanical decision making or wrong conclusions, and beware of jargon, abbreviations and acronyms that some readers may not relate to or even understand.

Make sure to clearly discuss and communicate the limitations and uncertainties of the assessment, including any assumptions that had to be made. Assumptions are often forgotten in communication, but they are essential to the validity of the assessment, because if an assumption turns out not to be true, the entire assessment is suddenly built on quicksand…

Keep in mind that the summary may be the most important part of your assessment report. Often this is the ONLY part a decision maker has time to read, so you have to make sure that all the important elements are there.

A good summary includes at least a brief and concise description of the assessment’s scope, objective, boundaries and the most relevant assumptions and hazards, a conclusion with regard to the assessment, as well as suggested/recommended actions. It’s essential that neither the summary nor the conclusion brings up anything new that is not discussed elsewhere in the report. In the past we regularly encountered assessment reports where something appeared out of the blue in the summary or conclusion with no apparent relation to what was discussed in the assessment. Doing so will seriously weaken your advice.

7: It’s not all about safety

Risk assessments are often initiated from a safety point of view, but there is no need to look at them with this limited view. With a variety of competence gathered to do the assessment, why not use the opportunity to find good solutions across specializations and fields? While often seen as being in conflict, safety and production can and should go hand in hand. Risk assessments can be an opportunity to improve both, if you keep an eye on the big picture and avoid tunnel vision on just one of the two.

8: Fresh eyes

Experience is good, but I’ve witnessed assessments where the participants had been in similar sessions many times before and just went through the motions, doing what they always did. This may seem very efficient to some, but it can also very easily lead to groupthink, unfortunate conformity and important elements being overlooked. Fresh eyes can be valuable in these cases: someone who isn’t tricked into jumping to conclusions because he or she hasn’t been through this many times before, someone who has the ability to be genuinely surprised and curious, and someone who can ask the crucial critical questions.

This doesn’t only apply to participants, but also to the process leader. One must consider the advantages and disadvantages of choosing an internal or an external process leader. An internal leader will have the advantage of knowing the people (and how to approach and deal with them) and may know of challenges, but an external process leader will have fewer problems resisting the temptation to skip over elements that are assumed to be known.

9: Leading the process

The main weapons of the process leader or facilitator are preparation and competence, both of which we discussed above. In the case of the process leader, this competence needn’t necessarily be about the subject of the assessment (that may help in some cases, but it can also be counterproductive, e.g., as we saw above, if it causes the process leader to jump to conclusions). The process leader’s competence should be about the risk assessment process and about leading, coaching and facilitating this process.

During the assessment a process leader should have good improvisational skills, so that he or she can switch between methods if necessary, and must also be able to switch between subjects and levels - a risk assessment often needs a good level of detail, and often also that helicopter view. And it’s so easy for participants to derail into pet subjects or private agendas; the process leader must be able to draw them back on track in order to get to the wanted result: a risk assessment with added value.

10: Right tool for the right job

Finally, one should ask oneself whether a risk assessment actually is the right tool for the job at hand. Thinking about risks is hardly ever a wrong thing to do, but a risk assessment isn’t some kind of magic wand that makes all your problems go away. Neither is it a good instrument for identifying just anything, like checking whether routines have been implemented or seeing how certain activities are perceived. I’ve been in situations where I’ve suggested that a manager go and talk to some of his people instead of doing yet another risk assessment.

Also published on Linkedin.