
Below is an overview of the contents of The First Rule of Safety Culture:

Foreword(s)

Contents

Part 1: Overture

A safety culture tsunami

History lesson

Culture – climate. Tomato – tomato?

Rules of engagement

Talkin’ ‘bout…

Intermezzo: Expelliarmus Safety

Part 2: Culturebabble

Culturebabble

Reason-ing around a buzzword

Buzzwords and bandwagons

A comfortable cause

Intermezzo: Safety culture from Hell

Part 3: Common culture confusions

Measurability

Bob the culture builder

Culture through progression

One best safety culture

Sum of the parts?

Culture through safety DNA

Oneness, sameness, harmony

Intermezzo: Culture - tool or lens?

Part 4: Instrumentalised culture

Safety culture = following safety rules

Codified culture

Culture by algorithm

Top-down-ism

Toilets tell about culture

Safety culture certification

A ladder on quicksand

For culture’s sake, hold that railing!

Culture is a slogan on the wall

Intermezzo: Does safety culture exist at all?

Part 5: Dark sides of culture

Culture as a straitjacket

Are you one of those…

Power, not culture

Everybody’s responsibility

ISOfication

Intermezzo: A ‘mood-y’ view on culture

Part 6: Ways forward

Culture is here to stay...

Relationship centred

Structure, not culture

Doing instead of having

Words create worlds

The First Rule of Safety Culture

Further reading

The safety immaturity model

The author

Acknowledgements

Index

 


Does the world really need another book about Safety Culture? Were this a book about how to engineer, create, build, or manage your Safety Culture, the answer would be a clear NO! However, this is not one of those books. It is not a 'Safety Culture as the Path to Bliss in Five Steps' type of text. This book is about thinking critically about Safety Culture. The world desperately needs books with that perspective.

Structured into six parts, around forty compact chapters critically discuss Safety Culture discourse, approaches, and applications. They ask, for example, whether we should see culture as a tool to fix something or rather as a lens through which to study and understand. And if you ever wanted to become a culture architect, hopefully you will think twice after reading this.

The book also tries to offer some useful and practical suggestions for different (possibly even better) approaches, or at least different ways to think about these subjects. These suggestions for more fruitful ways forward include The First Rule of Safety Culture.

 

"A great example of curiosity and Nietzschean anxiety!" (Anthony 'Smokes' Smoker) 

 

See the contents of The First Rule of Safety Culture.

 

Available as paperback and Kindle from Amazon.

Please check the various marketplaces: .com, .ca, .au, .co.uk, .de, .fr, .es, .it, .jp.

Alternatively contact us directly and we can ship a hard copy!

 


Some information about the book Preventing Industrial Accidents, published by Taylor & Francis/Routledge.

A reader’s guide

Just a quick overview of what to expect in this book. Although reading the book from start to end is recommended, the chapters are as far as possible assembled such that they can be read on their own. There are eleven chapters. The first three provide an introduction and a basic overview of Heinrich’s life and work. Chapters 4 to 10 discuss the main themes in his work with varying levels of depth. The final chapter ties things together and reflects upon the contemporary value of Heinrich’s legacy.

  1. Introduction
    Setting the scene, why discuss old safety ideas, how is Heinrich cited in safety literature, limitations.
  2. A biography
    What it says, with attention for the historical backdrop.
  3. Heinrich’s work
    Provides general overview and defines main themes, to be discussed in later chapters. Influences, developments. Heinrich’s success and longevity.
  4. A scientific approach
    Heinrich claimed to approach safety scientifically. Critique. Was Heinrich a Taylorist? Why it made sense to Heinrich. Missing data.
  5. The economics of safety
    His first break-through theme, hidden cost, safety and efficiency, critique.
  6. Causation
    A subject he emphasized and discussed extensively. Causation in early safety. The accident sequence (dominos). Focus on direct causes. Why it made sense to Heinrich. Duality in causes. The 88:10:2 ratio. Problems with this. Underlying and ‘other’ causes. A more complete model.
  7. The human element
    Humans as something to control, and how. Psychology, accident proneness and ancestry. Humans as an asset.
  8. The role of management
    Foremen, supervisors, top management and employer. Management literature of the time. Professionalisation. Safety as part of everyday business. Responsibility. The Axioms. Safety management.
  9. The triangle and reacting on weak signals
    His most famous metaphor. Also, his most misunderstood. What is it? What is it not? What Heinrich said. Ways to read it. Interpretations and attributions. Critique.
  10. Other main themes
    Professionalisation of safety, practical remedy, and social engagement.
  11. Heinrich in the 21st century
    A chapter that ties together several threads and reflects on Heinrich’s relevance for safety students, scholars and practitioners today.

The book concludes with a glossary and an appendix, presenting the Heinrich bibliography.

The blurb

Herbert William Heinrich has been one of the most influential safety pioneers. His work from the 1930s/1940s affects much of what is done in safety today – for better and worse. Heinrich’s work is debated and heavily critiqued by some, while others defend it with zeal. Interestingly, few people who discuss the ideas have ever read his work or looked into its backgrounds; most do so based on hearsay, secondary sources, or mere opinion. One reason for this is that Heinrich’s work has been out of print for decades: it is notoriously hard to find, and quality biographical information is hard to get.

Based on some serious "safety archaeology," which provided access to many of Heinrich’s original papers, books, and rather rich biographical information, this book aims to fill this gap. It deals with the life and work of Heinrich, the context he worked in, and his influences and legacy. The book defines the main themes in Heinrich’s work and discusses them, paying attention to their origins, the developments that came from them, interpretations and attributions, and the critiques that they may have attracted over the years. This includes such well-known ideas and metaphors as the accident triangle, the accident sequence (dominoes), the hidden cost of accidents, the human element, and management responsibility.

This book is the first to deal with the work and legacy of Heinrich as a whole, based on a unique richness of material and approaching the matter from several (new) angles. It also reflects on Heinrich’s relevance for today’s safety science and practice.

Leftovers

Some material had to be edited out from the final manuscript. Some of it is presented on this website.

Feedback

Read some of the feedback regarding the book.

 


On this page, you find some material that had to be edited out from the final manuscript of Preventing Industrial Accidents.

<This page is to be supplemented with more information in the future.>

Biographies

Originally intended as endnotes for Chapters 2 and 3. They are presented below in the order in which you find these gentlemen in the book.

Butler

Louis Fatio Butler was born in Hartford on 23 July 1871, the son of a US Army officer. He started working at The Travelers Insurance Company aged 19 and would spend his entire career with that company, working his way up from clerk in the railroad ticket insurance division. He successively became assistant actuary of the company (1901), actuary of the accident department (1901), assistant secretary (1904), secretary (1907) and vice president (1912). On 8 November 1915, he was elected the third president of The Travelers Insurance Company and The Travelers Indemnity Company, succeeding the late Sylvester C. Dunham. The Travelers prospered under Butler’s leadership, and a new branch, The Travelers Fire Insurance Company, was organised in October 1924, of which he also was the president. Butler was involved in Hartford’s social life, was a member of several clubs, and took an interest in civic affairs. Besides leading The Travelers, Butler was also a director of the First National Bank in Hartford and of the Travelers Bank & Trust Company. A father of four, he passed away on 23 October 1929. His funeral service at St. John’s Church was attended by Travelers agents and managers from all over the country.

(sources: Hartford Courant: 9 November 1915; 25 October 1929; 19 November 1929)

Granniss

Edward R. Granniss was born in 1899 in New Haven, CT. He graduated from the University of Connecticut (1922) and Brown University (1924) as a Mechanical Engineer. After working for ten years for the Travelers, he worked for the National Safety Council and various other organisations. During the War he served as a Colonel and Director of the US Army’s safety program. After the war he returned to insurance, working for Royal-Globe, and held numerous positions, among them President of the Washington ASSE. He retired in 1964. Over the years he was probably Heinrich’s closest and most enduring collaborator. Granniss passed away on 14 November 1990.

Ashe

According to his credentials in the book, Sydney Withmore Ashe was Chairman of the Committee on Safety and Health of the National Association of Corporation Schools, member of the Educational Committee, and secretary of the Foundry Section of the National Safety Council. Apparently, he was an experienced educator, Head of the Educational and Welfare Department of the Pittsfield Works of the General Electric Company, and had previously written books on electric railways and electricity.

Beyer

David Stewart Beyer was a rather big name in the early safety movement. He was the manager of the Accident Prevention Department of the Massachusetts Employees Insurance Association (the name of this company was changed to Liberty Mutual in 1917). Formerly, he had been Chief Safety Inspector of the American Steel and Wire Company. He was active in various safety organisations, as the director of the Standardization Committee of the National Safety Council, chairman of the Standardization Committee of the Boston Safety Society, and member of the American Museum of Safety, to name but a few.

Beyer moved on to become the Vice-President and Chief Engineer of the Liberty Mutual Insurance Company of Boston. As a piece of fun trivia, he was married to opera singer Maria Conde, who debuted in December 1917 at the Metropolitan Opera and duetted with none other than Caruso. She wrote poems and accompanied her husband to safety rallies, which inspired her to compose a verse titled Safety Last (published in Safety Engineering in 1919).

Cowee

George Alvin Cowee (1887-1975) was manager of the Bureau of Safety of the Utica Mutual Compensation Insurance Corporation. He would later become Vice President of the company. He was an active writer but would only write one book on safety. Besides this, Cowee published on geography (1911), insurance (1942) and banks and stock markets (1931, 1938, 1960).

Tolman

The main author, William Howe Tolman, was the director of the American Museum of Safety. The authors claimed that their book was the “only comprehensive work on safety that has yet appeared in the English language.” (Tolman & Kendall, 1913, p.ix) When compared to Van Schaack’s earlier text, one may indeed conclude that the latter was mainly about safeguarding certain situations, while Tolman and Kendall presented a larger picture.

Van Schaack

David van Schaack (1868-1929) was one of the pioneers of the early safety movement. He worked as director of the Bureau of Inspection and Accident Prevention of Aetna Life Insurance Company in Hartford, Connecticut. He was one of the people who organised the National Safety Council, served twice as its president, and was active in the development of the national safety code program.

DeBlois

Lewis Amory DeBlois, born in Brownsville, Texas, on 3 October 1878, was a graduate of the Harvard School of Engineering, class of 1899, and had been organising the safety work at DuPont from 1907 onwards before becoming their first VP of Safety. He also was the co-founder and first president of the Delaware Safety Council in 1918 or 1919. The Delaware Safety Council was the brainchild of Irenee DuPont (the President and owner of DuPont) and DeBlois.

In 1920, he was chosen as vice president of the National Safety Council, and in 1923, he took over as the NSC’s president. After 22 years with the company, he resigned from DuPont to take a position in NYC in May 1926 as executive vice-president of the Greater New York Safety Council. In 1929, he became Director of the Safety Engineering Division of the National Bureau of Casualty and Underwriters in NYC. After that he faded out of sight. He passed away on 26 February 1967 at his home in Sharon, Connecticut, aged 83.

Williams

Sidney J. Williams was an engineer with 20 years of experience as an industry executive. He came from the Wisconsin Industrial Commission, had formerly been chief engineer with the National Safety Council, and at the time the book was published was Director of the Public Safety Division of the NSC. During the Depression, he would be safety director of the Civil Works Administration.

Lange

Not much biographical information is available on Lange. From his writing it becomes clear that he had several years (possibly decades) of experience as a safety engineer under his belt. He had worked as a safety engineer for the Industrial Commission of Ohio and lectured to college engineering students about safety, and before the publication of this book he made a three-year trip to Europe to study industrial conditions. This delayed the release of his book, making it partly outdated before it was published.

Dow

Marcus Allen Dow was a notable figure in the early safety movement, working as a safety advisor in various organisations such as the New York Central Lines. He was involved with the National Safety Council, including chairing the transportation section and serving as its President in 1922-23. Before that, he had been the first president of the New York Safety Council. His pioneering work included making safety movies for education, among them Steve Hill’s Awakening (1914), the first safety film to tell a dramatic story. His main interest seems to have been public safety and transport safety, which is also the subject of Stay Alive!

Fisher

Boyd Archer Fisher appears to have been a management consultant. He was born in Iowa and studied at Harvard, graduating in 1910 with a BA degree in social science. He was involved in management training during World War I. Later, he was involved in setting up the US Rural Electrification Administration. During his career he was mostly occupied with matters that today would fall largely under human resources, producing several papers and several books, including Industrial Loyalty (1918) and Mental Causes of Accidents (1922). In later life, he lived in Oregon.

Chase

The author, Stuart Chase (8 March 1888 - 16 November 1985), was an American economist, social theorist and writer. He wrote numerous books and pamphlets on a great variety of topics, including semantics, economy, and societal critique. With his 1932 book A New Deal he coined the term for President Roosevelt’s economic programs after the Great Depression. One of his critical works is a pamphlet on waste in the modern world which was rewritten and republished several times (1922, 1925, 1930 and 1931). Chase means a different kind of waste than other authors (including Taylor and Heinrich), or today’s LEAN systems. According to Chase, waste is a larger issue than mere production efficiency. In his opinion, production efficiency is merely a means to an end (namely, the increase of profit), while Chase’s aim is on another level. As he says, you can produce unnecessary or inferior - and thereby wasteful - products in a very efficient way. Nor does Chase apply the avoidance of waste in a reductionist way (i.e. optimizing a machine, process, or factory); rather, he takes a systems approach, observing the entire economic system as one.

Roos

Nestor Robert Roos (19 August 1925 — 20 February 2004) worked with Petersen at the University of Arizona where he was a Professor of Insurance and Director of Safety Management. Among his work, one finds the Governmental Risk Management Manual (1976) which he wrote with Joseph S. Gerber.

 


On this page you can find some feedback on the book Preventing Industrial Accidents.

“You have presented a very thoroughly researched pack of source material. I applaud your forensic analysis of what he actually wrote and how you have highlighted the errors made by those who take the headlines, add their own interpretation and then rubbish it – either a deliberate 'straw man' to suit their own safety philosophy, or lacking rigour making a detailed assessment based on an incorrect assumption (Type III error!).”

“I just finished reading this excellent piece of work. The book offers many new views and insights in Heinrich's work. I can highly recommend the book to everyone who wants to know more about safety (science), the historic context of Heinrich's views/opinions and the opportunities Heinrich's work offers for safety management.”

“I literally devoured your degree thesis on the same topic! You did justice to Heinrich's work!”

 

Read the review in OHS Professional Magazine, June 2021 (Australian Institute of Health & Safety).

 


Another contribution to SafetyCary's blog, this time discussing the Myth of All Accidents Being Preventable. As of September 2020, there is a new and improved version on the SafetyStratus website.

For those feeling nostalgic: it used to look like this (but the new link is obviously better):

This is the audio recording of the presentation at the Risk and Resilience Festival at the Technical University at Enschede in November 2019 - combined with the slides. The sound quality is modest because I was walking towards and away from the recording device, sorry about that!

A few notes

07:50 The transcript of David Woods’s presentation used to be available from: http://csel.org.ohio-state.edu/videos/Transcript_Birth_Of_Resilience_Engineering.pdf. However, when I checked before uploading the clip it was gone…

12:50 Find the Eurocontrol White Paper on Safety-II by Googling (as suggested after the presentation), or just go to: https://www.sintef.no/globalassets/project/hfc/documents/eurocontrol-2013-from-safety-i-to-safety-ii_-a-white-paper.pdf/

Note that many slides include references to the sources and suggestions for further reading.

More impressions from the festival in these clips: my teaser (https://www.youtube.com/watch?v=7UHxhsBlXpw), and the aftermovie (https://www.youtube.com/watch?v=O2R4XFN2qgc).

Enjoy!

 

This is the first in a series of video blogs aimed at busting some safety myths. This one deals with the nonsense of counting causes and presenting the results as something meaningful. Do not be fooled by the pretense of research!

An In Memoriam for Gerald Wilde, including a bit about Risk Homeostasis, written for NVVK Info.

Dutch text only.

Check the riskhomeostasis website.

This is not a blog about getting married and the traditions and/or dress code attached to this happening. The well-known phrase just popped into my mind because it characterises my new little book quite nicely. This turned out to be a more or less spontaneous project that presented itself in-between other activities, and I decided to pursue the idea. Now, I am pleased to present to the world If You Can't Measure It - Maybe You Shouldn't.

The main inspiration for this book came shortly after doing a great master class at Nyenrode Business University about measuring safety and the problems connected to this. I thought it might be beneficial for the greater safety community to make some of the things discussed there available to a wider audience. Also, some readers’ feedback on Safety Myth 101 was that they would have liked a book about just one subject, instead of a wide spectrum. And so it happened, resulting in something old, new, borrowed and blue.

  • Old: the original core of the book is the Measuring, Goals and Indicators chapter from Safety Myth 101, along with some articles that were published elsewhere - mostly in updated, reworked and enhanced versions.
  • New: I supplemented this basis with a series of completely new chapters that provide a framework to work with, some new cases and examples, and a chapter with suggestions for different, possibly better, approaches.
  • Borrowed: standing on the shoulders of giants, I use and reference relevant material. The chapters themselves are compact and accessible, while endnotes provide details and references for those who want to dig in properly. There is also a chapter with recommended literature.
  • Blue: through a twist of randomness the book’s cover became blue instead of any other colour.

And, as said, while Safety Myth 101 covered a wide spectrum of subjects, If You Can't Measure It - Maybe You Shouldn't concentrates on just one topic, even if it is a topic that is extremely wide in itself. The title is obviously a play on the established management ‘wisdom’ that you need to measure things in order to manage them. The book discusses the pros and cons of this belief (or should I just say, myth?) and many other related subjects. I hope it will contribute to a more critical and useful approach to measuring safety and using numbers and statistics.

Enjoy. And feel free to provide feedback!

Now available!

 

CAUTION: Reading this book or parts thereof may

seriously harm your professional beliefs and habits

 

From the back cover:

 

You drive to your job on a beautiful Monday morning. The speedometer shows a steady just-below-50 km/h. On the radio, the newsreader tells you about the unemployment figures, the number of casualties of an earthquake in South-East Asia, and that the Dow Jones has fallen some points. Upon entering the gate of your company, you pass a sign that proudly announces that today is the 314th day since the last Lost Time Injury. In the hallway, you see the LEAN Kanban board that shows, among other things, production figures and sick leave statistics. At 8:30, you are all expected to gather around the board and discuss what is presented there. In the elevator to your floor, you quickly check what has happened on LinkedIn. You are pleased to see the number of ‘likes’ that your latest post has drawn. You walk on to your desk where you see a pile of papers. On the top is a copy of the newest balanced scorecard that your boss’s secretary must have dropped there on Friday afternoon. While sipping your first coffee of the day, you check your calendar and are reminded of the annual performance review at 10 o’clock.

So far, you have not done one tiny piece of actual work, but you have already been confronted with a mass of figures, measurements and metrics. They are around us, all the time. But why? Do they help? How do we deal with them? This little book intends to help you think about them in different, maybe better, ways and handle them better. Thirty rather compact chapters offer a critical view on measuring, indicators, metrics, goals and statistics within a context of safety. The book also tries to offer some useful and practical suggestions for different (possibly even better) approaches, or at least different ways to think about these subjects.

 

To see the contents and a quick overview, go to the Contents Page.

 

Order from Amazon.com.

It is our pleasure to present the thesis written in partial fulfilment of the Human Factors and System Safety program at Lund University.

Titled Heinrich’s Local Rationality: Shouldn’t ‘New View’ Thinkers Ask Why Things Made Sense To Him?, the thesis deals with both the 'old' and 'new' views of safety, while providing insights into Heinrich's work, drawing on a unique collection of material from the 1920s to 1940s.

Download by following the link.

Abstract

Herbert William Heinrich is one of the most influential pioneers within safety. His concepts, originally from the late 1920s, influence safety practice and theory even today. In recent years, Heinrich’s legacy has received increasing critique, including from contemporary safety authors who are generally counted among the ‘new view’. The objective of this thesis is to explore how ‘new view’ thinkers/authors discuss Heinrich and whether they apply the approaches they advocate to Heinrich’s work.

This thesis starts by offering a biographical study of Heinrich and an overview of his work, which establishes a timeline and the context necessary to put Heinrich’s work into perspective. This background analysis also touches on Heinrich’s influences, how his work developed and how his work is cited. This part of the thesis meant literal archaeological research, uncovering more of Heinrich’s work than has been available or accessible to safety science before, and through this it presents a unique collection of resources that gives a richer picture of both Heinrich the person and his work.

The thesis then turns to a characterization of the ‘new view’ and a selection of authors to work with. It reviews how these ‘new view’ authors discuss Heinrich and his work. It shows that ‘new view’ authors discussing Heinrich rarely employ ‘new view’ approaches. Seldom do they seek deep explanations or explore alternatives. Mostly they take a position and discuss things from that point of view. In addition, they bring forward claims and attributions with regard to Heinrich that are not substantiated and are open to question when one studies Heinrich’s work properly instead of cherry picking.

The discussion suggests that substantial critique is only one element in the ‘new view’ discussion of Heinrich’s work. The normative and judgemental language also serves as a pedagogical device. In addition, the increasing ‘new view’ critique of Heinrich can be seen as part of establishing a new movement that needed something to contrast its message with as a tool of persuasion. In the manner of classical storytelling, the ‘new hero’ needed some ‘villain’ to overcome in order to make the message look better.

This illustrates that critique of ‘old’ safety theories and approaches has substance as well as ethical implications. When we offer critique, we have a choice of what story we want to tell - just as is the case with accident investigations. Critique does not only serve scientific purposes or the proposal of better or alternative approaches. The discourse of this critique can also serve other goals, for example ideological purposes, or even be a way to market a product to sell.

As an alternative analysis, the thesis offers an attempt to recreate Heinrich’s local rationality regarding three topics and explores why it made sense to him to subtitle his book ‘A Scientific Approach’, to offer a linear model and focus on direct causes, and to discuss ‘real’ and ‘true’ causes. This exercise shows that, just as they give us a better and richer understanding of accidents and organisational events, ‘new view’ approaches prove to be useful for the analysis of texts. Looking for the local rationality of an author suspends judgement and contributes to a better and deeper understanding.

The thesis concludes by suggesting some areas for further study.

Just published from the ICSC 2018 conference:

Brave New World: Can Positive Developments in Safety Science and Practice also have Negative Sides?

Abstract

The last few decades have brought advances and positive developments in safety. Traditional approaches to prevent accidents are supplemented and enhanced with approaches that stress positives instead of negatives, that humanize, that are more systemic, that appreciate complexity, context, variability and adaptability. One must wonder, however: are there any negative effects we should be wary of? This paper reflects on this question by using five mechanisms: 1) the language we use to discuss things, 2) the emergence of professional tribalism, 3) the evolution of ideas and concepts, 4) side-effects of success, and 5) ethical dilemmas we may encounter in our everyday work. New developments will come with by-products that we cannot prevent. Suggestions to deal with this include open, critical minds, and awareness of possible pitfalls. Being prepared to deal with the inevitable variations that will occur is important, as are balanced approaches in how professionals discuss and handle concepts and tools.

Open access - get the full paper here.

An In Memoriam for Jens Rasmussen, written together with Frank Guldenmund for NVVK Info.

Read it here.

Mind The Risk has now opened its own YouTube channel. In due time this will be filled with a variety of clips, but of course this will take some time.

Clips to be watched here:

I had the pleasure to have a conversation with Andrew Barrett from Safety On Tap about Myths and more. Please listen to this great podcast - and other episodes on the Safety On Tap website.

 

The May 2017 issue of NVVK Info (the magazine of the Dutch Society of Safety Science) features an opinion piece about the Certification on the Safety Culture Ladder.

You can read the original Dutch version here, or check the English translation below.


A Ladder on Quicksand - A Critical Look at the Safety Culture Ladder

The safety industry has brought a large number of ineffective and counter-productive initiatives into this world. The problem is that many of those initiatives are generally well intended, sound intuitively right, seem to have an effect, and most of all: make us feel good. What these initiatives often lack, however, is a sound theoretical basis, and there is insufficient attention for adverse side effects.

The relatively recent certification on levels of the Safety Culture Ladder fits this category absolutely. Thinking critically, the Safety Culture Ladder circus is based on a couple of myths. I will discuss five of those below. First, a bit of background for the uninitiated.

What is this Safety Culture Ladder?

The origins of the Safety Culture Ladder (‘Veiligheidsladder’) lie in the accreditation system for railway contractors of the Dutch railway infrastructure manager, ProRail. On the one hand, this was intended to stimulate safe work by contractors. On the other hand, the certification was also a tool in the awarding of jobs to contractors.

How does this work? The basis is the well-known safety culture/maturity ladder model. This “ranges the safety awareness and behaviour on five levels”. The more an organisation integrates safety responsibility, reflection and investments into the way it conducts business, the higher its score. The higher the level the organisation is rated on, the higher the likelihood that it is awarded a job it has signed up for. The ‘Safety Awareness Certificate’ is valid for a maximum of three years. Every year the holder of a certificate is audited to check whether the organisation still lives up to the criteria attached to the certification.

The Dutch organisation for standardization, NEN, took over the Safety Culture Ladder in June 2016. [7]

Myth 1: Safety Culture is measurable

One major problem with ‘safety culture’ is that many people think that it is a ‘thing’. It is not. Apart from fungi and yeast, you will not find culture in nature. It is simply impossible to collect and measure all the culture of an organisation. Objective, scientific measurement of culture is in fact impossible. To make the matter of subjectivity worse: there are numerous definitions of safety culture [1, 5]. In general, these mention things like common or shared values, norms and behavioural patterns within an organisation or part of the organisation.

(Safety) culture is extremely complex. We can describe some of its characteristics, but providing a complete picture is not possible. Todd Conklin [3] compares the measuring of complexity to trying to measure a river. We can measure the water level, how fast it flows, the pollution and the temperature of the water, but none of this really describes the river, not even combined. At best, we describe a part of the river at some moment in time.

We see the same with so-called culture assessments and measurements. A couple of questionnaires are sent out to get an impression of how people within the organisation think about certain things. Sometimes additional observations are done on location. Afterwards, this information is summarized into a short profile. This is like pointing to the box at the right-hand side of the Wikipedia page when trying to describe the Rhine.

A so-called safety culture assessment is nothing more than a snapshot. Making this snapshot can be compared to anthropologists who travel to some tribe in the rain forest. There they take a couple of Polaroid pictures. Then they travel back home, where they study these pictures safely within the walls of the university and try to determine the tribe’s culture based on these snapshots. Proper culture research is intensive and hard work on location. Researchers spend a lot of time (often years) on location and become more or less part of the community they are researching.

Myth 2: Safety Culture is makeable

Culture emerges from the interaction between people. It is a social construct. From these interactions, certain patterns in thinking and behaviour emerge spontaneously at some point in time, and at some point these patterns will start to affect the interactions. A simple definition of culture is “The way we do things around here”. Countless factors determine the way people do things and how they interact. Research and experience teach us that some factors can have certain consequences.

We can make an effort to enhance or weaken these factors in an attempt to get the desired effect and to influence culture through this. We can facilitate the growth of certain phenomena within the organisation, but there is no guarantee that these will develop in the way we wish. We cannot design culture on a drawing board and then implement it with a five-point plan. Culture is not a mechanical phenomenon that is makeable and manageable to our whims. We do not live and work in a laboratory with controlled circumstances, after all, but in an inherently messy and complex world with a multitude of influences, goals and uncontrollable or unexpected factors.

Myth 3: There is ONE best Safety Culture

Culture helped to bring us where we are today. When our ancestors lived on the savannah with numerous natural threats, their interactions contributed to the survival of the individual and the community. These interactions, and the culture that emerged from them, depended on the circumstances. The culture of a desert tribe is very different from the culture of a tribe living in the rain forest. Culture helps in handling specific problems under certain circumstances.

Therefore, it is nonsense to try to copy what companies like DuPont do. What is right for them may be entirely useless for other companies. Surely, one should see what works for others and whether you can learn from that, and possibly implement it in a form that suits your organisation. Copying blindly, however, because they have such a ‘great’ safety culture and that ‘must’ be ‘good’ for us also, is a fallacy and an approach that is bound to fail or lead to adverse results in many cases.

The credo One Size Fits No One applies also to safety. Therefore, it is an illusion to describe an ideal safety culture in some standard, or to define levels of safety culture. It has to be tailor-made for each organisation. There may be similarities in the characteristics, but every culture will be unique.

Myth 4: Safety Culture is homogeneous

Often you hear that a company has ‘a safety culture’. This suggests that the culture within that company is homogeneous, and that one will find the same elements everywhere within that organisation. Everyone should be able to confirm from personal experience that this will rarely be the case. Within an organisation, things work differently in the accounting department than in the maintenance crew, and sales will probably differ hugely from production.

These differences are necessary to do the work of these different departments in an effective and efficient way. Accounting may be more precise and formal, governed by the strict rules of bookkeeping and their closeness to top management, while the maintenance people have to solve problems quickly when something happens. They will have a greater focus on improvisation. Sales will have their main attention on customers and may promise the sky, even if production can hardly make these promises come true.

It is problematic to talk about one organisational culture. Even if there are many similarities between the units of an organisation, there will always be countless nuances and differences. Therefore, it is highly unlikely that the entire organisation will be on the same level; usually an organisation will be on several levels at the same time, sometimes depending on the part of the organisation, sometimes depending on the subject.

Assuming that it is possible to measure “something”, the result of a certification will always indicate some kind of ‘average level’ without real meaning. What if most departments do rather well, but maintenance systematically screws up? It reminds me of the story of a person with a healthy average body temperature of 37 degrees Celsius. Regrettably, he did not survive with his head in the stove and his feet in the fridge...

By the way, you cannot separate an organisational culture from national/regional cultures. They may be much stronger than an organisational culture anyway. I just have to think of my own experiences during my stint offshore, with the major differences between Dutch and British employees, for example with regard to the need for procedures and risk appetite.

Myth 5: Certification is a Good Thing to do

During recent decades, some people have come to regard certification as a solution for most problems. We see them popping up for all kinds of subjects on various levels: personal, companies, systems and products. Certification does have a few positive effects, but also significant negative side effects. One of these is that many will do what is necessary to pass the requirements for certification and nothing more. One will choose the path of least resistance (‘learning for the exam’), instead of doing what is really necessary to live up to the underlying goals of the scheme. This is a typical example of the means becoming the end. Certification should never be a goal in itself; it is merely a tool.

We see some reversed logic at work here. The thought is obviously that a good company should be able to get a great score without much effort. The selection of companies, however, is done on the basis of certificates, and the certificate does not show you whether you get the top of the class, a company that barely passed, or a company that managed to fool the system. I have heard stories of cases where an audit resulted in Level 4 because the auditors had been convinced without even observing operators out in the field. That way contractors are selected by excluding others, and seemingly you can document that this happened with due diligence.

I fear that the Safety Culture Ladder will be just another victim of the ISO-illness. Rubbish will be produced in a highly controlled manner, while everyone lives in the illusion of delivering quality. No, that is not just a start-up problem of the Safety Culture Ladder scheme; this is an inherent problem with all kinds of certification.

And we haven’t even mentioned the enormous incentives that will undermine any good intentions. After all, a requirement like “if you reach this level on the Safety Culture Ladder, then we will award you a job worth millions” is a major stimulus! What you measure is what you get. If you require your contractors to have Zero LTIs, you can be pretty sure that they will have pleasingly low accident levels. If you require your contractors to reach a certain level on some certification scheme, they damn well will be at least on that level. I presume that if you were to require them to prove that they employ at least 1% Martians, a good deal of contractors would deliver that proof. [2]

And so

A friend recently asked me if the Safety Culture Ladder was a representation of reality. Of course it is not. It is a model. As George Box said, “All models are wrong”. And then he added, “...but some are useful”.

The Safety Culture Ladder can definitely have added value when used properly. It can show you in a very simple way where opportunities with regard to safety (culture) can be found. One of the ‘fathers’ of the culture ladder, Mark Fleming, called it at the time the Safety Culture Maturity Model. He did however caution “the model is provided to illustrate the concept and it is not intended to be used as a diagnostic instrument”. [4]

The model has seen a metamorphosis from a descriptive/educational model, to a tool for self-evaluation (Hearts & Minds), to an audit tool, and now even a way to certify and award jobs. This is a clear example of how originally good and useful ideas are hijacked, twisted and abused until they do not contribute anymore and rather facilitate the opposite. A pity.

Will this improve safety? Maybe, a bit. Just like ISO 9000 has improved quality. Researchers do not fully agree about that, by the way, and the so-called evidence for ISO’s positive effects is anecdotal rather than thoroughly scientific. [2] And just as with ISO, we have to ask ourselves seriously whether it will be worth the amount of paperwork and the adverse side effects. I am very sceptical! Regrettably, I fear that, just as in the case of ISO, many organisations do not really have a choice, because the vested interests have already grown too big.

References

[1] Antonsen, S. (2009) Safety Culture: Theory, Method and Improvement. Farnham: Ashgate.

[2] Busch, C. (2016) Safety Myth 101. Mysen: Mind The Risk.

[3] Conklin, T. (2016) Pre-Accident Investigations: Better Questions. Boca Raton: CRC Press.

[4] Fleming, M. (2001) Safety Culture Maturity Model. Norwich: HSE Books.

[5] Guldenmund, F.W. (2010) Understanding and Exploring Safety Culture. Oisterwijk: BOXPress.

[6] Henriqson, E., Schuler, B., Winsen, R. van, Dekker, S. (2014) The constitution and effects of safety culture as an object in the discourse of accident prevention: A Foucauldian approach. Safety Science 70, 465–476.

[7] http://www.veiligheidsladder.org/en/

 

Why we often miss the point when we think we have the key to success.

This blog is based on (the first half of) my paper and presentation for the NVVK Congres in March 2017. The original version, published in the special conference magazine, can be found here.

 

The problem sketched in the Call for Abstracts said: “We are right, but it’s not acknowledged. Or we have a great plan, but no-one feels like it. Are we too ambitious, are we too progressive? Also in situations where we succeed there may be questions. Why is success sometimes luck and when is success guaranteed?”

My hypothesis is that we often miss the point when we think that we have found the key to success. Sometimes we should ask ourselves whether we should hunt after success, or at least what kind of success we should be after.

To explore this further, let’s split the problem into three questions:

  1. What tends to catch on?
  2. What is success actually?
  3. What is needed for real success?

Question 1: What tends to catch on?

What things have considerable success? What sells well, both in general and within safety? Sticking thematically to things that pretend to be about improvement, we could for example check popular management and self-help books. You have surely seen them at airports or in bookshops. Some of those sell millions, with or without the help of celebrities like Oprah Winfrey.

Without doing a thorough study, we can see that many of these books are characterized by:

  1. Big Promises: you will be more beautiful, lose weight, become richer or smarter, you get more friends, find the love of your life, satisfaction and health.
  2. Suggesting a relatively easy way to your goal. Often a simple linear solution.
  3. Fix yourself or something around you. All you have to do is rediscover a ‘secret’ or ‘forgotten wisdom’. Often one hidden deep inside yourself.
  4. There is a certain program or number of steps which you have to follow.
  5. One best way. A one-size-fits-all approach that ignores context, personal characteristics and the like.
  6. This way, often attached to workshops or a program, is faster than common approaches to get to your goal.

Titles of million sellers like How To Win Friends And Influence People (1936, Dale Carnegie), Think And Grow Rich (1937, Napoleon Hill), The Power Of Positive Thinking (1952, Norman Vincent Peale), Awaken The Giant Within (1991, Tony Robbins), The Success Principles (2005, Jack Canfield) and The Secret (2006, Rhonda Byrne) are very illustrative.

Mind you, let’s not underestimate the value of some books found in the self-help section. Many of them are worthwhile as an easy way to familiarize oneself with scientific principles (‘pop science’). Some critical remarks are due, however:

  • One wants answers, while questions might be more useful.
  • One wants simple, fast, decisive solutions, while it’s often about things that require time and hard work.
  • One assumes simple, linear cause and effect relationships, while the problems are complex.
  • One assumes that ‘more’ is better [1.].

When we look at the safety community to find out what catches on there, it is not really different. We see a similar pattern, because humans love simple explanations and solutions that give a perception of certainty. Especially when they make intuitive sense and give a warm fuzzy feeling. Like when some research claims that 88% of all accidents are caused by human error [5.]. That seems to make sense, because most accidents do have a human component. So, let’s start a program for behavioural change or safety awareness!

We find solutions that promise quick results, like less sick leave or reduced accident metrics. Even if these fast results are mostly achieved by playing the metrics and not by attacking their causes. [3.]

We find solutions that seem to promise control, because people do not like uncertainty. An illusion of control often seems more important than really addressing a problem. Therefore we choose solutions like management systems, rules, procedures, supervision and compliance.

We also find many easy solutions, quick fixes that seem to address the problems, but really the only thing they do is treat symptoms or aim at other things. The growing and ever-popular demand for certificates is a typical example. In theory an ISO-certified company delivers quality; in practice it could just be a case of that company producing rubbish in an extremely controlled way. The same applies to steps on safety culture ladders. [3.]

In particular, we find more of what we already did (like ‘incentives’ or observation schemes). Often it is old wine in flashy new skins, labelled with the trendiest buzzwords. A quote commonly attributed to Einstein says: “We cannot solve our problems with the same thinking we used when we created them”.

A possible cause of all this might be a wrong understanding of what success actually is, or should be. That is the subject of our next question.

Question 2: What is success actually?

What is success? And what success are we actually talking about? Some obvious options:

Sales figures? Low accident statistics?

Often this is not the best option. Because the usual perception of success is that more, faster or bigger is better, success is often expressed in a number/metric. That also applies to safety. Efforts are then mainly aimed at improving the metric, whether this really contributes or not.

Something that gets much positive response? Something that gives us a warm feeling?

This also makes me think of the “50.000.000 Elvis Fans Can’t Be Wrong” LP. Not all music lovers will respond positively to that album. Another example is the contagious “Dumb Ways To Die” campaign. The song sticks forever in your mind and the clips gained millions of ‘likes’, but whether this contributes to improving safety is not quite clear.

Safety improvement?

It seems obvious that this is where safety professionals’ main attention should be (but not always is; conflicting interests are not unheard of…). But: what is safety? How can we use ‘safety’ to determine whether we are successful or not? Traditional methods (counting accidents, compliance) are definitely not good enough, and newer approaches (like culture assessments) also have major shortcomings. Monitoring processes sounds like a good suggestion, but is no 100% guarantee, because complexity is hard to catch in processes. A qualitative assessment based on a variety of points of view instead of one indicator is a possible way forward, but definitely not an easy one, and many will probably regard it as ‘soft’, ‘vague’, ‘difficult’, ‘time consuming’ or ‘subjective’.

Therefore I do not have an easy answer to the question of what success is, but as we shall see, maybe we do not need one anyway. Most likely an answer is found in the middle, and it will probably display elements of several options. Recently I have encountered a couple of alternative views of success that are much more useful and valuable than superficial ‘improvements’ oriented towards material or numerical goals. As far as I am concerned, these alternative definitions facilitate a nuanced, qualitative view on the matter. These definitions also make clear that success depends fully upon your context and perspective!

In his TED Talk, Alain de Botton [2.] says that you cannot succeed in everything. Every definition of success has to acknowledge that it will miss out on certain aspects; quality often does not go together with quantity. We are extremely sensitive to outside influences (family, peers, media) that partly determine how we see things. Therefore it is important to make sure that how we perceive success is our own view. It is bad enough if we do not achieve our aims; it is much worse to discover that we really wanted something else. This is why he argues for an individually oriented definition of success.

Frédéric Laloux [6.] speaks about the success of organisations and says that this is connected with the purpose of a (self-managing) organisation. “How do you define success? Profitability, market share, or increase in share price? Those are straightforward to measure, but (…) not very relevant. (…) the interesting question is: To what extent do the organization’s accomplishments manifest its purpose? This is the kind of variable that resists being reduced to a single measurable number.” Philosopher Viktor Frankl agrees: “Success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side-effect of one’s personal dedication to a cause greater than oneself”. For safety professionals this could be an indication that our success is connected with how we fulfil our ‘greater cause’: improving safety, or supporting the purpose of our organisation. Then success emerges, generally not expressed in numbers of accidents or something similar, because complexity resists being summarized into a metric.

Todd Conklin [4.] discusses ways to describe complexity. He says that numerical indicators often are limiting and compares complexity with a river. You can measure how fast the stream is, or how high the water is, but that only describes very limited features of a river. That is why Todd says that “We are focusing on changing our safety results when perhaps we should be focused on changing our safety processes”, because “One of our biggest issues as safety professionals is that we manage to our organisation’s outcomes instead of managing and understanding the processes that we use to create those results”.

Success is therefore often not what you think. Real success is often counter-intuitive. If everything goes smoothly during an exercise, that may very well be an unsuccessful exercise. You tend to learn most from exercises where things go wrong.

Question 3: What is needed for real success?

Taking the above into account, we may conclude that success (within safety or otherwise) is not the easy, straightforward concept that we often assume it to be. Simple interventions improve indicators, but not necessarily the real world of sharp-end practitioners. Simple interventions can temporarily fix symptoms, but not the problem (an aspirin for your headache). They tend to have unexpected side-effects and create new problems (for example by using an increased safety margin for more efficient production).

It can be very tempting to fall for simple solutions and quick fixes. However, High Reliability Organisations that have been operating for a long time in high-risk situations teach us about ‘reluctance to simplify’ [8.]. Therefore it is important to ask questions, instead of heading directly for answers. Robert Long [7.] writes: “If you have all the answers, what need is there to listen and to learn?” We fool ourselves when we think that we have solved complex problems with linear answers and simple solutions. Acknowledge the complexity of things. Realise that you cannot know and predict everything. Acknowledge and deal with uncertainty.

Another thing is that we may be focused too much on safety alone. We tend to see safety in isolation, while we should work more towards win-win situations by looking for improvements that improve safety as well as production and other business objectives, or just make the work easier, simpler or more fun. Often, however, it seems like safety recommendations do the very opposite of that - and they tend to have that reputation anyway. Often even before they have been tried out...

References

[1.] A. Bergsma, J Happiness Stud (2008) 9: 341. doi:10.1007/s10902-006-9041-2

[2.] A. de Botton, http://www.ted.com/talks/alain_de_botton_a_kinder_gentler_philosophy_of_success

[3.] C. Busch, Safety Myth 101 (2016, ISBN 978-82-690377-0-8)

[4.] T. Conklin, Pre-Accident Investigations: Better Questions (2016, ISBN 978-1472486134)

[5.] H.W. Heinrich, Industrial Accident Prevention (McGraw-Hill, 1941)

[6.] F. Laloux, Reinventing Organisations (2014, ISBN 978-2-960133-50-9)

[7.] R. Long, Real Risk - Human Discerning and Risk (2015, ISBN 978-0-646-90942-4)

[8.] K.E. Weick and K.M. Sutcliffe, Managing The Unexpected: Assuring High Performance In An Age Of Complexity (2001, ISBN 0787956279)

 

Just Culture

Day 3 was entirely dedicated to Just Culture. Where does the perceived need for just culture come from? And what is Just Culture anyway? It appears that the term Just Culture has gone off the rails in various places. The term has even become a verb - “I have been just cultured”. For many organisations it has become a term about abuse rather than justice.

“Just Culture? We know it when we see it, we’ll know if it is really just when you start applying it”.

In many organisations a Just Culture policy is very much a reminder of who is in power. A typical example of a definition of Just Culture is from Eurocontrol:

“Just Culture is a culture in which front-line operators and others are not punished for actions, omissions or decisions taken by them which are commensurate with their experience and training, but where gross negligence, wilful violations and destructive acts are not tolerated.”

The key word in all these Just Culture definitions is the BUT. Everything that comes before the but is there to mollify; what comes after will steer the direction of actions. An additional problem in relation to Just Culture is that there will be external forces (police, internal affairs, regulators, politicians, media, …) calling for accountability, liability, etc.

Just Culture attempts to draw a clear line between acceptable and non-acceptable actions. The Systems Approach does not look at WHO is responsible, but WHAT is responsible. What is it that sets people up for failure? Failure and success are joint products of many factors.

But, some people may say, isn’t the consequence of this view that people will “blame the system”? Worryingly, these concerns come from inside the profession. Reason (1999): “has the pendulum swung too far? We need to revisit the individual”. Wachter (2009): “more aggressive approach of poorly performing practitioners”. Shojania and Dixon-Woods (2009): “performance and behaviour of individuals”. These are outcries without evidence of a better solution.

[At this point the Learning Lab was side-tracked into a rather lengthy discussion of intrinsic (autonomy, mastery and purpose) and extrinsic (sticks and carrots) motivation, of which I chose not to take notes - please check for example Barry Schwartz’s work for some good explanations. One interesting observation, however, was that some managers/organisations seem to require from people that “you have to have intrinsic motivation”. Ha ha ha]

Safety Culture allows the boss to hear bad news. This requires a willingness to disclose, which requires trust. Just Culture is an approach to deal with this “pull and push”. The ‘but’ provides a compromise between the Systems Approach and the possibility to punish the individual.

Retributive Justice

Retributive Justice is about punishment. It asks:

  • Which rule has been broken?
  • By whom? 
  • How bad was the breach? (outcome plays a major role, even though it should not - many investigation policies are driven by severity, especially in regulations)
  • What should the consequences be?

These four questions still fuel most ‘just culture’ systems, which therefore become retributive in nature. This is backward-looking (back at the case) accountability.

An often heard comment is: “You have nothing to fear if you have done nothing wrong…”. How scary is that? Should you fear the worst? Or should one see this as a promise of a ‘fair trial’ (which you can often just forget in non-democratic organisations)? It has been argued that practitioners should judge cases instead of prosecutors and judges. Would that be a way to go? Or should one educate the judges and prosecutors (“if you can’t beat them, join them”)?

Are there shades of retribution? How to go from honest unintended mistakes through at-risk behaviour and choices that increase risk to negligence and conscious disregard? The problem with judgements is that they are judgements (and not objective facts)! When you look at a definition of negligence, you will find that there are many (!!!) judgements in that definition. For example: what are ‘normal standards’, ‘prudent’, ‘reasonably skillful’, etc. Who gets to say? The government? The judge? A jury? In organisations it’s often done by default, not by conscious decision!

Just Culture processes have fallible people assess someone else’s fallibility… Higher up in organisations the system is often perceived as just (Von Thaden, 2009), but blame drives reporting underground (Frederick & Lessin, 2000). To make this system work, two safeguards are needed:

  • Substantive justice: How the rules are made fair and legitimate.
  • Procedural justice: A legitimate process for rule breakers, offering protection and an independent judge. 

The problem: NONE of this exists in the non-democratic organisations we work in. Most countries have had a defence against the arbitrariness of justice and (especially) the Crown since the Magna Carta. Ideally, one should have a judge with no stake in the case, but strictly speaking this is impossible, and in organisations it isn’t even considered. Another problem is that one needs knowledge of the messy details in order to be fair. But this can conflict with independence - people who know the messy details probably are, or have been, part of the environment…

A way to enhance Substantive Justice is to have worker/practitioner input in the rules that are made. This also creates greater ownership and helps stay closer to work-as-done.

Procedural Justice and Just Culture are problematic in organisations because organisations are not democratic. They are hierarchical (top-down), there is no separation of powers, and power goes over justice, law or rule.

There are some possible solutions, but all have problems:

  • Have a judge with experience from the field (but: is his/her knowledge up to date? Independence?)
  • Peer reviewer, from another organisation (but: they may know the people anyway? differences in the messy details?)
  • Do it bottom-up.

Process protection:

  • prior notice of case and evidence
  • openness to scrutiny of case and evidence
  • knowing what is at stake
  • fair opportunity to answer case, bring advocate
  • fair opportunity to bring own case
  • right of appeal
  • separate guilt-phase from penalty-phase

The Just Culture Process

There are several just culture ‘decision models’ around - many are a variation of the one pictured below and can be found in many (SMS) documents.

 

How badly this superficially good-looking model works was discussed using the example of freely distributed mosquito nets in Africa. While this act is extremely well-intended, there are many complex side-effects (the tiny meshes catch small fish and wipe out populations, insecticides get into the water, etc.). Can one hold a mother who goes fishing with the net responsible for these effects?

A common principle of justice is that you are innocent until proven guilty. The process pictured above shows the opposite. Starting with ‘guilty’ on the left, you are guilty until proven innocent! To make things worse: the outcomes are usually quite arbitrary, because you can argue either way at about any junction in the flow-chart.
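To make that arbitrariness concrete, here is a minimal sketch of the kind of culpability flow-chart such decision models encode. The questions, their order and the labels are illustrative assumptions on my part, not any specific published model or SMS document. Every branch is a judgement call dressed up as a yes/no question.

```python
# A minimal, hypothetical sketch of a generic 'just culture' culpability
# flow-chart. The junctions and labels are assumptions for illustration only.

def culpability(intended_act: bool,
                intended_harm: bool,
                knowingly_violated_procedure: bool,
                procedure_workable: bool,
                substitution_test_passed: bool) -> str:
    """Walk the junctions of a typical decision tree.

    Note that every argument is itself a judgement ('workable'? 'knowingly'?),
    which is exactly the problem discussed above: you can argue either way
    at almost every junction.
    """
    if intended_act and intended_harm:
        return "sabotage / malevolent act"
    if knowingly_violated_procedure:
        # Whether the procedure was 'workable' is decided after the fact...
        if procedure_workable:
            return "possible reckless violation"
        return "system-induced violation"
    if not substitution_test_passed:
        # 'Would a comparable peer have done the same?' - judged by whom?
        return "possible negligence"
    return "blameless error"


if __name__ == "__main__":
    # The same event lands in a different box depending on how one single
    # judgement call (was the procedure 'workable'?) is answered.
    print(culpability(False, False, True, True, True))   # possible reckless violation
    print(culpability(False, False, True, False, True))  # system-induced violation
```

The sketch also makes the ‘guilty until proven innocent’ point visible: the function starts at the most culpable outcomes and only reaches “blameless error” once every accusatory branch has been argued away.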

Restorative Justice

Restorative Justice is an alternative approach that goes way back. For example, the ancient Hebrew laws about land and property disputes were typically resolved through restorative justice. The Truth and Reconciliation Commission (TRC) in South Africa also builds on these principles.

Braithwaite (1999): “A process where all have an opportunity to discuss how they have been affected by an injustice and to decide what they need to repair the harm”.

Four questions for Restorative Justice:

  • Who is hurt?
  • What are their needs?
  • Whose obligation is it to meet those?
  • How do we involve the community?

What cultural conditions do you need for things like these to happen? And does it only work for things that could have gone wrong but just went right (someone made a serious mistake, but no one was hurt), or also when outcomes are serious?

The day was closed with an exercise around a serious accident and a discussion of whether some people are really beyond restorative justice because the injustice seems so grave to us. Or is it the process that is unleashed onto people that doesn’t leave space for restorative thoughts anymore?

Much depends on where you stand ethically, on what moral school you adopt. Every approach makes certain analytical or other sacrifices through the inherent limitations of the underlying models. You can deal with this by basing your position on evidence (not just outrage) and by paying attention to other positions.

 

Go back to Day 2 or proceed to Day 4 (available when someone sends me their notes...)

Moral Heroism

People seem to need moral heroes. See also the J&J case. Why is that? And who gets to determine who qualifies as a moral hero? What criteria do we use? Is a moral hero someone who is willing to make sacrifices for the purpose of something larger? Who does not necessarily have personal gain? Who sticks to their principles?

What view to adopt? Claus Jensen wrote in his Challenger book that there is no point for engineers or managers to be moral heroes. Feldman, on the other hand, says: “Engineering societies need to require engineers to act in accordance with the prevent-harm ethic.”

And, when looking at the bankers from the NRC article, or the J&J executives… are they amoral (without any morals - see the NRC quote “leaves no room for moral”) or immoral (acting unacceptably relative to a standard)?

Time to explore some moral views.

Moral Philosophy 101

Philosophy can roughly be divided into:

  1. Critical thinking, rhetoric
  2. Descriptive philosophy (existentialism, nature of being, nature of knowledge, etc, etc)
  3. Prescriptive philosophy (how should things be?), including moral philosophy.

Sidney took us through a crash course in Moral Philosophy by discussing the five main schools. As often, there is no good and bad. It’s what you prefer, what you choose, what fits your profession:

  1. Virtue ethics
  2. Consequence ethics
  3. Duty ethics
  4. Contract ethics
  5. Golden Rule ethics

The problem with ALL five: What is Good?!

Virtue ethics

Following your moral compass. Virtue is a reliable habit, part of your identity, said to be independent of context. 

Aristotle: on each side of virtue is a vice…

Coward <---- too little --- Courage --- too much ----> Reckless

Scrooge <---- too little --- Generosity --- too much ----> Wasteful

Bragging, arrogant <---- too little --- Humility --- too much ----> Too dependent

Context does actually affect these virtues (tiredness, work, worry, …). And there are other problems, including conflicting goals, local rationality (context matters! sometimes a virtue works, sometimes it doesn’t) and the fact that virtues are underspecified.

Aristotle: when you get wise you get the ability to know which virtue to apply. Phronesis: practical wisdom.

Consequence ethics

In its most extreme form this is called utilitarianism: the greatest good for the greatest number (Jeremy Bentham, John Stuart Mill). Just like virtue ethics doesn’t care about consequences, consequence ethics allows you to ignore virtues. Consequence ethics has major appeal to many managers (it makes very much sense in the J&J case: “Sacrificing 5% who develop breasts for the good of others and the company must be acceptable”).

The ‘fatal’ weakness of utilitarianism is the question of what is good. Utilitarianism cannot deal with justice and rights. Minorities generally draw the shortest straw…

Duty ethics

This is the ethics of principles. Very commonly found in for example the medical world (“doing the right thing”). Immanuel Kant is the ‘big name’: principles are universally applicable, they are not imposed from the outside but come from the inside. 

There are differences between rules (you follow them, they depend on context, non-compliance can be punished) and principles (you apply them, they are context-independent, non-compliance is not punishable).

Duty ethics often clashes with consequence ethics; this is often illustrated by the different views of practitioners and managers. Duty ethics is often related to the fiduciary relationship (trust): someone puts their well-being (their life, even) in your hands.

Contract ethics

Right is what the contract tells you to do. Right is what people have agreed among themselves. But: how will you know if the other party will keep their side of the contract? Thomas Hobbes (“Leviathan”) had the answer to this fear of a chaotic universe without rules: we need a sovereign and we alienate our rights to this sovereign (e.g. through laws), who acts for us. This is why we need governments, judges, etc.

None of this really works in companies - there is no division of power and there will be no independent judges, so ‘just’ culture can quickly become a case of your contract telling you that you can be screwed over for screwing up!

Golden Rule ethics

Everything you want people to do to you, do to others. Similar statements are found in most, if not all, religions and cultures. The problem is again: who decides what is good (for you and others)? Golden Rule ethics can be inherently egoistic.

 

Why Do People Break Rules?

After this theoretical excursion we turned to the problem of ‘rule breaking’. Several reasons were mentioned: Because they can. Rules are sometimes silly. It gets the work done. To finish the design. To survive. Because there are conflicts between rules, or between rules and principles. Because you don’t know them. Because they are ambiguous. There are a number of theories that cover these reasons (and more):

Labelling theory:

Norwegian criminologist Nils Christie: “crime is NOT inherent in the action”. Stanley Cohen: “deviance is not inherent in the person or action, but a label applied by others”.

Control theory:

Rule breaking comes from rational choice and is often seen as a lack of external control. Hence the Orwellian principle of a panopticon: you see everything. Modern variations include e.g. Remote Video Auditing, which is implemented in many hospitals. This primarily communicates a lack of trust. It also tends to make the behaviour that it wants to see slide out of view. See Lerner & Tetlock: Accounting for the effects of accountability. People will get better at hiding stuff, or act subversively (“accidentally destroy evidence”). Remember also that every viewpoint reveals something, but hides even more. Another thing to consider: if you do this to change behaviour, you have already decided where the problem lies! The panopticon takes the viewpoint that people don’t have an intrinsic motivation to follow rules.

Learning theory:

Fine-tuning: it gets the job done - for example the brushing over of foam strikes before the Columbia accident.

Subculture theory:

People who defy rules (“the rules are silly”) gain traction and status within their subculture. Often this gets stuff done. Formal investigations are blind to this, also because it disappears underground when poked.

Bad apple theory:

There are some bad apples (“idiots”, accident-prone people) and they break rules. Are bad apples locally rational? They do fulfill some function! Accident proneness is an old concept, but it lives on to this day.

Resilience theory:

Safety is a dynamic non-event. GSD: Gets Shit Done. Finish the design.

 

A case

Much of the afternoon was spent discussing a real, ongoing case which has to remain confidential. Themes discussed included “errors” and criminal prosecution, second victims, just culture, negligence and social support.

The Lucifer Effect

Day 2 was concluded by watching and discussing Phil Zimbardo’s 2008 TED Talk.

Are we talking about evil people or evil structures? In the examples that Zimbardo shows (e.g. Milgram, the Stanford Prison Study, the Iraq prison) it was basically good people who were corrupted by the circumstances (7 steps - “all evil starts with 15 volts”). Not “bad apples”, but a “bad barrel”. Away from a focus on the individual level, towards looking at the system (“bad barrel makers”).

The same situation can lead to heroism or evil. To be a hero (Zimbardo mentions 4 different types) you have to learn to be deviant.

 

Back to Day 1, or proceed to Day 3.

From Human Error To Second Victim - Learning Lab Lund, June 2016

In June, I had the pleasure of attending another Learning Lab at Lund University. Titled “From Human Error To Second Victim”, it was led by Sidney Dekker, assisted by Johan Bergström (and a few others).

Day 1

Ethical commitments

Sidney started by discussing some of his ethical commitments:

  1. No one comes to work to do a bad job.
  2. Adopt Bonhoeffer’s view from below. Why did things make sense to them?
  3. Try to see Justice above Power. This will take steadfastness and commitment - note that the organisations where you work are NOT governed democratically!
  4. Before something happens: celebrate worker autonomy, self-determination and expertise.

(Not part of this list, but I also wrote down: “Political correctness is a waste of time”, but cannot recall the exact context…)

(Later on during the day, Sidney mentioned two ground rules for critical thinking that are also worthwhile: Rule 1: There is no substitute for being well-read. Rule 2: Don’t believe a single word I’m telling you. Think critically for yourself and check stuff!)

Old View and New View recap

After this we launched into a brief recap of the Old View and New View (see Day 1 from the first Learning Lab). The Old View includes seeing ‘human error’ as the cause of accidents, pointing the finger and trying to get rid of ‘bad apples’. The New View tries to understand the complexity of the system, and sees ‘human error’ as a consequence of stuff in the system. People adapt to make the system work; the system is not inherently safe; people help to create safety. Therefore, the New View doesn’t intervene at behaviour, but in the system.

Interestingly, the New View goes all the way back to 1947. Already then, Fitts & Jones put “pilot error” between quotation marks in their landmark article and indicated that “human error” is merely an attribution - it’s a label.

Several cases were discussed, from everyday situations where humans finish the design (e.g. attaching tassels to distinguish between light and ventilator switches) to aviation (e.g. placing a coffee cup over a lever in the cabin in order not to forget to pull it), or where the system lured people into error, including belly-crashing B-17 bombers and the Singapore Airlines Flight 006 accident in October 2000, where relevant factors in the system included airport lighting and signage that did not measure up to international standards.

This is also a good case to illustrate the question of how far you want to go to find reasons for ‘human error’. Sometimes as far as geo-political reasons (Taiwan is not part of the United Nations)! Sidney stated that the New View has no stopping rule. It’s up to you, your choice. And pretty much an ethical one. There is always another question to ask, if you want and as long as it’s useful. You don’t find causes, you construct causes from the story you choose to tell.

After the first break there was a lively discussion around a recent article in the New York Times about medical error. One important problem with figures like these is how anyone counted them. Also relevant is the question whether reports like these help us or hurt us. On the one hand they can help to start a discussion, get public attention and create a sense of urgency with regard to safety work. On the other hand they dumb down the matter and send a clear Old View message.

Local Rationality

Local Rationality was the next subject, starting with Adam and Eve in the Garden of Eden, because this provides kind of a blueprint for our typical thinking about choice as a basis for decision and error. This has, through Augustine (“failure is voluntary”) and Calvin, found its way into Max Weber’s Protestant ethic, where success depends upon personal hard work and the individual is responsible for his own decisions.

Until the 1970s the main model for decision making was expected utility - and despite the fact that it has attracted much criticism, this model makes frequent returns, lately including into the world of criminology. Expected utility comprises four steps:

1. Rank options by utility function

2. Make a clear and exhaustive view of possible strategies

3. Work through probability of strategies

4. Choose the best, based on calculation

Expected utility assumes full rationality and plenty of resources to do these steps. It is strongly connected to our moral idea about choices: if you don’t succeed, you didn’t try hard enough.
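As a worked illustration of these four steps, here is a minimal sketch with made-up options, probabilities and utilities (not anything from the Lab): each strategy’s outcomes are weighted by their probabilities and the maximum is chosen. The interesting part is everything the calculation quietly assumes you already know.

```python
# A minimal sketch of the expected-utility recipe described above,
# with invented options, probabilities and utilities.

options = {
    # strategy: list of (probability, utility) pairs for its possible outcomes
    "finish the design yourself": [(0.7, 10), (0.3, -5)],
    "stop and escalate":          [(0.9, 4),  (0.1, -1)],
    "do nothing":                 [(1.0, 0)],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities: EU = sum(p_i * u_i)."""
    return sum(p * u for p, u in outcomes)

# Rank the strategies and 'choose the best, based on calculation'.
ranked = sorted(options, key=lambda s: expected_utility(options[s]), reverse=True)
for strategy in ranked:
    print(f"{strategy}: EU = {expected_utility(options[strategy]):.2f}")

# The catch, as noted above: this assumes you can enumerate all strategies,
# know all probabilities and utilities, and have the time to do the maths -
# i.e. full rationality, which Simon argued is impossible in practice.
```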

Contrast this with the naturalistic decision making that emerged from the 1980s onwards from Gary Klein and others (like Orasanu & Connolly: decision making in action). And already in the 1950s Herbert Simon stated that it’s impossible to be fully rational (even coming approximately close is impossible) and concluded that rationality is bounded. A problem connected to the term ‘bounded’ is that it’s unclear what it is relative to. Who gets to place the boundaries? Also, ‘bounded’ can have a somewhat judgemental tone. Therefore, it is preferred to speak of ‘local’ rationality. ‘Local’ indicates a relation to the person at a point in time in a certain context.

The three main questions for Local Rationality:

1. What were the goals? Conflicts?

2. What knowledge did the people bring to the table?

3. Where were they looking? Focus of attention? (usually driven by points 1 and 2 and the environment/context)

Accountability

The next subject was what this means for accountability. As an illustration Sidney took a recent article from the Dutch newspaper NRC that discussed the situation of junior bankers in London who are expected to pull all-nighters and can be fired within five minutes. As the article argues: “pressure undermines all ethical sense”, everything is focused on corporate survival and “if you can be fired within five minutes, then your horizon becomes five minutes”. Besides, “everybody does this, everything is legal, so it’s okay”. This created a very special local rationality. How do you deal with creating local rationality and at the same time holding people accountable?!

As a preparation, participants of the Learning Lab had been tasked to read the Johnson & Johnson Risperdal case from Huffington Post and discuss local rationality and accountability at the level of executives. Were these normal people making normal decisions, or was it something else? Was this reasonable or not reasonable behaviour?

It was interesting to observe that many opinions on the case were rooted in retributive thinking, most likely driven by some kind of outrage. One of the aims of the Learning Lab was to explore ideas of restoration. Instead of meeting hurt with more hurt, seeing whether hurt can be met by healing. There is no final answer to this, but we can learn a different vocabulary. It can be difficult going (or not going) from WHAT is responsible to WHO is responsible.

The day was closed with a discussion following a film clip (available online) about the squeeze that pilots and other airline personnel are placed in so that air travel becomes as cheap as possible.

If an accident happens (like the Colgan Air 3407 case presented in the video), to what extent can we defend the local rationality of an investigation board that comes up with its probable cause and mentions NONE of the things in the video? How seriously do we think they take their ethical duty? Because do mind: investigation boards have political agendas, their conclusions are written or edited by political committees, and they adapt the tonality of their message, or massage acceptance. Their reports do NOT contain an objective truth.

 

Go to Day 2

Just had a nice little column published in the Dutch arbo-online web magazine:

Veiligheid tussen de regels...

For non-Dutch speakers, this means 'Safety between the lines'. The column is a more-or-less translation of the Safety Behaviour Language blog from a few weeks ago.

Day Five

The final day was used for summing up and discussing some stuff that participants felt had been left out.

Safety Culture

Upon popular demand, Safety Culture became the first subject. Some of the discussion centred around mainstream applications of the notion of safety culture and the assumptions that underpin them. More useful, I found, was the discussion around the two approaches to safety culture (with reference to Frank Guldenmund’s work): Interpretive and Functionalist. The most often used notions of Safety Culture clearly come from the latter approach.

Functionalist:

  • Roots in organisational psychology
  • Top-down
  • Culture is something the organisation has and owns 
  • Culture is transmitted to its members via communication, reinforcement, etc.
  • Corporate culture
  • Ideal organisational culture exists and is strived for
  • Culture can be designed, created, engineered and measured
  • Culture is manipulated or controlled to serve corporate interests - management, ideology, policy, goals, strategy, etc.
  • Primacy of work-as-imagined
  • Normative
  • Common tools: quantitative, surveys, statistics, structure.

Interpretive:

  • Roots in sociology and anthropology
  • Bottom-up
  • Culture is something the organisation does
  • Culture is created by groups as they learn to adapt to the environment. Culture is taught to new members - how to think and what to do.
  • Sub-cultures
  • No generic ideal culture exists
  • Culture is a complex emergent phenomenon - influenced by the system in ways that are hard to predict
  • Culture emerges to allow groups to interpret and reinforce collective identity, beliefs and behaviours, and to aid communication, discovery and learning
  • Primacy of work-as-done
  • Descriptive
  • Common tools: qualitative, interviews, observation, participation.

How did the concept come into existence at all? Combination of: 

  1. Scientific availability of theories about organisational culture (the work of Schein et al.)
  2. Political/societal need for explaining accidents (e.g. Chernobyl, Herald of Free Enterprise)
  3. The use of the concept and language of safety culture as an explanatory model by scientists, accident investigators and managers. By doing so, they legitimized its use. This is probably another good example of how our understanding of reality is shaped by our theories and ideas (Barry Schwartz).

Eventually it became an easy way out for explaining things, especially because the mainstream applications of Safety Culture fit quite nicely with the bureaucratic machinery and safety management systems. In essence these approaches are very behaviourist.

Furthermore…

Not much time was left to look at possible ways of looking at trends in a Resilience Engineering framework. Some examples from a recent thesis were shown, and Smokes cautioned that Data does not equal Information, nor do Reports imply Answers.

Numbers are not important - it’s the quality of the information and what is done with it.

Setting a measure and getting a good score does not automatically improve real performance.

Collecting and monitoring does not automatically change things for the better (Hawthorne effect notwithstanding).

Then we briefly looked at some possible proactive approaches from Resilience Engineering, including a paper by David Woods that measured things along two axes: Safety-Economy and Reactive-Proactive. We also looked briefly at a healthcare example from the Health Foundation.

Smokes closed by remarking that “A big part of resilience engineering is about understanding weak signals”. Which is interesting, because it comes kind of full circle back to the underlying idea of Heinrich’s pyramid. That also echoed in the final discussion, where there was some speculation about what would come next. Would Safety 3 be something about resilience because of systems (current RE very much focuses on resilience despite systems), or are we returning to Safety 1?!

Conclusion?!

And then the Learning Lab was over way too soon - a week well spent with a great assembly of peers, enthusiastic teachers, great discussions and much learning indeed. This summary can obviously only scratch the surface of five intense days in Lund (not to mention that you don’t get the knowledgeable and enthusiastic teachers, or the discussions and reflections from engaged peers). Intense but worth your while - recommended for safety professionals who are really interested in enhancing the width and depth of their professional knowledge!

Postscript

A rather funny incident happened at the end of the Learning Lab when we went outside for a group picture. One of the participants had brought a proper ‘old-fashioned’ camera, which was used to shoot the scene. In order to get everyone in the picture, some passing Japanese students were asked if they could take it. They were willing, but Riccardo needed quite some time to explain to them how to use the camera. You could clearly see them wondering: “How on earth are you going to ring someone or update your Facebook status with this thing?!”

 

Day Four

Where Day 3 introduced (more or less) Course 1 of the Master, Day 4 was to tackle Course 2 ‘The Sociology of Accidents’. An ambitious task. JB presented us with five different views on what risk is and why accidents occur. A nice summary of these is found in the SINTEF report that we had received as recommended reading beforehand.

Risk as energy to be contained

This is a notion of risk that makes immediate sense. Risk is seen as something that can be ‘measured’, and barriers have a certain reliability, as do humans. Many of the traditional risk management tools and applications have come into being because of this view. Here we find the basic energy-barrier model, the well-known hierarchy of control, as well as the Swiss Cheese Model and most of the models from the First Cognitive Revolution.

Of course the view can be challenged. Energy is not necessarily the enemy, but often something that is desired (just think of a nice hot cup of coffee). In complex systems you don’t always know where the hazards come from. Barriers can bring problems of their own: they are not always independent, they have side-effects and they can make the system more complex and thereby more risky.

Risk as a structural problem of complexity

Especially this last element gave rise to Perrow’s Normal Accidents Theory. Systems are characterized by their degree of coupling (loose to tight) and their interactions (from linear to complex). According to Perrow, tight coupling must be controlled by centralization, while complexity should be handled by decentralization. Since Perrow found that no organisation can be both at the same time, he posited that complex, tightly coupled systems have to be avoided at all times. While most people in Safety have a relatively optimistic view (risks can be handled), Perrow (a sociologist, by the way) is one of the few pessimists.

Note that Perrow did not really define ‘complex’. Also he sees it as a structural property of a system while Rasmussen sees it as a functional property.

One problem with the NAT model is that there is little dynamics in it. What if something goes from loosely to tightly coupled in an instant (see Snook)? Additionally, it was argued that there actually were organisations that were centralized and decentralized at the same time. Enter the next view.

Complex Systems can be Highly Reliable

People like Weick and LaPorte (all organisational scientists) argued that some organisations (quickly labelled HROs), like aircraft carriers or nuclear power plants, managed to move continuously between centralised and decentralised behaviour/management, depending upon the task at hand. Traits of HROs were thought to be:

  • social redundancy,
  • functional decentralisation,
  • maximised learning from events,
  • ‘Mindfulness’ and a state of chronic unease; constantly challenging the status quo.

Critics of the HRO school argue that these so-called HROs are no real test case because of their near unlimited funds and rather specific/limited tasks. Also, it’s a question of “Highly Reliable compared to what?”.

In the end HRO has become more of a business model than a real field of study, although there are clear links to resilience engineering, especially because HRO doesn’t see safety as the absence of a negative, but as the presence of positives.

The book to read is, obviously, “Managing The Unexpected”.

Risk as a gradual acceptance

This view sees Risk as a social construct. Path Dependency is important; accidents have a history! Concepts include the Incubation Period, Practical Drift (Snook once more - the behaviour of the whole is not reducible to the behaviour of the actors), Fine-tuning, information deficiencies and the Normalization of Deviance. Accidents happen when normal people do normal jobs. Accidents are seen as societal rather than technical phenomena.

Barry Turner’s “Man-Made Disasters” (a book to buy when you have ample funding - check Amazon! But get the general idea in this paper by Pidgeon and O’Leary) is one central piece of work (by the way, I love the multi-faceted kaleidoscope metaphor in the Weick article). Another important piece of work is the Challenger investigation, which spoke a lot about normal organisational processes: “fine-tuning until something breaks”. Diane Vaughan (book: “The Challenger Launch Decision”) argues that safety lies in organisational culture, not in barriers.

Note the difference (nuance) between Reason’s latent failure (which indicates an error) and the notion of drift (not an error, but fine-tuning). Success can become the enemy of safety when we get too good at Faster Better Cheaper.

Risk as a Control Problem

Which is actually a good transition to the final and closely related view. Central in this is Jens Rasmussen’s famous 1997 ‘farewell’ paper “Risk Management in a Dynamic Society: A Modelling Problem” and the figure that pictures the ‘drift’ (as a consequence of fine-tuning, ETTO-ing and the like) of activities between three boundaries (economy, workload and ‘acceptable performance’).

The irony is that you often only find out where the boundary is by going right through it! The model is mainly a basis for a discussion to look at your challenges and how to deal with them. There is sometimes a cybernetic approach with networks (e.g. Nancy Leveson’s STAMP).

The enemies in this view are goal conflicts and distributed decision making, as well as success (again).

Dissecting accident reports

The day was closed with an interesting exercise where a number of public accident reports (by the CSB, RAIB and others) from various sectors were put on display and students were asked to pick one and then in groups find out what models, biases and possible traces from ‘old view’ safety had influenced the results.

During the feedback session many interesting reflections were made and experiences shared, including:

  • Investigations sometimes contribute little to safety improvement, but are an important societal ritual to deal with pain and suffering. Sometimes we shouldn’t even initiate actions (e.g. Germanwings) because they will probably be counterproductive, but societal and political demand may prevail…
  • Don’t use models but methods for investigation. Models make simplifications that may affect the results.
  • One participant mentioned that they never made recommendations in their reports. They mainly described work-as-done and then it was up to management to decide on possible actions.
  • Another said they don’t list causes (these are often only minor symptoms) but rather discuss problems in an investigation report. The problems are usually more complicated than a list of causes. The US Forest Service has even taken this to another level. They write ‘Learning Reviews’ instead of Accident Investigation Reports and start them with something along the lines of “if you expect answers, stop reading and look elsewhere, you will probably only find more questions here”.
  • When doing an investigation, pick from various models. You are explaining something from your view and not writing an academic paper.
  • As Rasmussen said: You don’t find causes, you construct causes. 

 

>>> To Day 5

Day Three

On the third day we started with a continuation of the Reason discussion from the previous day. One of the participants commented: “Models have their own life - they’re simple, they have utility. But we make them rules, and they’re not” (which I found a very nice variation on the famous George E. Box quote “All models are wrong, but some are useful”). JB stressed that safety people have the tendency to ‘glue’ a model on whatever data we have (triggering the WYLFIWYF effect). We should try to do it the other way around: see what model the data might give us.

The question ‘What is Human Factors?’ was answered by a participant with “I am a Human Factor”. As it turns out, definitions differ somewhat, for example between the US, UK and Europe. It is also a dynamic, evolving field, as we saw when Smokes took us through a ‘History of Human Factors’ starting way back in the 17th century. Through early health and safety legislation, like the 1802 Act for the Preservation of the Health and Morals of Apprentices, the 1833 Factory Act (delayed by a decade over a controversy about the fencing of machinery) and the 1854 Merchant Shipping Act (interestingly triggered by human errors vs. overloading of ships - a clear conflict of interests problem), we ended up around 1900 with the rise of worker compensation schemes (first in Germany, later in the UK and USA) and the rise of insurance schemes as the main means of risk management.

From 1915 on psychology popped up as an explanation for accidents. This gave rise to the search for “psychological causes” for accidents (remember Heinrich?), including (from 1926 on) attempts to find the key to ‘accident-proneness’. One solution was the attempt to engineer out the human by increasing automation and trying to reduce the probability of errors. Also Behaviourism started in this period. A positive notion from this period is the realisation that safety can be cost-effective.

World War 2 gave new impulses, among others through cockpit design and research into vigilance (e.g. radar monitoring), fatigue and stress. The 1947 Fitts & Jones work can be seen as the proper start of Human Factors in the USA (a paper from that period already mentions that increasing information and complexity in cockpits was going to have counter-productive effects). From this period we also have the (1951) ‘man is good at / machine is good at’ division that is still somewhat relevant today. Interestingly, one of the participants compared this view with Kahneman’s System 1 (machine) and System 2 (man) thinking.

The growing automation and the above mentioned ‘division’ led Lisanne Bainbridge to comment on the Ironies of Automation: When automation fails the human has to step in, but how can they?

After this we went through the three main movements in Human Factors:

  1. Behaviourism (ca. 1910 - WW2)
  2. First Cognitive Revolution (1960s-1970s)
  3. Second Cognitive Revolution (after TMI)

Behaviourism

This movement treated the human mind as a black box that responded to inputs (stimuli) and gave reactions/responses or outputs (the ‘right’ behaviour). Characteristics of behaviourism include: selection, training, supervision, observation schemes, incentives and actions to ‘increase motivation’.

Pavlov and Taylor are names that fit here. Interestingly, behaviourism took different approaches in the USA and Europe. The latter often has a more naturalistic approach (talking to people instead of experimenting in a laboratory).

First Cognitive Revolution

After the Second World War the first cognitive revolution happened. This tried to open the black box and tried to look into what happened in human minds: information processing and/or cognitive processes. Cognitive models that came from this movement included Miller’s Information Processing Model, Chris Wickens’ workload model (which was basically a more complicated version of the information processing model) and the notion of Situational Awareness (Endsley, 1995).

Despite the fact that several scholars have challenged the notion of Situational Awareness, and the fact that Endsley has argued that the original idea was intended as a design tool and never for what it is commonly used for now (as an easy, counterfactual and normative tool for explaining accidents), it has proven to be a rather dominant model with a strong ‘common sensical’ appeal (see examples here, here and here - and many more around, just use Google).

Other studies from the movement worked on subjects like memory and fatigue. The language used in this movement was often strongly influenced by computers, which were used as a model for the human mind ‘black box’. Many of the models that came out of the first cognitive revolution (alternatively labelled Cognitive Psychology, Cognitive Engineering or the Information Processing Paradigm) were mainly based on controlled laboratory studies. This also applies to the work of, for example, Kahneman who, coming from economics, is not seen as part of the Human Factors field per se (rather as a stream parallel to the first cognitive revolution), but who is relevant with his work about decision making.

Second cognitive revolution

This movement started more or less after the Three Mile Island incident (but not solely; the Dryden crash, for example, was another case that influenced the move away from ‘human error’). Human error was something that was only in the eye of the beholder and also something that only ‘works’ in hindsight. Defining papers include Rasmussen’s “Human Error and the Problem of Causality in Analysis of Accidents” (1989) and Hollnagel’s “The Emperor’s New Clothes”, and David Woods also wrote some groundbreaking stuff.

While the first cognitive revolution was characterized by thinking that “cognition is in the mind”, the second movement understood it as “cognition in the wild”. The ‘black boxes’ closed again and instead there was a focus on relations, interactions and complexity. Humans were now seen as goal-driven creatures in real, dynamic environments. Characteristics of the real world include constant change, dynamics, many interactions, variability, emergent properties, side effects, effects at a distance, and the need for multiple perspectives in order to grasp (some of) the complexity. How can you predict outcomes in complex systems? It’s nearly impossible!

Complexity is an important theme in the second cognitive movement. It’s something that characterizes a domain and/or the problem-solving approaches and resources. It’s not a thing per se, but rather a situation to be investigated. It’s typified by the fact that an aggregation of the parts will not capture all relevant aspects of the whole (which is why a reductionist approach will not be the solution to a complex problem). Often complexity is transferred to the sharp end instead of offering solutions for the whole (Woods).

Work-as-done is hard to predict because people will find new and different ways of using tools and methods. This is one reason why training for specific situations (as often seen in behaviourist approaches) is often useless. Instead it’s often wise to train approaches of how to deal with situations. Cognitive models from the Second Cognitive Wave include joint cognition, coordination, control and ecological design. 

Another idea that emerged was that one shouldn’t only look at what goes wrong, but very much at what goes right - the work-as-done. This obviously led to the Resilience movement which is all about success and sustained functioning. System performance is therefore often a measure.

Just as Kahneman & co (who also mainly saw biases as a source of error) can be seen as a parallel track to the First Cognitive Wave, we also find a parallel track to the Second Cognitive Wave. This includes Herbert Simon with his bounded rationality, Gary Klein with naturalistic decision making and people like Gerd Gigerenzer. These scholars also tend to have more attention for success than for failure.

A great contribution from the group was when Paul mentioned an example of how a particular software firm treats problems where people have to be rung out of their beds. These are highest priority because this is seen as a highly unwanted situation. Using the canary in a coal mine metaphor: When the canary dies, you don’t get yourself a better canary, or hire more canaries (the typical business approaches to many problems, or rather symptoms) but you fix the problem!

 

>>> To Day 4

Day Two

After a brief recap (and revisiting) of the old/new view mindmap, the second day was dedicated to a bit of ‘Archaeology of Safety’, exploring the origins of safety ‘science’. As part of the preparation the participants had to read some excerpts from Heinrich’s 1931 book. I think this is a good thing because most people in the safety world discuss, dismiss or promote his work without ever having actually read something by him.

Heinrich deserves respect because he was one of the first who attempted to treat safety in a scientific way instead of just relying on rules. We can see this in the way he tried to deconstruct accidents and find patterns/order. A typical ‘scientific’ approach is to create categories. He also formulated a number of axioms, of which the three central ones were discussed:

  1. Injuries are the consequence of a completed sequence of events
  2. Unsafe acts are responsible for the majority of accidents
  3. There are many more unsafe acts than minor injuries and there are many more minor injuries than severe injuries

We were then encouraged to discuss Heinrich’s legacy against the background of our own experiences and the organisations where we work. One may conclude that - although we have progressed - the Heinrichian approach in essence is still the predominant view in the safety world.

In the next exercise we had to find out how Heinrich got his axioms, what data he used, what he did with it and what biases he had in the collection and analysis of data. Part of this is described by Heinrich in his book, part is criticized by people like Manuele. I found the thinking about the biases the most interesting element, including:

  • The data is self-selected to blame (a large part was taken from closed-case insurance records, so the people filing the claim had something to gain by blaming either the person or the machine),
  • The other part of the data was based on records from plant owners and probably biased by the people who wrote them,
  • Heinrich only allowed for either a human cause or a technical cause; if a case listed two, he would choose one in a relatively arbitrary way, preferring the human cause,
  • He only looked at proximate causes (the thing happening right before the accident),
  • He believed in psychology as the solution for Safety problems.

A conclusion is that we should take the ‘scientific’ in Heinrich’s approach with a pinch of salt. As JB said: “Heinrich has a lot to teach us today - but be aware of the flaws!”. I think that what Heinrich did was come up with some good ‘common sensical’ ideas and then find (or construct) some kind of scientific-looking basis for these ideas. Most likely Taylorist views influenced Heinrich’s work. Taylor (1911) sees behaviour as something that has to be controlled:

  1. Prescribe the method (work is decomposed in ‘atoms’; there is one best method to do the job; the expert is smart, the worker is stupid),
  2. Select and train,
  3. Monitor performance,
  4. Manager plans, workers work,
  5. For every act of the worker there are many acts of management.

Skipping 60 years, we then went fast forward to James Reason and his first book “Human Error”. It was interesting to see that Reason’s work bears clear influences from Heinrich. I have long thought of the Swiss Cheese Model as being about the spaces between the dominos, so I had linked Heinrich and Reason there, but there is a much greater heritage that I hadn’t really thought all that much about before. Finding Heinrich’s legacy and looking at differences was the theme for the next exercise.

There are quite a number of parallels/overlaps between Heinrich and Reason. This may be as simple as the use of words (“unsafe act”) and the domino-like structure of the SCM, but there is also the linearity of thinking, the idea that solutions were to be found in psychological factors, the notion that it was possible to decompose accidents into a number of sequences, and the notion that humans might be a problem instead of a resource (something that Reason would revise in his book “The Human Contribution”).

Reason differs clearly from Heinrich in that his focus shifts from the direct causes (‘active failures’) upstream towards underlying/organisational causes (‘latent failures’), a greater multi-causality and especially the idea that things depend upon context.

Some time was spent discussing inconsistencies in Reason’s work, including the ever shifting SCM (according to JB “a cartoonish way of illustrating defenses in depth”) and the disagreement within the Human Factors community about the notion of violations.

Personally I found this day of the Learning Lab the least rewarding. This was partly because I had been through the ‘study and reassess Heinrich’ exercise a few years ago (read some of the results here). Another reason is that it was the part that had a rather negative tone, since it was more about deconstructing and criticising (which of course is important, but I felt we were a bit stuck in just one mode) than about finding useful elements and how to use them.

 

>>> To Day 3

>>> And a reflection on something that happened (and not happened) this day

The Human Factors Master Program at Lund University includes a number of Learning Laboratories (follow the links for more information). The first Learning Lab, kicking off the Master program for 2016 and 2017, took place from 18 to 23 January 2016. Present were 15 students enrolled in the Human Factors Master and an additional 10 ‘externals’ attending just for the Learning Lab ‘Critical Thinking in Safety’, which was led by Johan ‘JB’ Bergström and Anthony ‘Smokes’ Smoker.

Participants came from all over the world, including Norway, Sweden, Denmark, the Netherlands, Germany, UK, Ireland, USA, Canada, Italy, Singapore, Qatar, The Emirates (probably forgot some - sorry!) and a wide variety of sectors, including maritime, rail, road and air traffic, police, military, medicine/patient care, pharmaceutical industry, biological safety, software release and information security (note the absence of oil & gas!) with backgrounds in accident boards, regulators and industry. A nice variation that would facilitate many great discussions.

Day One

The first half of Day One was spent on introductions and general information about the Master Program (which appears to be quite intense, but is something I definitely want to do in the future). As JB said, it’s “a program aimed to harness diversity”. Also, the aim of the Learning Lab is not to teach right or wrong human factors models, but to develop critical and analytical skills and to ask critical questions. Paraphrasing Claude Lévi-Strauss: “what theory is good to think with”, or “what language is good to think with”. Jokingly he said we might be very confused on the final day, but at least “confused on a higher level”.

One of the things that came up during the introduction was how to talk about safety with people who don’t have safety in their everyday vocabulary. It’s important to look at things from their perspective! People at the sharp end aren’t interested in fancy theories, often they want something bite-size. But: don’t belittle them! They are often highly trained professionals in their fields, so watch out for a parental tone!

After lunch, work started properly with a discussion of a couple of (newspaper) articles dealing with the November 1989 incident in which a Boeing 747 on approach to Heathrow airport flew very closely over a hotel and caused some commotion. We were shown various narratives that highlighted different elements and were then challenged to find ‘controversial’ items in the article and then take a different view that would make them not controversial.

An important lesson from this exercise: There isn’t an objective truth. It all depends upon the analytical choices you make. If you call something controversial that is your analytical choice. If you choose to defend a decision as non-controversial this is another analytical choice.

The final part of the day was used for constructing an ‘old view - new view’ mindmap (see picture below, click to enlarge). One interesting observation (not yet noted in the picture) was that both views have an optimistic view on safety: you can learn from incidents and improve.

The usage of the labels ‘old’ and ‘new’ (or Safety I and II, if you want) is sometimes unproductive, as it may suggest a notion of ‘old’ being outdated, something to be replaced or wrong. While one might want to discard some applications, it’s not a case of ‘new’ being better. Opposing the two can be unproductive; they are complementary tools, or rather sets of tools. As one of the participants remarked: “The idea that the two are separate is contrary to the new view”.

Another participant remarked later in the course that an important criterion for choosing which view to work with is the question of which view gives the best opportunity for prevention in the specific situation.

An interesting discussion arose when the question was raised of how rules and regulations fit into this. The new view also needs some kind of authority or regulator. One way forward would be a move from compliance-based towards performance-based regulation. Rasmussen already stressed in 1989 that the competence of the regulator would be important.

Another interesting insight was that patients, their families and other victims may not accept a ‘new view’ explanation (ref. Scott Snook’s “Friendly Fire”). There may very well be situations where one finds that their ‘needs’ deserve to be prioritized, which leads to an ‘old view’ message. Society sometimes demands different things than we professionals might want. This is a question of power and ethics. What story do we want to tell? Whose perspective does one want to satisfy? And, consequently, who are we going to hurt?!

>>> To DAY 2

 

A Dutch adapted version of the "2 Hour Rule" article has been published in the December issue of Vakblad Arbo.

Read the article here.

First published in Vakblad Arbo nr 12 2015, Vakmedianet.

The following column was published in the October 2015 issue of the Dutch professional magazine Vakblad Arbo (published by Vakmedianet).

Enjoy the Dutch version here. Or read the English translation below.

 

Relativity theory

Everything in life is relative. Or rather: nearly everything, because in the end even relativity is relative. Risk is a great example of relativity. There is a possibility of something happening or not, depending upon circumstances. I often say that there is little black and white in safety. There are many shades of gray. Definitely more than 50.

Even so, it often seems as if most safety professionals operate in a field that consists mainly of absolutes and black/white statements: We have zero tolerance for safety violations! Safety First! All accidents are preventable! I work safely or I don’t work! The risk in this situation is not acceptable!

These are all dubious statements that I will gladly get back to at a later point in time. Since this issue is about risk management, it would be suitable to start with the latter.

Acceptable risk in itself is a relative measure, because it acknowledges that zero risk is impossible. Instead one assesses how much risk one is willing to take. Often people use some kind of a threshold value or the “red area” in a risk matrix and then take a decision. Simple isn’t it?

My recent change of industry effectively demonstrated to me how relative the term ‘acceptable risk’ really is.

Take an activity with a high probability of serious or fatal injury. In most companies this will lead to a stop in activities until measures have been taken. High chance of fatality is unacceptable. Period.

Not for the police force. If there’s a shooting incident where the lives of civilians are at stake (here in Norway the Utøya drama is still very fresh in memory - and obviously the tragic incident in Oregon is another example), action is necessary to prevent worse. A colleague remarked: “Where everyone runs from, the police run to”. The same applies of course for other emergency personnel who risk life and limb in service of higher goals.

We often forget that taking decisions actually requires work - and so it should. Risk assessments are often based on simplified models and usually look only at one criterion. (Almost) every risk, however, has several sides, and therefore several criteria. Risk always depends upon context and (subjective) values. Conscious deliberation is therefore more important and more useful than simply managing by way of a colour or a number.

There’s a Dutch proverb that literally translates into: “One person's death is another's bread”. Maybe we should change that into “One person's unacceptable risk is another's challenge”...

 

www.arbo-online.nl

Here's an article that I wrote for the safety magazine of my former employer, Jernbaneverket. It's a (roughly) quarterly magazine aimed at operational personnel (and others) with the aim of sharing knowledge about safety in the widest sense, be it information on accidents and incidents, new safety rules, best practices, or theoretical knowledge made understandable. And much more. I think it's a great initiative and I'm proud to have been part of it.

The article is in Norwegian. I apologize to non-Norwegian speakers. But take a look at the magazine anyway to get an impression of how this can be done. And if you're interested in Just Culture, why not just read the summary of Sidney Dekker's seminal book elsewhere on this website?

 

In March/April this year I had the pleasure of attending the 2015 NVVK Conference - a great place to hear good presentations, expand the network and catch up with old friends and former colleagues. I also delivered a presentation on competence and professionalism.

The paper I wrote for this presentation has now been published in NVVK Info in an abbreviated version. The full 7000-word original version can be read here. Both versions are in Dutch; please find a translation of the short version below:

Find it out by yourself? If only!

About Myths, Lack of Knowledge and Domain Blindness among Safety Professionals - and what to do about it.

In practical working situations, in professional discussions (be it online or “live”) and at workshops and symposiums, one regularly sees a significant lack of up-to-date professional knowledge and skill among safety professionals. There is a reluctance to take a critical attitude towards certain established truths and practices. The same applies to professional development, following up on relevant literature, or looking across borders.

Although the opportunities are better and easier than ever these days, with the internet, digital libraries and discussion forums, the effects are often absent. Apparently many safety professionals don’t find things out by themselves after all.

Case 1: Is safety a religion?

Within safety there appear to be a number of dogmas that you are not allowed to touch. Touching them provokes fiery and often emotional reactions, often comparable to the reactions of religious fundamentalists.

One example came after a little article that discussed the safety dogma “All Accidents Are Preventable” in a critical but nuanced way [Busch, 2014]. A storm of reactions in various forums followed. Some of those were particularly sharp and even hostile: “You are no real safety pro if you do not believe that...”. Attempts at discussion and logical reasoning were in vain; references to relevant professional literature were dismissed with “Stop reading and get practical experience”.

Case 2: Do we actually know our stuff?

Like any profession, safety has a jargon of its own, often supplemented by principles from other sciences. The correct use of these is important, both when dealing with ‘lay persons’ and with professional peers. Incorrect use can lead to misunderstandings and thereby directly or indirectly lead to accidents.

Typical examples of rather basic professional principles being used wrongly are the mixing up of terms like risk and hazard, or statements like “An accident has happened, we have to do a risk assessment to find out what caused it”. Often one also finds a lack of logical thinking, mixing up of causes and effects, or unawareness of the difference between causation and correlation.

Typical are the long-running debates about Heinrich’s work. Reactions are often highly polarized and lack foundation. Quite alarmingly, hardly anyone (on either side of these debates) has actually read Heinrich’s work.

And so?

Obviously not everyone can be a leader in the field, but a certain level of competence is to be expected of professionals. A master carpenter will have clear expectations of the skills of another carpenter at his level. Why should this be different for safety professionals?

Why are things the way they are?

Hearsay and not developing further

Often there is little knowledge of recent developments in safety. A couple of possible reasons for this include:

  • One doesn’t get the time. Some companies have a reluctant policy with regard to the development of professional knowledge. This is not seen as ‘core business’ because it’s non-billable time. Professional development then becomes the Safety Professional’s own responsibility.
  • The Safety Professional doesn’t take the time: reading and following courses cost time, money and effort, as do critical thinking and taking part in discussions.
  • Even if the opportunity is there, one doesn’t look for further development. One is satisfied with the status quo and the level at which one is currently functioning, doesn’t see the benefit of further development, or simply doesn’t know what one is missing.

One functions mainly on hearsay, without knowing or understanding the basics. This leads to a simplified, superficial and therefore faulty understanding of simple concepts. But beware: behind simple concepts usually lies a more complex understanding of matters, something which many safety professionals lack. Given the confusion and misunderstandings, it’s good to know the original texts and to think about them in a critical way.

A typical example is the understanding of Heinrich’s pyramid [Heinrich, 1931]. “Industrial Accident Prevention” presents a ratio between serious accidents, less serious accidents and near accidents. Some regard this 1 : 29 : 300 ratio (or a variation) as a law of nature, and I once attended a seminar where one participant seriously concluded that “our numbers differ from the pyramid, so there must be something wrong with our reporting culture”. Heinrich himself, however (and many studies after him), already said that there is no universal ratio; ratios are specific per sector, company or even department and activity. (The conclusion with regard to the reporting culture may be right; the problem is the ‘so’ in the middle, as there is no causal connection.)
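To illustrate with invented counts: two departments can both report honestly and still land far from 1 : 29 : 300, simply because their work and hazards differ.

```python
# A minimal sketch with hypothetical numbers: a deviation from 1 : 29 : 300 by itself
# proves nothing about reporting culture, because the ratio depends on sector,
# activity and hazards as much as on reporting.

def ratio(serious, minor, near_misses):
    """Normalise counts to the 1 : x : y form used for Heinrich-style pyramids."""
    return 1, round(minor / serious, 1), round(near_misses / serious, 1)

# Two made-up departments, both reporting honestly, doing different kinds of work.
print(ratio(serious=2, minor=35, near_misses=900))   # (1, 17.5, 450.0)
print(ratio(serious=1, minor=40, near_misses=120))   # (1, 40.0, 120.0)
```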

An uncritical attitude

Well-intended principles are often simplified so far into well-known slogans like “Safety First” that they achieve the opposite. This is caused above all by two factors:

1. People have a hard time keeping means and ends, or vision and goals, apart.

The classic example of mixing up vision and goals is the ‘zero accidents’ goal. Obviously nobody wants to have an accident, but ‘zero accidents’ as a goal is not realistic (you cannot control everything), frustrating (when an accident happens you have missed your goal and the metrics stay ‘negative’ for the rest of the year) and not practical (accidents happen so infrequently that you cannot use them for proper steering).
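A small illustration of the ‘not practical’ point, under assumed and purely illustrative numbers: even with a constant underlying accident rate, yearly counts of rare events jump around, so a ‘good’ or ‘bad’ year by itself tells you very little.

```python
# A minimal sketch (illustrative, assumed numbers): with a constant underlying rate of
# about 2 recordable accidents per year, the yearly counts still fluctuate heavily,
# so they are useless for steering even though nothing "real" has changed.
import numpy as np

rng = np.random.default_rng(42)
true_rate = 2.0                                 # assumed constant accident rate per year
yearly_counts = rng.poisson(true_rate, size=10)
print(yearly_counts)                            # e.g. [1 4 0 2 3 ...] - same safety level every year
```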

2. A complacent and uncritical attitude that prevents people from thinking about what they are really saying and what it means.

What is the value of a ‘Safety First’ slogan as soon as our actions show that we have other priorities? That only has to happen once to undermine the ‘Safety First’ message. Isn’t it better to be realistic? Hollow slogans will in time undermine other good initiatives.

Oversimplification coupled with a lack of understanding and knowledge leads to dumbing down, dogmatism and fundamentalism. This kind of black/white thinking often leads to an “if you’re not with us, you are against us” attitude, where one cannot see that it is very well possible to be critical of certain concepts and slogans while at the same time striving for the same goal (in this case, improving safety).

Does this concern me?

Sometimes it seems that safety professionals are afraid to think ‘out of the box’ or come out of their comfort zones. One doesn’t see the importance of what is unknown or what lies outside the immediate scope, for example within other sectors of industry. This leads to so-called domain blindness: one doesn’t recognize the relevance of lessons learned elsewhere for one’s own situation. Nassim Taleb [Taleb, 2012] says that domain blindness is one of the most important elements that make you fragile, because you are blind to outside risks and leave opportunities unused.

Between related subjects like safety, environmental care or quality there sometimes appear to be impenetrable walls. Some appear to think that safety is a profession confined to a limited area, dealing with hazards and PPE. Since almost everything affects safety, there are hardly any limits to it, meaning that an effective safety professional needs some knowledge of human behaviour, statistics, organizational and management principles, and so on.

Safety professionals often stick with other experts and work too little with line management or the people who actually know the working processes. One problem that may spring from this is that safety and production, or safety and the primary processes, grow into two different things. This leads to undesired consequences and loss of effectiveness, especially when boosted by slogans that set safety apart from production. Safety isn’t an option that can be bolted onto the primary processes, but something that has to be integrated into them.

Are Safety Professionals used correctly?

Many organizations see safety as something obligatory, with a primary focus on complying with rules and regulations. Safety is often seen in the very strict sense of ‘prevention of injuries’. This means opportunities are missed, because safety should be seen as something that adds sustainable value to the results of the organization. This approach means that a safety professional must not only stick to (the prevention of) injuries and accidents, but has to approach safety in the wider sense of, for example, ‘prevention of and learning from unwanted events’ or, even better, ‘control of risks’. This will also help blur the boundaries with other professions (quality, environmental care) or functions and roles (management) and support the integration of safety into other processes and business objectives.

Many safety professionals therefore must get out of their comfort zones, and (top) management must start using their safety professionals better. One problem is that many safety professionals and managers never learned to take on this attitude and that there aren’t any real incentives to change and behave that way either.

This is one of the most basic reasons why many safety professionals are stuck where they are in their development. Organizations that use their safety professionals in such a limited way will not see the importance of development. Personal professional development is very much dependent upon the environment. In this case: how mature is safety in the organization you are working in? Why bother about resilience if one is only interested in PPE and satisfying the regulator?

Conflicts of Interest

This touches upon the ethical side of the safety profession and is clearly the most dangerous factor that can stand in the way of professional development and critical thinking. There are various forms of conflicting interests, such as:

  1. Commercial interests: there is this safety program we have to sell, and this program is connected to (maybe even based on) certain slogans and ‘goals’.
  2. The ‘Sunk Cost’ effect: we have a management system or a safety program running which is coupled to these slogans or dogmas. So much effort, money and other resources have been invested in this that it’s hard to let it go, turn back or change direction.
  3. Social or political acceptability: especially loyalty towards managers, established contacts and/or clients (e.g. if one adopts a certain management system or safety program because the client uses the same).
  4. For the sake of convenience: some of these slogan- or dogma-based systems offer managers an easy excuse to shift responsibility onto employees. And for safety professionals too, it’s much easier and more comfortable to call attention to the behaviour of employees than to ask difficult (and possibly costly) questions with regard to design, maintenance or management.
  5. Other conflicts of interest: for example if the safety department is held responsible for the safety metrics. Then it can be tempting to tune the recorded figures in a certain way.

Solutions

What can others do?

Professional organizations (e.g. NVVK, ASSE, etc.) can advise, and regulators can stipulate minimum requirements. Education and courses have to explain safety principles properly and avoid oversimplifying slogans and ‘untrue truths’. Another element that enough time must be spent on is the correct execution of our profession: how to approach the job, what attitude to take, and how to contribute to the results of the organization.

It’s possible to formalize requirements with regard to updating and widening professional competence through certification and registration. One risk connected to certification is that one often just goes through the motions to fulfil the requirements, choosing the easiest way, instead of doing what is actually necessary to fulfil the underlying goals of those requirements.

Lifestyle advice given by governments and various organizations only works if we follow it OURSELVES. A regulator can forbid smoking in public places and print warning texts on packs of cigarettes, but it’s you who has to stop smoking, exercise and eat healthier. The very same applies to professional knowledge and skills: it’s a personal responsibility, a matter of lifelong learning and development.

Use the available sources!

These days we have more and easier access to information than ever before, through the internet, digital libraries, professional magazines and books. Online forums and professional groups are perfect spots to discuss issues and quickly get an answer, or to get access to a document that a few decades ago would have taken a long time to find.

The internet can be a treasure trove of information, but regrettably also a swamp of disinformation. How can you find the correct information among this great variety of opinions? The abundance of sources enables testing against various (scientific) sources. Critical and logical thinking and discussion with peers also help to separate nonsense from useful material. And finally, the litmus test of practical application must never be underestimated.

Attitude

Scotosis [Spencer, 2012] is a kind of cultivated collective blindness developed within certain groups to ward off knowledge that may disturb the way they perceive the world. The best remedy against scotosis is personal study of literature and theory, having an open mind for new influences, an open and critical attitude, and testing things in the real world.

Attitude is probably the most important element. A continuously critical attitude and critical thinking are among the most important job requirements for a safety professional. “Critical thinking requires knowledge” [Gigerenzer, 2014]. Besides knowledge, it’s at least as important to question your beliefs every once in a while. So take an active part in professional discussions, which not only means that you should talk, but especially that you should listen.

Get infected!

Peers are one of the most important influences in our lives, so it’s wise to surround yourself with peers who can influence you in a positive way. Look for a rich, inspiring and vibrant professional environment with people from different backgrounds. It’s important to interact not only with people who agree with you, but especially with people who disagree. Often one learns more from the counter-argument and the discussion than from confirmation. One important precondition is that one discusses with an open mind. Even if you stick to your point of view, you will have gained something, because the process will have given you a chance to reflect on and review your point of view and arguments.

It’s also strongly recommended to find a mentor or a buddy who isn’t afraid to tell you the truth and who can stimulate you to leave your comfort zone, explore new directions or gain more in-depth knowledge of certain subjects. By the way, mentoring should never be regarded as a one-way thing: the mentor gives the junior feedback, but at the same time gets feedback himself when the junior asks critical or curious questions.

Boundaries

Be curious, find the boundaries and cross them, both figuratively and more literally. Don’t get stuck in occupational safety, but look at other domains. Also look across national borders: the possibilities are better than ever before. People on the other side of the world struggle with problems similar to yours and may even have found a solution. It’s surprising how locally confined some knowledge can be: friends in the USA knew relatively little about James Reason’s work, while Dan Petersen was not part of the curriculum when I studied safety in the Netherlands.

If it’s no fun, nobody’s going to do any of this

All too often we encounter peers with a serious lack of humour: safety is serious business and there’s no joking about that. Humour is regarded as unprofessional.

In some cases humour may be inappropriate, but these are exceptions. It’s actually possible to do serious things with a smile, and often things will go more easily too. Humour is important because it helps to stimulate creativity, lowers the threshold for self-criticism and increases the enjoyment of the job.

What can we mean to others in our profession?

It would definitely be helpful if our academics used more accessible language in scientific texts. “Robustness: the antonym of vulnerability” [Aven and Krohn, 2013] is phrased in an unnecessarily difficult way. “Robustness is the opposite of vulnerability” is understood by anyone without checking a dictionary.

As peers we get better when we translate, summarize, clarify and share knowledge. Texts can be made more accessible by restricting professional jargon as much as possible, or by explaining what it’s about. For some there is a danger that this may affect their status or even lead to a loss of consulting hours. The greater good should make up for this.

Finally: as an experienced safety professional you should be open to rookies who come with questions or are clearly searching for direction in discussions. Don’t patronize, don’t be snobbish or arrogant, and help your peers. (This article would never have been possible without the help of several respected colleagues, most of all Willem Top, who commented extensively and critically on early drafts.) We all started inexperienced at some point and it’s always nice to get help and not have to find out everything by yourself.

 

References

Aven T, Krohn BS. (2013) A new perspective on how to understand, assess and manage risk and the unforeseen. Reliability Engineering and System Safety 121: 1-10

Busch C. (2014) Myth busting - Episode: All Accidents Are Preventable. SafetyCary, June 2014. Available from: URL: http://www.predictivesolutions.com/safetycary/myth-busting-all-accidents-are-preventable/ (visited 25 Nov 2014)

Gigerenzer G. (2014) Risk Savvy. London, Penguin. ISBN 978 1846144745

Heinrich HW. (1931) Industrial Accident Prevention - A Scientific Approach. New York, McGraw-Hill

Spencer HW. (2012) Do You or Your Acquaintances Suffer From Scotosis? The Compass, 2012. Available from: URL: http://www.asse.org/assets/1/7/HowardSpencerArticle.pdf (visited 25 Nov 2014)

Taleb NN. (2012) Antifragile - Things That Gain from Disorder. London, Penguin. ISBN 978 0141038223

 

Since both David Woods and Sidney Dekker were in Trondheim for a PhD defence, SINTEF used the opportunity to invite people to a Resilience Workshop. Some 30 participants from, among others, aviation, railways, the police, the national accident investigation board, a few oil and gas companies, and a bunch of PhD students turned up. Let me try to give a glimpse of the day.

Disclaimer: a LOT was said during the day (partly at a high tempo, and sometimes in less than user-friendly terms) and it’s difficult, if not impossible, to catch everything, so this is obviously just scratching the surface…

Part 1: David Woods: Reaching For Resilience

Due to an “almost on time” arrival I dropped in right when David Woods started his talk. In general it’s thought that there are three drivers of Change and Innovation:

1. Connectivity
2. Sensors
3. Automation or Autonomy

Turns out one often forgets the most important one:

4. PEOPLE: Human Goals and Experience

We live in an adaptive universe. We live in it, we power it. Pressures and technologies tempt us in. We are trapped in its rules, and things don’t work like we think they do. David used the term Tangled Layered Networks to describe the things we’re in.

From collecting, analysing and sharing stories of resilience and brittleness in action we can empirically see general patterns of cycles of adaptive reverberations across networks. In our precarious present we can assess where and how our systems are brittle or resilient in the face of surprise. How can we engineer/develop for a resilient future?

David then went on to discuss charting cycles of adaptation, and how to get to florescence, where changes in one area tend to recruit and open up beneficial changes in many other aspects of the network. NASA was used as a (negative) example, where the drive for short-term Faster Better Cheaper (FBC) goals eroded the system, which became brittle, leading to several disasters. The challenge is to create safety under pressure.

There are a couple of explanations of Resilience:

1. Rebound from trauma
2. Robustness/well-modelled
3. Graceful Extensibility (the opposite to brittleness)
4. Sustained Adaptability

The first two are linear simplifications, while the latter two take the view from the Adaptive Universe.

Our systems are facing “the dragons at the borderlands” that challenge normal functioning. Failure can arise through:

1. Decompensation
2. Working at cross-purposes
3. Stuck in Stale (what was effective yesterday may no longer be effective tomorrow).

These can be countered by respectively:

1. Anticipation
2. Synchronizing over units and levels
3. Proactive learning (we don’t have to wait for accidents)

FBC pressure to make systems more efficient may deliver paradoxical results. On the surface things look good because short-term goals are met, but new forms of complexity and fragility emerge. We are highly competent when events fall within the envelope of designed-for-uncertainties. But sudden, large failures occur in the face of events that challenge or go beyond that envelope because the system is brittle and there is insufficient graceful extensibility.

Often the future seems implausible and the past incredible. How can we be prepared for the surprises at the borderland? Human talent and expertise are one resource: initiative must be given to the sharp end (empower them). Another is to invest resources in graceful extensibility.

How do you manage complexity and tilt towards florescence as connectivity, sensing and automation capabilities grow? With a positive story (graceful extensibility, initiative and adjusting the capacity to adapt) or a negative one (brittleness, shortfalls, mis-calibration, being trapped in trade-offs)? The basic rule for reaching resilience is emerging: leverage the adaptive power of human systems!

Part 2: Application of Resilience in Wideroe

Endre Berntzen and Ivonne Herrera presented a tool that has been developed and implemented by Wideroe to assist their work with proactive observations, where the aim is to observe “normal people doing normal work” (and not people acting up to follow procedure because they are being observed). They want their personnel to tell things like they are.

In order to prevent pilots from getting the idea that their performance is being evaluated when they make choices and trade-offs to manage challenges and conflicting goals, the “label-free” term “observed gaps” was chosen.

In addition Wideroe collects various indicators that can be linked to the observations.

One challenge is to protect the data. The pilots are promised confidentiality in order to get the most out of the observations.

Pilots add resilience to the system. What skills do they need to manage the variations? One must give the pilots more space to make good decisions and do their job.

Part 3: Sidney Dekker: How to Engage Stakeholders in Resilience 

An extremely dynamic and engaged (without any Powerpoint slides, but a lot of acting, humor and irony and some rather sharp comments) Sidney Dekker discussed his subject by using the situation in Australia as a backdrop to illustrate what we’re up against in many instances. 

Australia still has very much a late-Victorian system. Human factors are seen as a (or even the) problem, illustrated by the 1903 quote “moral and mental shortcomings”. This characterizes the Tayloristic universe of most MBA educations (MBA being the abbreviation for More Bad Advice, according to Sidney): there is ONE best way (which of course only works under strict conditions, while the real world is rarely this simple). The Tayloristic view means that managers and their plans are smart, while workers are dumb and not motivated. Hence the focus on compliance and on safety posters/slogans to “motivate” the workforce.

According to Sidney, one first step in engaging stakeholders might be to rip down all the posters and see what happens - if nothing else then as an experiment. Most likely nothing will change on the safety front, but chances are that people will feel they are taken more seriously.

But there are big stakes in keeping this business going. At the moment about 10% of the Australian workforce consists of so-called compliance workers, of which roughly one third work in safety. So there is a clear interest in keeping Tayloristic views, poster programs and myths like Heinrich’s 88% or accident proneness alive.

The safety structures we have today are no longer enough. In very safe systems, Heinrichian and Reasonian (Swiss Cheese) thinking doesn’t work anymore. But it’s hard to get through the barriers raised by those who have business at stake. We need ‘micro-experiments’ to go against this ‘behemoth’.

Sidney told about a couple of these experiments. One came from a tunnelling project with a fatality. The accident happened under normal conditions, where the victim was making the necessary everyday adjustments to get the work done ("GSD" = Gets Shit Done). By approaching this problem the traditional way we may be fixing the wrong problems - plugging the wrong holes in the barriers. Instead of asking about accidents we should ask people about the stupidest things the company makes them do.

With regard to “plugging holes”, the story of Abraham Wald’s research on protection for WW2 bombers was told. He studied the damage on returning bombers in order to assess where to reinforce their armour and had the unique insight that they had to work on the places where returning bombers had NO holes, because damage in those places had likely led to the loss of bombers (which then didn’t return to England for study…).
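For illustration, a tiny simulation (with made-up hit and loss probabilities) shows the bias Wald corrected for: if you only inspect the survivors, the truly critical areas look almost unscathed.

```python
# A minimal sketch with invented numbers: hits to a critical area tend to bring an
# aircraft down, so the returning aircraft show few holes there - armouring where the
# *returners* are damaged would protect the wrong spots.
import random

random.seed(7)
areas = ["engine", "cockpit", "fuselage", "wing tips"]
loss_probability = {"engine": 0.8, "cockpit": 0.6, "fuselage": 0.1, "wing tips": 0.05}

returned_hits = {a: 0 for a in areas}
for _ in range(10_000):                               # each sortie takes one hit somewhere
    hit = random.choice(areas)
    if random.random() > loss_probability[hit]:       # aircraft survives and gets inspected
        returned_hits[hit] += 1

print(returned_hits)   # fuselage and wing-tip holes dominate; engine hits are underrepresented
```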

Saying ‘No’ under production pressure is a generic ability that supports resilience. Sidney told about one of his co-pilots who would say: “Tell me if you see me do something that you don’t like, and I’ll do the same for you”.

Another example came from the oil and gas world, where an employee suffered a lost time injury which would have ruined the company’s at that time perfect LTIF metric and the connected bonus. HR came up with a ‘genius’ solution: offering compassionate leave instead, so the employee could take care of her sick child. The location’s HSE manager reached out for help in this situation, and as a micro-experiment it was proposed to take down the ‘XX Days Without LTI’ sign at the gate and post a positive message there instead. Quite uniquely, the HSE manager put his cell phone number on the billboard.

Our work should not be about bureaucratic accountability with systems and numbers that create a ‘Looking Good Index’, but we should have the ethical commitment to make a world with just a little bit less suffering and hurt.

Part 4: Open Space Workshop

After that it was time for the Open Space workshop (read more about the methodology on the Open Space website or Wikipedia). Each participant supplied at least two questions that he/she wanted to discuss further. These questions were clustered into four groups that were then discussed in smaller workshops:

1. Organizing for Resilience
2. Measuring for Resilience
3. Training for Resilience
4. Regulation, Compliance and Resilience

After about an hour the four sub groups returned for a feedback session.

Measuring for Resilience

I’m sorry to report that I somehow failed to make good notes during Anthony Smoker’s great feedback session. Any help would be appreciated. One take-away: how to prepare for the next surprise without being surprised by the last surprise. And: language does affect the measurement!

Regulation, Compliance and Resilience

Earlier in the day there had already been discussion about proactive learning that doesn’t need to wait for accidents to happen. Legal systems, however, are mainly set up for learning after accidents (with national accident boards, etc.). Sidney Dekker proposed doing an investigation of a success for every accident investigation.

In the feedback session the main focus was on the homeostasis of regulations. You may choose a different approach (for example Safety I or Safety II), but the total amount of rules will stay the same; in the traditional approach most will be actual procedures and working instructions, while in the other approach most will be built into competence and courses, with a minimum of formal rules. This may pose a challenge for regulators.

Training for Resilience

Some elements from this session (partly based on an ongoing EU-funded project) were that it’s important to know what the actual job is all about (“work as done”) and that an understanding of boundaries (can you train them without crossing them?) and trade-offs is essential. Should one train for compliance or for resilience? Probably for both. Sidney Dekker added that one should rather train for objectives than for prohibitions. Theory- and case-based training and experience transfer are powerful tools. Training and resilience must be seen in their context.

Organizing for Resilience

This group discussed mostly trade-offs and how to organize, concluding (as scientists typically do) that more empirical studies are necessary. David Woods also pointed towards the 4 I’s from his 2006 paper “How to Make a Resilient Safety Organization”: Independent, Involved, Informed and Informative. One should be able to question existing wisdom and customs.

Part 5: Wrapping up

David Woods summed up elements from the feedback and added his own thoughts. It’s important to keep in mind that Resilience is about more than safety, for example business continuity.

Measurement: can you map interdependencies, or create tools to handle them when they emerge or matter? Examples were given of (hidden) interdependencies and of relying on assumptions that prove not to be true in case of disaster (flood, hurricane, …).

Resilience is not just learning from events. What undermined our ability to anticipate? What undermined our ability to prepare? How do we handle the next event, which will NOT be like the last event? How do we coordinate decision making at various levels? How do you scale your resources? Your challenges now are not the same as those of tomorrow, and our old way of doing things is unlikely to meet tomorrow’s challenges.

How do we train or practice at the boundaries? One has to train and coordinate across roles!

When implementing automation and the like, this is often chosen as a way to make things more efficient (the same performance with fewer resources), but this leads to de-skilling, fragility and problems. Instead one should opt for the resilience-leverage approach, where one uses the opportunity to increase performance with the same resources.

Our challenge is to build partnerships within and across boundaries/disciplines. Let’s build leverage to be able to deal with future challenges and surprises.

Sidney Dekker closed off by concluding that the participants “leave with more Questions than Answers, and that’s good. Scientific progress doesn’t depend on more Answers but on asking Better Questions. So do have the courage to think differently!”.

 

Amsterdam Safety Symposium - A brief overview

Having just returned from this great event, let’s do a brief and rather spontaneous overview of two days of learning and sharing with safety professionals from all over the world (literally, I believe, every continent was represented). And probably just as importantly: finally meeting in the real world the people one has discussed with online or directly by e-mail. It is interesting to be confronted with how misleading your expectations from a profile picture can be, especially in relation to height!

                     

Some participants had already had the opportunity to meet the day before, and the atmosphere was extremely positive from the start. Pieter Jan Bots (left above) opened the event, briefly explaining its background and introducing the organizational team and the location (WTC Triple Ace was a great venue, by the way: very professional, spacious and easy to reach). He then handed over to the chairman for the days to come, Ralf Schnoerringer (middle above), who did a great job introducing the various speakers and keeping things on schedule as much as possible.

The first keynote speaker was Eric Heijnsbroek (right above), formerly of Tata Steel. Eric kicked off the first part of the Symposium, dedicated to how safety fits into production, drawing on his own experiences with safety during his time at Tata. This included an incident that happened “on his watch” where a known hazard was ignored for a long time by both management and employees. It took a serious accident to do something about the situation. One of the things that Tata introduced recently is so-called ‘safety teams’, where people from the work floor are taken out of their ordinary job for a month to work solely on safety in a practical manner and implement improvements. One of Eric’s messages was that our job is to make people feel uncomfortable.

James Loud then talked about “Taking Safety Seriously”, touching upon some issues that are sensitive in the safety community. He identified some of the symptoms of non-serious safety and argued that we should question professional traditions and stop managing to the metric. Instead of the often-practiced Plan - Do - Hope - Pray cycle, we really should start doing Plan - Do - Check/Study - Act.

After this the audience split up into several smaller groups for a workshop to discuss how they could support production and where the challenges lie. Topics raised in the feedback session included Communication, Participation, Ownership, Culture (in various forms), Attitude and Acting on input. Jim Loud commented in this session that “We cannot define all behaviours and then operant-condition people. It’s a compliance approach and a wrong one”.

Andy Stirling covered a lot of ground as keynote speaker of the second part of the Symposium, which dealt with Resilience, Antifragility and Business Continuity. Andy’s presentation was titled “Risk, Resilience and Precaution: from sound science to participatory process”. Many people, especially policy makers, want (or claim) to base their decisions on sound science and evidence, but science is not as black and white as people tend to believe - as illustrated by the results of various energy studies. We should therefore more often focus on asking the right questions than on trying to answer precisely, and precaution may often be a prudent choice. Uncertainty requires deliberation, and Safety Professionals especially must not try to use their proverbial hammer (in this case, Risk Assessments) in situations where it’s not appropriate, e.g. because we do not have sufficient knowledge about possible consequences or likelihood. So we should allow for greater skepticism and not make complex problems seem simple by crunching everything into a misleading number. Do mind: this does NOT stand in the way of decision making. The value of participation is not agreement, but having the opportunity to see things from different angles.

Linda Bellamy talked about “Safety and Resilience in the face of Uncertainty”. For this she was able to draw on the recent results of a European research project in which a number of people in high-risk occupations (e.g. mountain guides) were interviewed about the elements that help them succeed in their hazardous jobs. The aim is to find elements that create success and model these in a ‘Success BowTie’. One interesting item for further discussion proved to be the Safety I - Safety II approaches, and whether we are actually able to learn from success, or only from failure (as one of the interviewees in the project had suggested).

Carsten Busch and Nick Gardener took on the challenge of doing the Symposium’s only duo presentation, arguing that we should be “Agile not Fragile”. Together they explained some key concepts from Nassim Taleb’s latest book “Antifragile” and applied these to safety. In the process they used an egg (which broke because it was fragile), a rock (very robust, more so than the stage floor) and a bouncing ball (very agile and antifragile). Key messages were that it is easier to detect that something is fragile than to predict that an accident will happen, that one shouldn’t fear disorder but balance it with some (though not too much) order, and that tinkering is a good way of creating robustness, improvement and maybe even antifragility.

Piet Snoeck from Belgium talked about “Creating robust supply chain management by including new product regulations”: a solid discussion of the challenges around robust supply chain management, what is needed (processes/tools should be in place to assure product compliance) and how this can be accomplished (how the various actors within a company could act together).

Cary Usrey was the second speaker from the USA. He had the tough task of doing the final presentation of the day, something he mastered very well with a lot of energy, humour and interaction. His “Seven Steps to Safety Sustainability” (note: not THE Seven Steps…) started with a discussion of John Kotter’s Process for Leading Change and then went one by one through seven important steps and the barriers that must be understood and overcome when following the process. One comment that sparked a lot of discussion from the audience concerned the often-used ‘retraining’ as a corrective action after an incident. As Cary rightly commented, this is often of little use because it’s generally not a lack of skill that is the problem.

The first day was closed with a Panel Discussion in which most speakers from the day were invited to answer questions from the audience. Among other things, the discussion from Linda Bellamy’s presentation about Safety I and II was continued here, with various viewpoints on whether one can only learn from failure, or also from success (e.g. by looking at things that went right, such that near misses did not become accidents).

After that a very fruitful first day was over, although a ‘core group’ decided to continue the discussions over drinks in the bar just across from the conference centre - introducing some foreigners to the pleasures of Belgian beers. Later that night participants joined up for a very nice dinner downtown (and the most resilient extended this with a visit to the Irish pub and a city tour by night).

Jop Groeneweg was asked to open Day 2 and Part 3 of the Symposium, with the themes of Risk Perception and Risk Assessment. Jop centered his presentation (humorous, engaging and energetic as always) on the subject of risk perception. As an opener he chose the current ebola scare and how (media-fueled) risk perception dominates in the Western world, while the actual risk in e.g. The Netherlands can be considered very low. Jop contrasted the views of various ‘thinkers’ (Spock, Simon, Kahneman and Fischhoff) on rationality and thinking about risk. One possibility to affect our risk perception, according to Jop, is to get our System 1 thinking engaged. Because of this, one useful approach may be to adopt red-flag systems to a greater degree - we are very well equipped to spot deviations, and we should react to these and pose questions, instead of risk-assessing everything. He also warned about the difference between the ‘paper world’ and the ‘real world’ - quite often management systems are in place and we are lulled to sleep by them.

Due to some unforeseen circumstances, the intended parallel sessions had to be rearranged, ending up with all presentations one after the other. This only proved the antifragility of the group of people assembled and of the speakers, who rose to the challenge and made it happen within the allotted time (even allowing for getting lunch and eating it during the presentations).

Beate Karlsen kicked off and talked in a very practical way about her experiences with what she called “Situation Blindness in Risk Assessment”. The biases and perceptions people bring into a risk assessment are things we often are not aware of, but they can affect the result (or the resources used) in a most significant way. Being aware of them, we are able to tackle this problem, e.g. by asking questions to drill down to the real problem.

Frank Verschueren (the only representative of a regulator on the program) discussed a number of viewpoints on Risk Evaluation Methodologies and the challenges he has experienced in his work as an inspector. There is no one-size-fits-all approach, and the choice of methodology matters. Focusing on the worst case often compresses away too much information. One advised approach is using LOPA.

Thierry Lievre-Cormier shared some thoughts about “Risk Assessments in the Future”. He did a very visual presentation discussing: 1) Understanding risk and hazards, 2) Understanding design safety limits, 3) Understanding the precautions and 4) Keeping the precautions effective. After this he took a brief glimpse at future possibilities.

Linda Wright asked the question “Risk Ranking is Easy?”. During a recent research project at the site where she works, she looked into whether people use risk matrices consistently: how is the risk matrix used, and what is the level of agreement between three groups of users (Managers, Safety Experts and Operators)? The result was quite an eye-opener, as there was little agreement between the users. In all scenarios, all possible combinations were provided by the users, showing that risk ranking with a matrix is highly subjective and not easy at all. Interestingly, Managers consistently ranked risks lower than Safety Experts did.

Mona Tveraaen presented a “Dissident view on Risk Acceptance Criteria” and built a strong argument against quantitative RAC (and actually against RAC in general). Some of the arguments: RAC draw the wrong kind of attention, use too many resources, cannot be both ambitious and representative of the current situation at the same time, obscure a lot of information (not showing uncertainties, assumptions, etc.), and take decision making away from the responsible persons, handing it to experts. Instead one should use ALARP as the guiding principle for everything that isn’t ‘broadly acceptable’ or ‘broadly unacceptable’. She also made it clear to the audience that it is possible in Norway to buy a return ticket from Hell, and even a ticket from Hell to Paradis...

Dov Basel from Israel presented the interesting “Separation Distances Dilemma”. Israel is among the most densely populated countries in the world, which means that it’s very hard to keep industrial or technical installations with hazardous materials (e.g. NH3) separated from the public. A number of challenges (not only safety-related, but also legal and financial) were discussed and some solutions presented.

Frank Guldenmund was the keynote speaker for the fourth and final part of the Symposium. He gave a thorough introduction to the subject of Safety Culture, first covering a lot of ground about what culture is in a really practical way, for example by discussing the different perceptions of a seemingly simple act like a handshake. His presentation contained 16 (or more?) key lessons from various observations and summed these up in a number of ‘Don’ts’. The self-confessed academic did a great job of explaining a really difficult subject in a very understandable and easy way. (For those who aren’t familiar with his book on Safety Culture yet, find it here.)

For Safety Culture the original parallel sessions went more or less according to program. Johan Roels opened the ‘New Perspectives’ section, which took place in a smaller side room. Interestingly, that room filled up rather well; obviously quite a few people had become curious about what this ‘Silly Belgian Engineer’ (his own designation!) had to say. Without any doubt Johan delivered the presentation with the most passion and great interaction with the audience (the smaller room actually facilitated this). Johan explained why he felt that safety needed another paradigm (or mindset, or whatever word one chooses) and what he saw as a possible way forward, using creative interchange and getting away from fear and the vicious circle we are often locked into. He argued for abandoning the traditional top-down, outside-in approach and going for an inside-out approach that travels in ALL directions.

Arthur van Rossem’s presentation “Zero Tolerance: Enforcement of working conditions laws in the Netherlands” brought yet another viewpoint to the Symposium, namely that of a lawyer. It was a very solid presentation about how the Dutch system of fines and prosecution for breaches of the Safety and Health at Work regulations works, which may have offered some lessons to the many foreigners in the audience.

Regrettably it was impossible to be in two places at the same time during the parallel sessions in the afternoon, so I missed Anja Dijkman on “How to Engage Company Management to Improve Safety Leadership” and, in the “Practical Experiences” stream, Pieter Jan Bots speaking about the role of the supervisor and Ralf Schnoerringer about Contractor Management and Safety Management Implementation. Even worse, because the two sessions had gotten out of sync, I only caught Greg Shankland’s (above) closing remarks on “Blueprint for Safety” (a session I really wanted to see!), so I was heavily tempted to reply to the “Are there any questions for Greg?” with a “Could you please repeat the last 20 minutes?”… His main message was that your culture determines how well your safety systems work, and that there are tools to facilitate this. From the audience I sensed a good response, and quite a few questions were asked.

In the main hall a rather spontaneous panel discussion was then organized, in which some of the speakers from the two days (Dov, Mona, Greg, Ralf, Cary and Carsten) were invited to respond to questions with regard to safety culture and risk assessment/handling. Some of the discussion was about the pros and (mainly) cons of incentive systems for reporting. Most of the speakers agreed that it should be about the actions that come from reporting, not about gadgets or even money. Another subject related to Dov’s presentation: it was discussed how various countries handle the problems of storing or transporting large quantities of hazardous substances in areas where many people live.

As part of the closing ceremony, Pieter Jan Bots once more entered the stage and asked Johan Roels to join him to receive the Black Swan Award for doing the most daring and passionate presentation. This honour was absolutely deserved. Congratulations to Johan!

Summing up: days well spent! A BIG thank you to all the presenters and organizers who put a lot of effort into making this a great event. It certainly would have been great if even more group members had participated, but chances are that there will be another opportunity in the future, since the consensus was that this shouldn’t be a one-off event! Watch out for a new date in 2015!?

Paper for the International Rail Safety Council 2014 in Berlin October 2014

written by Carsten Busch, Kjetil Gjønnes and Mona Tveraaen

 

SUMMARY (full paper is available here)

What’s in it for us - that was the leading principle that Jernbaneverket followed when implementing the Common Safety Method on Risk Evaluation and Assessment (CSM-RA). This paper, based on practical experiences, describes how to benefit from the CSM-RA, both by making the best of the smart elements in the process described in the CSM-RA and by organizing internally. It also discusses challenges experienced in applying the CSM-RA. The paper makes, among others, the following points:

  • The CSM-RA has added value instead of being an administrative burden.
  • Flexible application makes it useful for any project/change, not only for significant changes.
  • A practical, mainly qualitative approach has given a wider involvement and understanding outside the exclusive circle of risk experts, and hence a better basis for informed management decisions and for safety management.
  • An internally organized Independent Assessment Body can facilitate experience transfer and organisational learning while maintaining independence.
  • A pool of multi-disciplinary professionals helps to build and maintain competence.

So what was in it for us? If applied in a reasonable way, the process may give better organisational learning, more transparent and risk-based decisions and – yet to be proven – the expected added value of a safety management that goes beyond the decision point and truly ensures lifecycle safety.

 

Awarded Best Presentation at the IRSC 2014

 

A critical view on Zero Harm goals and terminology. First seen on SafetyCary, November 2013.

The version below is slightly updated:

 

Zero Harm is an Occupational Disease

I’ll never forget Fred. He never knew, but he was one of the people who helped shape my thinking during my first years as a safety professional. With his lean frame of not even 1.70 m (~5’6”) he didn’t have an impressive physical appearance, but his personality and commitment sure made up for it. He worked as a carpenter in the department that did the interior refurbishment of passenger trains at the Haarlem workshop. More importantly for me, he was a member of the emergency response and first aid team (which was one of the responsibilities of the workshop’s safety advisor), a very active occupational safety and health representative of his department, and a driving member of the Occupational Safety and Health Committee.

The Arbowet (the Dutch equivalent of the Safety and Health at Work Act) points out that good OH&S management is shaped through cooperation between employer and employees. To that end the representatives met with the workshop’s management on a regular basis. Fred, however, often took the typical ‘old fashioned’ union stance that employers hardly ever had the best of intentions towards the workforce. I suspect that this was one way for him to stay critical – something he did very well. In the end, however, there may have been some truth to his stance. One day he had to call in sick, and after an absence of some weeks he came to my office and brought me the news he had feared all along: he had been diagnosed with mesothelioma. Within one year he died.

For many, many years crocidolite, the so-called ‘blue asbestos’, had been used to insulate passenger train coaches, among other things for soundproofing and shock absorption. Long before I started as a safety advisor at the workshop, Dutch Railways had started removing this highly hazardous material from their trains, but as a carpenter Fred had been exposed to high levels of the invisible fibers during the 1960s and 1970s, when there was a serious lack of awareness, let alone adequate protection. Fred spent his last year battling both the disease and the responsible parties. Obviously he lost the former, worn out by chemo treatment and cancer, but he at least had the pleasure of seeing success in the latter, since he was one of the people who initiated a good compensation scheme for victims of asbestos exposure.

These days many companies have fabulous Occupational Health and Safety Policies. Growing awareness and modern legislation have come a long way. No self-respecting business wants to do damage to man or environment, and so many of them have safety goals and follow up on their safety metrics. Nobody wants to have fatalities, and so ‘Zero FAT’ has not only become a sales slogan for healthier food, but also a safety parameter that many a company boasts about. Of course most companies don’t stop at measuring FAT. No, most strive for zero lost time injuries (LTI) as well. Some departments of the company I work for these days, for example, regularly celebrate when there has been a certain period of zero LTIs. But how useful is this measurement, really?

Deepwater Horizon and Texas City have underlined what many of us knew before: LTIs are not a parameter one should use when managing process safety. But isn’t it a good parameter for occupational safety then? Hardly. I won’t go into any of the common and highly valid reasons (like being reactive and highly dependent on what’s reported) why FAT, LTI and the like are weak safety metrics at best, and misleading most of the time. This time I’d like to focus a bit on another drawback: there is quite a lot they don’t cover!

Just before writing this article the Ashgate newsletter alerted me to a new book, “Safety Can’t Be Measured”, subtitled “An Evidence-based Approach to Improving Risk Reduction” by Andrew S. Townsend. At the time I hadn't read the book yet (I have now, find review here), but the description on the publisher’s website sounded highly interesting, and the first chapter (which at the time was freely downloadable from their site) boded well.

In this first chapter, Townsend places occupational health and safety in a larger context by looking back at human mortality since the dawn of time. One interesting observation is that only since the 1920s has life expectancy been back on the level it had been some 10,000 years earlier! More relevant for us safety professionals, however, is the finding that accidental death in the workplace is insignificant when compared to the total of accidental deaths: car accidents and falls at home humble even the worst work-related fatality statistics. Even more interesting: deaths attributable to occupational health issues are estimated to be higher than accidental deaths in the workplace by a factor of 25! Oddly, these aren’t found in corporate safety stats or targets of pithy slogans.

Also, Fred never made it into the safety statistics, except maybe in hindsight at a national level. Yet his untimely death had a clear and direct causal relation to safety and health at the workplace, since mesothelioma is very specific to exposure to crocidolite. None of the commonly used safety metrics, however, covers these cases. In my time at the workshop I never had a FAT (though one or two close calls), but every year we had to bury one or even several (former) colleagues who succumbed to asbestos-related diseases. Isn’t it ironic that if someone trips in the office, strains his ankle and has to stay at home for a day, we’ll find him or her in the LTI index, but if someone dies 30 years after workplace exposure to asbestos there’s no mention in any of the statistics?

This is only scratching the surface of what isn’t captured by most traditional accident/injury-oriented safety metrics. There are many factors related to occupational health that may cause cancer (and eventually be fatal), like some kinds of sawdust, ceramic fibers, radiation, polycyclic aromatic hydrocarbons and other chemical substances. But there is much more that may cause permanent damage to humans without actually killing them. Take for example the number one problem for many of the slightly older colleagues in my company: work-related hearing loss, due to exposure to noise over long periods of time. I doubt that any of them ever had a lost work day because of this, but their full hearing will never return.

Other examples of factors that over time will lead to occupational disease include physical strain (be it a ‘mouse arm’ or a worn-out back), solvents (affecting the nervous system) or work-related stress (‘burnout’ is probably the fastest growing occupational disease in the Western world). There are, of course, also factors whose potential consequences we don’t even fully grasp yet, like genetically modified substances and nanotechnology.

So, what would be a way forward then?

First of all, we should do what several others have said many times before: don’t stop looking back at what happened and counting injuries and incidents, but also, and especially, start looking forward. Find proactive metrics and measure above all what your company does to create safety and health in the workplace. Use reactive data mostly as the proverbial ‘litmus test’ and as an opportunity for further learning and improvement.

Secondly, find metrics that cover the full spectrum of HSE(Q) and not only safety in the narrowest sense. Suggestions for widening the spectrum are mentioned above, but there are many more. Nobody knows the workplace better than your employees, so ask them. And do mind that there are factors that may escape our usual safety and health thinking, especially social issues like mobbing, substance abuse, gambling or isolation at work camps or rigs.

A key to finding better metrics is to focus on the hazards and the risks these hazards pose. Then we can look at what to do about them and what can happen if they fulfill their potential. But that might be something for another article. For now I need your help to prevent traditional safety metrics and ‘Zero Harm’ goals from becoming the Safety Professional’s Number One Occupational Disease, in that they blind us to some very important aspects of our job!

 

And the final installment of this trilogy on barriers, written together with Beate Karlsen. First published by SafetyCary.

The second part of the Barrier Trilogy, written together with Beate Karlsen and again originally published by SafetyCary.

The first part of a trilogy on barriers and barrier management that I wrote together with Beate Karlsen. Originally published on SafetyCary.