
Day Four

Where Day 3 introduced (more or less) Course 1 of the Master, Day 4 was to tackle Course 2, ‘The Sociology of Accidents’. An ambitious task. JB presented us with five different views on what risk is and why accidents occur. A nice summary of these is found in the SINTEF report that we had received as recommended reading beforehand.

Risk as energy to be contained

This is a notion of risk that makes immediate sense. Risk is seen as something that can be ‘measured’, and barriers have a certain reliability, as do humans. Many of the traditional risk management tools and applications have come about because of this view. Here we find the basic energy-barrier model, the well-known hierarchy of control, as well as the Swiss Cheese Model and most of the models from the First Cognitive Revolution.

Of course, the view can be challenged. Energy is not necessarily the enemy, but often something that is desired (just think of a nice hot cup of coffee). In complex systems you don’t always know where the hazards come from. Barriers can bring problems of their own: they are not always independent, they have side-effects, and they can make the system more complex and thereby more risky.

Risk as a structural problem of complexity

The last-mentioned element in particular gave rise to Perrow’s Normal Accidents Theory. Systems are characterized by their degree of coupling (loose to tight) and their interactions (linear to complex). According to Perrow, tight coupling must be controlled by centralization, while complexity should be handled by decentralization. Since Perrow found that no organisation can be both at the same time, he posited that complex, tightly coupled systems have to be avoided altogether. While most people in Safety have a relatively optimistic view (risks can be handled), Perrow (a sociologist, by the way) is one of the few pessimists.

Note that Perrow did not really define ‘complex’. Also, he sees complexity as a structural property of a system, while Rasmussen sees it as a functional one.

One problem with the NAT model is that there is little dynamics in it. What if something goes from loosely to tightly coupled in an instant (see Snook)? Additionally, it was argued that there actually are organisations that are centralized and decentralized at the same time. Enter the next view.

Complex Systems can be Highly Reliable

People like Weick and LaPorte (organisational scientists) argued that some organisations (quickly labelled HRO), like aircraft carriers or nuclear power plants, managed to move continuously between centralised and decentralised behaviour/management, depending on the task at hand. Traits of HRO were thought to be:

  • social redundancy,
  • functional decentralisation,
  • maximised learning from events,
  • ‘Mindfulness’ and a state of chronic unease: constantly challenging the status quo.

Critics of the HRO school argue that these so-called HROs are not a real test case because of their near-unlimited funds and rather specific/limited tasks. Also, it’s a question of “Highly Reliable compared to what?”.

In the end, HRO has become more of a business model than a real field of study, although there are clear links to resilience engineering, especially because HRO doesn’t see safety as the absence of a negative, but as the presence of positives.

The book to read, obviously: “Managing the Unexpected”.

Risk as a gradual acceptance

This view sees risk as a social construct. Path dependency is important; accidents have a history! Key concepts are the Incubation Period, Practical Drift (Snook once more - the behaviour of the whole is not reducible to the behaviour of the actors), fine-tuning, information deficiencies and the Normalization of Deviance. Accidents happen when normal people do normal jobs. Accidents are seen as societal rather than technical phenomena.

Barry Turner’s “Man-Made Disasters” (a book to buy when you have ample funding - check Amazon! But get the general idea in this paper by Pidgeon and O’Leary) is one central piece of work (by the way, I love the multi-faceted kaleidoscope metaphor in the Weick article). Another important piece of work is the Challenger investigation, which spoke at length about normal organisational processes: “fine-tuning until something breaks”. Diane Vaughan (book: “The Challenger Launch Decision”) argues that safety lies in organisational culture, not in barriers.

Note the difference (nuance) between Reason’s latent failure (which indicates an error) and the notion of drift (not an error, but fine-tuning). Success can become the enemy of safety when we get too good at Faster, Better, Cheaper.

Risk as a Control Problem

This is actually a good transition to the final, closely related view. Central to this is Jens Rasmussen’s famous 1997 ‘farewell’ paper “Risk Management in a Dynamic Society: A Modelling Problem” and the figure that pictures the ‘drift’ (as a consequence of fine-tuning, ETTO-ing and the like) of activities between three boundaries (economy, workload and ‘acceptable performance’).

The irony is that you often only find out where the boundary is by going right through it! The model mainly serves as the basis for a discussion of your challenges and how to deal with them. Sometimes it is extended with a cybernetic approach with networks (e.g. Nancy Leveson’s STAMP).

The enemies in this view are goal conflicts and distributed decision making, as well as success (again).

Dissecting accident reports

The day closed with an interesting exercise in which a number of public accident reports (by the CSB, RAIB and others) from various sectors were put on display; students were asked to pick one and then, in groups, find out what models, biases and possible traces of ‘old view’ safety had influenced the results.

During the feedback session many interesting reflections were made and experiences shared, including:

  • Investigations sometimes contribute little to safety improvement, but they are an important societal ritual to deal with pain and suffering. Sometimes we shouldn’t even initiate actions (e.g. Germanwings) because they will probably be counterproductive, but societal and political demand may prevail…
  • Use methods rather than models for investigation. Models make simplifications that may affect the results.
  • One participant mentioned that they never included recommendations in their reports. They mainly described work-as-done, and then it was up to management to decide on possible actions.
  • Another said they don’t list causes (these are often only symptoms) but rather discuss problems in an investigation report. The problems are usually more complicated than a list of causes. The US Forest Service has even taken this to another level: they write ‘Learning Reviews’ instead of Accident Investigation Reports and start them with something along the lines of “if you expect answers, stop reading and look elsewhere; you will probably only find more questions here”.
  • When doing an investigation, pick from various models. You are explaining something from your view and not writing an academic paper.
  • As Rasmussen said: You don’t find causes, you construct causes. 

 

>>> To Day 5