
From Human Error To Second Victim - Learning Lab Lund, June 2016

In June, I had the pleasure of attending another Learning Lab at Lund University. Titled “From Human Error To Second Victim”, it was led by Sidney Dekker, assisted by Johan Bergström (and a few others).

Day 1

Ethical commitments

Sidney started by discussing some of his ethical commitments:

  1. No one comes to work to do a bad job.
  2. Adopt Bonhoeffer’s view from below. Why did things make sense to them?
  3. Try to see Justice above Power. This will take steadfastness and commitment - note that the organisations where you work are NOT governed democratically!
  4. Before something happens: celebrate worker autonomy, self-determination and expertise.

(Not part of this list, but I also wrote down: “Political correctness is a waste of time”. I cannot recall the exact context…)

(Later on during the day, Sidney mentioned two ground rules for critical thinking that are also worthwhile: Rule 1: There is no substitute for being well-read. Rule 2: Don’t believe a single word I’m telling you. Think critically for yourself and check stuff!)

Old View and New View recap

After this we launched into a brief recap of the Old View and New View (see Day 1 from the first Learning Lab). The Old View sees ‘human error’ as the cause of accidents, points the finger and tries to get rid of ‘bad apples’. The New View tries to understand the complexity of the system and sees ‘human error’ as a consequence of conditions in the system. People adapt to make the system work; the system is not inherently safe, people help to create safety. Therefore, the New View does not intervene in behaviour, but in the system.

Interestingly, the New View goes all the way back to 1947: Fitts & Jones already put “pilot error” between quotation marks in their landmark article, indicating that “human error” is merely an attribution, a label.

Several cases were discussed, ranging from everyday situations where humans finish the design (e.g. attaching tassels to distinguish between light and ventilator switches) to aviation (e.g. placing a coffee cup over a lever in the cabin in order not to forget to pull it), as well as cases where the system lured people into error, including belly-crashed B-17 bombers and the Singapore Airlines Flight 006 accident of October 2000, where relevant factors in the system included airport lighting and signage that did not measure up to international standards.

This is also a good case to illustrate the question of how far you want to go in finding reasons for ‘human error’. Sometimes as far as geo-political reasons (Taiwan is not part of the United Nations)! Sidney stated that the New View has no stopping rule. It’s up to you, your choice, and pretty much an ethical one. There is always another question to ask, if you want to and as long as it’s useful. You don’t find causes, you construct causes from the story you choose to tell.

After the first break there was a lively discussion around a recent article in the New York Times about medical error. One important problem with figures like these is how anyone managed to count them. Also relevant is the question whether reports like these help us or hurt us. On the one hand, they can start a discussion, draw public attention and create a sense of urgency with regard to safety work. On the other hand, they dumb down the matter and send a clear Old View message.

Local Rationality

Local Rationality was the next subject, starting with Adam and Eve in the Garden of Eden, because this story provides a kind of blueprint for our typical thinking about choice as the basis for decision and error. Through Augustine (“failure is voluntary”) and Calvin, this thinking found its way into Max Weber’s Protestant ethic, where success depends upon personal hard work and the individual is responsible for his own decisions.

Until the 1970s the main model for decision making was expected utility. Despite much criticism, this model makes frequent returns, lately even into the world of criminology. Expected utility comprises four steps:

1. Rank options by utility function

2. Make a clear and exhaustive overview of possible strategies

3. Work through the probabilities of the strategies

4. Choose the best, based on calculation

Expected utility assumes full rationality and plenty of resources to perform these steps. It is strongly connected to our moral ideas about choice: if you don’t succeed, you didn’t try hard enough.
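
To make these four steps concrete, here is a minimal sketch in Python (my own illustration, not something presented at the Lab). The strategies, probabilities and utilities are invented; the point is only to show how much enumeration and calculation the model demands of a decision maker.

```python
# Minimal sketch of the expected-utility model; all numbers are invented.

# Step 2: a clear and exhaustive overview of possible strategies, each
# listed with its possible outcomes as (probability, utility) pairs.
strategies = {
    "strategy_a": [(0.7, 10), (0.3, -5)],
    "strategy_b": [(0.5, 20), (0.5, -15)],
}

def expected_utility(outcomes):
    """Steps 1 and 3: weigh each outcome's utility by its probability."""
    return sum(p * u for p, u in outcomes)

# Step 4: choose the best strategy, based purely on calculation.
best = max(strategies, key=lambda name: expected_utility(strategies[name]))
print(best, expected_utility(strategies[best]))  # -> strategy_a 5.5
```

Even this toy version assumes that the full list of strategies and reliable probabilities are available up front: exactly the full rationality and plentiful resources that, as discussed next, real decision makers never have.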

Contrasting with this is the naturalistic decision making that emerged from the 1980s onwards through Gary Klein and others (like Orasanu & Connolly: decision making in action). And already in the 1950s, Herbert Simon stated that it is impossible to be fully rational (even coming approximately close is impossible) and concluded that rationality is bounded. A problem with the term ‘bounded’ is that it is unclear what it is relative to: who gets to place the boundaries? ‘Bounded’ can also have a somewhat judgemental tone. Therefore, Sidney prefers to speak of ‘local’ rationality: ‘local’ relates the rationality to the person, at a point in time, in a certain context.

The three main questions for Local Rationality:

1. What were the goals? Conflicts?

2. What knowledge did the people bring to the table?

3. Where were they looking? Focus of attention? (usually driven by points 1 and 2 and the environment/context)

Accountability

The next subject was what all this means for accountability. As an illustration Sidney took a recent article from the Dutch newspaper NRC about junior bankers in London who are expected to pull all-nighters and can be fired within five minutes. As the article argues: “pressure undermines all ethical sense”; everything is focused on corporate survival, and “if you can be fired within five minutes, then your horizon becomes five minutes”. Besides, “everybody does this, everything is legal, so it’s okay”. This creates a very particular local rationality. How do you deal with an organisation creating such a local rationality while at the same time holding people accountable?!

In preparation, participants of the Learning Lab had been asked to read the Johnson & Johnson Risperdal case from the Huffington Post and to discuss local rationality and accountability at the level of executives. Were these normal people making normal decisions, or was it something else? Was this reasonable or unreasonable behaviour?

It was interesting to observe that many opinions on the case were rooted in retributive thinking, most likely driven by some kind of outrage. One of the aims of the Learning Lab was to explore ideas of restoration: instead of meeting hurt with more hurt, seeing how hurt can be met with healing. There is no final answer to this, but we can learn a different vocabulary. It can be difficult going (or not going) from WHAT is responsible to WHO is responsible.

The day was closed with a discussion after a film clip (available online) about the squeeze that pilots and other airline personnel are placed in so that air travel becomes as cheap as possible. If an accident happens (like the Colgan Air 3407 case presented in the video), the investigation board comes with its probable cause.

To what extent can we defend the local rationality of an investigation board whose probable cause mentions NONE of the things in the video? How seriously do we think they take their ethical duty? Because do mind: investigation boards have political agendas, their conclusions are written or edited by political committees, and they adapt the tonality of their message to massage acceptance. Their reports do NOT contain an objective truth.


Go to Day 2