A summary and discussion of “Managing The Unexpected: Assuring High Performance in an Age of Complexity”, a book by Karl E. Weick and Kathleen M. Sutcliffe (2001 edition, ISBN 0787956279; a revised and updated edition has since been published).
The authors of this book work as (associate) professors of Organizational Behaviour and Psychology/Human Resource Management at the University of Michigan. The book sums up the insights they gained during their studies of so-called High Reliability Organizations (HROs). What is it that makes these organizations "safer", "better", "more reliable" or just "different"? But first, a brief discussion and word of warning upfront…
The book concentrates mainly on organisations being resilient (and thus reliable) in dealing with unexpected events that happen to them. There is not much attention to proactive preparations such as risk assessment or design aimed at avoiding unexpected events. This should not be misread as meaning such preparations are unimportant. I suspect they are simply taken for granted because HRO companies do them anyway. Besides, an important message of the book is that something will happen to you anyway: if you manage to avoid one thing through good preparation, design or management, another problem will rear its ugly head (which shouldn’t be seen in a fatalist way, of course). A quote in this respect: “If errors are inevitable, managers should be just as concerned with cure as they now are with prevention”. This explains the authors’ statement that they recommend companies become better at being reactive (which is rather confusing without this context).
This - and a great deal of repetitiveness in the first chapters - is my main point of criticism of the book, by the way. At one point the authors focus heavily on "Commitment to resilience". Nothing wrong with that, of course, but they sing the praises of "fire fighting" (p. 70, as opposed to structurally working on safety) so highly that this piece of text is very dangerous when taken out of context. It sounds a bit as if structural, systematic work on quality and safety is something HROs don't do a lot... Which simply is not true (as said above), and this may be the structural shortcoming of this book: the authors focus so much on the things that set HROs apart from most other organizations that they neglect to describe what HROs do beforehand and in addition. HROs do carry out thorough analyses, invest a lot in the competence of their staff, build "defences in depth", and so on… It's not that they're merely alert and ready to act… So it's not a case of "this instead of that", but of "both and", something one might easily misread in these passages.
Actually, I almost get the feeling that the authors fell into the very trap they describe themselves: over-simplification. By focusing so strongly on the five virtues of HROs (listed below), they almost entirely forget to mention what else there is: a complex combination of these processes with other elements such as defences in depth, competence, communication, risk analysis, and so on.
The most important thing that sets HROs apart from other companies, according to Weick and Sutcliffe, is their "mindfulness". They organize themselves in such a way that they are better able to notice the unexpected in the making and halt its development. HROs strive to maintain an underlying style of mental functioning that is distinguished by continuous updating and deepening of increasingly plausible interpretations of what the context is, what problems define it, and what remedies it contains.
As a contrast the authors discuss effects and symptoms of mindlessness. When people function mindlessly, they don’t understand either themselves or their environments, but they feel as though they do. They feel that because they have routines to deal with problems. And they think that this proves that they do understand what’s up. Although there is a grain of truth to that, what they fail to see is that their routines are little more than expectations that are subject to the very same traps as any other expectation. It is impossible to manage any organisation solely by means of mindless control systems that depend on rules, plans, routines, stable categories, and fixed criteria for correct performance. As stated in the book: routines can’t handle novel events.
The point is that you cannot write procedures to anticipate all the situations and conditions that shape people’s work. Anticipation has positive and valuable sides, but it also presumes a level of understanding that is impossible to achieve when one is dealing with uncertain and dynamic conditions. It gives people the illusion that they have things under control and blinds them to the very real possibility that they may have gotten it wrong. It may even be possible that each new formalized procedure makes it that much harder to do the work that is required.
Another possible trap noted by the authors is placing too much trust in planning (they even include a sub-chapter called Why Planning Can Make Things Worse). Their argument is that if you understand the problems that expectations raise, you understand the problems that plans create. HROs do not ignore foresight and anticipation (and thus planning), but they are mindful of their limitations. Planning has some shortcomings. These are partly based on the fact that planners plan in stable, predictable contexts, which lulls them into thinking that the world will unfold in a predetermined manner (Mintzberg’s fallacy of predetermination). Anything deemed ‘irrelevant’ to the plan gets only cursory attention, and thus plans can do just the opposite of what is intended, creating mindlessness instead of mindful anticipation of the unexpected.
According to the authors, mindfulness is reached by HROs through five processes:
1. Preoccupation with failure,
2. Reluctance to simplify,
3. Sensitivity to operations,
4. Commitment to resilience, and
5. Deference to expertise.
Some elaboration on these characteristics of HROs is given below.
Preoccupation with failure through:
- Encouraging the reporting of errors, and elaborating on near-miss experiences for what can be learned from them.
- Not being lulled to sleep: when a system is operating safely and reliably, there are constant outcomes and seemingly nothing to pay attention to (R.D. Haas: “There is nothing as blinding as success”). That, however, does not mean that nothing is happening, even though it is tempting to draw that conclusion. All it means is that the unexpected has not yet escaped containment.
- And although HROs take pride in their success, their feelings of pleasure are short-lived because they know that along with success comes complacency, a temptation to reduce margins of safety, and inattentiveness.
- A well-developed capability for mindfulness. This catches the unexpected earlier, when it is smaller, comprehends its potential importance despite the small size of the disruption, and removes, contains, or rebounds from the effects of the unexpected.
- Early detection as well as early management keeps the disruption from turning into a pressing problem. HROs learn from their mistakes as a result of swift processing.
- A good example/tool for alertness mentioned in the book is the so-called Churchill audit: Why didn’t I know? Why didn’t my advisors know? Why wasn’t I told? Why didn’t I ask?
- When it comes to mindfulness, it’s good to feel bad and bad to feel good. Remember the importance of surprise. Feelings of surprise are diagnostic because they are a solid cue that one’s model of the world is flawed.
Reluctance to simplify interpretations:
- HROs create complete and nuanced pictures (including divergent viewpoints across different areas of expertise, levels of hierarchy, and roles within the organisation). People are encouraged to look across boundaries and to be skeptical of received wisdom.
- HROs are mindful of the question of what they are ignoring.
Sensitivity to operations:
- Normal operations may reveal deficiencies that are “free lessons”. These signal the development of unexpected events. But these lessons are only visible if there is frequent assessment of the overall ‘safety health’ of the organisation.
- HROs are also run differently in the sense that they do not let hierarchies become dysfunctional bureaucracies, they provide everyone with detailed real-time information on what is happening, and they instruct everyone to be on-call to do whatever the ongoing operations require.
- A just culture (see also below) is needed. People (at the lower levels in the hierarchy) who refuse to speak up out of fear (of people way up) enact a system that knows less than it needs to know to remain effective.
- Being sensitive to operations is a unique way to correct failures of foresight.
Commitment to resilience:
- The signature of an HRO is not that it is error free, but that errors don’t disable it. Responses are (quickly) in place and the HRO manages to bounce back to ‘normal state’ reasonably fast after an unexpected event.
- To learn from error and to implement that learning through fast feedback are at the forefront of operating resiliently. Another reason for doing so (besides getting back into business a.s.a.p.): after the unexpected has occurred, there is a danger that (official) stories get ‘straightened out’, actions explained with hindsight knowledge and repeated, and then the learning stops!
Deference to expertise:
- HROs cultivate diversity… and [in case of unexpected events] authority migrates to the people with the most expertise, regardless of rank (the leadership role shifts to the person who currently has the answer to the problem at hand).
- But: HROs don’t simply assign the problem to an expert and then move on, instead decisions migrate down as well as up (in ordinary organisations they are mainly made by the ‘top’). Watch out: As soon as you have a specialist who’s very good, everyone else quits thinking!
- Rigid hierarchies have their own special vulnerability to error. Stonewalling does not manage the unexpected.
- The classic command-and-control bureaucracy is adequate for a stable world, but too inflexible in times of change.
The overwhelming tendency is to respond to weak signals with a weak response; HROs, by contrast, act on weak signals [note here the difference between acting on a near miss and waiting for an accident - cb.]. And HROs are constantly busy with this: what is unique about […] HROs is that they do these things mindfully, in the belief that safety is not bankable. HROs are clear that you can't "fix" the safety problem, store up safety, and then move on to something else. And many HROs have good reason for this, because they are often involved in high-risk technologies (nuclear facilities, medical care, hi-tech research, air traffic control, etc.): safety must be mastered by means other than trial-and-error learning, since in many cases the first error will also be the last trial.
Part of the book discusses in detail the relationship between organisational culture and reacting to the unexpected. Culture is recognized as a very important factor, because it may further your agenda, or it may defeat it. Not surprisingly, after reading the above…
The widely accepted definition of corporate culture is: a shared set of beliefs and expectations that matter to people within an organisation. This includes norms, values, and consensus.
The renowned scientist Edgar Schein (one of the leading experts on corporate culture) advises never to start with the idea of changing culture. Always start with the issues the organisation faces. Always think of the culture as your source of strength: after all, it is the residue of your past successes. Even if some elements of the culture look dysfunctional, remember that they are probably only a few among a large set of others that continue to be strengths. If changes need to be made in how the organisation is run, try to build on existing cultural strengths rather than attempting to change the elements that may be weaknesses.
James Reason lists four elements of [HSEQ] culture that are necessary for people to be informed and safe. These show an interesting overlap with items discussed above:
- Reporting culture - information is shared within all layers of the organisation, bad news and good news.
- Just culture - not blame free in the sense that you will never be ‘punished’ or reprimanded, but a culture where it is possible to speak up and the messenger with bad news isn’t shot. People need to feel safe to report incidents or they will ignore them or cover them up.
- Flexible culture - the organisation quickly adapts to changing demands.
- Learning culture - lessons are learned from both good and bad events, and shared widely throughout the organisation.
The second half of the last chapter is devoted to managing for more mindfulness in your organization, offering a large toolbox of tips.
At the end of the book there’s an interesting quote whose message quite differs from many other management books: Mindful organisations do not seek a decisive victory, merely a workable survival that will allow them to achieve their productive goals for as long as possible.