
After last week’s heavier stuff, this time I’d like to tell a little story about the things that sometimes - with the best of intentions, no doubt - are released onto the world in the name of safety. This is a relatively harmless example, but it comes from a rather successful and reputable European consultancy firm. An example that doesn’t exactly inspire confidence in the scientific soundness of safety work. As a profession, we have to shape up!

I found this rather silly example in a recent book about safety culture and (mostly) behaviour that I was asked to review by an associate. There is lots to comment on in that one, which I will do at a later point. Suffice it for now to say ‘old wine, new bottles’.

One of the building blocks presented in the book is the ABC model of behaviour: Activators, Behaviour and Consequences. The rationale behind the model: people display certain (wanted, or in this case safe) Behaviour because of a trigger that came before (an Activator, or Antecedent) and because of what happens afterwards (Consequences). As simple as ABC, indeed.

The book then tells us that Activators account for less of behavioural change than Consequences do: Activators get between 5 and 20% and Consequences between 80 and 95%.

Let’s stop here for a minute.

Firstly, I think what they mean to say here is that Consequences facilitate permanent or long-lasting behavioural change more than Activators do - but that might be a slip of the pen. Still, I wonder: how would you know, other than that it has intuitive appeal? We do things because of the things that happen to us.

Secondly, let's assume that these numbers are actually based on something substantial (I mean, other than: “We’ve used 5 and 95 earlier in the book, people recognize that. Let’s also take the 80/20 rule - everybody knows that one.”), just for the sake of argument. They mention that the number “depends on the source”, but the sources are not clearly specified (hidden further on in the book they do point to some sources, but when I tried to verify some of them, the results were Zero).

Thirdly, let's also ignore the crudeness of the ABC model (article coming soon). And, fourthly, we must not forget to ignore that the book takes the liberty of redefining A and (to a lesser degree) C.

Having done that, we proceed to the next chapter, where we are treated to a discussion of positive feedback (a form of Consequences). We are encouraged to use it because, as we saw, acting on Consequences is 80% more effective than acting on Activators and, by the way, positive feedback is 80% more effective than criticism and other forms of feedback.

I’ll leave it to you to spell out what’s wrong here - and I’m not even talking about the question of where that final 80% came from; by then I had already disconnected (alas, I dutifully had to struggle through another 140 pages).
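
For those who do want a hint, here is a quick back-of-the-envelope check, taking the book’s own split at face value (and assuming the numbers mean anything at all): if Consequences account for 80% of behavioural change and Activators for 20%, then Consequences are 80/20 = 4 times as effective - that is, 300% more effective, not “80% more effective”. The 80% only appears if you confuse “accounts for 80% of the effect” with “is 80% more effective”.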

No sources, most likely fantasized numbers and shoddy basic maths… What did I say about shaping up?!

Ow, crap, I did it all wrong - this wasn’t positive feedback, after all… Well, the book looks great and is colourful, how’s that?

 

Also published on Linkedin.

 

Originally, I was working on a piece about the ‘relevance’ of outcomes. Reading Ben Goldacre’s “statistics toilet book”, however, I came across the term Surrogate Outcomes. An interesting subject, and so ‘relevance’ went on hold for a while. Surrogate Outcomes are when you measure something other than what you are really interested in, because the one thing is hard to measure and you know or assume that the other thing is correlated with it. Based on your measurements you then draw conclusions about the first thing. I had heard the term before, but this time it struck a chord and I decided to look a bit deeper into the subject.

Surrogate Outcomes

In clinical trials, a Surrogate Outcome (or Surrogate Marker) is a measure of the effect of a specific treatment that may correlate with a real clinical endpoint, but does not necessarily have a guaranteed relationship with it. Surrogate Outcomes are used when the primary endpoint is undesirable (such as death), or when the number of events is very small, making it impractical to conduct a clinical trial that gathers a statistically significant number of endpoints.

One well-known example to illustrate Surrogate Outcomes is cholesterol. It is known that higher cholesterol levels increase the likelihood of heart disease, but the relationship is not linear. Many people with normal cholesterol levels develop heart disease, while many others with high cholesterol do not. The primary endpoint in this case is ‘death from heart disease’, but ‘cholesterol level’ is used as the Surrogate Outcome. A clinical trial may show that a particular drug is effective in reducing cholesterol, without showing directly that it prevents fatalities.

Because we have problems measuring Safety directly (huge problems, in fact), we use quite a lot of Surrogate Outcomes to measure our efforts. Somewhat confusingly, maybe, after reading the general explanation above: one of the most commonly used markers is the number of fatalities. Let’s look at a real-life example to see how this works and what pitfalls we may encounter.

The Case

Last week, a news report on our intranet caught my attention. It was about the Road Safety PIN Award 2016 for Outstanding Progress in Road Safety, awarded to Norway by the European Transport Safety Council (ETSC). The article echoed the press release of the National Road Administration (Statens Vegvesen, SVV) that claimed that Norway was honoured with a European award for the safest roads in the world.

Let’s be generous and not dwell on the obvious mistake in the title of SVV’s press release, which is likely a slip of the pen by an over-enthusiastic PR consultant, extrapolating Europe to a global scale. Let’s instead focus on other elements. Without the possibility of going in depth (road safety is a fairly complex, yet fascinating, subject that would require much more study), I’d like to address some important points.

How to Measure Safety?

Now this is a difficult question, and one for which, to my knowledge, no one has managed to find a satisfactory solution. This does not stop people from pretending that they can, and so we find a wide selection of statistics everywhere. The most common way is of course counting the number of fatalities or injuries. Ignoring for the moment the dilemma of whether one should measure something by its absence, I think that the biggest problem lies in the certainty with which some people conclude that the absence of accidents (or negative outcomes) means that something is safe.

That is clearly a logical fallacy. It is a basic ground rule in safety (but one that is little understood and practised by professionals, politicians and the public alike) that while the occurrence of accidents can mean that there are problems with regard to safety, the absence of accidents does NOT mean that things are safe. You can achieve the same result through pure chance, luck or under-reporting, for example.

There is another major problem, namely the relative randomness of consequences like fatalities and injuries. In road safety, the difference between life and death, or between serious and light injuries, may lie in things like what kind of traffic participant you are, the type of car, speed, the angle at which you are hit, and obviously the number of people involved in the accident. A much better indicator of safety than the number of outcomes would be the number of accidents (e.g. collisions). We cannot tell from the data, but maybe there are just as many accidents in Norway as in previous years and it is just through some of the aforementioned factors that fewer people die?

It is hard for organisations like the ETSC to get good data. The ETSC bases its report on the numbers of fatalities (and serious injuries) that each country reports. These encounter pretty much all of the problems above, and some more. Even though I doubt that many fatal accidents go unreported, some countries have shaky registration routines, some only register certain accidents, and definitions of what a serious injury is differ from country to country. Information about the number of accidents seems to be unavailable. I tried to find it for Norway, but I fear that this information is spread over many players, including the Police, the National Road Administration, municipalities and insurance companies (even Statistics Norway does not have the info, I checked), and minor incidents probably go entirely unreported.

Toying with Relatives

What I often find problematic is how reports like this toy around with relative numbers. Even though I often prefer relative numbers to absolute numbers, they can also contribute to confusion or paint a more positive (or negative, for that matter) picture than the underlying data warrant. Be wary when you are presented with a series of percentages that all serve to support a certain point of view. As Gerd Gigerenzer has taught us, we always have to ask “percentage of what?”. And if you are suspicious, just check.

I am not saying that the ETSC has done something obviously wrong when they first write that there was a 44% decrease between 2010 and 2015 and then continue with “an impressive 20% drop in 2015 compared to 2014 levels”. It is just a bit redundant, or unclear what they want to say. Most likely, they are trying to stress the good news. That message can be constructed in a number of ways, however. Look at the statistics and you can calculate that there was a 20% reduction in 2015 compared to 2012 levels as well.
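
As a small illustration, here is a minimal sketch of how one and the same series supports several different framings. The figures and the little helper function are illustrative only - roughly consistent with the percentages quoted above, but not official data; check the actual ETSC and Statistics Norway numbers yourself before quoting anything.

```python
# Illustrative (approximate) annual road fatality figures - not official data.
fatalities = {2010: 208, 2012: 145, 2013: 187, 2014: 147, 2015: 117}

def change(year_a, year_b):
    """Relative change in fatalities from year_a to year_b, in percent."""
    a, b = fatalities[year_a], fatalities[year_b]
    return 100 * (b - a) / a

print(f"2010 -> 2015: {change(2010, 2015):+.0f}%")  # roughly -44%
print(f"2014 -> 2015: {change(2014, 2015):+.0f}%")  # roughly -20%
print(f"2012 -> 2015: {change(2012, 2015):+.0f}%")  # also roughly a 20% drop
print(f"2012 -> 2013: {change(2012, 2013):+.0f}%")  # roughly +29%
```

Pick your baseline year and you can make the same numbers sound more, or less, impressive.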

As we saw, outcomes can be seemingly random and fickle. They can just as well go up again. The Norwegian minister of transport acknowledges this. He is pleased about the praise from the ETSC, which says that “the best reductions were reached in Norway, where the number of road deaths decreased by 20%...”, but appears to be better briefed than the Council. “The number of accidents on Norwegian roads will vary due to randomness from year to year”, he said.

Indeed, the 20% reduction may sound like a lot, but from 2012 to 2013 there was an increase of over 40 fatalities (29%, just to confuse you further with percentages). Easy come, easy go. In situations where the variation can be so large, flinging around relative numbers compared to the previous year has little or no value, and one should rather observe long-term trends. Still, this practice is very common in many safety reports.

More Surrogate Markers

I do not know how the concept of leading and lagging indicators goes together with Surrogate Markers, but let’s just try and maybe start a discussion. If I am entirely missing the point, please point this out to me!

Judging from their use in clinical trials, Surrogate Markers tend to be leading indicators: lower cholesterol should lead to a lower chance of heart disease. Fatalities or accidents, however, are clearly Lagging Surrogate Markers for road safety. Apparently, the ETSC also uses Leading Surrogate Markers. Their press release states that “Declines in the level of police enforcement of traffic offences are contributing to Europe’s failure to cut the numbers dying in road collisions”, and continues “In a separate report on enforcement, ETSC found that, in over half the countries where data is available, the number of tickets issued over the last five years for use of a mobile phone while driving has reduced, suggesting lower levels of enforcement across Europe”.

In this case enforcement is seen as a Surrogate Marker for safety (defined as ‘fewer fatalities’), because it is assumed that more enforcement leads to better compliance with traffic safety regulations, which leads to fewer accidents, which leads to fewer fatalities and better safety. This reasoning makes intuitive sense to many people, but it is not without problems, because things are not always that linear. More enforcement can also lead to a great deal of keeping up appearances, and after the control post is passed people speed up just an extra bit to make up for ‘lost time’.

There are other problems. The press release tells us: “Sweden, The Netherlands and Finland are among countries that have reported falls in speeding tickets issued”. I do recall that several years ago the Dutch Police (and without any doubt Police forces in other countries too) had specific numerical goals for the number of speeding tickets per year. I do not know if they still do, but abolishing these ridiculous goals would probably lead to a different focus (most likely whatever new political goal they have been given). Speaking of focus, complaining about reduced traffic enforcement clearly ignores other priorities that might just be a bit more important for society right now - like terror attacks and dealing with the largest wave of refugees in Europe since Attila the Hun.

Besides, speeding tickets are themselves nothing but a Surrogate Marker, because it is assumed that they say something about speeding behaviour. Of course, there are other possible reasons for fewer speeding tickets, like better general compliance with speed limits (not that I seriously believe this is the case, but hypothetically speaking). Interestingly, this too is a marker that tries to measure something (traffic safety, or rather compliance) by its absence…

Like it or not, unless someone comes up with a brilliant way to measure safety as the ‘real deal’, we will have to work with Surrogate Outcomes for the time being (and probably for all of the foreseeable future). This is okay as long as we understand the limitations and communicate within those limitations.

Where Is the Systems View?

There is one more comment that I must make. The cry for more enforcement is basically a claim that the system would be safe if it were not for those stupid and non-compliant people in it. The title and the message of the press releases mentioned above also reflect this view. The SVV claims that the award is about “the safest roads” (not so strange - after all, that is what they are all about), while they probably should have talked about the traffic system as a whole.

Because, are the roads really so safe? Intuitively, I would say Danish roads are much safer than Norwegian ones. And compare the German Autobahn to motorways in Norway: you can almost count the proper Norwegian motorways on one hand (even though there has clearly been improvement in recent years). Many Norwegian roads can be characterised as narrow, there are many tunnels (many of them also pretty narrow), the roads go through challenging landscapes, they are strongly affected by weather and seasons, and do not underestimate the presence of rather large wild animals. Speaking of those: collisions with large animals like moose or reindeer rarely lead to fatal accidents (or serious injuries for people), but they do lead to major damage and have a rather high potential. Although there are many of these accidents, I do not think they are reflected in the ETSC numbers at all, because they lack the particular outcomes.

So, when we talk about safety on the roads, this might in many cases be thanks to the drivers (who adjust their speed to conditions and handle the constant variability in a rather good way) and not despite the drivers. Another important factor is certainly the improving quality of cars, and the ability of many Norwegians to afford them.

It would be fun to do a proper study of all these factors, and I would not be very surprised if the findings echoed many of the remarks that John Adams made in his brilliant book almost 25 years ago. Adams also mentioned an issue that was raised by SVV Director Guro Ranes: “We are concerned about the negative trend for seriously injured pedestrians and cyclists”. Adams questioned the fact that much attention went to protecting the best-protected traffic participants (car drivers). This only led to riskier behaviour, while there was little attention for the weaker parties.

Safe, or Not Safe?

Having made all these critical remarks, it must be said that there appears to have been a steady decline in fatalities since the 1960s. The number of (light and serious) injuries also seems to show a steady decline. If you feel like it, you can download the numbers from Statistics Norway and play with them to check for yourself (as I did).
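
If you want a starting point for that kind of playing around, here is a minimal sketch of looking at the long-term trend rather than single-year jumps. It assumes you have exported an annual series from Statistics Norway to a CSV file; the file name, the column names and the 5-year window are my assumptions for illustration, not a real Statistics Norway format.

```python
# A minimal sketch of "playing with the numbers yourself".
# Assumes a CSV with columns "year" and "fatalities" (assumed names).
import csv

years, fatalities = [], []
with open("road_fatalities_norway.csv", newline="") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        fatalities.append(int(row["fatalities"]))

# A 5-year moving average smooths out year-to-year randomness and makes the
# long-term trend easier to judge than single-year percentage changes.
window = 5
for i in range(window - 1, len(years)):
    avg = sum(fatalities[i - window + 1 : i + 1]) / window
    print(f"{years[i]}: 5-year average = {avg:.0f}")
```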

I cannot say what has led to this trend, but there has been a lot of good and serious work on road and traffic safety, so one can assume that, from a variety of measures, at least some have had positive effects. Has the traffic system become safer? We have some circumstantial evidence pointing that way, but I would hold back the hallelujah stories, because there are still many hazards, and also worrying new ones, like for example a growing number of East European trucks in a doubtful state, with unfit tyres, etc.

As an aside, doing a bit of research on the web I found an interesting British take on the matter, commenting that they were the second safest, but were ‘punished’ for not making more progress from an already safe situation. Well, yet another argument to leave Europe…

A Positive, Yet Critical, Conclusion

One might wonder why some Safety People (among whom yours truly) appear to be so grumpy. Can we not just be happy that there is a low number of fatal accidents? Should we not celebrate a low number of traffic fatalities? Yes, we should, because it is good news! However, should we also conclude that we are the safest based on some outcome number? No! We just cannot tell without additional information! So please learn to be reluctant to draw quick and easy conclusions, even if they are flattering.

Road traffic is a complex system where safety is created by (or emerges from) a large number of factors and their relationships. Do not give in to over-simplification (especially illogical forms), and whenever you see positive numbers also ask for the Bad News and/or look for evidence that disproves a Good News hypothesis. Confirmation can be (too) easy. Trying to falsify may be harder, but it will make your findings more robust and valuable!

 

Postscript:

Not even a week went by, and one of the messages above was confirmed - fatality levels on Norwegian roads were much higher in the first half of 2016 than in the same period the previous year. Sometimes we just hate being right...

 

Also published on Linkedin.

If you set out to find Safety Myths and Misunderstandings, incidents and causes are a rich field. Quite a few from that area can be found in my book, and here is another illustration that I came across earlier this week.

There was this survey, intended to test some hypotheses about the human factor in safety and collect data for a ‘scientific’ article about these hypotheses. Information was gathered through an online survey tool that questioned safety professionals about the most recent incidents they were involved in as investigators.

Participants were asked to what degree four factors (‘causes’) had contributed to the incident. Participants had to assign a score between 0 and 100% to each factor, with a total for the four factors of 100%.

……% Technical

……% System

……% Culture

……% Behaviour

At this point I stopped reading and decided not to participate. I wondered if I should ask the person behind the survey (of whom I had a rather high opinion - and so I did engage in a discussion; results pending) to stop this nonsense, because it is wrong on so many levels. Just a few issues:

  • Obviously, this is an oversimplification. Why these four factors? What model are these four factors based on? What about other factors, for example external factors, like forces of nature?
  • What level of analysis of the incidents are we talking about anyway? Behaviour is usually very downstream (close to the incident) while culture and system tend to be more upstream (greater distance to incident). 
  • How do you ensure that this is going to be (somewhat) representative research? How do you ensure the correct scope and applicability of the findings? How do you ensure that the incidents in the survey are (somewhat) representative? Is it okay to include complex cases and more trivial OHS incidents in the same survey? And how would you know? The only indication one can get is from the first questions, which differentiate between near misses and accidents and ask for the outcome of the incident. But, mind you: outcome says nothing (!!) about a case's complexity.
  • How are you going to ensure interrater reliability such that each participant answers more or less from the same frame of mind and definitions? Besides, the definition of some terms (especially culture) was rather questionable.
  • What is the quality of the investigations this survey is going to be based on? Typically many investigations stop when they come across a convenient cause (often behaviour) and don’t look further.
  • What is the competence of the people who are going to answer? The survey was open on the www, so anyone could participate.
  • How are participants supposed to judge the contribution of a causal factor? And how would you know their method? Will they be counting how often a factor is mentioned? (which is useless and arbitrary) Will they do this by assigning a measure of importance? If so, does that mean that something on a higher organisational level should have more weight? Or the reverse, should something closer to the accident have more weight? What about underlying factors that affect more than one other factor? And so on.

This only scratches the surface of my concerns. Online newspapers do opinion polls at this level, and the quality that comes out of this survey will probably not be any better than that. So please don’t come and call it scientific research.

Before even the first word of the report is written, I suspect that the results are going to be an extremely arbitrary opinion poll at best. Most likely, however, they are going to be extremely questionable. It might have been better to take a generally “accepted” number with a more or less known bias (like Heinrich’s infamous 88%) than to come up with a shaky survey that will only lead to even more confusion - even if it were to “find” the opposite of what Heinrich wrote many decades ago.

It is important to look into the role of human factors in safety and incidents. But please, let’s do it properly and not by means of some make-believe research. The First Law of Quality Assurance applies in full: Shit in = Shit out. Things like this will do harm to safety, firstly because of the confusion they may create, and secondly because it looks like safety people can’t get basic science right.

--- --- ---

Having said that… Of course, there is also a different possibility: the mentioned survey is not about incident investigation at all, but about the biases and gullibility of safety professionals… I hope it is the latter (and even then some of my concerns apply), but fear it is the former.

--- --- ---

If this has tickled your interest in Safety Myths and Misconceptions, then make sure to check out the book Safety Myth 101. In the book, you can find 123 (and then some) Safety Myths, including explorations of their causes and suggestions for an alternative, better approach.

Among others the following Myths are relevant to the discussion above: #20 (understanding research), #83 (behaviour and human error), #91 (investigation and stop rules) and very much #100 (counting causes). Enjoy!

 

Also published on Linkedin as a follow-up to the post Safety Science? If Only!

Stian Antonsen, from SINTEF and NTNU, gave an interesting presentation at our HSE seminar at Hell (which is a place near Trondheim, in case anyone wonders) in May 2016. Here is a brief overview of his presentation, in which he discussed safety culture from a general, theoretical point of view, with many examples from practice and from accidents.

Since the 1980s, culture has been seen as one way to ‘excellence’, and, looking at it from the other side, culture has been seen as a cause of accidents. This has led to a kind of parable - an easy explanation that things went wrong because of the culture.

The Chernobyl accident was the first where culture was pointed out as a contributing causal factor. The culture there was characterised by, among other things, the mission to produce electricity, no matter what. It was also difficult to question the decisions of a superior. Several accidents in the years that followed saw culture mentioned as a causal factor: Challenger, Clapham Junction and Piper Alpha. The report on the latter even chose to express its view on the culture in normative language.

In Norway the term HMS kultur (HSE culture) has even found its way into legislation for the oil and gas sector. §15 of the Rammeforskriften says that (my approximate translation):

A good culture for health, environment and safety that includes all phases and areas of activity shall be promoted through continuous work to reduce risk and improve health, environment and safety.

This can be problematic, because how can one enforce and comply with something as ‘vague’ as culture? (The regulator has published a guidance document on the subject, which can be downloaded for free.)

It’s organisational culture one works on, anyway. There is no such thing as safety culture - that is merely a sticker we stick on elements of organisational culture related to safety.

What is culture? One often-used simplified definition says that culture is "the way things are done around here", which points towards some important elements of culture. There is a practice (‘the way things’), behaviour (‘are done’) and it happens within a group (‘around here’). It is interesting that behaviour is both a cause and a consequence of culture. Experiences are important - they affect the practice that evolves.

Culture is something that is found between the lines, and it is a very powerful social mechanism. Culture is a frame of reference; it’s like a pair of glasses that one sees through and that therefore affects how you look at things.

Some seem to think that culture and attitudes are almost synonymous. While ‘attitudes’ are found in the heads of people, ‘culture’ is something that happens between people. Culture can get us to abandon our own attitudes… (see for example the Milgram experiments). Often one sees that so-called culture improvement actions are directed at attitudes through awareness campaigns and the like. Attitudes are important, but the possibility of affecting them through campaigns, and the effect of such campaigns, is very limited. One should rather spend the effort on other actions.

Culture and power is a subject that is relatively little discussed. Perrow did not discuss culture explicitly in his work because he found that when you look into accidents you will usually find pressure connected to power in the causal chains (e.g. the Challenger accident). The dynamics of power are important for culture(s), within a group and between groups.

Can culture be used as a tool for change? There are "culture optimists" around who actually think that culture is a tool for change. Stian is very sceptical about this. As he said: it is not possible to turn culture a bit up or down… Quick-fix management literature gives this impression, which often borders on brainwashing. Where does the line go between organisational development and manipulation? It is as if some organisations move the border from controlling behaviour to wanting to control "hearts and minds".

There are some serious limitations with culture as a tool for change:

  1. Organisations will include several different cultural units.
  2. Culture is a side-effect of social interaction. Culture changes through interaction and NOT through attitude campaigns, fiery speeches and noble visions. These can give short-lived positive feelings, but there is always a Monday…
  3. Culture is not something that can be managed. But one can affect culture’s ”conditions for growth”.
  4. You cannot decide beforehand what culture should look like. The big One Size Fits All programs have only a small likelihood of creating real change. Rather, use resources for many small drops close to people’s everyday work over a longer period instead of one HUGE program. (And you should not let consultants run the process either - do things yourself.) Culture changes are often set in motion Top Down, but in general these things work Bottom Up.

The objective should not be to create a common culture. There will always be different cultures, and the differences are actually necessary. An objective should be to facilitate a common language, communication and understanding between groups. And it is essential that what is said and what is done are aligned: Walk The Talk.

Stian has written a very fine book about safety culture: Safety Culture: Theory, Method and Improvement.

 

Culture is one of the most abused words these days, in management speak in general, but in the safety world in particular, I think. When a word is mentioned so often, by so many people, and especially as the problem or solution for almost everything, your buzzword radar should raise red flags all over, and the wise thing is to adopt some healthy scepticism.

The other day, I read a new article from Safety Science by Sidney Dekker and Hugh Breakey about Just Culture. An interesting paper, about improving safety by achieving substantive, procedural and restorative justice. You should be able to download it from Science Direct (http://dx.doi.org/10.1016/j.ssci.2016.01.018).

A very interesting comment is ‘hidden’ in the footnotes of the paper (interesting comments often are - I know many people ignore them, but I tend to follow them up, because many worthwhile side-tracks are usually found in them):

“Whereas social science has gradually abandoned culture as a prime-moving mechanism of social life, safety science has embraced an almost nineteenth-century certainty about the importance of culture to the social and organizational order (Guldenmund, 2000; Myers et al., 2014). Safety science tends to follow the functionalist tradition of management science and organizational psychology, where culture is seen as something that an organization has – a modifiable or exchangeable possession or property which can be mapped with quantifiable data gathered through e.g., surveys.”

There are several interesting comments in this footnote. Let’s highlight some. 

What struck me is that some sciences appear to have discarded the concept of culture as a driver, while in the safety world it has been embraced as a solid fact. Does this mean that culture is very much One Big Myth? Or at least that the influence of culture maybe isn’t as big as we tend to assume? Interesting, and worth exploring at a later point in time. For now I don’t want to leave the concept entirely, because it does have a strong appeal, and I really need more information than just a mention in a footnote. But at least this should be a hint to handle the term a bit more carefully.

Whether culture-as-a-mover is a Myth or not, there is more than enough Culturebabble going around. As the authors correctly observe, the predominant view on culture within safety is a functionalist, normative view. Culture is generally seen as something that can be measured and managed, like other aspects of business.

Just one random example, picked from many possible options, is the summary of a safety student’s master’s thesis that I came across a month or two ago. The student had looked into the implementation of a Safety Program (note the capitals; of course there was a Bradley curve in the thesis, as well as a reference to the corporate goal of reaching a World Class Safety Level of Zero Accidents) at a chemical plant. The average score in a perception survey had gone from 3.5 to 3.8. Without explaining what that meant or what the context was, this was seen as proof that safety culture had improved, thanks to the Program.
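
Just to show the kind of context that was missing, here is a minimal sketch of a sanity check one would at least want before declaring a shift from 3.5 to 3.8 an improvement: how many people answered, how spread out the answers were, and whether the difference is distinguishable from noise. All response values below are hypothetical, invented for illustration - they are not from the thesis in question.

```python
# Hypothetical survey responses (Likert-style scores), not real data.
from math import sqrt
from statistics import mean, stdev

before = [3, 4, 3, 4, 3, 4, 4, 3, 3, 4, 4, 3, 4, 3, 4, 3, 4, 4, 3, 3]  # mean 3.5
after  = [4, 4, 3, 4, 4, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 3]  # mean 3.8

diff = mean(after) - mean(before)
pooled_sd = sqrt((stdev(before) ** 2 + stdev(after) ** 2) / 2)
standard_error = pooled_sd * sqrt(1 / len(before) + 1 / len(after))

print(f"difference: {diff:.2f}, standard error: {standard_error:.2f}")
# Only if the difference is comfortably larger than a couple of standard
# errors (and the survey itself is any good) does the change mean much -
# and even then it says nothing about *why* the scores moved.
```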

This is not a stand-alone example. Alas, it is a rather common modus operandi in the safety world. And while I find this approach highly questionable, I do want to note that even these applications can have positive effects and drive forward some improvement. But they also contain risks, because positive results from such a simple survey can lead to complacency or a reduced sense of urgency, and eventually to more severe direct consequences.

The functionalist view also reigns supreme outside of safety, by the way. Only a few weeks later, I was in a meeting where a high-ranking HR manager thought that it was possible (and perfectly alright) to draw conclusions about the organisational culture based on a year-old employee satisfaction survey, without any additional work. She even suggested culture as a KPI, and it boggles my mind even today to think how this would be defined or managed.

Just a closing thought. As Sidney and Hugh write in that footnote, in safety, culture is often “seen as something that an organization has”. Interestingly, most people use the (heavily simplified) definition “Culture is how we DO things around here”. That definition should rather fit the Interpretive approach, which sees culture as an emergent property of an organisation - something the organisation does rather than something it has. This just goes to illustrate that most people really don’t know what they are talking about and are only culturebabbling something that sounds interesting, trendy and somewhat sciencey.

 

My recently published book on Safety Myths contains a relatively short chapter on (Safety) Culture, featuring eight Myths ranging from “We Have Been Doing This for 30 Years”, culture and compliance, positive mind-sets and top-down culture changes to the certification of culture, responsibility and the question of whether toilets tell us something about culture. And of course there is plenty of attention for buzzwords throughout the book!

Find more information about the book, including an overview of its contents and how to order on:

http://www.mindtherisk.com/the-book

 

This blog was also posted on Linkedin.