Background

My copy of this 2012 book is graced by an enthusiastic quote from Wikipedia founder Jimmy Wales. I don’t share his enthusiasm, but that may be because I’m in another line of work, have been exposed to other literature about risk, and am familiar with much of the material. A first reading left me rather skeptical about the book, so I revisited it about a year later and re-read it.

I was mainly skeptical about Evans’ evident love of expressing probabilities in numbers (e.g. “you need to put a number on it” (p. 3)), but more about that later. Let’s first do a chapter-by-chapter walkthrough. I’m building this very much on direct quotes from the book, some paraphrasing, and occasionally I’ll add my personal thoughts, comments and observations, clearly marked as such between square brackets [cb: blablabla].

The book has 288 pages, of which about 200 are effective main text; four appendices about the Risk Intelligence test (which you can also try online) and the research data, together with relatively extensive notes, fill the last 20% of the pages.

1: Why risk intelligence matters

Evans defines Risk Intelligence as the ability to estimate probabilities accurately. At the heart of risk intelligence lies the ability to gauge the limits of your own knowledge - to be cautious when you don’t know much and to be confident when you know a lot. He starts the book by illustrating the subject with a number of examples from everyday life and work situations.

In recent years one encounters the so-called ‘CSI Effect’: people expect more categorical proof than forensic scientists can deliver. But: science rarely proves anything conclusively. Rather, it gradually accumulates evidence that makes it more or less likely that a hypothesis is true.

Handling risk and uncertainty is a problem for many people. One reason is that the illusion of control is a key factor in risk perception. An example is homeland security and the reaction to terrorism. One of the principal goals of terrorism is to provoke overreactions that damage the target far more than the terrorist acts themselves, but such knee-jerk responses also depend on our unwillingness to think things through carefully.

Global warming and climate change is another example that often leads to extreme viewpoints, about which Evans says: “Both kinds of exaggeration seriously hamper informed debate. Without the tools to understand the uncertainty surrounding the future of our climate, we are left with a choice between two equally inadequate alternatives: ignorant bliss or fearful overreaction”.

There are always risks on both sides of a decision: inaction can bring danger, but so can action. Precautions themselves create risks. No choice is risk free.

Sunstein has argued that “in the face of fearsome risk, people often exaggerate the benefits of preventive, risk reducing, or ameliorative measures”. When a hazard stirs strong emotions, people also tend to factor in probability less, with the result that they will go to great lengths to avoid risks that are extremely unlikely. Psychologists refer to this phenomenon as “probability neglect” and have investigated it in a variety of experimental settings.

It’s a big mistake to think that we can offload the responsibility for risk intelligence. Indeed, research suggests that many experts have quite poor risk intelligence. Research into forecasting shows that knowing a little helps a bit, but knowing a lot can actually make a person less reliable. The financial crisis illustrated all too well the problems of relying too heavily on computer models: it showed the vital importance of more nuanced human risk intelligence in alerting us to risks even when the data tell us not to worry.

Most of us aren’t comfortable with or adept at making judgments in situations of uncertainty, and this is largely due to our reluctance to gauge the limits of what we know.

2: Discovering your risk quotient

Beliefs are rarely black or white, and usually come in shades of gray. Evans defines Risk Intelligence as the ability to estimate probabilities accurately. Probabilities permit you to express your degree of belief in relatively precise numerical terms, and being able to do so is a key component [cb: according to Evans] of risk intelligence.

In essence risk intelligence is about having the right amount of certainty. Experts often think they know more than they really do. People who seek out diverse sources of information are likely to have higher risk intelligence than those with a narrower cognitive horizon.

Risk is more than danger. This is only one side of the coin, the other side is opportunity. There are upside risks and downside risks. According to Funston, risk intelligence is “the ability to effectively distinguish between two types of risk: the risks that must be avoided to survive by preventing loss or harm; and the risks that must be taken to thrive by taking competitive advantage”.

Many confuse risk intelligence with risk appetite (also commonly known as risk attitude), but the distinction is vital. Risk intelligence is a cognitive capacity, an intellectual ability. Risk appetite is an emotional trait. It has to do with how comfortable people are with taking risks - that is, with exposing themselves to greater danger in order to reap greater reward. Risk appetite governs how much risk you want to take, while risk intelligence involves being aware how much you are actually taking.

The author then discusses calibration curves and a Risk Intelligence test to illustrate how risks are judged and misjudged. Research by Keren from 1987 supports the view that estimating probabilities for the same kind of event repeatedly boosts risk intelligence [cb: the book does not explain if this is just for this type of event or in general, something which I think is highly relevant].
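To make the calibration-curve idea concrete, here is a minimal sketch (mine, not Evans’) of how such a curve could be computed from a list of probability estimates paired with actual outcomes; the ten-bucket split and the function name are my own choices.

```python
# Minimal sketch (not from the book): points on a calibration curve from
# (estimated probability, actual outcome) pairs.
from collections import defaultdict

def calibration_curve(estimates, n_buckets=10):
    """estimates: list of (p, outcome) with p in [0, 1] and outcome True/False.
    Returns one (mean estimate, observed hit rate) point per non-empty bucket."""
    buckets = defaultdict(list)
    for p, outcome in estimates:
        buckets[min(int(p * n_buckets), n_buckets - 1)].append((p, outcome))
    curve = []
    for idx in sorted(buckets):
        pairs = buckets[idx]
        mean_estimate = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = sum(1 for _, o in pairs if o) / len(pairs)
        curve.append((mean_estimate, hit_rate))
    return curve  # perfect calibration: hit_rate equals mean_estimate everywhere

# Three forecasts made with "90% confidence", of which only one came true:
print(calibration_curve([(0.9, True), (0.9, False), (0.9, False)]))
# roughly [(0.9, 0.33)] -- a point well below the diagonal, i.e. overconfidence
```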

The ability to discriminate more finely between extreme probabilities is often what sets the most risk intelligent people apart from those who are less proficient.

Evans severely criticizes scoring methods used by NASA and the US Department of Health and calls them “deeply flawed for a number of reasons. The first and most fundamental problem is that they are only as good as the initial estimates (usually provided by experts) that serve as input; if you put garbage in, you get garbage out” and “these methods fudge matters by using verbal scales instead of asking the user for numerical scales” [cb: this appears to be an important issue for Evans with his numerical focus. He doesn’t explain this stance towards verbal scales in this chapter, but returns to the subject later; with an only partly satisfying explanation as far as I’m concerned].

According to Evans rapid raising of awareness is the low-hanging fruit to improve Risk Intelligence. By becoming aware of our general tendency to be either overconfident or underconfident in our risk assessments and learning about a key set of underlying reasons why our brains so often lead us astray, we can go a long way toward correcting for the most common errors.

3: Into the twilight zone

Risk intelligence works in the grey area between certain knowledge and total ignorance. The psychological trait of ambiguity intolerance leads people to respond to novelty, complexity and uncertainty in a number of ways that undermine risk intelligence, such as seeing things too starkly as either black or white, and reacting to ambiguity with feelings of uneasiness, discomfort, dislike, anger, and anxiety that intrude on rational assessment. For that reason people who can’t tolerate ambiguity are unlikely to develop a high level of risk intelligence.

It’s inaccurate and confusing when the media use “uncertainty” as a euphemism for “increasing certainty that things are going to get worse”.

When the desire for an answer becomes overwhelming, any answer, even a wrong one, is preferable to remaining in a state of confusion and ambiguity.

The fallacy of worst-case thinking: Many people react to the threat of climate change by either blocking it out altogether or becoming desperately concerned about it. It is hard to occupy the middle ground: treating global warming as a serious threat without freaking out about it.

Worst case thinking is so dangerous because it substitutes imagination for thinking, speculation for risk analysis, and fear for reason. By transforming low-probability events into complete certainties whenever the events are particularly scary, worst-case thinking leads to terrible decision making. It ignores half of the cost-benefit equation.

The all-or-nothing fallacy: Despite the old saw that “absence of evidence is not evidence of absence”, in many cases it is. It can often be safely assumed that if a certain event had occurred or a particular thing existed, evidence of it would probably have been discovered. In such circumstances it is perfectly reasonable to take the absence of evidence as positive proof [cb: I would prefer to say: strong indication] of its nonoccurrence or nonexistence. [cb: but no guarantee - see the discovery of actual black swans].

It is fair to ask for evidence before believing some new claim, but a stubborn refusal to believe something unless the evidence is overwhelming is a different matter. Such stubbornness suggests that the demand for evidence is merely a smoke screen and that no amount of evidence will ever really suffice. If I can’t disprove something to your satisfaction, either you know something I don’t or you have set the bar way too high.

In addition to fostering the idea that something is false just because it can’t be proven with 100% certainty, the all-or-nothing fallacy also engenders the opposite mistake: assuming that something is true just because it is possible. This strategy, the argument from possibility, usually exploits the ambiguity of words like “may” or “might”. It is not uncommon to find people exploiting this vagueness by first getting someone to concede the mere possibility of something and then expanding the beachhead by assuming that a high degree of probability has thereby been established. The argument from possibility is often used by journalists to trick interviewees into giving an appearance of assent.

If it is possible to prove something without removing all possible doubt, might it not also be possible to know something without being completely certain and even to believe something without complete conviction?

Learning to feel comfortable in the twilight zone - developing negative capability - is crucial in many areas of life, where certainty is rarely justified. Developing risk intelligence requires getting the balance right, steering between the extremes of uncertainty tolerance and endless calculation.

4: Tricks of the mind

This chapter is mainly about biases and heuristics and draws a lot on the work of Kahneman and Tversky. [cb: One may regard it as a ‘quick version’ of “Thinking Fast And Slow”, but I would clearly recommend Kahneman’s book over this. But, by all means, if your time is limited you can do much worse than read this chapter to get a quick introduction or small refresher].

A heuristic is a rule of thumb, or cognitive shortcut, which the mind relies on to help it perform a task more easily. A bias is a tendency to make mistakes of a particular kind - not random errors, but ones that are systematically skewed in one direction. [cb: I should check what Kahneman, or Gigerenzer, use as the definition, but I would personally define bias without ‘mistake’ – isn’t it simply a tendency to see things in a certain way? Often connected to ‘mistakes’, sure, but not necessarily always].

Most of the time heuristics work pretty well, but under certain circumstances they lead to mistakes. Just as the heuristics of the visual system sometimes lead to optical illusions, the cognitive shortcuts we use to estimate probabilities can lead us to make systematic errors.

Heuristics don’t always lead to biases; they often work pretty well and lead us astray only under certain conditions. And not all biases are caused by heuristics misfiring; some are “brute biases” that are always on, regardless of what conditions obtain in the environment. One such bias in the risk system is the tendency to overestimate the probability of pleasant events: optimism bias.

Availability heuristic: When estimating the likelihood of a future event, we scan our memory for something similar, and base our estimate on the ease of recall. If recall is easy, we assume that the event is likely; if it is hard to think of anything like it, we assume that the event is improbable. The availability heuristic is a good trick because, on the whole, the two variables correlate. The availability heuristic works well enough when we have to estimate the probability of things that are entirely within the realm of our own personal experience.

But: mass media in particular degrade the correlation between the frequency of certain dangers and the ease with which we can remember them. Merely imagining an event can also make it easier to picture and thereby cause us to overestimate the likelihood of something similar (imagination inflation). A basic problem that underlies imagination inflation: source confusion. If images of events come easily to mind, ask yourself if it is because you have personally experienced many of them or because you have read about them on the Web or seen them on TV.

The standard risk intelligence question (“How sure are you that…?”) can and should be asked as much of someone recounting a memory as of someone making a prediction.

Wishful thinking is a matter of allowing one’s beliefs to be influenced by one’s desires. People with high risk intelligence see the world as it is, not the way they would like it to be.

Psychologists have frequently demonstrated a systematic tendency for people to overestimate the likelihood of positive things happening to them and underestimate the likelihood of negative ones. This is optimism bias, which might be called a “brute bias” since it does not seem to be a by-product of any heuristic, but is a fundamental feature of our psychology that operates regardless of the environment.

We can correct for this, e.g. by doubling our (optimistic) first estimates. Such corrections must not be so excessive that we fall into the opposite error of pessimism, but given the overabundance of wishful thinking, this is not a common danger. When it comes to making plans for the future, most of us need a dose of cold water, not a shot of self-esteem.

Source credibility: People tend to put more emphasis than they should on the strength of the evidence and not enough on its credibility, so when the evidence points strongly to one conclusion but the source credibility is low, overconfidence is likely to result. Griffin and Tversky suggest that people attend first to the strength of the evidence and then make some adjustments in accordance with its weight. Crucially, however, the adjustment is generally insufficient.

Confirmation bias: This is the tendency to pay more attention to information that confirms what we already believe and to ignore contradictory data. Francis Bacon already said: “The human understanding when it has once adopted an opinion draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by distinction sets aside or rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate”.

Pariser’s book “The Filter Bubble”: due to search engines and preferences we live in a filter bubble from which dissenting voices and different perspectives are silently excluded. We encounter less information that could challenge or broaden our worldview and as a result our opinions may harden into dogma.

Koriat’s study: It’s important to search for contradictory evidence. One way to improve risk intelligence is to expose ourselves to a greater diversity of opinion, and especially to seek out views that are opposed to our own.

Hindsight bias undermines risk intelligence because it prevents us from learning from mistakes. The saddest consequence of hindsight bias is a diminished capacity for surprise.

The mind-reading illusion is the tendency to think we are better at reading other people than we really are. The signs that most people look for when attempting to distinguish truth from deception are not reliable. When people rely on misleading or irrelevant cues, they may become more confident that they have spotted a lie even though they are mistaken. [cb: Here one will find the main WTF moment of the book when Evans uses two quotes from a Stephen King novella to illustrate the issue. Just skip those!]. The illusion of transparency is the opposite of mind-reading: we assume that others can read us more clearly than is in fact the case.

5: The madness of crowds

Onlookers often mistake confidence for competence. There is a common tendency for us to use the time that someone takes to reply as a proxy measure of their intelligence. As phrases such as “quick-witted” attest, people who quickly reply are often taken to be smart, while people who pause for thought may be regarded as a bit dumb.

Research suggests that people can see through blirtatiousness eventually, but it takes time. The default assumption, on first meeting a bunch of new people, is that the high blirters are smarter. The media tend to favour verbose idiots over thoughtful silent types.

There are subliminal ways in which social pressures affect risk intelligence, e.g. the widespread tendency to follow the crowd - during studies people did well individually, but as soon as they started to communicate in the group “any signs of collective wisdom seemed to evaporate”. Communication tends to inhibit the independent thought that is necessary if the average estimate is to be any good because of the natural tendency to conform to group norms. Groups can amplify the irrationality of individuals.

The only way to restore sanity to a mad crowd may be by sprinkling it with a few “crazy” people. High risk intelligence may go hand in hand with a certain eccentricity, or at least a refusal to bow to peer pressure. Keynes: “Worldly wisdom teaches that it is better for the reputation to fail conventionally than to succeed unconventionally”.

Communicating about risk: Another way in which social processes impact on risk intelligence arises from the ways in which information about risks is presented to us and the conventions we use to characterize it. Many well-intentioned efforts in this area are counterproductive. For example the colour code system from Homeland security: “This static, ambiguous and nonspecific system creates uncertainty, or indifference, among the population it is meant to help”.

When it comes to putting numbers on risk, it’s not the case that any kind of number will do, it’s probabilities that count. [cb: I do NOT agree, potential consequence is another important factor, as is uncertainty/knowledge. And let’s not forget that it’s not always clear what probability we’re talking about!]

Different incentives may lead to different preferences: verbal versus numerical probabilities. Evans clearly prefers numbers, which he calls more “accurate” [cb: isn’t this actually a desire for a false kind of certainty which contrasts with previous arguments? Odd.], while he thinks that verbal probabilities have too much “wiggle room”, which is why he titled this sub-chapter “Weasel words and vague expressions”. He also rejects the argument that numbers may be too complicated to understand [cb: but doesn’t offer a good argument for his rejection].

When different individuals interpret the same labels to mean different things there is an “illusion of communication”. People may describe the probability of a given event with the same verbal label and conclude on that basis that they agree; however, since each person may implicitly attach different probability ranges to the verbal label, their agreement may be illusory. To complicate matters further, the same individual may attach a different probability range to the same label in different contexts. [cb: this is a highly valid argument in favour of using numbers, but probably no guarantee that people understand each other or assign the same urgency/risk/priority to a certain number].

One of Evans’ arguments in favour of numbers is that “measurement is a fundamentally quantitative process” (p. 120). [cb: Yes, but risk management isn’t per se quantitative, and I think it’s wrong to equate risk management with measurement. Nor are estimates something that can be measured, so I think the argument is weak].

The final section of the chapter discusses “How much doubt is reasonable”, which serves as another argument in favour of numbers but - like the previous section - actually has little to do with the subject of the chapter (“crowds”).

The chapter concludes with the statement that risk intelligence can be developed and improved. It’s not a fixed, innate mental capacity. Rather, risk intelligence is heavily dependent on the tools provided by culture. There are probably some basic neural mechanisms that enable us to wield the tools, but risk intelligence is a joint venture between biology and culture, not a feature of the bare biological brain.

6: Thinking by numbers

This chapter opens with a quote from George Bernard Shaw: “The sign of a truly educated man is to be deeply moved by statistics”. [cb: Since I honestly wondered if this Shaw quote wasn’t actually meant to be ironic, and thus misfires as starter for this chapter, I did some research. Check this interesting read about how quotes can go astray… (assuming that they did their research properly).].

Evans claims that as soon as probability is expressed in numerical terms it becomes possible to reason about it by employing the formidable tools of mathematics. [cb: the relevance of this statement is not quite clear to me because often there is little mathematics to apply to a probability. Additionally there’s the number’s illusion of precision that often stops all critical thinking or discussion. Another factor is that one only pays attention to the number and ignores estimation errors, assumptions or preconditions that may be essential to the number. A verbal account and a ‘vague’ label may be better suited for an informed discussion about the risk than a ‘precise’ number].

At the same time Evans cautions not to overplay the hand provided by mathematical models - it’s here where risk intelligence [cb: in the meaning of knowing how strong/bad our knowledge is] can help us.

Evans stresses the downside of computer models, e.g. by explaining how these have led to a de-skilling of the risk process on Wall Street, leading up to the financial crisis. Some economists worry that an overreliance on computer models drowns out thinking about the Big Questions. Accumulation of huge data sets led economists to believe that “finance had become scientific”. A model is only as good as the data that are fed into it [cb: this well-known quote is strictly speaking a flawed one. In fact the results are only as good as the input; the model itself can be perfectly alright.]. Failures to spot flaws in mathematical models have led to disastrous consequences, as have examples of how models can be abused in the interests of scientific opportunism.

Even an abundance of data does not obviate the need for judgment and estimation. True risk intelligence is not achieved simply by making mathematical calculations.

The perils of probability: probability theory is most relevant in casinos, because there all you need to know is the rules of the game you are playing. Taleb says these computable ‘risks’ are largely absent from real life.

Risk intelligence involves crunching the data with your brain. You absorb the data by reading, watching, and listening, and then you mull it over in your head and come up with an estimate of how likely something is.

Probability theory is not completely irrelevant to thinking about risk, of course, but higher-order mathematical calculations are not required for most of the issues we must make decisions about in our daily lives, and mathematical logic is only one part of the overall risk intelligence equation. True risk intelligence involves a curious blend of rational numerical calculation and intuitive feelings in which neither is sufficient on its own.

Reason cannot function properly without certain emotions. Risk intelligence is no exception; epistemic feelings all play a crucial role here too. High risk intelligence depends upon: 1) Well-calibrated epistemic feelings, and 2) An ability to translate epistemic feelings into numbers. Epistemic feelings are well calibrated when they accurately reflect your level of knowledge about a particular topic.

Even if your epistemic feelings are well calibrated, however, you won’t be able to reliably translate them into accurate decisions about risk unless you can do a good job of expressing those feelings in terms of specific numerical probabilities. [cb: I’m still unconvinced of the need to express feelings in numbers – especially because assessing various alternatives often involves more than just one factor]. Risk intelligence does require a certain amount of numeracy [cb: yes, I agree, see also Gigerenzer’s work. But understanding numbers is not necessarily the same as expressing a feeling in numbers. Evans uses a different explanation of numeracy than Gigerenzer and Paulos].

Interestingly Evans then quotes an example from the US Health Department where it turned out numbers had no effect because the numbers had no meaning for most readers [cb: which speaks for Gigerenzer’s view on numeracy and possibly against Evans’].

7: Weighing the probable

One must know how to make use of probabilities. Setting thresholds is a simple and useful way to make decisions based on probability estimates. Bet sizing is one example: the art of knowing how much money to wager on any given bet. Another is applying the Kelly criterion (keeping the bet to a certain fraction of your capital), or making the size of your bet proportional to your degree of confidence.
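Since the Kelly criterion is only mentioned in passing, here is a minimal sketch (my own, with made-up numbers) of what it prescribes; the function name and the even-money example are assumptions for illustration, not material from the book.

```python
# Minimal sketch (mine, not from the book): the Kelly criterion for bet sizing.
def kelly_fraction(p_win, net_odds):
    """p_win: your estimated probability of winning (0..1).
    net_odds: net payout per unit staked (1.0 for an even-money bet).
    Returns the fraction of your capital to wager (0 means: don't bet)."""
    q = 1.0 - p_win
    fraction = (net_odds * p_win - q) / net_odds
    return max(fraction, 0.0)  # never bet when you have no edge

# You estimate a 60% chance of winning an even-money bet:
print(round(kelly_fraction(0.6, 1.0), 2))  # 0.2 -> stake 20% of your capital
```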

Calibrating epistemic feelings means to assess how confident we are.

Some people think it can be misleading to express probability in terms of a single number, because it conveys a false impression of precision [cb: indeed, I do], and as an alternative one might suggest expressing the probability as a range (e.g. between 60 and 80%). Oddly, Evans is strongly against this suggestion and only finds it confusing. In his opinion a numerical probability (e.g. 72%) expresses our imperfection of knowledge: “it begins to sound as if probabilities are objective things that we are measuring imperfectly, such as temperature or mass, rather than subjective feelings. And this is profoundly misleading, for probabilities are merely expressions of subjective uncertainty”. [cb: In part I agree with Evans because probabilities aren’t precise objective numbers. But because they are expressed in precise numbers (or at least what look like precise numbers) a lot of people, and especially decision makers, will perceive them as precise numbers, which can quickly transform a subjective estimate into a perceived objective truth].

It is much easier to perceive differences between extreme probabilities than differences between intermediate ones, and this can lead to flawed decision making (e.g. the difference between 99 and 100% is perceived as larger than the one between 10 and 11%). As a result we tend to overreact to small changes in extreme probabilities and underreact to changes in intermediate probabilities.

This all leads to a powerful rule of thumb to follow when making decisions regarding probabilities: beware of exaggerating the importance of differences in probability at either extreme of the spectrum and not sufficiently factoring in differences in the intermediate range. Even if it doesn’t feel right!

The 100 Percent rule: we have to calibrate our estimates of the likelihood of the various possibilities by weighing them carefully against one another. Using the rule that the total percentages must come to 100% will help us be more rigorous about this, and will also help us make appropriate adjustments to our estimates as new information becomes available. [cb: so one might also consider assessing the probability of the opposite event and watch out that one doesn’t end up with a total of >100%!]. Keeping the 100% rule in mind can help you do a consistent job of weighing probabilities against one another. But it won’t tell you anything specific about how you should reevaluate the probabilities as you receive additional information.
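A tiny sketch (my own illustration, not Evans’) of what applying the 100 percent rule to a set of mutually exclusive, exhaustive outcomes could look like; the football-style outcomes and numbers are made up.

```python
# Minimal sketch (mine): rescaling raw estimates for mutually exclusive,
# exhaustive outcomes so that they respect the 100 percent rule.
def normalize(estimates):
    """estimates: dict mapping outcome -> raw estimate in percent."""
    total = sum(estimates.values())
    return {outcome: round(100.0 * p / total, 1) for outcome, p in estimates.items()}

# Gut feelings often add up to more than 100%:
raw = {"home win": 55, "draw": 30, "away win": 35}   # totals 120
print(normalize(raw))  # {'home win': 45.8, 'draw': 25.0, 'away win': 29.2}
```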

The last part of this chapter is dedicated to discussing Bayesian thinking. Bayes’s theorem shows us how we should revise our beliefs in the light of new evidence, but we need not apply the formula consciously or even know it in order to follow its prescriptions.

The base-rate fallacy is one common way in which people go wrong. Many studies have shown that people fail to take sufficient account of prior probabilities or even ignore base rates altogether. When asked to make predictions about things they are more familiar with, however, people are much better at taking the base rate into account.
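To show how badly ignoring the base rate can mislead, here is a worked example of Bayes’s theorem in a few lines of Python; the screening-test numbers are my own illustrative choices, not figures from the book.

```python
# Minimal sketch (numbers are mine, not from the book): Bayes's theorem and
# the base-rate fallacy, using the classic screening-test example.
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes's theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A test that catches 99% of real cases and false-alarms 5% of the time sounds
# decisive, but with a base rate of 1 in 1000 a positive result still means
# less than a 2% chance of actually having the condition.
print(posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.05))  # ~0.019
```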

It is not necessary to master the complex math of Bayesian theory to develop high risk intelligence. Nevertheless, it can be useful to keep in mind some of the counterintuitive consequences that follow from Bayes’s theorem, and apply these insights in a rough and ready way. For example: when one particular hunch becomes less convincing, the other hunches should be upgraded in proportion to their former strength.

What you should definitely not do is focus entirely on the nature of a new task and completely ignore all your past experience. By framing the forecasting problem and making use of previous experience, you are leveraging what you already know to make a better estimate than you would if you focused only on the task at hand.

8: How to gamble and win

This chapter is about rational choice [cb: which often is an illusion, something that isn’t discussed much, especially in economics literature] and expected utility. Von Neumann and Morgenstern’s theory is rooted in the idea that many of the choices in our lives, especially those that involve risks, can be thought of as the choice whether or not to gamble. Von Neumann and Morgenstern assume that each person is the best judge of his or her own happiness. Utility is a subjective quantity, and what seems irrational to one person may therefore be quite rational to someone else with different values and preferences.

We can only judge the quality of a decision by the light of the information available at the time; if we discover new facts that change the cost-benefit equation, the only wise thing is to make a different decision. As John Maynard Keynes said: “When the facts change, I change my mind. What do you do, sir?”.

By forcing decision makers to spell out their assumptions, expected utility theory keeps them honest and makes their reasoning more transparent [cb: in theory, yes, but there is of course always the possibility of making up these assumptions and rationalizing a decision afterwards, so it’s no fool-proof method and one must keep a critical mind all the way].

Evans disagrees with Einstein’s famous quote that “Not everything that counts can be counted” [cb: which means that he probably misunderstood the quote] and uses an example of applying cold numerical and rational utility theory to the case of asking someone out on a date. [cb: I think Gigerenzer’s treatment of a similar case (and his conclusion that we may often end up with a different conclusion than the one utility theory ‘dictates’) is much more convincing. In the discussion of ‘gut feeling’ on page 173 I sense a discrepancy in the understanding of gut feeling between Gigerenzer (who sees it as an evolved tool in decision making) and Evans (who sees it as feelings and something that should factor into your utility)].

When it comes to predicting how long feelings would last, most people get it badly wrong. Almost everyone overestimates how long both good and bad feelings last. Yet most people persist in thinking that powerful events must have long-lasting emotional consequences. They suffer from what psychologists call “durability bias” when trying to predict their emotional reactions.

The theory of expected utility is a powerful thinking device that can help us make better decisions if used wisely. This, however, depends crucially on making good enough initial estimates about the value of potential gains and losses and the probabilities of each.
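To make the expected-utility calculation concrete, here is a minimal sketch with entirely made-up utilities and probabilities (my own example, not one of Evans’); as the chapter stresses, the output is only as good as those subjective inputs.

```python
# Minimal sketch (mine): expected utility of a risky choice versus a safe one.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

risky_job_offer = [(0.6, 100), (0.4, -50)]   # 60% it works out, 40% it doesn't
stay_put        = [(1.0, 20)]                # the safe, known alternative

print(expected_utility(risky_job_offer))  # 40.0
print(expected_utility(stay_put))         # 20.0
# The theory says take the risk -- but only if you trust the made-up numbers.
```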

[cb: All in all this is a somewhat disappointing run-of-the-mill chapter on expected utility that really would have benefited from a more balanced (critical) discussion].

9: Knowing what you know

By focusing exclusively on the questions they understand and failing to consider the possibility that there might be other questions of which they are unaware and whole areas yet to be discovered, people can be dramatically overconfident about how much they know.

Known unknowns are bits of information we don’t yet possess but that we know we need to find out. Unknown unknowns are the answers to questions that have not yet been asked. Such problems are generally not considered until they occur, and it may be impossible or at least very difficult to imagine them in advance. There follows a brief discussion of Black Swans.

Regular risk intelligence testing [cb: one of the few times that Evans tries to sell this in] will help to keep us on track, but there will always be surprises.

We might discover the answer to all the questions we have identified but fail to identify all the relevant questions. If we don’t consider this possibility, we will tend to be overconfident.

Psychologists Dunning and Kruger have described the phenomenon whereby some unskilled people suffer from illusory superiority, rating their ability as above average, much higher than it actually is. They then make bad choices, but fail to realize how bad their choices are. Darwin noted: “Ignorance more frequently begets confidence than does knowledge”. The only way out of this is to become more competent. Awareness of our incompetence is the beginning of wisdom.

Unknown knowns are bits of information you possess but fail to use when solving a problem (Zizek: “the disavowed beliefs, superstitions and obscene practices we pretend not to know about, even though they form the background of our public values”). The reason you fail to use the information is not because you can’t retrieve it from memory but because you don’t see how it could help you to solve the problem. You fail to see its relevance.

Our cognitive horizons will always have limits and we will be prone to “local thinking”: we can’t think of everything when imagining the future, so some ideas will always come to mind more easily than others. But we can learn to extend our mental horizons bit by bit, and become - if not truly global in our thinking - at least somewhat more regional.

The art of estimation: Fermi problems are a good way to avoid the lazy “I don’t know”. It’s rarely true that people have NO clue; they often have some idea. It is often much easier to estimate the answers to sub-problems, and when we multiply all those answers together we arrive at an answer to the original question. This answer is often a pretty good approximation of the true value. The method works by transforming unknown knowns into known knowns.
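As an illustration, here is the classic “piano tuners in Chicago” Fermi decomposition as a few lines of Python; every figure below is a rough assumption of mine, not data from the book, and only the order of magnitude matters.

```python
# Minimal sketch of a Fermi estimate (all numbers are rough assumptions).
population                 = 3_000_000  # people in the city
people_per_household       = 2
households_with_piano      = 1 / 20     # say 5% of households own a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day  = 4
working_days_per_year      = 250

pianos         = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed / tuner_capacity))  # ~75 tuners: an order-of-magnitude answer
```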

Even perfect risk intelligence is no guarantee of success. The very idea of a guarantee is, in fact, somewhat at odds with the probabilistic world to which risk intelligence opens our eyes. In this world, nothing is certain or impossible; there are only degrees of likelihood. Even the wisest decision can backfire through bad luck. The whole point about chance is that it is beyond our control. Risk intelligence merely boosts our chances of navigating the raging river; it does not eliminate the danger.

The irony is that the very tools that can help enhance risk intelligence - the mathematics of probability theory and the collection of statistical data - have sometimes led to an overconfidence of their own.

Final thoughts

Coming to the end of the book, I’ll close off purely with my own comments – this time without any […].

As said, when I read the book for the first time it left me skeptical, mostly because of the numerical focus, and a bit because of the test. I’ve re-read the book (a very easy read, by the way) without spending a single second on calibration curves and tests, and I don’t think I missed anything really essential. It also struck me how little “advertisement” there is for the test (which after all is the “product” that Evans is selling) - in the text at least. There’s quite some space devoted to it in the appendices, but these are usually only for “special interest readers” anyway. I’ve heard some criticism of the scientific basis of the Risk Intelligence test, by the way. Whatever the truth about that is – I don’t think that a possible lack of scientific validity really diminishes the value of this book. The test may serve, however, as a nice illustration of how we can be overconfident about what we actually know.

With regard to the numerical focus… I disagree on a very important principle - how numbers are understood by the people who are confronted with them (and whether they can be understood at all, but let’s leave that for now). This actually leads me to the position that I have fundamental problems with the definition of Risk Intelligence as used by Evans: “the ability to estimate probabilities accurately”.

  1. Evans focuses on only one half of the (simplified) risk formula Risk = Consequence * Probability. To assess risk properly one has to factor in the possible consequences: we treat a high probability of a fatality quite differently from a high probability of a bruise. (In defense of Evans - I think people are better at coming up with possible consequences than at estimating probabilities, but as far as I can recall Evans does not explain this point in his book).
  2. In many (most) cases Evans regards risks more or less as singular things, while they usually come in packages with multiple criteria, which is another reason it’s hard to compare them on just one criterion (in this case probability). We often have a multitude of goals we want to achieve. Evans does mention several times that risks have upsides and downsides, yet he still focuses on only one factor. That doesn’t sit well. (In defense of Evans, part 2 - he does in fact discuss other factors, like the robustness of knowledge, but mainly in connection with the estimation of probabilities).

  3. Why must probabilities be estimated “accurately”? Isn’t the word “estimate” in itself a contradiction of “accurate”? He probably means something like “as closely as possible”, but what I don’t get is why he doesn’t rather advocate estimating ranges. Au contraire, he ridicules ranges. Quite baffling, and not convincing to me.

  4. And then there is the question of how numbers are understood by people. As Evans argues, probability estimates are subjective estimates, true enough. But do most people realize that? I think not. Confront a random decision maker with a number (10⁻⁶, 72%, 0.0001, etc.) and more often than not they will interpret it as some kind of scientific or objective truth. Numbers in risk assessment can be helpful, but they also contain a major pitfall: creating a false sense of precision and certainty, and thereby turning decision making into a mechanical process (compare the number to a defined threshold and decide yes/no) that effectively transfers the responsibility for the decision from those who should decide on multiple criteria to some kind of “expert” (Evans, and also Taleb and Gigerenzer, caution about those) who judges on just one criterion. A better alternative to the focus on numbers, in my opinion, is the conscious deliberation and discussion also advocated by Andrew Stirling.

Concluding, I would actually propose a less strict definition for Risk Intelligence. More in the line of: “Able to handle risks well, and able to make relatively good decisions in situations of uncertainty”.

So, on reflection, I think I’ll stick with the “good read” (but not essential) judgment, but if you read it, skip over some things – or at least don’t take them at face value and read with a critical mind (as you should anyway, and always).

 

I’ve read the ‘international softcover edition’ (ISBN 978-1848877382), also available as a hardcover (ISBN 978-1451610901)