Summary of Thinking, Fast and Slow by Kahneman - 1st edition

What are the two systems? - Part 1

What is the distinction between System 1 and System 2? – Chapter 1

When looking at a picture of a blonde woman with an angry expression on her face, you immediately notice that she has blonde hair and that she is angry. You also get the feeling she is about to say something negative in a loud voice. This premonition of her future behavior arose effortlessly and automatically. This is an example of ‘fast thinking’.

When looking at the mathematical problem 14 x 38, you quickly know it involves multiplication and you have a vague intuitive idea about the possible outcome. You immediately knew that the answer ends in the digit 2 and that 10,000 cannot be right. However, the precise answer did not come to mind at once. Solving the problem requires time and several steps. You start by retrieving your knowledge of multiplication from memory, as learned in your early school years. Then you have to put this knowledge into practice, which takes effort. This is an example of ‘slow thinking’. The computation involves both mental and physical work: it leads to higher blood pressure, an increased heart rate, tensed muscles and dilated pupils.
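For readers who want to check the intuitions in this example, a small sketch (plain Python, not anything from the book itself) confirms that the exact answer ends in the digit 2 and is nowhere near 10,000:

```python
# 'Slow thinking' done by machine: the exact product of 14 x 38.
product = 14 * 38

print(product)           # the precise answer that does not come to mind at once
print(product % 10)      # last digit: 4 * 8 = 32, so the answer must end in 2
print(product < 10_000)  # the quick intuition that 10,000 cannot be right
```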

Psychologists have given these modes of thinking various labels. Kahneman refers to ‘System 1’ and ‘System 2’. System 1 operates fast, automatically, involuntarily and with little or no effort. System 2 performs effortful mental work, which requires concentration, choice and agency. The feelings and impressions that effortlessly originate in System 1 are the main sources of the deliberate choices and explicit beliefs of System 2. Both systems have their own functions, abilities and limitations.

Examples of events that occur automatically and effortlessly (System 1) are:

  • Answer to 1 + 1 = ?
  • Hearing an unexpected sound: looking in the direction of the source.
  • Reacting to a threat before recognizing it.

The abilities of System 1 include skills we also see in animals, like recognizing objects and orienting attention. Other quick and automatic mental activities are the result of prolonged practice. System 1 covers learned skills (how to behave socially, reading) and learned associations (capitals of countries). Certain skills are acquired only by specialized professionals. Learned skills require knowledge, which is stored in memory and can be accessed effortlessly and unintentionally. Some responses are entirely involuntary: you cannot stop yourself from knowing that 1 + 1 = 2 or from looking in the direction of a sudden sound. Others can be controlled but are usually performed automatically. Controlling attention is an activity that fits both systems. Orienting to a sudden noise happens involuntarily (System 1), but quickly turns into voluntary attention (System 2). Even if you manage to resist looking at the source, your attention was drawn to it.

The various operations of System 2 share one feature: they all require attention and are disrupted when the attention is moved away. Examples are:

  • Bracing yourself for fireworks going off.
  • Maintaining a faster walking pace than your natural walking pace.
  • Focusing on the voice of a specific person in a noisy and crowded setting.

By programming the automatic functions of memory and attention, System 2 is able, to some extent, to alter the way System 1 works. Doing something that does not come naturally requires effort: you need to ‘pay attention’. Conducting several effortful activities at once is hard or impossible, because they interfere with each other. Solving a complex mathematical problem while crossing a busy road is very difficult. Talking to your partner while walking in a quiet park is not, because these activities are undemanding. We all have some awareness of the limited capacity of attention.

The book ‘The Invisible Gorilla’ demonstrates how focusing intensely on a task can make you blind to distractions. The authors instructed participants to watch a clip of two teams passing a ball, one team dressed in white and the other in black. The task was to count the number of passes between the white players while paying no attention to the black team. After some time, a person in a gorilla suit enters the court. The gorilla was visible for 9 seconds, and of the thousands of viewers, approximately half did not notice it. The task of counting and ignoring one team made them effectively blind. The viewers who did not notice the gorilla were sure it had not been there; they could not imagine failing to spot a gorilla on a sports court. The study demonstrates two important findings about the mind: people can be blind to the obvious, and people can be blind to their own blindness.

When we are awake, both systems are active. System 1 runs automatically; System 2 is usually in a comfortable low-effort mode. System 1 continuously generates feelings, intuitions, intentions and impressions for System 2. System 2 turns intuitions and impressions into beliefs, and impulses into conscious actions. System 2 normally adopts the suggestions of System 1 without modification. In general, you believe your impressions and follow your feelings, which is fine in most cases. When System 1 runs into difficulty and does not provide an answer, it calls on System 2 to tackle the problem and provide the right answer. Surprises also activate System 2. The gorilla experiment shows that some conscious attention is needed for detecting surprising stimuli. Surprise then activates and orients attention: you stare and search your memory for experiences that make sense of the surprising event. System 2 also monitors a person’s own behavior: the control that keeps people polite when they are furious and alert when they are driving in the dark.

The division of mental work between System 1 and System 2 is very efficient: minimal effort and optimal performance. Their interaction is mostly successful, because System 1 usually provides accurate short-term predictions and models of familiar situations, and its initial responses to challenges are quick and normally appropriate. It has biases though: systematic errors that are likely to be made in certain circumstances. System 1 occasionally answers an easier question than the one it was asked, and it has a limited understanding of statistics and logic. Another limitation is that you cannot turn System 1 off.

Conflicts between the intention to carry out a task and an automatic (opposite) response occur regularly. You may remember a time when you tried not to stare at someone with an unusual hairstyle, or when you forced your attention onto boring homework. One task of System 2 is self-control: overcoming the impulses of System 1.

What are illusions?

The well-known Müller-Lyer illusion demonstrates the difference between an impression and a belief, as well as the autonomy of System 1. The two lines appear to be of different lengths, but are in fact equally long. When looking at the image, you believe what you see: lines of different lengths. After measuring them, you (or your System 2) believe something else: you know that the lines have the same length, even though you still see a difference. You cannot turn System 1 off: you cannot decide to see two equal lines, despite knowing they are equal. Resisting the illusion requires learning to mistrust your impressions of the length of horizontal lines with arrows at the ends. This means recognizing the illusory pattern and remembering what the ‘catch’ is.

Some illusions are visual, others are cognitive. You cannot control feeling sympathy for someone who turns out to be a psychopath (System 1): this can be the automatic response to psychopathic charm. You can, however, learn to recognize the illusion and how to respond to it (System 2). Can cognitive illusions be overcome? Errors of intuitive thought are generally hard to prevent, since System 1 works automatically and cannot be turned off. Some biases cannot be avoided, for instance when System 2 has no clue about the error. Even when a clue is available, preventing mistakes requires a lot of effort, and we cannot constantly question our own thoughts. System 2 is neither fast nor efficient enough to replace System 1. The best option is learning to recognize situations in which mistakes are likely and trying harder to avoid significant mistakes when the stakes are high. However, recognizing our own mistakes is not easy.

Kahneman uses the terms ‘System 1’ and ‘System 2’ as nicknames, because they are easier to say and take less space in our memory than ‘automatic system’ and ‘effortful system’. This is important, because any occupation of the working memory reduces our ability to think. He emphasizes that the systems are not real parts of the brain.

How do System 1 and System 2 deal with effortful tasks? - Chapter 2

System 2 is defined by its effortful operations, although it is also lazy: it puts in no more effort than needed. Some crucial tasks can only be performed by System 2, because they require self-control and effort to overcome the impulses and intuitions of System 1.

The ‘Add-1’ task puts System 2 to work and demonstrates the limits of our cognitive abilities within seconds: you take a string of digits and call out a new string in which every digit is increased by 1. If you truly want to challenge yourself, try Add-3. Your body also reacts to the mental work. The psychologist Eckhard Hess described the pupils as windows to the soul. He found that the pupils indicate the level of mental effort: they dilate more as problems get harder. Kahneman set up an experiment to study the reaction of the pupils while participants performed paced tasks. The pupils got wider as the tasks got more demanding; in the Add-1 task, longer strings of digits produced bigger pupils. During the Add-3 task, the pupils dilated by about 50% and the heart rate increased. This is the maximum of mental effort: people give up if a task gets more demanding than Add-3, and the dilation of the pupils stops. The pupils stayed at a constant, normal size when a participant was chatting with someone during a break. Small talk and easy tasks are effortless, while tasks like Add-1 and Add-3 are extremely effortful. This can be compared to walking at a natural pace versus sprinting. When we are mentally sprinting, we can become effectively blind, as the Invisible Gorilla experiment demonstrated.
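As a sketch of what the task demands, assuming the usual formulation (Kahneman's own example turns 5294 into 6305, so a 9 wraps around to 0), Add-1 and Add-3 can be expressed as:

```python
def add_n(digits: str, n: int) -> str:
    """Increment every digit of a digit string by n, wrapping past 9.

    A person doing this must hold the string in working memory and
    transform it digit by digit under time pressure; a machine, of
    course, feels none of the effort the chapter describes.
    """
    return "".join(str((int(d) + n) % 10) for d in digits)

print(add_n("5294", 1))  # Add-1: "6305"
print(add_n("5294", 3))  # Add-3: "8527", far more demanding for a human
```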

System 2 has a limited capacity. It responds to a threatened (mental) overload by protecting the most important activity: that activity gets the attention it needs, and the remaining capacity is divided among other tasks. An experiment in which detecting the letter K served as a side task showed that observers failed at it when the main task was highly demanding.

The allocation of attention has always played an important role in our evolution. The ability to orient and respond rapidly to sudden threats or great opportunities was needed to survive, which we also recognize in the animal world. Even now, System 1 is activated when an emergency occurs and fully focuses on self-protection. We respond to a sudden threat before we are fully aware of it.

Brain studies have demonstrated that the amount of activity needed for an action changes as we become more skilled: an increase in skill results in the involvement of fewer brain regions. The same goes for talent: the brain activity and pupil size of highly intelligent people show that they need less effort to complete the same task successfully. This illustrates the ‘law of least effort’: if we have several options for achieving a goal, we choose the least demanding one. It is human nature to be lazy.

Only System 2 is capable of following rules, comparing objects on several attributes and making deliberate choices between options. The automatic System 1 lacks these capabilities: it cannot deal with more than one task at once and is unable to use purely statistical information.

A key capability of System 2 is the adoption and termination of ‘task sets’: it can program our memory to follow instructions that overrule habitual responses. Psychologists call this ‘executive control’. If you are given the task of counting the occurrences of the letter ‘L’ in a text, the answer will not pop into your head naturally and you have never done this specific task before, but your System 2 will be able to accomplish it. It takes some effort to set yourself up for the task and carry it out, and it gets easier with practice. Now imagine you get a second task after completing the first: count all the capital letters in the following text. This will take more effort, as you have to fight the tendency to keep concentrating on the letter ‘L’. One of the most significant findings of cognitive psychologists in recent decades is that switching from one task to another is effortful, particularly under time pressure. This is why Add-3 is such a hard task: you have to hold several digits in memory, and each digit requires a specific action; one is in the middle of being transformed, some are waiting to be transformed and others are already transformed and waiting to be reported. People who do well on tests that demand constant switching between two effortful tasks are also likely to do well on tests of general intelligence.

Why is System 2 deemed the ‘lazy controller’? – Chapter 3

System 2 has a natural pace. Having random thoughts and monitoring what happens around you is not effortful. We make small decisions while riding a bicycle, take in some information as we watch the news and have low-key conversations with colleagues or a partner. These actions take little effort and can be compared to a stroll. It is usually easy to walk and think at the same time, but in some cases the combination causes mental overload. If you go for a walk with someone and ask that person to instantly solve the problem 32 x 64, he or she will stop walking. Walking faster than your natural pace worsens your ability to think, as your attention shifts to maintaining the faster pace. If you walk as fast as you can, it becomes impossible to focus on anything else. Next to the physical effort, it takes mental effort to fight the urge to slow down: self-control. Coherent thinking and self-control compete for the same limited budget of effort.

Sometimes people are in a state of effortless concentration in which maintaining a coherent train of thought requires no willpower. The psychologist Mihaly Csikszentmihalyi called this state ‘flow’. Examples are driving at high speed, painting, playing chess and writing. Being in a flow state can make you lose your sense of self and of time. Activities that induce flow are called ‘optimal experiences’. These activities take considerable effort, but in a state of flow, maintaining focused attention on them requires no discipline. Flow thus separates the two forms of effort: the deliberate control of attention (self-control) and concentration on the task (cognitive effort).

Psychological research has demonstrated that someone who is simultaneously challenged by a temptation and by a demanding mental task is more likely to give in to the temptation. If you are asked to remember a list of numbers for several minutes and at the same time have to choose what to eat, broccoli or pizza, you are more likely to go for the pizza. System 1 has more influence on behavior when System 2 is occupied. Someone who is cognitively busy is also more likely to use sexist language, judge superficially in social settings and make selfish decisions. A busy System 2 loses its hold on behavior, although mental load is not the only cause of depleted self-control. Other possible causes are a bad night of sleep, drinking alcohol or anxiety about the task. Conclusion: self-control requires effort and attention.

Experiments conducted by the psychologist Roy Baumeister showed that voluntary physical, emotional and cognitive effort all draw, at least in part, on the same pool of mental energy. His experiments involved successive tasks. Efforts of will or self-control are tiring: if we have had to force ourselves to do one task, we are less willing to exert self-control in the next. This is called ‘ego depletion’. Participants who had to suppress their emotional response did not do well in a later physical test: emotional effort reduces the ability to endure muscle pain. An ego-depleted person is therefore likely to give up sooner. In another experiment, participants who first had to eat healthy food while resisting sweet treats later gave up earlier than usual when faced with a demanding mental task.

Many tasks and situations deplete self-control. They all involve conflict and the need to suppress a natural urge. Examples are avoiding the thought of red cats, trying to impress someone and responding kindly to your husband’s bad behavior. Indications of depletion are just as varied: reacting aggressively to provocation, for example, or performing poorly on cognitive tasks.

Highly demanding tasks require self-control, and exerting self-control is unpleasant and depleting. Unlike mental load, ego depletion is partly a loss of motivation; it is not the same as being cognitively busy.

Baumeister also found that mental energy is more than a metaphor. The nervous system consumes more glucose than most other parts of the body, especially during demanding mental tasks. Carrying out a cognitive activity that requires self-control lowers the blood glucose level. This effect of ego depletion can be reversed by ingesting glucose: only the participants who got a glucose drink before starting the second task were not depleted. Intuitive mistakes are more frequent among ego-depleted individuals.

A recent study showed the effects of depletion on judgment. Judges had to review parole applications. The researchers found that the proportion of approved requests spiked after every food break and then declined steadily, dropping to nearly zero just before the next break. The best explanation is that hungry and fatigued judges fall back on the easier default decision: denial of parole.

Monitoring and controlling the actions and thoughts suggested by System 1 is one of the most important functions of System 2: it allows, suppresses or modifies them. Read the following puzzle and note what your intuition tells you: “An ice cream and a chocolate dip cost € 1.10 together. The ice cream costs one euro more than the dip. What is the price of the dip?” The answer that automatically comes to mind is € 0.10, which is wrong: if the dip costs € 0.10, the total is € 1.20 (€ 0.10 for the dip and € 1.10 for the ice cream). The right answer is € 0.05. Answering € 0.10 means that you did not actively review your intuitive answer: your System 2 endorsed a wrong answer that it could have caught with little effort. Here we see the ‘law of least effort’ at work. Several thousand students at leading universities in the United States answered the puzzle and more than half of them gave the wrong answer. At lower-ranked universities, 80% of the participating students failed to give the right answer. The ice cream and dip puzzle demonstrates that most people are overconfident: they put too much trust in their intuitions and avoid cognitive effort.
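The arithmetic behind the puzzle can be written out: if the dip costs d, the total is d + (d + 1.00) = 1.10, so 2d = 0.10 and d = 0.05. A quick check of both answers (an illustrative sketch, working in cents to avoid rounding issues):

```python
def total_in_cents(dip_cents: int) -> int:
    """Total price when the ice cream costs exactly 100 cents more than the dip."""
    ice_cream_cents = dip_cents + 100
    return ice_cream_cents + dip_cents

print(total_in_cents(10))  # intuitive answer: 120 cents, 10 cents too much
print(total_in_cents(5))   # correct answer: 110 cents, as required
```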

Now read the following argument and decide as quickly as you can whether the conclusion is valid: “All apples are fruits. Some fruits are pink. Therefore some apples are pink.” Most college students accept the conclusion, but the syllogism is invalid: it is possible that there are no pink apples at all. Because a plausible answer comes to mind straight away, few people are willing to put in the effort of thinking it through. This is a troubling finding, because it indicates that when people believe a conclusion is true, they also tend to believe the arguments that appear to support it. System 1 jumps to the conclusion first; the arguments follow later.
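Why the syllogism is invalid can be made concrete with a single counterexample, modeling each category as a set (the fruit names below are chosen purely for illustration): both premises hold while the conclusion fails.

```python
# A possible world in which the premises are true but the conclusion is false.
apples = {"granny smith", "golden delicious"}          # no pink apples here
fruits = apples | {"dragon fruit", "pink grapefruit"}  # all apples are fruits
pink_fruits = {"dragon fruit", "pink grapefruit"}

premise_1 = apples <= fruits                 # "All apples are fruits"  -> True
premise_2 = bool(fruits & pink_fruits)       # "Some fruits are pink"   -> True
conclusion = bool(apples & pink_fruits)      # "Some apples are pink"   -> False

print(premise_1, premise_2, conclusion)
```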

Read the following question and quickly give your best estimate: “How many homicides occur in the state of Tennessee in 12 months?” This question challenges System 2. The trick is whether you remember that Memphis, one of the biggest cities of Tennessee and one with a very high crime rate, is in that state. Respondents who thought of Memphis gave higher estimates; most respondents did not think of the city when asked about the state and reported lower guesses than respondents who were asked directly about the number of homicides in Memphis. Failing to think of Memphis can be a flaw of both System 1 and System 2. Whether the city pops up in your mind depends partly on the automatic function of memory, and people differ in this. Some people, such as residents of Tennessee, have extensive knowledge about the state and are more likely to retrieve facts about it. It also depends on people’s interests and intelligence. Intelligence is not only the ability to reason, but also the ability to retrieve relevant facts from memory and to deploy attention. While automatic retrieval is a function of System 1, taking the time for a deliberate search of memory is a feature of System 2, and the extent of this search varies among people.

The ice cream and dip puzzle, the apples syllogism and the Memphis–Tennessee question have one thing in common: the wrong answers seem to be caused by insufficient motivation, by not trying hard enough. Students at top-ranked universities are perfectly capable of providing the right answer; without the temptation of a plausible answer that automatically comes to mind, they can solve much harder problems. It is troubling that they are so easily satisfied and stop thinking. Their System 2 proved to be lazy. They should be less willing to accept tempting answers, more alert and intellectually active, and less confident in their intuitions.

Shane Frederick used his Cognitive Reflection Test to examine the characteristics of students who performed poorly and found that they tend to answer with the first thought that comes to mind and are reluctant to invest the effort of checking that intuition. They are also prone to accept other suggestions from System 1. These students were particularly impatient and impulsive and wanted instant gratification: 63% of them preferred a smaller amount of money now to a larger amount a month later. Among the students who had answered the puzzles correctly, only 37% went for the smaller, immediate amount. Frederick’s findings indicate that System 1 and System 2 have different ‘personalities’: System 1 is intuitive and impulsive; System 2 is cautious and capable of reasoning, but can also be lazy. The same goes for people: some are more like System 1, others more like System 2.

The link between self-control and thinking was also examined by Walter Mischel. He confronted four-year-olds with a dilemma: receive a small reward (one cookie) whenever they wanted it, or a bigger reward (two Oreos) after waiting for 15 minutes in a room without distractions. About half of them succeeded in waiting the full 15 minutes, mostly by managing not to pay attention to the reward. More than a decade later, the children who had resisted the temptation showed greater executive control in cognitive tasks, in particular the ability to reallocate attention. They were also less likely to take drugs and scored better on intelligence tests.

Other researchers examined the connection between intelligence and cognitive control by exposing four-to-six-year-olds to computer games specifically designed to engage their attention and control abilities. They discovered that training attention improved both executive control and scores on intelligence tests. They also found that parenting techniques affected the children’s ability to control attention, and that this ability is closely linked to the ability to control emotions.

How does the ‘associative machine’ of System 1 work? - Chapter 4

Read the following words:

“Mango” “Puke”

Within a second or two you experience unpleasant images and pull a disgusted face. You automatically responded to the word ‘puke’ much as you would respond to the actual event. Your mind automatically assumes causality between the words mango and puke, forming a scenario in which the mango caused the nausea. The result is a temporary aversion to mangos. You are also extra ready to recognize and respond to concepts and objects associated with ‘puke’, such as vomit, sick and nausea, and with ‘mango’, such as exotic, fruit and red. Words associated with other causes of puking, such as food poisoning and hangover, are also easier to recognize. Finally, you were a little surprised, because your System 1 registered the unusual juxtaposition of the two words.

This wide range of responses occurred effortlessly, automatically and quickly, and could not be stopped. This is your System 1 at work. The visions and thoughts you experienced are the result of a process called ‘associative activation’: ideas that have been evoked trigger numerous other ideas. A word evokes memories, which trigger emotions, which in turn evoke reactions such as facial expressions and a tendency to avoid. These reactions intensify the feelings to which they are connected, and the feelings intensify compatible thoughts. This rapid, self-reinforcing pattern of physical, emotional and cognitive responses is called ‘associatively coherent’. System 1 tries to make sense of the unusual situation (two random words) by linking them into a plausible story. It starts by evaluating the current level of threat and then creates a context for the situation and for future events. System 1 treats the connection between the two words as a representation of reality: your body reacts as it would to the real event, and your emotional reaction is part of the interpretation of that event. As cognitive researchers have emphasized in recent years, you do not think with your brain alone, but also with your body.

The process by which mental events evoke one another in sequence is called ‘the association of ideas’. The philosopher David Hume identified three principles of association: causality, contiguity in place and time, and resemblance. An idea can be abstract or concrete and can be expressed as a noun, a verb, an adjective or a physical gesture. A psychologist sees an idea as a node in a vast network, the associative memory, in which it is linked to numerous others. There are several types of links: cause – effect (drinking – hangover), thing – property (carrot – orange), thing – category (tulip – flower). In the current view of how associative memory works, the mind does not go through ideas one at a time: a single activated idea evokes many other ideas at once, of which only a few become conscious.

Psychologists discovered in the 1980s that seeing or hearing a word causes instant and measurable changes in the ease with which numerous related words can be evoked. If you have just read the word ‘beverage’ and are then asked to complete the word fragment ‘TE_’, you are more likely to go for ‘tea’ than for ‘ten’. It would be the other way around if you had just seen the word ‘number’. You will also be quicker than normal to recognize the word ‘tea’ when it is whispered or shown in a blurred font. In addition, you are primed for other drinking-related ideas (thirsty, water), and these primed ideas can in turn prime other ideas.

Priming is not restricted to words and concepts: our emotions and actions can be primed by events we are not even aware of. The classic experiment by John Bargh showed that young students walked significantly slower after completing the task of constructing sentences from sets of words associated with old people (bald, wrinkle, gray, Florida). The experiment involved two stages of priming: 1) the words prime thoughts of old age, although the word ‘old’ was never mentioned, and 2) these thoughts prime an action associated with the elderly (walking slowly). The students had not noticed that the words had an elderly theme and insisted that none of their actions had been influenced by them. Although they were not aware of the idea of old age, their behavior changed nonetheless. This phenomenon is called the ‘ideomotor effect’. A reciprocal experiment evoked the coherent opposite reaction: participants who were asked to walk slower than usual were afterwards quicker to recognize words associated with old age. If people are primed to think of old age, they tend to act old, and acting old reinforces the thought of old age.

Reciprocal links are common in the associative machinery. Being happy makes you smile, and smiling tends to make you feel happy. Gestures can unconsciously influence feelings and thoughts: nodding makes you more likely to accept what you hear, while shaking your head makes you more likely to reject it. The advice to ‘act nice and calm, no matter how you feel’ is excellent, because acting nice and calm makes you likely to actually feel nice and calm.

These discoveries show that our choices and judgments are not as autonomous and conscious as we think they are. We like to see voting as a deliberate act that reflects our values and our assessment of policies and is not affected by irrelevant factors. Yet one study demonstrated that the location of a polling station can influence how people vote.

Money primes have troubling effects. People who had been shown money-related words or images of money became more independent, more self-reliant and more selfish, and preferred being alone. They persevered longer before asking for help and were less willing to help others. The idea of money, in other words, primes individualism. These findings suggest that living in a money-driven society shapes our attitudes and behavior in ways we are not aware of, and not always for the better. In some societies, people are frequently reminded of respect, of God, or of their national leader; the latter may result in reduced independent and spontaneous behavior.

Most people react with disbelief when they are told about priming studies. This makes sense: System 2 believes it is in charge and cannot be manipulated, whereas priming is a phenomenon that arises in System 1, to which we have no conscious access. System 1 produces impressions that frequently turn into beliefs, which become choices, judgments and actions, without our being aware of it. In this light, it is no surprise that System 1 also causes systematic errors in our intuition. We are not helplessly at the mercy of random primes, though: the effects are often small. Only voters who are in doubt will be influenced by the location of the polling station, but they can make the difference.

What is cognitive ease? - Chapter 5

When we are conscious, several assessments take place in our brain, providing answers to important questions: Is something new happening? Are things going alright? Is there a threat? Should I redirect my attention? System 1 carries out these assessments automatically. It determines whether System 2 needs to put in more effort. ‘Cognitive ease’ is one of the variables being measured. On a scale of easy to strained, ‘easy’ means that things are going alright (no news, no threats, no redirecting of attention needed) and ‘strained’ means that a problem occurred and System 2 has some work to do. ‘Cognitive strain’ is affected by the presence of unmet demands and the current level of effort.

Processing a clearly written sentence, or listening to someone while you are in a good mood, induces cognitive ease. Reading a blurry manual, or reading while you are frustrated, induces cognitive strain. The various causes of ease or strain have interchangeable effects. In a state of cognitive strain you are likely to be suspicious, to invest more effort and feel less comfortable, but also to be less creative and intuitive. In a state of cognitive ease you are probably in a good mood, satisfied and comfortable, and rather casual in your thinking.

What are illusions of remembering?

Thinking and memory are susceptible to illusions. Psychologist Jacoby first exemplified the memory illusion in his article ‘Becoming famous overnight’. When you are shown a list of made-up names and five minutes later you come across one of those names, you will remember where you saw it and that it is not the name of a famous person. Three days later, you are presented with a long list of names, including semi-famous people and new unknown names, and you have to identify all the celebrities on the list. It is likely that you will identify the made-up name as that of a famous person. In the case of very famous people, you have a mental file with extensive information. There is no such file for someone whose name you have heard only twice; you remember seeing the name before, but there is nothing more than a feeling of familiarity. Familiarity has a quality of ‘pastness’ that suggests it is a direct reflection of a past experience, but this quality is an illusion. The made-up name looks familiar because words you have seen earlier become easier to see again and quicker to read. Thus, seeing a word you have seen before induces cognitive ease, which results in the illusion of familiarity.

What are illusions of truth?

“Amsterdam is the capital of The Netherlands”. “You cannot drink seawater, because it is too salty”. “A duck has four legs”. After reading these statements, you instantly retrieved many related facts and quickly knew that the first two are correct and the third is incorrect. However, the statement “A duck has three legs” would be recognized as incorrect even faster: the associative machine slows the assessment of the four-legged duck by supplying the fact that many animals do have four legs. The deliberate checking of such statements is the work of System 2.

When the correct answer does not come to mind, we tend to go by cognitive ease: we pick the answer that feels familiar and assume it is true. Extreme and novel answers are likely to be rejected. System 1 produces the impression of familiarity and System 2 turns that impression into a judgment of truth. If a judgment is based on an impression of cognitive ease or strain, a predictable illusion will occur. If you want people to believe a false statement, repeat it frequently, because familiarity is hard to distinguish from truth. This is a well-known fact among marketers and authorities. It is not even necessary to repeat the whole statement or idea: making people familiar with one phrase of it can make the whole statement appear true.

How do you write a persuasive message?

If you want to write a persuasive text, you should enlist cognitive ease and truth illusions. Even if your statement is true, you still need to convince people. To avoid cognitive strain, start by maximizing legibility: print the text on high-quality paper to emphasize the contrast between background and characters. A text printed in bright colors is more likely to be believed than one in pale shades. If you want to appear intelligent and credible, avoid complex and pretentious language. Try making your statement memorable by putting it in verse: rhyming aphorisms are judged to be more true. If you quote a source, avoid names that are difficult to pronounce. System 2 is lazy; minimal mental effort is preferred.

Many decisions we make in our lives are guided by the impressions produced by System 1, the source of which is often unknown. Whether we believe a statement depends on a sense of cognitive ease: does it link up logically or associatively with other beliefs and preferences you hold, and does it come from a source you like and trust? The problem is that there can be other causes of cognitive ease, such as an attractive presentation of a text. It is not easy to overcome the superficial factors that evoke illusions of truth, as System 2 is lazy and usually backs the suggestions of System 1.

What happens when you experience cognitive strain?

When System 2 is engaged in effortful operations, you experience cognitive strain, and cognitive strain in turn mobilizes System 2 into a more active mode. When students of a leading university were asked to take the Cognitive Reflection Test, half of them read it in a small font printed in a shade of gray. The text remained legible, but the font caused cognitive strain. 90% of the students who read the test in a normal font made at least one mistake, compared to 35% of the students who had to deal with the poor font. The latter performed better, showing that cognitive strain mobilizes System 2, which then rejects the intuitive answer suggested by System 1.

What are the benefits of cognitive ease?

The article ‘Mind at ease puts a smile on the face’ discussed an experiment in which people were shown images of objects for a short amount of time. In some trials, the outline of the object was briefly shown before the complete image, making it easier to recognize. When the images were easier to see, the participants appeared more relaxed and happier. This suggests that System 1 associates cognitive ease with positive feelings. The same goes for words that are easy to pronounce: businesses with pronounceable names initially do better on the stock market, and investors expect stocks with fluent names to be more profitable (compare ‘Emmi’ to ‘Ypsomed’).

Repetition also induces cognitive ease. Words that are displayed more frequently are judged to mean something good, as opposed to words that are shown just once or twice. Psychologist Zajonc called this link between the repetition of an arbitrary stimulus and the mild affection people develop for it the ‘mere exposure effect’. The effect occurs unconsciously: even when the images are displayed so fast that observers are not aware of them, the repeated ones are still rated better. This is no surprise, as System 1 reacts to impressions of things of which System 2 is unaware. According to Zajonc, the effect of repetition on affection is a biological phenomenon common to all living creatures. To survive in a world full of dangers, an organism should respond cautiously to a novel stimulus: with fear and withdrawal. That caution fades if the stimulus proves to be safe; the mere exposure effect occurs because repeated exposure to the stimulus has not caused harm.

How do ease, mood and intuition relate to each other?

Psychologist Mednick came up with the Remote Associates Test (RAT). He argued that creativity is associative memory that works exceptionally well. Read the following words: ‘Gouda’, ‘fondue’, ‘dairy’. What word is associated with all three? The answer is pretty easy: ‘cheese’. Now read: ‘rocket’, ‘dive’, ‘light’. This answer, ‘sky’, is much harder to find. Studies show that a sense of cognitive ease can be induced by a very weak signal from the associative memory, which ‘knows’ that the three words share an association long before that association is retrieved. Manipulations that increase cognitive ease (a clear font, priming, pre-exposing images) increase the tendency to see the association between the words. Our mood also affects our intuition: making participants happy prior to the test doubled their accuracy, while sad participants performed no better than chance. Mood affects the operations of System 1, and being sad makes us lose touch with our intuition.

These discoveries indicate that intuition, gullibility, creativity, good mood and increased reliance on System 1 are part of a cluster. A good mood loosens the control of System 2 over performance: we become more creative and intuitive, but also more prone to logical errors and less vigilant. Just like the mere exposure effect, this process has an evolutionary explanation. Being in a good mood means things are going well, there is no threat and you can let your guard down. Cognitive ease is both a consequence and a cause of feeling happy.

The RAT tells us more about the connection between positive affect and cognitive ease. You are likely to smile when you read a coherent triad of words. Smiling and cognitive ease occur together and in turn, feeling happy leads to intuitions of coherence. Studies show that a brief emotional reaction following the display of words forms the basis of judgments of coherence.

How does our mind deal with surprises? - Chapter 6

The main function of System 1 is maintaining and updating a model of your personal world, which represents normality. This model is constructed by associations that connect ideas of events, circumstances, outcomes and actions that regularly occur. The formed connections become a pattern of associated ideas, which represents the structure of events in your life. It determines how you interpret the present and your future expectations.

Surprises are a crucial element of our mental life: surprise is the most sensitive indication of how we understand the world and what we expect from it. Expectations come in two varieties: active, conscious expectations and passive ones. Around dinner time, you actively expect your partner to return home from work; when you hear car doors slamming, you expect to see his face, and it would be a surprise if someone else walked in. An event that is passively expected is not waited for, but it also causes no surprise when it happens: although not actively expected, it is normal in that situation.

One event can make a recurrence less surprising. Imagine you run into your old friend Kim while holidaying far away from home. A few weeks later, you see her at a concert in Amsterdam. The second meeting is a more remarkable coincidence, but you are less surprised to meet her. The first one changed the idea of Kim in your mind. While System 2 knows it is an absurd idea, System 1 made it seem normal to meet Kim in unusual places. You would have been more surprised if you met another old friend at the concert, even though meeting Kim was statistically no more likely than any other old friend.

Passive expectations can turn active. Imagine you are driving on the highway and spot a car with a flat tire. Exactly one week later, you see another driver stranded with a flat tire at the same location. You will be less surprised the second time, because you now consider that spot ‘the place where people get stranded with flat tires’. Because the circumstances were similar, the second event evoked an active expectation: for a long time, you will be reminded of stranded cars every time you pass that location and will be prepared to spot another unlucky driver.

How does our mind assess normality?

How incidents come to be perceived as normal or abnormal can be explained by ‘norm theory’. If you witness two abnormal events, the second event will retrieve the first one from memory and together they will make sense. An example is the ‘Moses illusion’: “How many animals of each kind did Moses take into the ark?” Very few people notice that it was Noah who took them into the ark. The thought of animals in an ark sets up a biblical context, in which Moses is not abnormal; hearing or reading his name does not come as a surprise, and this (unconscious) associative coherence makes you accept the question. Replace Moses with Bill Gates and there would be no illusion, because his name is abnormal in that context. The brain quickly detects deviations from normality: it responds immediately when you hear a man say “My belly hurts, it must be menstrual cramps”. We have ‘norms’ for a vast number of categories, and this shared world knowledge both lets us communicate with each other and provides the background for the instant detection of abnormalities (such as menstruating men).

The role of norms in communication can be illustrated by the sentence “The big ant climbed on the neck of the tiny giraffe”. Because we have similar norms for the sizes of giraffes and ants, neither of us will picture the ant as larger than the giraffe. System 1 has access to the norms of such categories, which specify the range of plausible values as well as the most typical cases.

How does our mind find causal connections?

“Jane’s husband was late. His parents could arrive any minute. Jane was irritated.” Irritation (effect) and being late (possible cause) are linked in your network of associations. You instantly identified the husband’s lateness as the cause of Jane’s irritation, not his parents coming over. Searching for causal connections is an automatic operation of System 1 and a component of understanding a story; your (conscious) System 2 merely accepted the causal interpretation it was offered. When there is little information about what happened, System 1 constructs a coherent causal story that ties the fragments of information together.

People have impressions of causality from birth. Our minds are from an early age ready to identify agents, assign them personality traits and certain intentions. Before the age of one, we are prepared to identify victims and bullies. People have the tendency to apply causal thinking when the situation actually requires statistical thinking. Statistical reasoning derives conclusions about individual cases from ensembles and categories, which System 1 is not capable of. System 2 is able to reason statistically, but this requires training (which most people do not receive).

Why do people often jump to conclusions? - Chapter 7

One of the characteristics of System 1 is jumping to conclusions. Jumping to a conclusion is efficient if the conclusion is likely to be true, the costs of a potential mistake are acceptable and it saves a fair amount of effort and time. It is risky when the stakes are high, the situation is unfamiliar and there is a lack of time for collecting further information. In this case, it is likely to make an intuitive error, unless System 2 intervenes.

If you read a list of letters that includes the number ‘13’ drawn in the same style as the letters, you tend to read it as the letter ‘B’. Were the same shape placed in a list of numbers, you would tend to read it as ‘13’. The context affects the interpretation of each character: you jump to a conclusion and fail to detect the ambiguity. When there is no explicit context, System 1 generates a plausible one on its own. When the situation is uncertain, System 1 takes a bet, guided by experience: the current context and recent events strongly influence the interpretation, and when there are no relevant recent events, older memories take over (like the alphabet you once learned to sing). The B/13 example shows that a definite choice was made without you being aware of it. System 1 did not consider alternatives; it does not know conscious doubt. Doubt and uncertainty are the domain of System 2.

What is the confirmation bias?

Psychologist Gilbert came up with a theory of believing and unbelieving. He argued that understanding an idea starts with attempting to believe it: what would it mean if it were true? This initial attempt to believe is an automatic operation of System 1, which constructs the most plausible interpretation of the statement. Even a foolish idea (“birds drink wine”) will initially be believed, because associative memory automatically searches for connections between the two ideas that would make sense of it.

Unbelieving, according to Gilbert, is an operation of System 2. When System 2 is otherwise engaged, we tend to believe almost anything, which is why we are more easily persuaded by commercials when we are depleted and fatigued.

The operations of associative memory are linked to the ‘confirmation bias’. The question ‘Is Naomi nice?’ evokes different memories than the question ‘Is Naomi rude?’. System 2 tests a hypothesis by deliberately searching for confirming evidence. Although it is a rule of science to test a hypothesis by trying to refute it, people (even scientists) tend to seek data that are compatible with their current beliefs. The confirmation bias of System 1 uncritically accepts suggestions and exaggerates the likelihood of extreme and improbable events.

What is the halo effect?

If you like someone’s views and opinions, you are likely to also like his or her appearance and voice. The tendency to like or dislike everything about a person, including things you have not observed, is called the ‘halo effect’. This common bias plays a significant role in the way we shape our view of people and situations: it represents the world as more coherent than it really is.

When you meet a man who is approachable and pleasant to talk to, you tend to believe that he is kind to cats. You know nothing about his love for animals, but you like him and you like animal lovers, so by association you believe he likes cats, which in turn makes you like him even more. Real evidence was missing, and the gap was filled by a guess that matched your feelings. In other cases, your interpretation is influenced by the emotion attached to your first impression. A classic experiment is the following:

Mark: smart – passionate – impulsive – envious – stubborn

Eric: stubborn – envious – impulsive – passionate – smart

Who do you like more? Most people pick Mark, because the qualities mentioned first alter the meaning of the qualities mentioned later. The sequence in which we observe a person’s traits matters, because the halo effect assigns more weight to first impressions.

The halo effect can be tamed by the principle of decorrelating errors. Ask a big group of people to estimate the number of marbles in a jar. Individuals will perform poorly, but the group as a whole usually does well: some people underestimate the number, others overestimate it, but the average tends to be fairly accurate. The mechanism is that the errors the observers make are independent of one another. This error reduction only works for independent observations with uncorrelated errors. If the individuals share a bias or influence each other, the effective sample size is reduced and the group estimate becomes less precise. To get the most useful information from multiple sources, the sources must be made independent of each other; an example is the rule that witnesses are not allowed to discuss an incident with each other before giving testimony.
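The arithmetic behind decorrelated errors can be sketched with a small simulation. All numbers below (jar size, group size, noise level) are invented for illustration; each guess is modeled as the true count plus independent zero-mean noise, so averaging shrinks the error roughly by the square root of the group size.

```python
import random
import statistics

random.seed(42)

TRUE_COUNT = 850   # actual number of marbles in the jar (made up)
N_OBSERVERS = 100  # size of the group
NOISE_SD = 200     # spread of individual guessing errors

# Each guess = truth + independent zero-mean error.
guesses = [TRUE_COUNT + random.gauss(0, NOISE_SD) for _ in range(N_OBSERVERS)]

group_estimate = statistics.mean(guesses)
typical_individual_error = statistics.mean(abs(g - TRUE_COUNT) for g in guesses)
group_error = abs(group_estimate - TRUE_COUNT)

print(f"typical individual error: {typical_individual_error:.0f}")
print(f"group-average error:      {group_error:.0f}")
# With independent errors, the group error is roughly NOISE_SD / sqrt(N),
# an order of magnitude smaller than a typical individual's error.
```

If the guesses were instead nudged toward a shared bias (say, everyone anchoring on a loud first guess), the group average would inherit that bias and the advantage would vanish.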

What is the WYSIATI-rule?

Our mind treats currently available information completely differently from information it has not retrieved from memory. System 1 constructs the most plausible story from the currently activated ideas, without considering information it does not have. The coherence of the created story is its measure of success, not the quality or amount of the information it is based on. When there is very little information, which happens regularly, System 1 jumps to conclusions.

Consider this statement: “Will Carlo be a good boss? He is smart and ambitious.” The answer ‘yes’ immediately popped into your head, based on the limited information available. But what if the next words had been ‘rude’ and ‘irresponsible’? System 1 simply concluded that being smart and ambitious is positive; it will revise that conclusion if new information arrives, but it does not wait for it, and the first impression is weighted more heavily anyway.

System 1 seeks coherence and System 2 is lazy, which means that System 2 will endorse a great number of intuitive beliefs. Although System 2 is capable of checking the evidence and seeking the information needed before making a decision, System 1 still influences those decisions by providing non-stop input.

In order to understand intuitive thinking, you must realize that jumping to conclusions on the basis of very little information is an important part of it. Keep the abbreviation ‘WYSIATI’ in mind (What You See Is All There Is). System 1 is extremely insensitive to the quantity and quality of the information that leads to intuitions and impressions.

An experiment in which jury members were given one-sided evidence demonstrated its striking effect on their judgments: they also felt more confident in them. The consistency of the evidence mattered more than its completeness; indeed, the less you know, the easier it is to create a coherent story. WYSIATI facilitates the coherence and cognitive ease that make us accept a statement as true. It is why we are fast thinkers and can make sense of incomplete stories. In most cases, the created stories come close enough to reality to support appropriate actions. WYSIATI can, however, lead to biases of choice and judgment, such as overconfidence (we believe what we see and neglect the possibility that crucial evidence is missing), base-rate neglect and framing effects (presenting the same information in different ways evokes different emotions).

How are judgments formed? – Chapter 8

System 2 deals with questions both from someone else (“Did you like the food?”) and from your own mind (“Do I really need to buy this?”). In both cases the answers come from directing your attention and searching your memory. That is not how System 1 operates: it continuously monitors what is happening inside and outside the mind and unintentionally, effortlessly assesses various aspects of the situation. These ‘basic assessments’ affect intuitive judgment, because they are easily substituted for harder questions. Two other characteristics of System 1 that support the substitution of one judgment for another are the ability to translate values across dimensions (“If Chloe were as heavy as she is smart, how heavy would she be?”) and the tendency to carry out more computations than are strictly needed, including basic assessments.

An example of a ‘basic assessment’ is the ability to distinguish friend from foe in the blink of an eye. System 1 rapidly judges whether it is safe to interact with a stranger. In one glance at someone’s face, we evaluate how trustworthy and dominant (thus how threatening) that person is and whether we expect his or her intentions to be hostile or friendly. Dominance is assessed from the shape of the face (a square chin) and intentions are predicted from facial expressions. Face reading is not fully reliable: a stranger with a round chin and a (fake) smile can still have bad intentions. In today’s society, this evolutionary ability influences voting behavior. Participants were shown campaign portraits of politicians and asked to rate their competence and likability based on their faces alone. Remarkably often, the winner of the election turned out to be the candidate with the higher competence rating; ratings of likability were less predictive of the voting result. Competence was judged by combining impressions of strength and trustworthiness, but facial features are not actually predictive of how well a politician will perform; rejecting a candidate is based on the absence of attributes we consider important. Studies of the brain show that losing candidates evoked stronger negative emotional responses in viewers. Judging a candidate by the look of his or her face is an example of a ‘judgment heuristic’.

The influence of System 1 on voting varies among people. Research shows that politically uninformed and television-prone voters are more likely to fall back on the automatic and quickly formed preferences of System 1. The effect of facial competence on their voting behavior is three times greater in comparison to informed voters who watch less television.

What is intensity matching?

Questions about one’s popularity, happiness or suitable punishments have one thing in common: they refer to an underlying dimension of amount or intensity, signaled by the word ‘more’ (more popular, happier, more severe). This reflects another ability of System 1: matching intensities across dimensions. Consider: “Richard read fluently when he was five years old. What is your prediction of his future GPA?” You answer by translating from one scale to the other and choosing the matching GPA. Prediction by matching is a natural operation of System 1 and System 2 usually accepts it, but it is statistically wrong, because it fails to account for regression to the mean.
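Intensity matching can be made concrete with a short sketch. All population figures below are invented for illustration; the point is only the mechanism, mapping the reader's percentile on the reading-age scale to the same percentile on the GPA scale.

```python
from statistics import NormalDist

norm = NormalDist()  # standard normal, used for percentile conversions

# Invented population figures, for illustration only.
READING_MEAN, READING_SD = 6.5, 1.0  # age (years) at which children read fluently
GPA_MEAN, GPA_SD = 3.0, 0.5          # distribution of future GPAs

reading_age = 5.0  # Richard read fluently at five

# Intensity matching: take the same percentile on both scales.
# Reading *earlier* than average counts as better, hence the sign flip.
z_reading = (READING_MEAN - reading_age) / READING_SD
percentile = norm.cdf(z_reading)
matched_gpa = GPA_MEAN + norm.inv_cdf(percentile) * GPA_SD

print(f"reading percentile: {percentile:.0%}")   # about 93%
print(f"matched GPA:        {matched_gpa:.2f}")  # about 3.75
# This is the nonregressive prediction System 1 favors: a statistically sound
# forecast would shrink it toward GPA_MEAN in proportion to the (imperfect)
# correlation between early reading and later GPA.
```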

What does Kahneman mean with ‘mental shotgun’?

System 1 continuously monitors what is going on around you and unintentionally carries out multiple routine assessments at the same time. Other assessments are carried out only when needed; those judgments are voluntary and occur by choice. Our control over intended judgments is not precise, though: we usually compute much more than we need or want. This excess computation is called the ‘mental shotgun’. Just as a shotgun scatters pellets when aimed at a single target, System 1 cannot help doing more than System 2 asks of it.

Consider the following example. Participants listened to pairs of words and were asked to press a button as soon as they noticed that the words rhymed. Both pairs below rhyme:

BOAT – NOTE

NOTE – MOTE

The participants only heard the words, yet they were influenced by the difference in spelling: they took longer to detect the rhyme when the spellings differed. The task was to compare pronunciation only, but they compared the spelling as well. The intention to answer one question triggered another, pointless one. The combination of a mental shotgun and intensity matching explains why we have intuitive judgments about many things we know little about.

What is the role of substitution? - Chapter 9

Our minds have intuitive opinions and feelings about nearly everything; only in rare cases do we face problems to which no solution spontaneously comes to mind. We like or dislike a person before we know much about him or her, and we distrust a stranger without knowing why. We have answers to questions we do not fully understand, based on evidence we can neither defend nor explain.

How do we generate an intuitive opinion on a complex problem? If System 1 cannot find an adequate answer to a difficult question fast enough, it seeks an easier, related question and answers that one instead. This operation is called ‘substitution’; the intended question is the ‘target question’ and the easier one the ‘heuristic question’. A ‘heuristic’ is a simple procedure that helps find adequate, though often imperfect, answers to difficult questions.

Substitution can be a useful strategy for solving difficult problems, and it is sometimes consciously implemented by System 2. Other heuristics arise from the mental shotgun and are not chosen. Consider the following questions: “How satisfied are you with your current life?” “How popular will this politician be three months from now?” Before you can give an adequate answer, you would have to consider other questions. What is the definition of satisfaction? Which political developments do you expect in the next three months? Taking all of this into consideration would be impractical. Instead of producing an optimally reasoned answer, you can go for the heuristic alternative, which sometimes works well and sometimes results in a major error. Easily answered (heuristic) versions would be: “Am I satisfied right now?” and “How popular is the politician currently?”

The automatic processes of the mental shotgun and intensity matching routinely produce answers to easy questions that are related to the target question. The lazy System 2 tends to endorse the heuristic answer, although it could reject or modify it by retrieving more information. You probably will not even notice how difficult the target question was, because an intuitive answer came to mind so easily.

What is the mood heuristic for happiness?

A good example of substitution is the experiment in which participants were asked two questions: “How happy have you been lately?” and “How many dates did you go on last month?” In this order, there turned out to be no correlation between the answers: dating did not come to mind when participants were asked to rate their happiness. Other participants got the same questions in reverse order, and the outcome was totally different: the correlation was very high. Their lives did not revolve around dating (otherwise the first version would have shown a correlation), but they experienced an emotional response when asked to think about their love life. Participants with many dates were reminded of happy moments, while those who did not date felt sad. The emotion induced by the dating question was still lingering when the happiness question was asked. The happiness question requires hard thinking, but it was substituted by an easy-to-answer one instead. Any emotion-inducing question that changes a person's mood will have this effect: WYSIATI. Our current mood heavily influences the evaluation of our happiness.

What is the affect heuristic?

We tend to let our likes and dislikes shape our beliefs about the world. How convincing we find an argument is determined by our political preferences: if you favor a certain policy, you believe its benefits outweigh those of the alternatives, and if you dislike things (bungee jumping or tattoos, for instance), you are prone to believe they are very risky and offer no benefits. Conclusions dominate arguments, but the mind is not completely immune to sensible reasoning and information. Your emotional attitude and beliefs may shift when you learn that something is less risky than you thought, and information about a lower risk also makes the benefits appear greater (without them even being mentioned). This reveals another side of System 2. It actively searches memory, makes assessments and comparisons, makes choices, can resist the suggestions of System 1 and is self-critical, but when it comes to attitudes it tends to defend the emotions of System 1: the search for arguments and information is usually restricted to information consistent with existing beliefs, with no intention of truly examining them.

A list of activities and traits of System 1:

  • Distinguishing the normal from the surprising
  • Biased to believe and confirm
  • Matching intensities across scales
  • Mental shotgun
  • Halo effect
  • Heuristics: substituting a difficult question for an easier question
  • Prospect theory: being more sensitive to changes than to states
  • Loss aversion: responds more strongly to a loss than to a gain

What are heuristics and biases? - Part 2

What is meant by ‘the law of small numbers’? – Chapter 10

A study of cancer diagnoses in the United States showed a pattern: incidence was lowest in sparsely populated, rural and Republican areas. What you make of this information, through searching memory and formulating hypotheses, is an operation of System 2, but System 1 was also involved: System 2 depends on the suggestions and facts retrieved from associative memory. You probably focused on the rural aspect and did not link the low incidence to Republican policies; it seems to make sense to attribute it to the rural environment, with fresher food and cleaner air. However, incidence was also highest in rural, sparsely populated areas, which you might link to poverty, poor access to healthcare or smoking. Clearly, living in a rural area cannot explain both the lowest and the highest numbers. The key factor was not that the counties were Republican or rural, but that they had small populations. This example shows the complex relation between statistics and our mind.

System 1 excels in one form of thinking: it effortlessly and automatically detects causal links between events. System 1 fails to deal with merely statistical information, which affects the probability of the outcome but not the cause of the event.

Imagine a jar filled with balls: half of them black, half yellow. Without looking, you draw 4 balls from the jar, record the number of black balls and throw them back, and you repeat this many times. Summarizing the outcomes, you will find that ‘2 black, 2 yellow’ occurs six times as often as ‘4 black’ (or ‘4 yellow’). This is a mathematical fact. Now imagine the US population as balls in a jar, with some balls marked CD (cancer diagnosis); each area of the country draws a sample from the jar, and rural areas draw the smallest samples. Extreme outcomes (very many or very few diagnoses) will therefore be found most often in sparsely populated areas. Fact: the number of cancer diagnoses varies across areas. Explanation: extreme outcomes are more likely in smaller samples. This explanation is statistical, not causal: the small population of an area does not prevent or cause cancer. There is nothing to explain; the incidence is not truly higher or lower than normal, it just looks that way due to an accident of sampling. The differences in sample size produced ‘artifacts’: observations generated entirely by an aspect of the research method. Outcomes of large samples are more trustworthy, which is known as the law of large numbers. The ‘sparsely populated’ part probably did not seem relevant to you, and it takes some effort to realize that large samples are more precise and that extreme outcomes are more frequent in small samples. Even researchers have a poor intuitive understanding of sampling effects.
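The six-to-one claim for the jar can be checked analytically, C(4,2)/2^4 = 6/16 versus 1/2^4 = 1/16, or with a quick simulation that repeats the four-ball draw many times (the draw count is arbitrary):

```python
import random
from collections import Counter

random.seed(1)
DRAWS = 100_000  # number of repeated 4-ball draws

counts = Counter()
for _ in range(DRAWS):
    # Each ball drawn is black with probability 1/2, independently.
    blacks = sum(random.random() < 0.5 for _ in range(4))
    counts[blacks] += 1

p_mixed = counts[2] / DRAWS      # exact probability: C(4,2) / 2**4 = 0.375
p_all_black = counts[4] / DRAWS  # exact probability: 1 / 2**4 = 0.0625

print(f"P(2 black, 2 yellow) ~ {p_mixed:.3f}")
print(f"P(4 black)           ~ {p_all_black:.3f}")
print(f"ratio                ~ {p_mixed / p_all_black:.1f}")  # close to 6
```

The same mechanism drives the cancer statistics: a county drawing a small "sample" of residents is simply more likely to land on an extreme proportion.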

Research psychologists see sampling variation as an unpleasant obstacle in their research projects. If the hypothesis to be tested is “The vocabulary of seven-year-old girls is greater than the vocabulary of seven-year-old boys”, you must use a large enough sample to avoid wasting effort and time. In the whole population, the hypothesis is true: girls in general have a more developed vocabulary. Individual boys and girls vary greatly though, so you could happen to select a sample in which the boys score higher or no difference is detected. Picking too small a sample puts you at the mercy of sampling luck. It is possible to calculate the risk of error for any sample size, but psychologists tend to skip this procedure and rely on their often flawed judgment.

Psychologists often make the mistake of choosing very small samples, resulting in a risk as high as 50% of failing to confirm hypotheses that are actually true. A likely explanation is that these researchers have intuitive misconceptions about the extent of sampling variation. Instead of picking a sample size by computation, researchers tend to trust their intuition and tradition. A study among researchers, including statisticians, showed that the majority made sample size mistakes. In their article ‘Belief in the Law of Small Numbers’, Kahneman and Amos argue that researchers should be more suspicious of their statistical intuitions and recommend replacing impressions with computations.
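The cost of a too-small sample can be made concrete with a simulation. In this sketch the effect size, the sample sizes and the trial count are illustrative assumptions, not numbers from the book:

```python
import random

random.seed(42)  # reproducible illustration

def detection_rate(n: int, effect: float = 0.3, trials: int = 2000) -> float:
    """Fraction of simulated studies in which the girls' sample mean
    actually exceeds the boys', given a true advantage of `effect`."""
    hits = 0
    for _ in range(trials):
        girls = sum(random.gauss(effect, 1.0) for _ in range(n)) / n
        boys = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
        hits += girls > boys
    return hits / trials

print(detection_rate(10))   # small sample: the true difference often fails to show
print(detection_rate(500))  # large sample: it almost always shows
```

With these assumed numbers, the small-sample studies miss the real difference roughly a quarter of the time, purely through sampling luck.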

What is the bias of confidence over doubt?

In a poll of 400 senior citizens, 60% approve of the president’s actions. You will probably conclude that ‘older people support the president’ and not pay much attention to the sample size. Your conclusion would be the same for a different sample size, unless it were an extreme number like 4 or 40 million. This shows that we are not ‘adequately sensitive to sample size’. We automatically focus on the story, not on the reliability of the data. The principle of WYSIATI (‘What You See Is All There Is’) indicates that System 1 cannot distinguish degrees of belief; it does not tend to doubt. It generates coherent stories and associations that make a statement seem true. System 2 has the ability to doubt, although doubting requires effort. The law of small numbers is an example of the bias that makes us favor certainty over doubt. The belief that a small sample represents the population from which it is drawn is part of the tendency to exaggerate the coherence and consistency of what we witness. This exaggerated faith in the value of a few observations is related to the halo effect. System 1 creates a story from fragments of evidence, running ahead of the facts. It generates a representation of reality that makes too much sense.

The associative machine searches for causes: how did something come to be? The statistical approach focuses instead on what could have happened: nothing singled out this particular case, it was selected by chance. Our preference for causal thinking makes us susceptible to serious mistakes in evaluating the randomness of truly random events.

Consider the following example. Six women give birth to a baby. The sequence of male and female babies is random, the births are independent of each other and the number of male/female babies born earlier that day has no effect on the gender of the next baby. Are the following three possible sequences equally likely?

MMMFFF

FFFFFF

MFMMFM

Your intuitive answer ‘no’ is wrong. Because the outcomes M and F are equally likely and the births are independent, any possible sequence is as likely as any other. Even knowing this, only the last option seems random and is believed to be more likely. We seek patterns and believe in a coherent world, in which regularities are the result of intention or mechanical causality: they do not occur accidentally. We refuse to believe that regularities can be the result of randomness. This misconception, and the ease with which we see patterns where there are none, can have serious consequences. The rocket bombing of London in World War II was believed not to be random, but a statistical analysis proved that it was.
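The claim about the three sequences is easy to check by enumeration; a short Python sketch:

```python
from itertools import product

# All possible sequences of six births; each is equally likely.
sequences = list(product("MF", repeat=6))
p_any_one = 1 / len(sequences)  # (1/2)**6 = 1/64 for MMMFFF, FFFFFF and MFMMFM alike

# What misleads intuition: as a *category*, 'three of each' has many orderings,
# but any single ordering is no likelier than FFFFFF.
three_three = sum(1 for s in sequences if s.count("M") == 3)
print(len(sequences), p_any_one, three_three)  # 64 sequences, 20 of them with 3 M and 3 F
```

Intuition tracks the 20-to-1 category count, not the equal probability of individual orderings.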

A study of misperceptions of randomness in basketball had a surprising outcome. The so-called ‘hot hand’ is considered a fact by coaches, fans and players. Multiple successful shots in a row lead to the causal judgment that the player is ‘hot’ and likely to make more shots, and teammates pass more often to this player. However, researchers found that the sequence of missed and successful shots is random. The hot hand is just a cognitive illusion. The public response to this finding was disbelief, due to the strong tendency to see patterns in randomness (the illusion of pattern). This illusion affects our lives in various ways.

What is the ‘anchoring effect’? – Chapter 11

The ‘anchoring effect’ is the phenomenon that occurs when you consider a particular value for an unknown quantity before estimating that quantity. Imagine you are asked to spin a wheel marked 0-100, write down the number you got and then answer the question “What do you think is the percentage of…?” Your estimate will be close to the number you had to consider. If the wheel stopped at 15, your answer might be 25%. If you get the question “Was Mother Teresa 112 years old when she died?”, your guess will be significantly higher than it would be if the anchoring question referred to the age of 40. Another example is the asking price of a house influencing what you consider a reasonable price. The anchoring effect is very important and common in our everyday lives: our judgments are influenced by uninformative numbers.

Anchoring effects are produced by two mechanisms. One form of anchoring is an operation of System 2: deliberate adjustment. The other form is an automatic operation of System 1: priming. The anchoring-and-adjustment heuristic is a reasonable strategy for estimating uncertain quantities: start from the anchoring number, assess whether it is too low or too high and gradually adjust your estimate. The adjustment ends when people reach a certain level of uncertainty, which is usually too soon. An example of insufficient adjustment is the situation of a 16-year-old listening to loud music and annoyed parents. The indulgent teenager will turn the music down from extremely loud (a high anchor) to a ‘normal’ volume, but probably fails to adjust it enough. Adjusting means deliberately searching for reasons to move away from the anchor, which requires effort. A mentally depleted person adjusts less, staying nearer to the anchor. Insufficient adjustment is a failure of a lazy or weak System 2.

The priming effect of anchoring is explained by the same automatic operation of System 1 as suggestion. Suggestion is a priming effect that selectively evokes compatible evidence. Low and high numbers activate different ideas in memory. A high temperature makes you retrieve summery memories, which leads to a biased estimate of the annual temperature. Bringing something to mind is sometimes enough to make you feel, see or hear it. The question “Was Mandela younger or older than 134 when he died?” makes your associative machine generate the impression of a very old man, although you immediately know that Mandela did not live that long. System 1 makes sense of statements by attempting to make them true; it tries to create a world in which the anchor is the truth.

What is meant by the anchoring index?

Anchoring is one of the few psychological phenomena that can be measured. One group of participants is asked questions with a high anchor (135 years) and another group questions with a low anchor (30 years). The difference between the anchors is 105 years. The difference between the mean estimates produced by the two groups can also be measured; imagine it being 55 years. The ratio of the two differences (55/105) is called the anchoring index: about 52%. This is a common value, seen in various cases. The closer the index is to 100%, the closer people stay to the anchor.
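The index is a simple ratio; a Python sketch. The group means of 85 and 30 are illustrative, chosen only so that their difference matches the 55 years in the text; the second call uses the judges' dice experiment described later in this chapter:

```python
def anchoring_index(high_anchor, low_anchor, mean_high, mean_low):
    """Difference between the groups' mean estimates divided by the
    difference between the anchors, as a percentage."""
    return 100 * (mean_high - mean_low) / (high_anchor - low_anchor)

# Anchors of 135 vs 30 years; mean estimates assumed to be 85 vs 30.
print(round(anchoring_index(135, 30, 85, 30)))  # 52

# Judges' dice experiment: anchors 9 vs 3, sentences 8 vs 5 months.
print(round(anchoring_index(9, 3, 8, 5)))       # 50
```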

Anchoring effects are particularly strong in decisions regarding money (how much we are willing to pay for something).

Anchoring seems reasonable in some situations, for instance when the questions asked are difficult. If you know nothing about the topic, you could assume that the anchor is close to the truth. Remarkably, anchoring research shows that obviously random anchors can be just as effective as potentially informative ones. The anchoring effect therefore does not occur because people believe the anchors are informative. How powerful random anchors can be was shown in the following experiment. Experienced judges were first informed about someone who had been caught stealing and then had to roll a pair of rigged dice that would come up 9 or 3. The judges were then asked whether they would sentence the thief to a prison term shorter or longer than the dice outcome (in months), and finally what exact sentence they would give. The judges who rolled 9 sentenced the thief to 8 months on average; the judges who rolled 3 sentenced the thief to 5 months, an anchoring effect of 50%.

What are the uses and abuses of anchors?

Anchoring effects are very common in our daily lives, and some people try to exploit our gullibility. The use of arbitrary rationing as a marketing trick can be explained by anchoring. An example is a store promoting a price reduction of 15%, with a limit of 10 products per person on some days and no limit on other days. Customers bought twice as many products on the limit days. This is not solely explained by anchoring: rationing also suggests that the products are selling out quickly, making people want to stock up. A similar strategy is used by house sellers. Making the first move by setting a listing price, the anchor, gives an advantage in the negotiation phase. In order to resist the powerful anchoring effect, potential buyers should activate System 2: focus their attention and search their memory for counterarguments. You can focus your attention on the minimal offer or on the costs of not reaching an agreement.

System 2 is susceptible to the biasing influence of anchors, which make certain information easier to retrieve from memory. System 2 has no knowledge of, and no control over, the effect of random anchors. People who deny that exposure to random or nonsensical anchors (Mandela’s death at age 135) could have influenced their estimates are wrong. Priming and anchoring effects are similarly threatening, because we are unaware of the way they constrain and guide our thinking, even if we are aware of the anchor itself. The advice: assume that any number you see has an anchoring influence on you, and resist that influence by mobilizing your System 2, especially if the stakes are high.

What is the availability heuristic? - Chapter 12

What do people do when they want to estimate the frequency of a category (for example, divorces among professors, or poisonous snakes)? Relying on the ease with which instances come to mind during a memory search is called the ‘availability heuristic’. This heuristic is both an automatic operation (System 1) and a deliberate problem-solving strategy (System 2).

The availability heuristic substitutes one question for another, which results in biases (systematic errors). Examples of factors that are potential sources of bias are:

  • Conspicuous events attract attention and are easy to retrieve from memory. Instances of divorces among famous actors will come easily to mind, making you prone to exaggerate the frequency of divorces among famous actors.
  • Dramatic events temporarily increase the availability of their category. Hearing about a fatal bicycle accident on the news temporarily influences your opinion about the safety of cycling.
  • A personal experience is more available than an incident that happened to someone else. Pictures are more available than words and vivid examples are more available than statistics.

It requires a fair amount of effort to resist the many potential availability biases: you have to reconsider your impressions and intuitions by asking yourself questions like “Is my impression that house robbery is a major national problem due to my neighbors’ houses recently being robbed?” or “Is it possible that I feel no need to eat healthily because none of my friends got sick last year?”

A well-known study of availability indicates that being aware of your own biases can contribute to a peaceful marriage, and probably to other joint projects. Surveys asking spouses about their own contributions to housekeeping and to causing arguments demonstrated that people remember their own contributions more clearly. The same goes for people working on group projects: most members feel they have put in more effort than the others.

An experiment carried out by psychologist Schwarz helped us gain a better understanding of the availability heuristic. He wanted to assess how our impressions of the frequency of a category are influenced by being asked to list a certain number of instances. Such a task can influence the judgment in two ways: through the number of instances retrieved and through the ease with which they come to mind. The first instances come to mind easily, but the fluency of retrieval drops for the later ones. People who list eight instances of indecisive behavior will rate themselves as less indecisive than people who list only three, and people who are asked to list eight instances of decisive behavior will think of themselves as rather indecisive. Self-ratings are dominated by the ease with which instances come to mind: the fluency of retrieval counts for more than the number of instances retrieved. Numerous experiments have yielded similarly paradoxical results: people who are asked to come up with more arguments to support a choice are less confident in it, people who had to list many advantages of a gadget were afterwards less impressed by it, and students who listed more ways to improve a course rated it higher.

The results are not always paradoxical; sometimes content trumps ease of retrieval. You only truly understand a pattern of behavior if you know how to reverse it, so under which conditions does the reversal take place? During the task, the ease with which instances come to mind changes: the first instances come easily, then it gets harder. A gradual drop in fluency is expected, but the drop between the low number and the high number of instances turns out to be steeper than expected. You tend to think: ‘if it is so much harder than expected to come up with more instances of modesty, then I’m probably not very modest’. This inference rests on a surprise (worse-than-expected fluency). The availability heuristic thus turns into an ‘unexplained unavailability’ heuristic. The heuristic can be disrupted by providing an explanation for the diminished fluency, such as background music, the background color, curved text boxes or other irrelevant factors. By eliminating the surprise, the low fluency stops influencing the judgment.

System 1 sets expectations and generates surprise when those expectations are violated. System 1 also retrieves possible causes of a surprise, usually by finding one among recent surprises. System 2 is capable of resetting the expectations of System 1, so that a normally surprising event appears normal. Going by the ease with which instances are retrieved is a System 1 heuristic, which is replaced by a focus on content as soon as System 2 becomes more engaged. Someone who lets System 1 guide him or her is more susceptible to availability biases than someone who is more vigilant. Conditions in which someone is more affected by the ease of retrieval than by the retrieved content:

  • Scoring low on a depression scale
  • Being in a good mood
  • Being simultaneously engaged in another demanding task
  • Being or feeling powerful

How do availability, risk and emotion relate to each other? - Chapter 13

Economist Kunreuther found that availability effects help explain the pattern of insurance purchases and protective action after disasters. Victims are worried after a disaster, making them more eager to purchase insurance and adopt measures of prevention. This is temporary: once the memories start to fade, so does the worry. These recurrent cycles of disaster, worry and growing complacency can be explained by the dynamics of memory.

A classic example of an availability bias is the survey carried out to analyse public perceptions of risks. Participants were asked to consider pairs of causes of death, such as accidents and strokes, or asthma and diabetes. They had to indicate the more frequent cause of each pair and estimate the ratio of the two frequencies. Their judgments were then compared to the statistics of the time. Some of the findings were:

  • 80% of participants judged accidental death more likely than death by stroke, although strokes cause nearly twice as many deaths.
  • Tornadoes were considered more deadly than asthma, although asthma kills 20 times more people.
  • Death by accident and death by disease were seen as equally likely, although death by disease is 18 times more frequent.

It was clear that media coverage influenced the estimates of causes of death. Media coverage is biased towards sensationalism and novelty. The media shape the public interest and are shaped by it. Unusual causes of death receive disproportionate attention and are therefore seen as less unusual than they actually are. The world in our mind does not equal the real world. Expectations about the frequency of events are warped by the emotional intensity and prevalence of the information we are exposed to.

The estimates of causes of death represent the activated ideas in associative memory and are an example of substitution. Research also shows that the ease with which ideas of several risks come to mind and the emotional responses to these risks are connected. Terrifying images and thoughts easily come to mind, and vivid thoughts of danger induce fear. Psychologist Slovic introduced the affect heuristic: people rely on their emotions when making decisions and judgments. Do I hate or love it? In many aspects of life, our choices and opinions express our feelings. The affect heuristic is an example of substitution: the difficult question (What do I think about this?) is replaced by the easier question (How do I feel about this?). Slovic relates his finding to the finding of neuroscientist Damasio: when making decisions, our emotional evaluations of outcomes, the bodily state and the avoidance and approach tendencies connected to them all play a key role. Someone who does not show the appropriate emotions before making a decision also has an impaired ability to make reasonable decisions.

Slovic asked participants for their opinions about several technologies. They had to list the risks and benefits of each technology. He found an implausibly high negative correlation between the estimated level of risk and the estimated level of benefit: when people liked a technology, they listed great benefits and minimal risks. After the first task, the participants had to read a number of arguments in favor of the technologies. Some read arguments about the benefits; others read arguments stressing the low risks. These messages changed the emotional appeal of the technologies. The participants who were given the beneficial arguments not only liked the technology more, but also considered it less risky, without having seen any evidence about the risks. Likewise, the participants who read about the mild risks gained a more positive view of the benefits.

According to Slovic, people are guided by emotion instead of reason. Experts show a lot of the same biases as ‘normal people’, but their preferences and judgments about risks differ from those of others. Differences between the public and experts reflect a conflict of values. Experts usually measure risks by the number of years or lives lost. The public differentiates between ‘bad and good deaths’. The public has a richer idea of risks than the experts, who only count cases. Slovic argues that the assessment of a risk depends on the chosen measure. Measurement and risk are both subjective.

Legal scholar Sunstein disagrees with Slovic. He argues that objectivity can be achieved through expertise, careful deliberation and science, and believes that biased responses to risks are a source of misplaced priorities in United States policy. The system of regulation should reflect objective analysis, not irrational public fears. Citizens are prone to cognitive biases, which in turn influence regulators. Jurist Kuran calls this process of biases turning into policy the ‘availability cascade’. Such a cascade can start with media coverage of a relatively minor incident and lead, via public worry, to government action. The Alar scare demonstrates this: a huge public overreaction to a chemical sprayed on apples, which turned out to pose a minimal health risk, led to the product being taken off the market.

Dealing with small risks reveals a limitation of the mind: we either ignore them entirely or give them far too much weight. The amount of concern is not adequately sensitive to the probability of harm: you imagine the dramatic story in the paper (the numerator) and do not think about all the safe cases (the denominator). A parent anxiously waiting for a child who is late from school cannot keep horrible visions of disaster from coming to mind, although there is almost nothing to worry about. Sunstein calls this ‘probability neglect’. The combination of availability cascades and probability neglect leads to major exaggeration of minor threats.

Nowadays, terrorists are a significant source of availability cascades. Terror attacks cause relatively few deaths, for instance compared to the number of traffic deaths. The difference lies in the availability of the risk: the frequency and ease with which instances are retrieved from memory. Extensive media coverage and horrible images cause public concern. Terrorism speaks directly to System 1, and it is hard to reason yourself into calmness.

Kahneman shares Sunstein’s discomfort with the influence of availability cascades and irrational concerns on public risk policy. But he also agrees with Slovic’s view that policy makers should not ignore public concerns, whether they are reasonable or not. The public must be protected from fear, not merely from real dangers. Risk policies should combine the emotions of the public with the knowledge of experts.

What is the representativeness heuristic? – Chapter 14

Imagine drawing one ball from a jar. To determine whether the ball is more likely to be black or yellow, you need to know how many balls of each color there are in the jar. The proportion of balls of a specific color is called a ‘base rate’.

If you have to guess the graduate specialization of university student Patrick by ranking a list of fields in order of likelihood, you quickly realize that the relative size of enrollment in these fields is important. In the absence of more information about Patrick, you will go by the base rates and predict that Patrick is more likely to be enrolled in law than in medicine or art history, because there are more law students overall. We use base-rate information when no further information is available.

Now you are asked to read a description of Patrick’s personality. “He is smart, socially skilled, a great listener and he cares for others. He is very capable of making decisions under pressure and working in a team. He is responsible, disciplined and committed. His handwriting is terrible.” Again, you are asked to guess Patrick’s specialization from a list of fields. This question requires retrieving a stereotype of graduate students in the fields mentioned. The description fits the stereotype of a medical student.

The task of ranking the fields is hard and requires organization and discipline, which are operations of System 2. The clues in the description activated an association with a stereotype, which is an automatic operation of System 1. The accuracy of the description and the base rates of the fields should matter, but they were most likely ignored. Focusing exclusively on the similarity of a description to stereotypes is called ‘representativeness’.

The description of Patrick causes a clash between base rates and representativeness. When a statistician was asked to carry out a similar task and guess Patrick’s profession, he answered ‘medical doctor’: after reading the description he neglected the relevance of the base rates. An experiment among psychology students also resulted in substitution: the easier question about similarity (a judgment of representativeness) substituted for the difficult question about probability. Ignoring base rates and not paying attention to the quality of the evidence in probability tasks will inevitably lead to serious mistakes.

Statisticians and logicians disagree about the meaning of ‘probability’. Many consider it a measure of subjective degree of belief, and each camp has formulated its own precise, competing definition. For non-experts, it is a vague notion, roughly interchangeable with ‘likelihood’ and associated with plausibility, surprise and uncertainty. We more or less know what we mean when we use the word, so this vagueness is usually not troublesome. Questions about likelihood or probability, however, trigger a mental shotgun: they evoke answers to easier questions. An example of such an easy answer is the automatic assessment of representativeness: System 1 unintentionally produces an impression of similarity.

Other examples of the representativeness heuristic are “She won’t become a good doctor with all those piercings” and “He will receive the most votes; you can see he is a great leader.” Although they are common, predictions by representativeness are not statistically optimal, as the following story demonstrates. Sports scouts tend to predict the potential success of players by their look and build. The manager of one club overruled his scouts by selecting players on the basis of statistics of past performance. The players he chose were less expensive, yet performed excellently.

What are considered sins of representativeness?

Intuitive impressions produced by representativeness are usually more accurate than a random guess would be: a person who acts friendly usually is friendly, and in most cases there is some truth to a stereotype. In other cases, the stereotype is wrong and the representativeness heuristic is misleading, particularly when it causes the neglect of contradictory base-rate information. Relying solely on the heuristic, even if it is somewhat valid, goes against statistical logic. One sin of representativeness is the excessive willingness to predict the occurrence of low base-rate (unlikely) events.

Imagine seeing a man reading The Financial Times in a London park. Which is the better guess about him: “He has a PhD” or “He does not have a college degree”? Representativeness makes you want to pick the first statement, but the second should be seriously considered, because far more non-graduates than PhDs visit parks.

Base-rate information will not always be neglected when more information about the topic is available. Research shows that many people are influenced by explicitly provided base-rate information, although the information about the specific case normally trumps mere statistics.

System 1 and System 2 are both to blame when a false intuitive judgment is made: System 1 generated the intuition and System 2 endorsed it and expressed it as a judgment. System 2 fails through either ignorance or laziness. Some people ignore base rates because they believe them to be irrelevant when individual information is available (ignorance); others make the same mistake because they are not focused on the task (laziness).

Insensitivity to the quality of evidence is another sin of representativeness, related to the WYSIATI rule of System 1. The description of Patrick activated your associative machinery, but the description may not be accurate. Even if you were explicitly told that it is not very trustworthy, parts of it were convincing enough. System 1 automatically processes the available information as if it were true, unless it immediately rejects it (for instance because it comes from someone you distrust). When you doubt the quality of the evidence, you should keep your probability judgments close to the base rate, which is an effortful exercise of discipline. The right answer to the student Patrick puzzle is to stay close to your prior beliefs, slightly reducing the high probabilities of well-enrolled fields and slightly raising the low probabilities of rare fields. The little evidence you received about Patrick cannot be trusted, so your estimates must be dominated by the base rates.

How can you discipline intuition?

Your expectation that it will snow tomorrow is your subjective degree of belief. You should not believe everything that comes to mind: discipline your intuition. The logic of probability should constrain your beliefs. If you believe there is a 70% chance of snow, you must also believe there is a 30% chance it will not snow, and you must not simultaneously believe there is a 20% chance of snow.

The rules for puzzles like the student Patrick one follow from Bayesian statistics. The rule of Thomas Bayes specifies how prior beliefs (for example, base rates) should be combined with the diagnosticity of the evidence. Two ideas are important to remember: base rates matter, and intuitive judgments of the diagnosticity of evidence are frequently exaggerated. The combination of associative coherence and WYSIATI tends to make us believe our own fabricated stories. You can discipline your intuition the Bayesian way by:

  • Anchoring your judgment of the probability of an outcome on a plausible base rate
  • Questioning the diagnosticity of the evidence.
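The two rules above can be combined in odds form: posterior odds equal prior odds times the likelihood ratio of the evidence. A minimal sketch; the 10% base rate and the likelihood ratio of 3 are illustrative assumptions, not figures from the book:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability: convert the prior to odds, multiply by the
    likelihood ratio of the evidence, convert back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose 10% of graduate students are in medicine (assumed base rate)
# and Patrick's description is judged 3 times as likely for a medical
# student as for anyone else (assumed diagnosticity).
print(round(bayes_update(0.10, 3.0), 2))  # 0.25
```

Even moderately diagnostic evidence leaves the estimate far from certainty, anchored on the base rate.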

What is meant by the ‘less-is-more’ pattern? - Chapter 15

A famous and controversial experiment is known as the ‘Linda problem’. Amos and Kahneman made it up to demonstrate the role of heuristics in judgment and their incompatibility with logic. People who read the description of Linda instantly knew which university she had attended, because it was known for its politically engaged, radical students. Participants were asked to read a list of possible scenarios regarding Linda and rank them by representativeness and by probability. They agreed that one scenario (“She is a feminist bank teller”) resembles Linda more than another (“She is a bank teller”). The stereotype of bank tellers does not include feminism, but adding that detail produces a more coherent story. The twist lies in the judgments of probability, because there is a logical relation between the two scenarios: every feminist bank teller is a bank teller, so the probability of Linda being a feminist bank teller must be lower than the probability of her being merely a bank teller. Specifying a possible event in greater detail can only lower its probability. The problem thus creates a conflict between the logic of probability and the intuition of representativeness. The participants ranked ‘feminist bank teller’ above ‘bank teller’ both in their ranking by resemblance and in their ranking by probability.

The scenarios ‘bank teller’ and ‘feminist bank teller’ were placed on the list as numbers 6 and 8, close to each other. Kahneman and Amos expected that participants would notice the relation between them and that their rankings would follow logic. Surprisingly, they ranked ‘feminist bank teller’ as more likely: representativeness had won, which is considered a failure of System 2. Nearly 90% of the participants violated the logical rule. Failing to apply an obviously relevant logical rule is called a ‘fallacy’. It is called a ‘conjunction fallacy’ when people judge a conjunction of two events (here: feminist and bank teller) to be more probable than one of the events (bank teller) in a direct comparison.
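The logical rule the participants violated can be demonstrated on any population; a sketch with a made-up toy population:

```python
# Hypothetical toy population; the attribute values are illustrative.
people = [
    {"bank_teller": True,  "feminist": True},
    {"bank_teller": True,  "feminist": False},
    {"bank_teller": False, "feminist": True},
    {"bank_teller": False, "feminist": False},
]

tellers = sum(p["bank_teller"] for p in people)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in people)

# The conjunction can never outnumber one of its parts, so
# P(feminist AND bank teller) <= P(bank teller) in every possible population.
assert feminist_tellers <= tellers
print(tellers, feminist_tellers)  # 2 1
```

Whatever population you substitute, the assertion holds, which is exactly why ranking the conjunction as more probable is a fallacy.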

In the short version of the Linda problem, participants were asked which of two alternatives is more likely: “She is a bank teller” or “She is a bank teller and a feminist”. Most of them gave the right answer: “bank teller”. The difference with the long version is that there the two outcomes were separated by an intervening seventh scenario on the list: they were judged independently, and no comparison was made between them. The short version forced a direct comparison, which mobilized System 2 and prevented the fallacy.

Hsee’s dinnerware study demonstrates how striking the less-is-more pattern can be. He asked participants to price dinnerware sets. One group was shown a display that allowed a comparison between two sets (set X: 40 pieces, of which 9 are broken, and set Z: 24 pieces). This is called a ‘joint evaluation’. Two other groups were each shown only one set, a ‘single evaluation’. Which set is worth more? Set X contains the same pieces as set Z plus seven extra intact pieces, so it must be valued higher. The participants in the joint evaluation group indeed priced set X higher, but the participants in the single evaluation groups valued set Z much higher. This happened because the average value of the pieces is much lower for set X due to the broken pieces, and single evaluation is dominated by the average. Hsee calls this pattern ‘less is more’: removing the broken pieces from set X would improve its value, just as adding a highly valued item would.

The incidence of the conjunction fallacy can be reduced by asking an easier question. “What percentage of the participants…?” is much harder than “How many of the 100 participants…?”: 100 people are easy to imagine, while the percentage question does not make you think of individuals.

These experiments teach us that System 2 is not very alert and is often lazy.

Why do causes trump statistics? – Chapter 16

Read the following problem. What is your intuitive answer?

“Last night, a bus was involved in a traffic accident. Two bus companies, the Yellow and the Brown, operate in the area. 80% of the buses are Yellow and 20% are Brown. A witness declared that the bus was Brown. The reliability of the witness in similar circumstances was tested: the witness correctly identified each color 75% of the time and failed 25% of the time. What is the probability that the bus was Brown?”

There are two pieces of information: a base rate and the not fully reliable testimony of the witness. Without the witness, the probability of the bus being Brown is 20%: the base rate. If the bus companies had been equally large, the base rate would be uninformative (50/50) and only the reliability of the witness would matter. Most people ignore the base rate and answer 75%. The two sources of information should instead be combined using Bayes’ rule.
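Combining the base rate with the witness's reliability via Bayes' rule can be sketched in a few lines, using the numbers from the problem above:

```python
# Bayes' rule for the bus problem: combine the 20% base rate with a
# witness who identifies colors correctly 75% of the time.
p_brown, p_yellow = 0.20, 0.80
p_says_brown_if_brown = 0.75   # witness correct
p_says_brown_if_yellow = 0.25  # witness mistaken

# P(Brown | witness says Brown), by Bayes' rule
numerator = p_says_brown_if_brown * p_brown               # 0.15
evidence = numerator + p_says_brown_if_yellow * p_yellow  # 0.15 + 0.20 = 0.35
posterior = numerator / evidence

print(round(posterior, 2))  # 0.43
```

So the witness's testimony raises the probability of Brown from the 20% base rate to roughly 43%, well short of the intuitive answer of 75%.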

Now consider the following story, in which the base rate is presented differently: “The two bus companies operate the same number of buses, but Yellow buses are involved in 80% of traffic accidents.” The two versions are mathematically the same, but psychologically different. People who read the first story tend to ignore the base rate, because they don’t know what to do with it: the number of Yellow and Brown buses in the area does not explain how a bus driver gets into an accident. People who read the second version give more weight to the base rate, and their judgments come closer to the right answer. Drivers of Yellow buses cause four times as many accidents, which leads to the instant conclusion that Yellow drivers are dangerous drivers. You have formed a stereotype of Yellow drivers, which fits easily into a causal story.

The bus example demonstrates two types of base rates: statistical base rates (facts about a population that are not relevant to the individual case) and causal base rates. Statistical base rates are often underweighted or even neglected when specific information about the individual case is available. Causal base rates are treated as information about the concrete case and are easily combined with other case-specific facts. The causal version of the bus problem works as a stereotype: a statement about a group that is accepted as a fact about individual members. System 1 represents categories by prototypical exemplars and norms; our memory holds a representation of one or more typical members of a category (cats, blenders). Such a representation is called a stereotype when the category is social. In the bus problem, stereotyping the Yellow drivers improved the accuracy of judgment. In other contexts, like profiling or hiring, stereotyping is seen as morally (and legally) wrong, and causal base rates are rejected. However, rejecting valid stereotypes results in judgments that are not optimal: it may be politically correct, but it is not costless.

The concept of causal base rates was explored by psychologist Icek Ajzen, whose experiment manipulated them. He showed participants short descriptions of university students who had taken a test and asked them to judge the probability that an individual student had passed it. The manipulation: one group was told that 75% of the class had passed the test, the other group that only 25% had. A base rate of 25% implies that the test must have been extremely difficult, and the difficulty of the test is a causal factor in every individual student’s result. The participants were indeed very sensitive to this causal base rate: the probability of any given student passing was judged much higher in the successful class. A merely statistical base rate had far less influence on their judgments. This illustrates that System 1 performs poorly at statistical reasoning but is good at dealing with stories in which the elements are causally connected.

The bus driver and the difficult test experiments demonstrate the two kinds of inference drawn from causal base rates: stereotypical traits of individuals, and a significant feature of a situation that influences individual outcomes. The well-known ‘helping experiment’ illustrates that people won’t draw inferences from base-rate information that conflicts with their other beliefs. It also suggests that teaching psychology is hard. Participants sat in individual booths and spoke over an intercom about their lives. They had to talk in turns; the microphones of the others were switched off while someone was talking. Among the participants was a confederate who followed instructions from the researchers. He faked a seizure, asked for help and said he was going to die. Then his microphone was switched off, as his speaking time was over. The participants knew someone needed help and that there were others who could provide it. Only four out of fifteen responded instantly to the cry for help; five came out when it would have been too late, and six never left their booth. People apparently feel relieved of responsibility when they know that others heard the same appeal for help. This is surprising, because we tend to see ourselves as decent people who would immediately help someone in need. That expectation proves wrong, which is something psychology teachers try to make their students aware of. It is, however, not easy to (negatively) change their minds about human nature and our behavior in such situations.

Borgida and Nisbett expected that students would be able to recite the findings of the helping experiment, but doubted it would actually change their beliefs about human nature. They showed students videos of short interviews with two participants of the helping experiment, who came across as kind, decent and normal. The students were asked to guess how quickly the interviewees had offered help. Following Bayes’ rule means first asking yourself what you would guess if you had not seen the videos: what is the base rate? Since 4 out of 15 participants rushed to help, the probability of any given participant responding immediately is about 27%. The next step is adjusting that judgment in light of relevant information. The videos provided no relevant information about the helpfulness of these particular participants, so you should stay near the base rate. One group of students was told both the procedure of the helping experiment and its outcome; the other group was not told the outcome, and predicted that both interviewees would instantly rush to help. Surprisingly, the group that knew the outcome gave the same prediction: the statistics did not matter at all. Despite knowing the base rate, the videos convinced them that these participants would have rushed to help. Conclusion: psychology is hard to teach. What might help is surprising students with individual cases, like telling them that two nice people did not help.

What is regression to the mean? – Chapter 17

A key principle of skill training is that rewarding improvement works better than punishing mistakes. An experienced instructor doubted this: he stated that his students performed worse after receiving a compliment and better after being shouted at. He was both right and wrong. A praised performance is indeed likely to be followed by a poorer one, and punishment is normally followed by an improved performance. But the conclusion he drew about the efficacy of punishment and reward was wrong. What he observed is ‘regression to the mean’, due to random fluctuations in the quality of performance. He praised only students who performed much better than average, but such a performance is partly a matter of luck, which is why the next one is likely to be worse. The praise did not cause the poorer performance. The instructor’s mistake was attaching a causal interpretation to random fluctuations.

Imagine two golf players competing in a tournament. One had a great performance on the first day, which makes you infer that he is more talented than the average competitor and that he also had better luck than most. The other player performed poorly, so he is probably less talented and was also unlucky. If you had to guess their scores on the second day, you would predict that the first player will still score above average (he is still more talented) and the other player below average. Luck, however, is unpredictable, so for both you expect average luck. Conclusion: player 1 will perform well, but not as well as on the first day, because he won’t be that lucky again; player 2 will perform below average but better than on the first day, because he won’t be that unlucky again. The gap between the two players will shrink: the performances on the second day will be more moderate, closer to the average than the scores on the first day. This is another example of regression to the mean.

A famous example is the ‘Sports Illustrated jinx’: after gracing the cover of this magazine, a sportsperson is expected to perform worse in the next season. This is often explained by increased pressure or overconfidence. The explanation is simpler: a sportsperson who makes it onto the cover has performed extremely well in the preceding season, most likely with the help of good luck, and luck fluctuates.

Conclusion: the difference between a first and a second performance does not need a causal explanation; it is a mathematical consequence of luck.

The notion of regression to the mean was introduced by Sir Francis Galton in the late 19th century. He compared the height of children to the height of their parents and found that the children’s height was not simply similar to their parents’ but closer to the average: tall parents had somewhat shorter children, and very short parents had somewhat taller children. The study also demonstrated that the regression towards mediocrity was proportional to the parents’ deviation from it. Galton was surprised by the results, but regression effects are in fact very common.

The ‘correlation coefficient’ between two measures expresses the relative weight of the factors they share and varies between 0 and 1. Regression and correlation are two perspectives on the same concept: whenever the correlation between two scores is imperfect, there will be regression to the mean. The concept of regression is difficult, because our mind does not handle mere statistics well; it is biased towards causal explanations. Associative memory starts looking for a cause whenever an event catches our attention. This is problematic when regression to the mean is detected, because regression has no cause. Both System 1 and System 2 struggle with it: System 1 searches for causal interpretations, while System 2 finds the relation between regression and correlation hard to understand.

Imagine reading the headline “Depressed minors treated with ice cream improve significantly over a two-month period”. The headline is made up, but the claim would be true: if a group of depressed minors is treated with ice cream for two months, they will show improvement. But depressed minors who spend 15 minutes a day walking backwards or petting a rabbit will also improve. Many readers automatically conclude that the ice cream or the rabbit petting caused the improvement, which is unjustified. Depressed minors are an extreme group, and extreme groups regress to the mean over time: they will improve even without ice cream or rabbits. Not only newspaper readers are prone to wrong causal interpretations of regression effects; researchers make this mistake too. To prove that a treatment is effective, a group of patients receiving it must be compared to a control group (receiving no treatment or a placebo). The control group will improve through regression alone; the question is whether the treatment group improves more than regression can explain.

How can intuitive predictions be tamed? – Chapter 18

Forecasting is a major part of our professional and private lives. Some predictive judgments are based on analysis or computation, but most involve System 1 and intuition. Some intuitions draw on expertise and skill gained through experience: the quick, automatic judgments and decisions of physicians, chess masters and fire chiefs are examples of skilled intuition. They come up with solutions quickly because they recognize familiar cues. Other intuitions are the result of (substitution) heuristics. Numerous judgments arise from a combination of intuition and analysis.

What are nonregressive intuitions?

A question that combines a current situation with a prediction activates System 1. “Mark is currently a bachelor student. He could count to 30 when he was two years old. What is his GPA?” People who know the educational system provide quick answers thanks to the operations of System 1:

  • Seeking a causal connection between the evidence (Mark’s counting) and the target of the prediction (his GPA); the common factor here is academic talent. Associative memory automatically and rapidly forms the best possible story from the available information (WYSIATI).
  • Evaluating the evidence relative to the relevant norm: how precocious is a child who counts to 30 at the age of two?
  • Substitution: the evaluation of the evidence of early cognitive ability is substituted for an answer to the question about his GPA in college. Mark is assigned the same percentile score for his GPA as for his achievement as an early counter.
  • Intensity matching: the general impression of Mark’s early academic talent is translated into a GPA score of corresponding intensity.

The task required evaluating the evidence and predicting an outcome. This example clearly shows the role of substitution: when asked for a prediction, people substitute an evaluation of the evidence, without being aware that the question they answer is not the question they were asked. This leads to systematically biased predictions, because regression to the mean is fully ignored.

How can intuitive predictions be corrected?

The right way to predict Mark’s GPA is by using a formula for the factors that determine college grades and counting age:

GPA = factors specific to GPA + shared factors = 100%

Counting age = factors specific to counting age + shared factors = 100%

The shared factors are the degree to which the family supports academic interests, genetically determined aptitude, and anything else that would cause the same people to be precocious counters as children and academically talented as adults. The correlation between the two measures (GPA and counting age) equals the proportion of shared factors among their determinants. Assume that proportion is 30%. You are now ready to generate an unbiased prediction, in four steps:

  1. Estimate the average GPA (the baseline).
  2. Determine the GPA that matches your impression of the evidence (the intuitive prediction).
  3. Estimate the correlation between GPA and counting precocity (here 30%).
  4. Move 30% of the distance from the baseline toward the intuitive prediction (this moderates the prediction).

This is a general recipe for predicting quantitative variables, like a GPA, company growth or investment profit. It builds on intuition but moderates it, regressing it towards the mean. An intuitive prediction is not regressive and therefore biased; it needs to be corrected.
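The four steps reduce to a one-line formula. The numbers below are hypothetical, chosen only to illustrate the procedure:

```python
# Regressive correction of an intuitive prediction (hypothetical numbers).
baseline = 3.0    # step 1: average GPA in the reference class
intuitive = 3.8   # step 2: GPA matching the impression of Mark's precocity
r = 0.30          # step 3: estimated correlation between the two measures

# Step 4: move r (30%) of the distance from the baseline toward the intuition.
corrected = baseline + r * (intuitive - baseline)
print(round(corrected, 2))  # 3.24
```

With r = 1 the formula returns the intuitive prediction unchanged, and with r = 0 it stays at the baseline, matching the two extreme cases described in the text.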

Common biases in predicting the probability of an outcome are insensitivity to the quality of the evidence and neglect of base rates. The biases of predictions expressed on a scale, and their corrective procedures, are similar to those of discrete predictions. The corrective procedures share these features:

  • They contain a baseline prediction and an intuitive prediction.
  • They aim for a prediction that lies between the baseline and the intuitive answer.
  • In the absence of relevant evidence, they stay with the baseline.
  • At the other extreme, when the evidence is of high quality, they stay with the intuitive prediction.

System 2 is responsible for correcting intuitive predictions. Finding the relevant reference category, estimating the baseline and evaluating the quality of the evidence require effort, which is justified only when the stakes are high and you cannot afford a mistake. Extreme predictions and a willingness to predict rare events from weak evidence are typical of System 1: the associative machinery naturally matches the extremeness of the prediction to the extremeness of the supporting evidence (substitution), and System 1 also produces overconfident judgments. At the same time, System 1 finds the idea of regression hard to grasp; students often struggle with the topic, and even System 2 needs extra training to comprehend it.

What is overconfidence? - Part 3

What is the illusion of understanding? – Chapter 19

The concept of a ‘narrative fallacy’ was introduced by Nassim Taleb and describes how flawed stories of the past shape our current views and future expectations. An explanation is considered more appealing if it is concrete, assigns a significant role to talent, intentions or ignorance (rather than luck), and focuses on a few conspicuous events that happened rather than on the countless events that did not.

People are prone to interpret someone’s behavior as a reflection of personality traits and general propensities: causes that are easy to match to effects. The halo effect contributes to this coherence: our judgment of one significant attribute influences how we view all the others. If you consider a soccer player strong and attractive, you are likely to think of him as an excellent player as well; if you find him unattractive, you will probably underrate his skills. The halo effect exaggerates the consistency of our judgments: bad people are all bad and nice people do only nice things. Reading “Hitler liked cats and toddlers” is shocking, because such a bad person having a soft side violates our expectations.

When you read a story about the founders of a highly successful company, with almost every choice they made having a good outcome, you get the sense that you understand what made the company succeed. You get the feeling that you learned what it takes to found successful companies. It is, however, very likely that your sense of understanding and learning from the story is mostly illusory. An explanation can be tested by determining whether it would have made the event predictable in advance. The story about the successful company won’t meet that test, because no story can include all the events that would have caused a divergent outcome. Our minds can’t handle events that did not happen. The fact that most significant events involved choices makes you exaggerate the role of skill and underestimate the influence of luck. Although the founders were skilled, luck had a big influence on the great outcome. This demonstrates the power of the WYSIATI-rule. You deal with the restricted information you received as if it were all there is to know. You construct the best possible story from the available information and if it’s a nice one, you believe it. The less you know, the easier it is to form a coherent story.

People who say “I knew well before the economic crisis happened that it was inevitable” are wrong: they thought it would happen, they did not ‘know’ it. They say ‘knew’ afterwards only because it did happen.

It is an illusion to believe that we understand the past; we understand it less than we believe we do. The words ‘know’, ‘premonition’ and ‘intuition’ are reserved for past thoughts that turned out to be true. They should be avoided in order to think clearly about future events.

What are the costs of hindsight?

Our mind is a sense-maker. When something unpredicted happens, we instantly revise our view of the world so that the surprise fits in. Learning from surprises seems sensible, but it can have dangerous consequences, because our ability to reconstruct beliefs that have changed, or past states of knowledge, is flawed. As soon as you adjust your view of the world, you can no longer recall your former belief. Instead of reconstructing what they used to believe, people retrieve their current belief (substitution), and most cannot believe they ever believed differently. This inability to reconstruct former beliefs makes us underestimate the extent to which past events surprised us. This is called the ‘hindsight bias’ or the ‘I-knew-it-all-along’ effect.

In an experiment, participants were asked to assign probabilities to a number of possible outcomes. After the events had played out, they were asked to recall their earlier answers. They overstated the probability they had assigned to events that did occur and remembered events that did not occur as having always seemed unlikely. Other studies likewise show how we revise our past beliefs in light of what actually occurred, which generates a cognitive illusion. Hindsight bias distorts the evaluation of decision makers: the quality of a decision should be assessed by whether the process was sound, not by whether the outcome was good. Imagine a low-risk surgery going wrong due to an unpredictable accident. Afterwards, people are likely to believe the surgery was actually risky and that the doctor’s decision to order it was wrong. This is the outcome bias, which makes it very hard to evaluate a decision properly.

Hindsight is particularly troubling for people who make decisions for others, like financial advisers, politicians or physicians. When the outcome is bad, clients usually blame them for failing to see it coming, even though the signs only became clear afterwards. Decision makers who fear having their decisions scrutinized in hindsight tend to change their procedures, which leads to bureaucracy and increased social costs: physicians order more tests, refer more people to specialists and apply treatments that probably won’t work. Hindsight and the outcome bias can also result in rewarding irresponsible decision makers who took great risks and got lucky.

What are the recipes for success?

System 1’s habit of making sense of things leads us to see the world as simpler, more coherent, tidier and more predictable than it actually is. The illusion that we understand the past feeds the illusion that we can predict and control the future. These illusions are comforting, since acknowledging the uncertainty of our existence would make us anxious.

Managers and leaders do influence the outcomes of their businesses, but success stories often exaggerate the impact of management practices and leadership style. If you ask business experts about the reputation of a CEO, their knowledge of whether the business is doing well or poorly produces a halo. The CEO of a profitable company is praised, but when things go south a year later, the same CEO is reviewed negatively. Both reviews seem correct at the time, yet it is strange to say contradictory things about the same person (first decisive, then confused). This illustrates the power of the halo effect. It also produces a backward causal inference: we tend to believe that the business fails because the leader is confused, while in fact the leader appears confused because the business is doing poorly.

The combination of the outcome bias and the halo effect explains the popularity of books with titles like ‘How to Build a Successful Business’. The key message of such books is that good management practices will be rewarded with profit. But the difference between a successful and a less successful company is often not great leadership but luck. Even if you are convinced that a leader is extremely competent and visionary, you would not be able to predict the company’s performance. The average gap between the successful and less successful companies compared in such books shrank over time, most likely because the original gap was largely due to luck (regression to the mean).

What is the illusion of validity? – Chapter 20

System 1 is known for jumping to conclusions from limited evidence (WYSIATI). The coherence of the story constructed by System 1 and System 2 makes us confident about our opinions; the quality and amount of the evidence matter less, because even poor evidence can make a good story. We even hold beliefs without any evidence for them, simply because our partner or friends hold them. It is remarkable how confident we are in our beliefs when we know so little.

The illusion of validity can be illustrated by the following experiment. Two psychology students watched soldiers in officer training trying to solve a problem, and evaluated them: who would make a great army leader? The exercise was exhausting and required several attempts. Some soldiers seemed arrogant, persistent, submissive, hot-tempered, stubborn or patient; some were quitters. Some were expected to become great leaders, others were ruled out for the officer rank. The students were convinced the exercise revealed the true nature of the soldiers and were very confident in their evaluations; their impressions were coherent and did not conflict. After some time, they learned how the soldiers were doing and compared their evaluations with the reports of the soldiers’ commanders. The comparison showed that their predictions of performance in officer training were hardly better than blind guesses. The students nevertheless repeated the exercise with a new group of soldiers. The shocking truth about the quality of their previous predictions had no effect on how they evaluated the new soldiers and very little effect on their confidence in their judgments and predictions. The evidence of prior failure should have shaken their confidence and moderated their predictions, but it did not: they still felt their predictions were valid. This persistence resembles the Müller-Lyer illusion, another illusion that survives the knowledge that it is an illusion.

The predictions of the soldiers’ future performance are instances of the representativeness heuristic and of substitution. They were based on one hour of the soldiers’ behavior in artificial circumstances; they were entirely nonregressive and rested on weak evidence (WYSIATI). The experiment demonstrates that confidence reflects the coherence of the information and the cognitive ease of processing it. Remember that a very confident person has a coherent story in mind, which does not mean that the story is true.

What is the illusion of stock-picking skill?

Each day, billions of shares are traded between many buyers and sellers who have the same information; their exchange rests on a difference of opinion. The buyer thinks the price will rise, while the seller expects it to drop. Something makes each of them believe they know more about the future price than the other, but for most, that belief is an illusion. The price of a stock incorporates the available information about the company’s value and the best prediction about the stock’s future. If someone believes the price will rise, he buys the stock, and that demand itself pushes the price up.

Many individual investors suffer consistent trading losses. One study followed 10,000 investors who sold a stock and immediately bought another, expecting the purchased stock to do better than the one they sold. On average, the shares that were sold did considerably better than the ones that were bought. A later study showed that the most active traders earned the lowest returns, while those who traded least had the best results. Other research indicates that men act on their bad predictions more often than women, which is why women achieve better investment results. On the other side of such transactions are professional investors and financial institutions, waiting for individual investors to make a mistake. Individual investors tend to sell stocks that have gained value (winners) and hang on to stocks that have dropped in price (losers); since recent winners do better than recent losers in the short run, they sell and buy the wrong stocks. Individual investors are also more influenced than professionals by companies being in the news. Only a few stock pickers have the skill to beat the market repeatedly, and even professional investors show little persistence in their achievements, while persistence of individual differences in achievement is the mark of a skill. Research shows that most fund managers select stocks as if rolling dice: they play a game of chance, not of skill, and a fund’s good year is mostly luck.

Visual illusions tend to be less stubborn than cognitive illusions. Knowing about the Müller-Lyer illusion changes your behavior, but not how you see the lines: you simply know you cannot trust what you see. Investors who are told that good outcomes are the result of luck rather than skill still believe they are beating the market, despite statistical facts proving otherwise. They accept the information intellectually, but it has no effect on their feelings. Why is the illusion of skill so persistent in the financial world? Stock pickers are genuinely skilled at consulting data, examining balance sheets and assessing the competition, and their work requires much training and experience. What they lack is the skill of knowing whether the information about a company is already incorporated in the price of its shares, and they seem unaware of this ignorance. Subjective confidence is a feature of System 1. Moreover, the illusions of skill and validity are sustained by the powerful professional culture of the financial community, many of whose members believe they can do something that others cannot.

It is hard for people to accept that the future cannot be predicted, given the ease with which we explain the past. In hindsight, many things make sense, which produces the intuition that what makes sense today could have been predicted yesterday. The illusion that we understand the past makes us overconfident about our ability to predict the future. We like to think the past can be explained by the abilities and intentions of a few great leaders, by social movements, or by technological and cultural developments; we cannot accept that big historical events are largely determined by luck.

The illusion of valid prediction is exploited by pundits in politics, business, the media and the financial world. Newspapers and television stations hire experts to evaluate the past and predict the future. Readers and viewers think they receive insightful information, which the experts believe they are offering. Expert predictions about economic and political trends were the subject of a landmark study by the psychologist Tetlock. He collected over 80.000 predictions and the outcome was shocking: the experts performed worse than dart-throwing monkeys would have. Even in their own fields of expertise, they did no better than non-experts. People with the most knowledge are frequently less reliable, because they develop an illusion of skill and become overconfident. The more famous the expert, the more overconfident he is and the more outrageous his predictions are. The study also showed that these experts were less willing to admit they had been wrong and offered several excuses.

The lessons of this chapter are, first, that prediction errors are inevitable, because our world is unpredictable, and second, that high subjective confidence cannot be trusted as an indicator of accuracy. Short-term trends can be predicted, and achievements and behavior can be forecast from previous achievements and behavior, but you should not rely on long-term predictions made by pundits.

How do intuitions and formulas relate to each other? – Chapter 21

Psychologist Meehl reviewed the results of studies that had assessed whether ‘clinical predictions’ based on the subjective impressions of trained professionals were more accurate than ‘statistical predictions’ made by combining ratings or scores according to a rule. In one study, trained counselors were asked to predict the grades of students at the end of their first school year. They interviewed the students and had access to personal statements, aptitude tests and their high school grades. The statistical formula used only one aptitude test and high school grades, but was more accurate than 11 out of 14 counselors. Other study reviews showed similar results (regarding a variety of predictions: criminal recidivism, parole violations, success in pilot training).

The outcome shocked clinical psychologists and led to many more studies. Fifty years later, algorithms still score better than humans: about 60% of the studies show that algorithms are more accurate, and the rest result in a tie. Domains that involve a fair amount of unpredictability and uncertainty are called ‘low-validity environments’. Examples are medical variables (longevity of patients, diagnoses of diseases, length of hospital stay), economic measures (prospects of success, assessments of credit risk) and governmental interests (odds of recidivism, likelihood of criminal behavior). In all these cases, the algorithm was at least as accurate as the human judges.

Simple statistics beat the predictions of world-renowned professionals. Meehl’s explanation is that experts try to be clever, consider complex combinations of features and think outside the box; complexity usually reduces validity. Research has shown that human experts remain inferior to formulas even when they are handed the score predicted by the formula: they believe they can do better than the formula because they have more information about the case. Another explanation is that people are inconsistent when making summary judgments of complex information: two evaluations of the same information often result in two different answers. This inconsistency is probably caused by System 1’s dependence on context: unnoticed stimuli in our environment influence our thoughts and actions.

Meehl’s research indicates that final decisions should be made by formulas, particularly in low-validity environments. The final selection of students for medical schools is often determined by interviewing the candidates, which reduces the accuracy of the selection procedure. Interviewers have too much confidence in their intuitions and favor their impressions over other information sources, which reduces validity.

The dominant statistical practice in the social sciences is to assign weights to several predictors using a technique called multiple regression. Robyn Dawes argued that this complex statistical algorithm adds little value. Studies show that formulas assigning equal weight to all predictors often do best, because they are not affected by accidents of sampling. Equal weighting has a major advantage: useful algorithms can be developed without any prior statistical research. Simple equally weighted formulas based on common sense or on existing statistics are excellent predictors of significant outcomes.
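Dawes’s equal-weighting idea can be sketched in a few lines: standardize each predictor, then simply average them, with no fitted weights at all. The predictors and the numbers below are hypothetical examples, not data from the studies cited:

```python
# Sketch of equal weighting (illustrative, not from the book):
# standardize each predictor, then average with equal weights.

def standardize(xs):
    """Convert raw scores to z-scores (mean 0, std 1)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / var ** 0.5 for x in xs]

def equal_weight_score(predictors):
    """Average the standardized predictors; no regression needed."""
    zs = [standardize(p) for p in predictors]
    n = len(zs[0])
    return [sum(z[i] for z in zs) / len(zs) for i in range(n)]

# Hypothetical data: aptitude scores and high-school grades for 4 students.
aptitude = [120, 95, 110, 130]
grades = [7.5, 6.0, 8.0, 9.0]
print(equal_weight_score([aptitude, grades]))
```

The point of the sketch is that nothing here was estimated from a training sample, which is exactly why such a formula cannot overfit sampling accidents.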

Clinical psychologists received Meehl’s findings with disbelief and hostility, due to the illusion of skill regarding their ability to make long-term predictions. Accurate judgments are often short-term predictions. The hostility towards formulas will probably diminish as their value in daily life becomes more and more visible. Examples are software recommendations, decisions about credit limits, health guidelines and the salaries of athletes.

Imagine you want to hire the best possible person for a job in your company. Start by selecting a maximum of six independent traits that are required for the position. You must be able to assess each trait reliably by asking a few factual questions. Make a list of questions per trait and devise a scoring scale, for instance 1 (very weak) to 5 (very strong). These preparations take little effort but can make a big difference in the quality of the people you hire. Collect information one trait at a time and score each trait before moving on to the next, in order to avoid the halo effect. Then hire the candidate with the highest total score, even if you like someone else better.
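The procedure above can be sketched as a short script. The six traits and the candidate ratings are hypothetical examples, not from the book:

```python
# Sketch of the structured hiring procedure described above.
# Traits and ratings are invented for illustration.

TRAITS = ["technical skill", "reliability", "communication",
          "planning", "teamwork", "initiative"]

def total_score(ratings):
    """Sum the 1-5 ratings. Each trait is scored independently,
    one at a time, to limit the halo effect."""
    assert set(ratings) == set(TRAITS), "score every trait, none extra"
    assert all(1 <= v <= 5 for v in ratings.values())
    return sum(ratings.values())

candidate_a = {"technical skill": 5, "reliability": 4, "communication": 3,
               "planning": 4, "teamwork": 3, "initiative": 4}
candidate_b = {"technical skill": 3, "reliability": 5, "communication": 5,
               "planning": 3, "teamwork": 4, "initiative": 2}

# Hire the highest total, even if you "like" the other candidate better.
print(max([("A", total_score(candidate_a)), ("B", total_score(candidate_b))],
          key=lambda t: t[1]))
```

The asserts encode the two rules in the text: every trait must be scored, and scores stay on the fixed 1-5 scale, so no single glowing impression can dominate the decision.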

When can we trust an expert intuition? – Chapter 22

Gary Klein is the intellectual leader of the students of Naturalistic Decision Making (NDM), who study real people in natural situations. He rejects the focus on biases, dislikes artificial experiments and is highly skeptical about choosing algorithms over human judgment. Klein is known for his studies of expertise in firefighters and of the development of intuitive skills in experienced experts. Despite their differences, Kahneman and Klein worked together on a joint project to answer the question: “When can you trust an experienced professional who claims to have an intuition?” They agreed about the example in Gladwell’s bestselling book ‘Blink’ of art experts who had a gut feeling that an object was a fake, but could not tell exactly what made them think it was not the real deal. They knew it was a fake without knowing how they knew: a perfect example of intuition.

While Kahneman’s views of intuition were shaped by observing the illusion of validity in himself and by reading Meehl’s review of clinical predictions, Klein’s thinking was shaped by his studies of fireground commanders, whom he observed and interviewed. He introduced the ‘recognition-primed decision’ (RPD) model, which applies to many kinds of experts, from fire commanders to chess masters. System 1 and System 2 are both involved: a tentative plan automatically comes to mind (System 1) and then gets mentally tested (System 2). This model of intuitive decision making rests on recognition: the situation provides a cue, the cue retrieves information from memory, and that information provides the answer. Intuition is nothing more than recognition.

Emotional learning stores information in memory quickly: a scary experience stays with you for a long time. Fear can be learned through experience and through words. Soldiers are trained to identify dangerous situations, and firefighters discuss all types of fires with colleagues. Emotional learning is quick, but developing expertise takes a long time: chess masters need more than 10.000 hours of practice to reach the top. During these hours, players become familiar with the possible positions and learn to read a situation quickly.

Kahneman and Klein agreed that the confidence of people in their intuitions is not a reliable source of validity. But if subjective confidence cannot be trusted, how do we know when judgments reflect true expertise? The answer lies in the two conditions for acquiring a skill:

  • The environment must be sufficiently regular to be predictable.
  • There must be an opportunity to learn the regularities through prolonged practice.

An intuition is normally skilled when both conditions are met. Chess players, nurses, physicians, firefighters and athletes operate in regular, orderly situations. Political scientists and stock pickers do not: they operate in an irregular (low-validity) environment.

What is the importance of ‘the outside view’? - Chapter 23

Kahneman was asked to write a textbook about judgment and decision making. After one year, a number of chapters and the syllabus had been written, which was considered good progress. Kahneman asked his team members to separately estimate how long it would take to finish the textbook. The average estimate was two years. He then asked an expert in developing curricula, who was part of the team, how long it had taken similar teams to finish a textbook. The expert answered that about 40% of such teams never managed to complete one at all; Kahneman had never even considered the possibility of failing. The teams that did complete the task had taken seven to ten years, and the expert rated the resources and skills of Kahneman’s team as slightly below average. Even the expert himself was surprised by this, as his own previous estimate had been two years. Before the questions were asked, his mind had not made the connection between his knowledge of the progress of other teams and his prediction for the team he was in. While everybody now ‘knew’ that a 40% chance of failure and a minimum of seven years were more likely than the prediction of two years, they did not act on this information. It seemed unreal, because it was impossible to imagine the project taking so long. The reasonable plan to finish the book in two years conflicted with the statistics. The base-rate information should have led to the conclusion that writing a textbook is much harder than previously thought, but that conflicted with the direct experience of making good progress. It would have been wise to cancel the project, because no one wanted to spend six more years on a project with a fairly high chance of failure, but they continued. The textbook was finished eight years later, after numerous unpredictable events. Three lessons were learned from this story:

  1. There is a distinction between two very different approaches to predicting, called the inside view and the outside view.
  2. The initial predictions exhibited a planning fallacy. The predictions were closer to a best-case scenario, lacking realism.
  3. Irrational perseverance (not cancelling the project): when faced with the choice, they gave up rational thinking.

The inside view was adopted to assess the future of the project. The team focused on their specific circumstances and searched for evidence in their own experiences. They knew how many chapters they were going to write and how long it had taken to write the chapters already finished. Only a few less optimistic members added some months to their estimates as a margin of error. The predictions were based on the available information (WYSIATI), but the chapters already written were probably the easiest, and motivation was at its peak. The biggest problem was the failure to take the ‘unknown unknowns’ into account: on that day, the events that would prolong the project (sickness, divorces, bureaucracy) could not be foreseen. A plan can fail for many reasons, and although each of them is unlikely, the probability that at least something will go wrong in a major project is high.

The baseline prediction (seven to ten years and a 40% chance of failing) should have been the anchor for further adjustments. The comparison of the team with other teams even indicated that the predicted outcome would be worse than the baseline prediction. This is the outside view, and it suggested that the inside-view predictions were not even close. The gap between the expert’s two judgments is remarkable: he had all the relevant knowledge in his head, but he did not use it. The other members of the team had no access to the outside view, but they also did not feel they needed information about similar teams. This happens often: people who have information about an individual case rarely feel the need to know the statistics of similar cases. And when they were told about the outside view, they ignored it. Statistical information tends to be ignored when it clashes with personal impressions of a case. The inside view beat the outside view.

Forecasts and plans are called ‘planning fallacies’ when they are unrealistically close to best-case scenarios and could be improved by consulting statistics of similar cases. An outside view can prevent a planning fallacy. The remedy for this fallacy, introduced by planning expert Flyvbjerg, is called ‘reference class forecasting’: a big database which provides information for numerous projects world-wide.

People frequently take on risky projects because they are too optimistic about the odds: they underestimate the costs and overestimate the benefits. Many executives fall victim to the planning fallacy. They base their decisions on unjustified optimism instead of on a rational weighting of probabilities, losses and gains.

What is the optimistic bias? - Chapter 24

The planning fallacy is one of many manifestations of the optimistic bias. Many people view their attributes as more favorable than they probably are and consider their goals as more achievable than they probably are. Optimistic bias can be a risk as well as a blessing, which is why you should be cautious when you feel optimistic.

Some people are more optimistic than others. They are usually happy, popular and resilient, and they play a disproportionate role in shaping society. Their decisions have an impact on others: they are leaders, inventors and entrepreneurs. They seek challenges and take risks, and they are talented and lucky. Their successes and the admiration of others make them even more confident. This description leads to the following hypothesis: the most influential people are likely to be overconfident and optimistic, and to take more risks than they realize. The evidence indicates that an optimistic bias causes people and institutions to take on risks.

The chance that a small company in the US survives for five years is slightly over 33%. Someone who starts a company believes these statistics do not apply to him or her. Research shows that American entrepreneurs tend to believe their own company is different: their estimated chance of success was almost twice the base rate, at 60%. Would they still have invested time and money had they known the odds? They never considered the outside view.

One of the benefits of optimism is persistence when faced with obstacles. However, persistence can be costly. Studies show that almost half of the people continued their project even after being told it would not succeed; their initial losses doubled.

According to psychologists, the majority of people genuinely believe that they are better than others, and many would even bet money on it. This belief has significant consequences in the market. Misguided acquisitions by large businesses are explained by the ‘hubris hypothesis’: the leaders of acquiring firms are less competent than they think they are. The optimistic risk taking of entrepreneurs contributes to the economic dynamism of a capitalistic society, but it also raises policy issues: should founders of small companies be financially supported by the government when they are very likely to fail? There is no satisfying answer to this question.

Entrepreneurial optimism is not explained by wishful thinking alone; emotions and cognitive biases also play a significant role, especially the WYSIATI rule of System 1. Focusing on the goal while neglecting relevant base rates can result in the planning fallacy. Focusing on the causal role of skill while neglecting the role of luck can result in the illusion of control. Focusing on what is known while neglecting what is not known leads to overconfidence.

Many founders believe that the success of their company depends largely on their own effort: they think their fate is almost completely in their own hands. This is not true: changes in the market and the achievements of competitors are just as important. Entrepreneurs focus on what they know: their plans, actions, opportunities and most immediate threats (WYSIATI). They usually know very little about their competitors, a phenomenon called competition neglect. Another manifestation of WYSIATI is overconfidence: when people estimate a quantity, they rely on information that comes to mind and construct a coherent story in which that information makes sense. The consequences can be costly. Overconfident experts are also overconfident about the prospects of their own company and willing to take risks they should avoid. Ironically, companies and people reward misleading optimists more than they reward truth tellers.

Is there a remedy for overconfident optimism?

Overconfident optimism is very hard to overcome by training. Overconfidence is an immediate consequence of System 1 features that can be tamed but not eliminated. The biggest obstacle is that subjective confidence is determined by the coherence of the story someone has constructed, not by the amount and quality of the information supporting it. Organizations are better at taming optimism than individuals. The best remedy comes from Gary Klein and is called the ‘premortem’. When an organization is about to make an important decision, a group of individuals with relevant knowledge should gather for a brief session. They imagine being one year into the future, with the decision having turned out extremely badly, and write a brief history of what happened. The premortem overcomes the groupthink that affects many teams once a decision seems close to being made, and it directs the imagination of knowledgeable individuals in a useful direction.

How do people make choices? - Part 4

What are ‘Bernoulli’s errors’? – Chapter 25

Economists and psychologists have very different views of people. The former think of them as rational and selfish beings; the latter argue that people are neither completely rational nor completely selfish. Kahneman and Amos studied people’s attitudes to risky options in order to answer the question “What rules govern choices between different simple gambles and between sure things and gambles?”

A simple gamble is, for instance, “45% chance to win € 500”. In a gamble, the consequences of the choice are always uncertain. Choices between simple gambles provide a model that shares key features with more complex decisions. The ‘expected utility theory’ was the basis of the rational-agent model and is still the most important theory in the social sciences.

Consider the following simple decision problem. Which do you prefer?

  1. Toss a coin. Heads: you win € 105. Tails: you win nothing.
  2. Get € 50 for sure.

The intuitive choice of most people is the second option. The study of Kahneman and Amos resulted in ‘prospect theory’, a descriptive model constructed to explain systematic violations of the axioms of rationality in choices between gambles. Their article about the theory is one of the most cited in the field. A few years later they published an essay about framing effects: the significant changes in preferences that are sometimes caused by inconsequential variations in the way a choice problem is worded.

Daniel Bernoulli introduced a theory about the relationship between the psychological desirability or value of money (now: utility) and the actual amount of money. According to Bernoulli, a gift of 10 euros has the same value to someone who already has 100 euros as a gift of 20 euros to someone who already has 200 euros. This rings true, as we also describe a change in income as a percentage: a 20% raise evokes a similar psychological response in the poor and in the rich, which a fixed amount would not. Psychological responses to a change in wealth are proportional to the initial amount of wealth, which makes utility a logarithmic function of wealth. Bernoulli used this insight to introduce a new approach to the evaluation of gambles. He argued that most people dislike risk and want to avoid the poorest outcome; they will choose the sure thing, even if it is worth less than the expected value of the gamble. His theory is that the psychological value of a gamble is the average of the utilities of its outcomes, each weighted by its probability, and not the weighted average of the possible monetary outcomes. The theory explains why poor people buy insurance and wealthy people sell it to them.
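A minimal sketch of Bernoulli’s idea in code, assuming base-10 logarithmic utility; the specific amounts are illustrative, not from the book:

```python
import math

# Bernoulli's proposal: utility is logarithmic in wealth, and the value
# of a gamble is the probability-weighted average of the utilities of
# its outcomes, not of the money amounts themselves.

def utility(wealth):
    return math.log10(wealth)

def expected_utility(outcomes):
    """outcomes: list of (probability, resulting wealth) pairs."""
    return sum(p * utility(w) for p, w in outcomes)

# A 50/50 gamble between ending with 1 million or 7 million...
gamble = expected_utility([(0.5, 1_000_000), (0.5, 7_000_000)])
# ...versus its expected value, 4 million, for sure.
sure_thing = utility(4_000_000)
print(gamble < sure_thing)  # a log-utility agent prefers the sure thing
```

Because the logarithm bends downward, the utility of the average is higher than the average of the utilities, which is exactly Bernoulli’s explanation of risk aversion.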

300 years later, his theory of risk attitudes and the utility of wealth is still being used in economic analysis. This is quite surprising, as it is seriously flawed, which is illustrated by the following example:

Today Molly and Mike each have a wealth of 6 million

Yesterday, Molly had 2 million and Mike had 10 million.

Do they have the same utility? (Are they equally content?)

According to Bernoulli’s theory, they are equally content, but this is obviously not the case: Mike is less content. Bernoulli’s model does not take reference points into account (2 million for Molly and 10 million for Mike). So how is it possible that the theory is still so popular? The explanation is ‘theory-induced blindness’: once people have accepted a theory and used it in their thinking, it is extremely hard for them to notice its flaws. If an observation does not fit the model, the theory gets the benefit of the doubt, because all the other experts use it. Disbelieving requires effort, and System 2 is lazy.

What is the prospect theory? – Chapter 26

In utility theory, the utility of a gain is determined by comparing the utilities of two states of wealth. The utility of receiving an extra € 400 when your wealth is € 2 million is the difference between the utility of € 2.000.400 and the utility of 2 million. If you lose € 400, the disutility is again the difference between the utilities of both states of wealth. It was assumed that the distinction between losses and gains did not matter and was not examined due to the theory-induced blindness.

Kahneman and Amos had focused on differences between gambles with low or high probabilities of winning, until Amos casually mentioned losses. For losses, risk aversion turned out to be replaced by risk seeking. Consider the following problems:

  1. Which do you prefer? Get € 800 for sure or a 90% chance to get € 900.
  2. Which do you prefer? Lose € 800 for sure or a 90% chance to lose € 900.

The first problem induces risk aversion: most people take the sure € 800, even though the gamble has the higher expected value. In the second problem, most people go for the gamble (the risk). When all options are bad, people tend to become risk seeking.
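The two problems can be put side by side in expected-value terms, with the amounts taken from the text above:

```python
# Expected values of the two problems above.

def expected_value(outcomes):
    """outcomes: list of (probability, amount) pairs."""
    return sum(p * x for p, x in outcomes)

sure_gain = 800
gamble_gain = expected_value([(0.9, 900), (0.1, 0)])    # 810
sure_loss = -800
gamble_loss = expected_value([(0.9, -900), (0.1, 0)])   # -810

# The gamble has the higher expected value in the gain frame, yet most
# people take the sure gain; in the loss frame the gamble has the worse
# expected value, yet most people take it. Preferences flip with the frame.
print(gamble_gain, gamble_loss)
```

The symmetry of the numbers is the point: nothing about the amounts changed except the sign, yet the typical choice reverses.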

The comparison between the problems emphasizes the importance of the reference point: the earlier state relative to which gains and losses are evaluated. Reference points are usually ignored, and Bernoulli’s theory lacks them; prospect theory takes them into account. Prospect theory involves three cognitive features (associated with System 1), which play a crucial role in the evaluation of financial outcomes and are common to many automatic processes of perception, judgment and emotion:

  • Evaluation is relative to a neutral reference point (‘adaptation level’).
  • A principle of diminishing sensitivity applies to the evaluation of changes of wealth and to sensory dimensions.
  • Loss aversion.

What is the endowment effect? – Chapter 27

Imagine looking at a graph displaying someone’s ‘indifference map’ for two goods: income and vacation days. This map specifies particular combinations. Each curve connects the combinations of the goods that are equally desirable: they have the same utility. The convex shape suggests diminishing marginal utility: the more vacation days you have, the less you care for one more, and each added day is worth less than the previous one. The more income you have, the less you care for an extra euro, and the amount of money you are willing to give up for an extra vacation day increases. All locations on an indifference curve are equally appealing.

All economics textbooks contain images of indifference curves, but few students have noticed that something is missing: an indication of the person’s current income and number of vacation days, also known as the reference point. This is another instance of Bernoulli’s error: utility is not completely determined by your current situation; your history matters as well. The omission of the reference point is also an example of theory-induced blindness.

Richard Thaler introduced the ‘endowment effect’: owning a good increases its value, especially if the good is not regularly traded. Imagine you bought a ticket for a major soccer match for the normal price of € 300. You are a big fan of the participating team and would have been willing to pay a maximum of € 600. You read online that all tickets are sold out and people are now offering € 3.500. Would you sell your ticket? Probably not: your lowest selling price is now above € 3.500, while your maximum buying price remains € 600. The endowment effect can be explained by prospect theory. The willingness to sell or buy depends on the reference point: whether or not the person currently owns the good. The owner considers the pain of giving up the good; a non-owner considers the pleasure of getting it. These values are not equal because of loss aversion: giving up the good is more painful than getting a similar good is enjoyable. The reaction to a loss is stronger than the reaction to a corresponding gain.

How do people react to bad events? – Chapter 28

In an experiment, people lying in a brain scanner were shown several images, among them pictures of the eyes of a happy person and of a terrified person. The pictures were shown for a fraction of a second: the participants never consciously knew they had seen them. One part of their brain did know: the amygdala, the ‘threat center’, which showed an intense reaction to the threatening picture. The same mechanism makes us process angry faces (a possible threat) faster and more efficiently than happy faces: an angry face in a happy crowd gets noticed faster than a happy face in an angry crowd. Our brains are equipped with a mechanism that gives priority to bad news.

Our brains also respond faster to merely symbolic threats. Bad words (war, murder), emotionally loaded words and opinions with which you strongly disagree attract attention quicker than their opposites. Loss aversion is another manifestation of negativity dominance. Bad feedback and bad parenting proved to have more impact, and bad impressions and stereotypes are formed faster. As Gottman argues: long-term success of marriages depends more on the avoidance of negatives than on looking for positives. One bad action can ruin a long-term relationship. The boundary between good and bad is a reference point that changes over time and depends on the current situation.

People are driven more strongly to avoid losses than to achieve gains. A reference point can be a future goal or the status quo. These two motives have different strengths: the aversion to failing to reach a goal is much stronger than the desire to exceed it. This explains why many people set short-term goals.

The different intensities of the motives to achieve gains and avoid losses show up in many situations. They are often visible in negotiations, in particular renegotiations of existing contracts, where the reference point is the existing terms and any proposed change is considered a concession (a loss) by one of the parties. Loss aversion makes reaching an agreement difficult.

A study on what the public considers unfair behavior by employers, landlords and merchants showed that the opprobrium linked to unfairness imposes constraints on profit seeking.

The reference point here is the existing rent, wage or price. The participants deemed it unfair for stores to impose losses on customers, even when the stores behaved according to the standard economic model (increased demand leads to a higher price), because the raised price is experienced as a loss. Exploiting market power to impose losses on others is considered unfair. On the other hand, a firm is considered entitled to protect its current profit when it faces a loss, by transferring that loss to customers or workers. Research shows that merchants who set unfair prices are likely to lose sales and that employers who are considered unfair suffer reduced productivity.

What is meant by the ‘fourfold pattern’? – Chapter 29

When we evaluate complex objects (mother-in-law, gadgets), we assign weights to their characteristics: some have a bigger influence than others, which we might not be aware of. When we evaluate an uncertain situation, we assign weights to the possible outcomes. These weights are correlated with the probabilities of the outcomes: a 40% chance of winning the jackpot is more appealing than a 2% chance. Assigning weights sometimes happens deliberately, but often it is an automatic process of System 1.

The decision making in gambling provides a natural rule for the assignment of weights to outcomes: the more probable an outcome, the more weight it gets. The expected value of a gamble is the average of the outcomes, all weighted by their probability. This is called the ‘expectation principle’. Bernoulli applied this principle to the psychological value of the outcomes: the utility of a gamble is the average of the utilities of the outcomes, all weighted by their probability.

The expectation principle is flawed, because it does not describe how we think about the probabilities associated with risky prospects. Consider the following example. The chance of winning the jackpot improves by five percentage points. Is every option equally good?

  1. From 0 to 5 %
  2. From 5% to 10%
  3. From 50% to 55%
  4. From 95% to 100%

Expectation principle: the expected utility increases by the same amount in each option, but this is not how you experience it. The changes from 0% to 5% and from 95% to 100% feel more impressive than the other two. The first option creates a previously non-existing possibility, which gives hope and is therefore a qualitative change. This impact is known as the ‘possibility effect’: highly unlikely outcomes are weighted disproportionately more than their probability warrants. The change from 5% to 10% is merely a quantitative one: it doubles your chance of winning, but it does not double the psychological value of the prospect. The change from 95% to 100% is again a qualitative change, which induces the ‘certainty effect’: outcomes that are almost certain are assigned less weight than their probability justifies.

Certainty and possibility have equally powerful effects when it comes to losses. When your mother needs surgery with a 5% risk of dying, that risk does not feel half as bad as a 10% risk; it feels considerably worse than half. The possibility effect causes us to overweight small risks and to be willing to pay far more than the expected value to avoid them. The psychological difference between a 95% risk of a bad event and certainty (100%) seems even bigger: the tiny bit of remaining hope looms large. The overweighting of small probabilities increases the appeal of both insurance policies and gambling.

What is the ‘Paradox of Allais’?

Maurice Allais showed that people are susceptible to a certainty effect and thereby violate expected utility theory and the axioms of rational choice. Consider the following puzzles.

X. 62% chance of winning € 410.000 or 64% chance of winning € 390.000

Y. 98% chance of winning € 410.000 or 100% chance of winning € 390.000

Most people choose the first option in puzzle X and the second option in puzzle Y, but that combination is considered a logical sin: it violates the rules of rational choice. The certainty effect explains it: the 2% difference between 98% and 100% is more impressive than the 2% difference between 62% and 64%. There have been several attempts to provide a plausible justification for the certainty effect, but so far all have failed.
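The inconsistency in the typical answers can be checked with a few lines of arithmetic. This is a sketch of the standard expected-utility argument, not a computation from the book:

```python
# Expected monetary values of the four options (amounts in euros):
ev_x_risky = 0.62 * 410_000   # ≈ 254.200
ev_x_safe  = 0.64 * 390_000   # ≈ 249.600
ev_y_risky = 0.98 * 410_000   # ≈ 401.800
ev_y_sure  = 1.00 * 390_000   # = 390.000

# Under expected utility theory (with u(0) = 0), preferring the risky option
# in puzzle X implies 0.62*u(410k) > 0.64*u(390k), i.e.
#     u(410k)/u(390k) > 0.64/0.62 ≈ 1.032,
# while preferring the sure thing in puzzle Y implies
#     u(410k)/u(390k) < 1/0.98 ≈ 1.020.
# No utility function can satisfy both inequalities, which is Allais's point:
assert 0.64 / 0.62 > 1 / 0.98
```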

Prospect theory describes how people actually make choices, whether rational or not. In this theory, decision weights are not equal to probabilities. At the extremes of 0% and 100%, the decision weights match the corresponding probabilities. In between, unlikely events are generally overweighted (the possibility effect): a 5% chance of a gain, for example, corresponds to a decision weight of 13.2, whereas the weight would be 5 if the axioms of rational choice were satisfied. The other end of the probability scale demonstrates the certainty effect: a 5% risk of not winning (a 95% chance of winning) reduces the utility of the gamble by 21% (from 100 to 79). People are inadequately sensitive to intermediate probabilities: the range of probabilities between 5% and 95% corresponds to a much smaller range of decision weights.
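The decision weights quoted here (13.2 for a 5% chance, 79 for a 95% chance) are closely reproduced by the probability-weighting function Tversky and Kahneman estimated in their 1992 cumulative prospect theory paper. The functional form and the parameter γ ≈ 0.61 (their median estimate for gains) come from that paper, not from this summary:

```python
def decision_weight(p, gamma=0.61):
    """Tversky & Kahneman's (1992) probability-weighting function for gains.
    Maps a stated probability p (0..1) to the weight it receives in a decision."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"probability {p:>4.0%} -> decision weight {100 * decision_weight(p):5.1f}")
```

With these parameters the function gives a weight of about 13.2 for a 5% probability and about 79.3 for a 95% probability, matching the values in the text, while intermediate probabilities are compressed: a 50% chance receives a weight of only about 42.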

Amos Tversky and Kahneman found that the decision weights people assign to outcomes differ from probabilities and that people attach values to gains and losses (not to states of wealth). Together, these conclusions explain the ‘fourfold pattern’ of preferences, which Kahneman considers one of the main achievements of prospect theory.

  1. Illustrative prospect — gains: 95% chance of winning money / 5% chance of winning money; losses: 95% chance of losing money / 5% chance of losing money.
  2. Focal emotion evoked by the prospect — gains: 95% chance: fear of disappointment; 5% chance: hope of a large gain; losses: 95% chance: hope to avoid the loss; 5% chance: fear of a large loss.
  3. Behavior when offered a choice between a gamble and a sure gain/loss corresponding to its expected value — gains: 95% chance: risk averse; 5% chance: risk seeking; losses: 95% chance: risk seeking; 5% chance: risk averse.
  4. Expected attitudes of a plaintiff and a defendant discussing a settlement — gains: 95% chance: accept an unfavorable settlement; 5% chance: reject a favorable settlement; losses: 95% chance: reject a favorable settlement; 5% chance: accept an unfavorable settlement.

People are risk averse when they consider prospects with a substantial chance of a large gain: they are willing to accept less than the expected value of a gamble in exchange for a certain win. The possibility effect explains the popularity of lotteries: when the jackpot is huge, people appear indifferent to the minuscule chance of winning. A lottery ticket is the prime example of the possibility effect: buying one gives you a chance to win and to dream about a nicer life. Insurance is bought in the fourth cell of the pattern (a small chance of a large loss). People are willing to pay considerably more for insurance than its expected value. They do not just buy protection against an unlikely disaster; they purchase peace of mind and eliminate a worry.

How do we respond to rare events? – Chapter 30

Try to remember a period in which terrorist attacks on public transport were relatively common. The attacks were rare in absolute numbers and the risk for individual travellers was tiny, but that is not how people felt about it: they avoided public transport or were very cautious. Kahneman also disliked being near buses, even though he knew the risk of an attack was negligible. People assigned an absurdly high decision weight to a very small probability because of the experience of the moment: being near a bus triggered unpleasant thoughts, so they avoided buses. Terrorism is effective because it evokes an availability cascade. Vivid images of victims, constantly repeated by the media and the topic of many conversations, become highly accessible, especially when they relate to a specific situation (seeing a bus). The emotional response is automatic, uncontrolled and associative, and it generates an impulse for protective action. System 2 may know that the probability is low, but System 1 cannot be switched off.

The same goes for big lotteries. The exciting possibility of winning the jackpot is shared by the community and reinforced by conversations with others. Buying a ticket instantly produces appealing fantasies, just as avoiding public transport was an immediate response to fear. In both cases the mere possibility matters, not the actual probability. According to prospect theory, highly unlikely events are either overweighted or ignored entirely. Kahneman's current view of decision weights has been shaped by research on the role of vividness and emotion in decision making. Vividness and emotion influence availability, fluency and judgments of probability, and therefore explain disproportionate responses to rare events.

Consider the following questions:

  1. What is your judgment of the probability that the national football team of Saudi Arabia wins the next world cup?
  2. How much will you pay for a bet in which you receive € 500 if the next world cup winner is Saudi Arabia, and no money otherwise?

The first question asks for an assessment of the probability of an unlikely event. The second requires assigning a decision weight to the same event. People tend both to overestimate the probability of unlikely events and to overweight them in their decisions. Overestimation and overweighting are distinct notions, but the psychological mechanisms behind them are the same: cognitive ease, confirmation bias and focused attention. The associative machinery of System 1 is triggered by specific descriptions. When you think about an unlikely win for Saudi Arabia, your associative machinery selectively retrieves evidence, images and instances that would make the statement true. The judgment of probability is then determined by the cognitive ease with which a credible scenario comes to mind. The probability of a rare event is most likely to be overestimated when the alternative is not fully specified.

Research demonstrates that the valuation of a gamble is much less sensitive to probability when the outcomes are emotional (a kiss, an electric shock) than when the outcomes are gains or losses of money. The fear of receiving a shock barely correlates with the probability of receiving it: the mere possibility triggers the fear, which overrules the response to probability. This conclusion was later challenged by other researchers, who found that low sensitivity to probability is the normal case (gambles on money being the exception): insensitivity to probability is not caused by the intensity of the emotion.

What are risk policies? – Chapter 31

The emotional evaluation of a ‘sure loss’ or a ‘sure gain’ is an automatic response of System 1, which takes place before the computation of the expected values of the gambles. When choices involve moderate or high probabilities, people tend to be risk seeking for losses and risk averse for gains, and both tendencies can be costly: they make you willing to pay a premium to lock in a sure gain rather than face a gamble, and to pay a premium to avoid a sure loss.

Decisions can be construed (framed) in two ways:

  • Broad framing: a single comprehensive decision, with four options.
  • Narrow framing: a sequence of two simple decisions, considered apart from each other.

Our minds cannot achieve full logical consistency. We tend to avoid mental effort and are susceptible to WYSIATI, so we tend to make decisions as problems arise, even when they should be considered jointly. Broad framing blunts the emotional response to losses and increases the willingness to take risks; financial traders shield themselves from the pain of losses with this type of framing. The combination of narrow framing and loss aversion should be avoided. Individual investors can avoid it by checking less often how their investments are doing. Constantly checking is unwise, because the pain of frequent small losses exceeds the joy of equally frequent small gains. Deliberately avoiding exposure to short-term outcomes improves the quality of both decisions and outcomes, since the typical short-term reaction to bad news is increased loss aversion.

A decision maker who is prone to narrow framing should adopt a ‘risk policy’ that he applies whenever a relevant problem arises. Examples of risk policies are: “never buy extended warranties” and “always take the highest possible deductible when purchasing insurance”. A risk policy is a broad frame that embeds a particular risky choice in a set of similar choices. The risk policy and the outside view are remedies against two opposite biases that affect many decisions: the exaggerated caution evoked by loss aversion and the exaggerated optimism of the planning fallacy.

What is mental accounting? – Chapter 32

For most people, gaining money reflects achievement and self-regard. We keep score in our minds when we gain or lose money, treating gains and losses as rewards and punishments, promises and threats. These scores motivate our actions and shape our preferences. Cutting our losses feels like admitting failure, so we refuse to do it.

We hold money in both physical and mental accounts: general savings, special savings for our kids or for emergencies, money to spend. We are more willing to draw on some of these accounts than on others to cover current needs. Self-control also plays a role: some accounts may only be used for household expenses, which limits how much we can spend. Mental accounts are a form of narrow framing: they keep things manageable and under control. They are also used for keeping score. Golfers, for instance, keep a separate account for each hole, not just one for their overall score.

Consider the following example of mental accounting. Two Rihanna fans each have to travel 50 miles to attend her concert. One of them bought his ticket; the other got his as a gift. A heavy snowstorm is announced for the day of the concert. Which of the two fans is more likely to brave the storm to see Rihanna? The fan who paid for the ticket. Both of them opened a mental account for the concert they hoped to attend, and missing it will close that account with a negative balance. Both will be disappointed, but the closing balance is more negative for the fan who paid. System 1 performs this calculation of emotional balance. For System 2 to respond rationally, it would have to consider the counterfactual question: “Would I still travel through this blizzard if I had gotten the ticket for free?” That requires a disciplined and active mind.

Imagine an individual investor who has to sell stock because he needs money. He still remembers the price at which he bought each stock, so he can identify each one as a winner or a loser. Fafsung is a winner: if he sells it now, he realizes a gain of € 8.000. He holds an equal investment in Fapple, which is now worth € 8.000 less than he paid for it. The value of both stocks has been stable for the last month. Which is he more likely to sell? Closing the Fafsung account records a success; closing the Fapple account records a failure. Choosing between pleasure and pain, he will likely sell Fafsung. Financial research confirms a massive preference for selling winners rather than losers. This bias, the ‘disposition effect’, is an instance of narrow framing: the state of the mental account is treated as a valid reason to sell. If you cared only about your wealth, you would sell the loser.

Imagine a firm that has already spent € 20 million on a failing project. The forecasts are now worse than they were at the start. Another € 40 million is needed to give the project a chance, while the alternative is to invest that money in a new project with brighter prospects. What will the firm do? Companies often refuse to accept the humiliation of closing the account of a failure and invest additional money in the failing project. In terms of the fourfold pattern, this is a choice between an unfavorable gamble and a sure loss. This ‘sunk-cost fallacy’ keeps people too long in unhappy relationships, bad jobs and unpromising projects.

When do we feel the most regret?

Regret is an emotion we experience as punishment, and the fear of regret is a factor in many of our decisions. Regret is triggered by the availability of alternatives to reality. It differs from blame, but both are induced by a comparison to a norm.

We tend to feel greater regret after acting than after failing to act. Consider this example: “Anna considered switching employers, but decided not to. She learns that she would have been € 2.500 better off if she had switched. Claire did switch employers and learns that she would have been € 2.500 better off if she had stayed with her former employer. Who feels the most regret?” Most people say Claire, because she acted. People expect stronger emotional responses, such as regret, to an outcome produced by action than to the same outcome produced by inaction. The same pattern is found in the context of gambling.

When it comes to the endowment effect, reactions to price changes and choices between gambles, losses are weighted approximately twice as much as gains. In certain situations the loss-aversion coefficient is much higher; health is an example. Consider the following problem. “You have been exposed to a disease which, if contracted, leads to a quick and painless death within 4 days. The probability that you contracted it is 1/1.200. Before symptoms occur, you can get an effective vaccine. What is the maximum you would be willing to pay for it?” Most people are willing to pay only a modest amount: the risk of dying is small, and it seems unreasonable to pay a lot to avoid it. Now consider the following problem: “For research on the disease, you are asked to expose yourself to a 1/1.200 chance of contracting it. What is the minimum you would ask to be paid?” Most people set this fee far higher than the price they were willing to pay for the vaccine (a typical ratio of 50:1). The high selling price reflects two features of the problem. First, we are not supposed to sell our health: the transaction is viewed as illegitimate, and people are reluctant to engage in it. Second, and more importantly, you would be responsible for a potential bad outcome: you will feel more regret in the second case, because you could simply have declined. Another example is the reluctance of parents to expose their child to even a few seconds of danger in return for money. This intense aversion to trading increased risk for some other benefit is also found in European law, in the precautionary principle, which prohibits actions that might cause harm.

What are preference reversals? - Chapter 33

Consider the following task. “A woman lost the use of her left leg as a result of a stabbing during a robbery in a grocery store in her neighborhood. There are two grocery stores near the woman’s home, one of which she visits regularly. Two scenarios:

A. The robbery happened in the woman’s regular grocery store.

B. The woman’s regular grocery store was closed that day, so she went to the other store, where she was stabbed. Should the store in which the victim was stabbed make a difference to her compensation?

Your judgment was made in joint evaluation: both scenarios were considered and compared at the same time. Most people agree that the compensation should be the same in both scenarios, since the location is not a relevant factor for victim compensation. This evaluation involved System 2 (thinking slow).

In an experiment, participants saw only one of the scenarios and were asked to assign it a dollar value (single evaluation). The victim was awarded much greater compensation when she was stabbed in the store she did not usually visit. Poignancy, a relative of regret, is a counterfactual feeling: “if only she had shopped at her regular store”. The System 1 mechanisms of intensity matching and substitution translate the strength of the emotional response into a monetary value, which explains the large difference between the awarded sums.

Comparing these experiments reveals a sharp contrast. People who see the two scenarios together (within-subject) endorse the principle that poignancy is not a legitimate consideration, but that principle can only be applied when the scenarios are shown together, which rarely happens in daily life. Life is usually experienced in the between-subjects mode: contrasting alternatives that might change your mind are absent, and WYSIATI applies. As a result, the beliefs you endorse on reflection do not necessarily govern your emotional responses. The discrepancy between joint and single evaluation of the stabbing scenario belongs to a broad family of reversals of judgment and choice (preference reversals).

Preference reversals occur because joint evaluation focuses attention on an aspect of the case that was less salient in single evaluation. Single evaluation is mostly determined by the emotional responses of System 1; joint evaluation involves a more careful, effortful assessment by System 2.

What is emotional framing? - Chapter 34

France and Argentina competed in the 2022 World Cup final. The following sentences both describe the outcome: “Argentina won.” “France lost.” Whether these statements have the same meaning or not depends on your idea of ‘meaning’.

The truth conditions of the two descriptions are identical: if one is true, the other is true as well. Economists consider people's preferences and beliefs to be reality-bound, unaffected by the wording of descriptions. But there is another sense of ‘meaning’ in which the two sentences differ: they induce different associations in System 1. “Argentina won” evokes thoughts of what the Argentinian team did, while “France lost” evokes thoughts of what the French team did that made them lose. In terms of the associations they trigger, the sentences mean different things.

Most people do not have reality-bound preferences, because System 1 is not reality-bound. Many of us are influenced by how a problem is formulated. For instance, a negative outcome is more acceptable when it is framed as the cost of a lottery ticket than as a lost gamble: losses evoke stronger negative feelings than costs. The same goes for discounts and surcharges: economically they are the same thing, but emotionally they are not.

Neuroscientists studied framing effects while recording the activity of several brain areas. Participants were asked to imagine receiving £50 and then had to choose between a sure outcome and a gamble on a wheel: if the wheel stopped on white, they received the full amount; if it stopped on black, they got nothing. The sure outcome was a gain of £20, the expected value of the gamble, and the same sure outcome could be framed either as ‘keep £20’ or as ‘lose £30’. These words evoke tendencies to approach or avoid, and System 1 is biased in favor of the option framed as ‘keep’. Participants demonstrated the framing effect: they chose the sure thing in the keep frame and accepted the gamble in the lose frame. There were differences among participants, though: some were very susceptible to the framing, others largely reality-bound. The participants were ranked on a ‘rationality index’.

The study yielded three significant findings:

  • The amygdala (a region associated with emotional arousal) was most active when participants' choices conformed to the frame. Emotional stimuli reach this region very quickly (System 1).
  • The anterior cingulate (a region associated with self-control and conflict) was more active when participants resisted their natural inclination, for example by choosing the sure thing despite the ‘lose’ label. Resisting the suggestion of System 1 appears to cause conflict.
  • The most rational participants showed enhanced activity in a frontal area known for combining reasoning and emotion.

This study illustrates how words that induce emotion influence our decision making.

What do the two selves mean? - Part 5

How does our memory affect our judgments of experiences? – Chapter 35

The notion of ‘utility’ has two different meanings. Jeremy Bentham argued that mankind is under the governance of two masters, pleasure and pain, which determine both what we shall do and what we ought to do. Kahneman calls utility in this sense ‘experienced utility’. When economists use the term, they mean ‘wantability’, which Kahneman calls ‘decision utility’. Expected utility theory concerns the rationality rules that should govern decision utilities.

The two concepts of utility coincide when people want what they will like and like what they chose, but several discrepancies between them are possible. Imagine you have to receive a daily painful injection, and the pain is the same every day. Would you attach the same value to reducing the number of future injections from 19 to 17 as from 5 to 3? You would be willing to pay more in the first case: the decision utility of avoiding two injections is higher there, which is remarkable, because the reduction in experienced pain is the same in both. This shows that experienced utility can be measured (by counting injections) and that experienced utility is the criterion by which decisions should be assessed.

The economist Edgeworth argued that experienced utility could be measured with a ‘hedonimeter’: an imaginary instrument that records the level of pleasure or pain someone experiences at a given moment. Time is an important factor in his theory: the total experienced utility of an episode is the sum of the moment-by-moment readings.

In a study of two patients undergoing a painful medical procedure, the patients indicated every 60 seconds how much pain they were experiencing, on a scale from 0 (no pain) to 10 (intolerable pain). The procedure lasted 8 minutes for patient Y and 24 minutes for patient Z. Which patient suffered more? In terms of total pain, patient Z, as his procedure lasted far longer. Afterwards, the patients were asked to rate the total amount of pain they had experienced. The two main findings were:

  • Duration neglect: the duration of the procedure had no effect at all on the ratings of total pain.
  • Peak-end rule: the global retrospective rating was well predicted by the average of the pain reported at the worst moment and at the end of the procedure.

Patient Y retained a much worse memory of the experience than patient Z. It was bad luck that Y's procedure ended at a painful moment, which left him with a distressing memory.
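The contrast between the two measures can be made concrete with invented pain profiles; the numbers below are illustrative and not the data from the study:

```python
# Illustrative pain profiles (0-10, one reading per minute); the values are
# invented for this sketch, not taken from the actual experiment.
patient_y = [2, 4, 6, 8, 8, 7, 8, 7]                  # 8 minutes, ends high
patient_z = [3, 5, 6, 7, 8, 8, 7, 6, 5, 5, 4, 4,
             3, 3, 2, 2, 2, 1, 1, 1, 1, 1, 0, 0]      # 24 minutes, ends low

def hedonimeter_total(pain):
    """Duration-weighted suffering: every moment counts equally."""
    return sum(pain)

def peak_end(pain):
    """Retrospective rating per the peak-end rule: mean of worst moment and end."""
    return (max(pain) + pain[-1]) / 2

# Z suffers more in total, but Y retains the worse memory.
assert hedonimeter_total(patient_z) > hedonimeter_total(patient_y)  # 85 > 50
assert peak_end(patient_y) > peak_end(patient_z)                    # 7.5 > 4.0
```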

The two measures of experienced utility, the hedonimeter total and the retrospective assessment, are systematically different. The hedonimeter total is duration-weighted: it assigns equal weight to every moment. The retrospective assessment is insensitive to duration and weights two singular moments, the peak and the end. Which measure should guide, for example, medical practice? It depends on the objective:

  • If the physician wants to reduce the remembered pain, lowering the peak intensity of pain matters more than shortening the procedure, and gradual relief is better than abrupt relief.
  • If the physician wants to reduce the amount of pain actually experienced, shortening the procedure is the right choice.

It is likely that most people would choose to reduce the memory of pain. This dilemma demonstrates a conflict of interests between two selves: the remembering self (“How was it, overall?”) and the experiencing self (“Does it hurt now?”).

Imagine a concert that is interrupted near the end by a fight, and someone says: “My whole experience is ruined now.” That is wrong: the memory of the experience was ruined, not the experience itself. The experiencing self had an almost entirely good experience, and the bad ending cannot undo that. Confusing an experience with the memory of it is a common cognitive illusion. The remembering self is the one that keeps score and makes decisions, but decisions and tastes can be shaped by memories that are wrong. Memory (a function of System 1) represents the most intense moment of pleasure or pain (the peak) and the feelings at the end of the experience. A memory that neglects duration will not serve our preference for lasting pleasure and brief pain.

How do we evaluate stories? - Chapter 36

When people are about to die, their loved ones rush to be with them for one last moment before death sets in. Why do we care so much about that final moment? Why does it matter so much that a movie ends with people arriving in time, while the length of the life that preceded it carries little weight? Learning that a couple actually had several days together instead of 5 minutes hardly changes our judgment, but someone arriving too late feels like a tragedy. Stories are about memorable moments and significant events, not about the passing of time. In a story, duration neglect is normal, and the ending often defines whether it is a good or a bad story.

Caring for someone usually means being concerned with the quality of his or her story, not with his or her moment-to-moment feelings. We also care deeply about the narrative of our own life.

The psychologist Ed Diener examined whether the peak-end rule and duration neglect also govern the evaluation of an entire life. They did. Doubling the duration of a fictitious woman's life had no effect on judgments of her total happiness or of the desirability of her life. In addition, a less-is-more effect was found: adding ‘slightly happy’ years to a very happy life lowered the evaluation of total happiness, as if they made the whole life worse.

Imagine making vacation plans. Would you return to the beautiful place you enjoyed with your family last summer, or visit a whole new location to enrich your store of memories? The tourism industry helps people collect memories and construct stories. This goal of storing memories shapes both the vacation plans and the experience itself; the word ‘memorable’ is frequently used to describe the highlights of a trip. The remembering self is the one that chooses vacations. One study showed that the final evaluation of a vacation entirely determined the intentions for future trips, even though that evaluation did not accurately reflect the quality of the whole experience as recorded in a diary. We choose by memory when we decide whether or not to repeat an experience, and eliminating the memories would greatly reduce the value we place on the experience.

What does research on experienced well-being teach us? – Chapter 37

Research on well-being has revolved around one survey question, which was considered a measure of happiness and was addressed to the remembering self: “All things considered, how satisfied are you with your life as a whole these days?” Since Kahneman had found that the remembering self is not a reliable witness, he decided to focus on the well-being of the experiencing self.

There are numerous experiences we would rather continue than stop, including both physical and mental pleasures. Examples are being in ‘flow’ (fully absorbed in a task) and playing with toys. Resistance to interruption is an indicator of having a good time.

Together with other specialists, Kahneman developed a measure of the well-being of the experiencing self. Experience sampling seemed a good option, but it is burdensome and expensive, which led to the development of the ‘Day Reconstruction Method’ (DRM). Most moments in life can be classified as predominantly positive or negative. The American participants experienced negative feelings approximately 19% of the time; this percentage is called the U-index. The advantage of the U-index is that it is based not on a rating scale but on an objective measurement of time, and it can also be computed for individual activities.
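The U-index is simple to compute once a day has been reconstructed into episodes. The sample day and its classification below are invented for illustration:

```python
# Sketch of the U-index: the fraction of time spent in a predominantly
# unpleasant state. Episodes are (duration_minutes, dominant_feeling) pairs;
# this sample day is invented, not data from the DRM studies.
episodes = [
    (480, "pleasant"),    # work, going well
    (60,  "unpleasant"),  # commute in heavy traffic
    (120, "pleasant"),    # dinner with friends
    (30,  "unpleasant"),  # argument
]

def u_index(episodes):
    total = sum(minutes for minutes, _ in episodes)
    unpleasant = sum(minutes for minutes, feeling in episodes
                     if feeling == "unpleasant")
    return unpleasant / total

print(f"U-index: {u_index(episodes):.0%}")  # U-index: 13%
```

Because the index is a share of time rather than a rating, it can be averaged over people or restricted to a single activity (commuting, work) in the same way.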

A remarkable finding was the extent of inequality in the distribution of emotional pain. Half of the participants went through a whole day without a single unpleasant episode, while a significant minority experienced negative feelings for a large part of the day. This suggests that a small part of the population does most of the emotional suffering.

A person's mood at any moment depends on temperament and overall happiness, but emotional well-being also fluctuates over the day and the week, and the mood of the moment depends mostly on the current situation. We are usually focused on our current activity and immediate environment, although sometimes the quality of subjective experience is dominated by recurrent thoughts (being in love, grieving). In most cases, however, we draw pleasure and pain from what is happening at the moment.

These findings have implications for individuals and for society. People have some control over how they use their time and could arrange their lives to spend more time doing things they like and less time doing things that make them unhappy. Some aspects of life, such as educational attainment, affect the evaluation of a person's life more than the experience of living it. Bad health and living with children have a stronger adverse effect on experienced well-being than on life evaluation, whereas religious participation has a relatively stronger favorable effect on experienced well-being. Does money make us happy? Being poor is depressing and being rich can be satisfying, but beyond a certain level, having more money does not improve experienced well-being.

What is the focusing illusion? - Chapter 38

The decision to marry someone often reflects a huge error of affective forecasting. On their wedding day, the bride and groom know that the divorce rate is high, but they do not believe the statistics apply to them.

A study that tracked life satisfaction from the day people got married shows a gradual decline. The usual explanation is that the honeymoon phase fades and married life becomes routine, but another explanation is plausible: heuristics of judgment. A mood heuristic is one way of answering questions about life satisfaction: in addition to their current mood, people are likely to think of significant recent events. Only a few relevant ideas come to mind; most do not. The rating of life satisfaction is therefore heavily influenced by a small number of highly available ideas, not by a careful weighting of all the domains of one's life. People who recently got married will retrieve that happy event when asked a general question about their life; as time passes, the salience of the thought diminishes. This explains the remarkably high level of life satisfaction in the first years after marriage. On average, experienced well-being is not affected by marriage, not because marriage brings no joy, but because it changes some aspects of life for the better and others for the worse.

One reason for the low correlations between individuals' circumstances and their life satisfaction is that both life satisfaction and experienced happiness are largely determined by the genetics of temperament: a disposition for well-being is heritable. In other cases, such as marriage, the correlations with well-being are low because of balancing effects. The (financial) goals people set for themselves also proved to have lifelong effects on their satisfaction.

People tend to answer life-satisfaction questions fairly quickly. This speed, and the effect of current mood on the answers, demonstrate that they do not engage in a careful assessment but rely on heuristics, which are instances of WYSIATI and substitution. When attention is directed to one specific aspect of life, that aspect dominates the overall evaluation. This is the ‘focusing illusion’: nothing in life seems as important as the thing you are thinking about at the moment. The essence of this illusion is WYSIATI. The focusing illusion creates a bias in favor of experiences and goods that are initially exciting but will eventually lose their charm.

 
