How does the 'lazy control' of slow thinking work? - Chapter 3

System 2 has a natural pace. Having random thoughts and monitoring what happens around you is not effortful. We make small decisions while riding our bicycle, take in some information as we watch the news and have low-key conversations with our colleagues or partner. These actions take little effort and can be compared to a stroll. It is usually easy to walk and think at the same time, but in some cases the combination causes mental overload. If you go on a walk with someone and ask that person to instantly solve the problem 32 x 64, he or she will stop walking. Walking faster than your natural pace worsens your thinking ability, as your attention shifts to maintaining the faster pace. If you walk as fast as you can, it becomes impossible to focus on anything else. Besides the physical effort, it takes mental effort to fight the urge to slow down: self-control. Conscious thought and self-control compete for the same restricted budget of effort.

Sometimes people are in a state of effortless concentration in which maintaining a coherent train of thought requires no willpower. The psychologist Csikszentmihalyi called this 'flow'. Examples are driving at high speed, painting, playing chess and writing. Being in a flow state can make you lose your sense of self and time. Activities that induce flow are called 'optimal experiences'. These activities take considerable effort, but in a state of flow, maintaining focused attention on them requires no discipline. Flow thus separates the two forms of effort: the deliberate control of attention (self-control) and concentration on the task (cognitive effort).

Psychological research has demonstrated that someone who is simultaneously challenged by a temptation and by a demanding mental task is more likely to give in to the temptation. If you are asked to hold a list of numbers in memory for several minutes and at the same time have to choose what you want to eat, broccoli or pizza, you are more likely to go for the pizza. System 1 has more influence on our behavior when System 2 is occupied. Someone who is cognitively busy is also more likely to use sexist language, be superficially judgmental in social settings and make selfish decisions. A busy System 2 loses its hold on behavior, although mental load is not the only cause of depleted self-control. Other possible causes are a bad night of sleep, drinking alcohol or anxiety about the task. Conclusion: self-control requires effort and attention.

Experiments conducted by the psychologist Baumeister showed that voluntary physical, emotional and cognitive effort all, at least partly, drain the same tank of mental energy. His experiments involved successive tasks. Efforts of self-control or will are tiring: if we have had to force ourselves to do a task, we are less likely to exert self-control in the next task. This is called 'ego depletion'. Participants who had to suppress their emotional response did not do well in a later physical test: emotional effort reduces your ability to endure muscle pain. An ego-depleted person is therefore likely to give up sooner. In another experiment, participants who started with the task of eating healthy food while resisting sweet treats later gave up more quickly than usual when faced with a demanding mental task.

Many tasks and situations lead to depletion of self-control. They all involve conflict and the need to suppress a natural urge. Examples are avoiding the thought of red cats, trying to impress someone and responding kindly to your husband's bad behavior. There are also many and varied indications of depletion, for example reacting aggressively to someone provoking you or performing poorly in cognitive tasks.

Highly demanding tasks require self-control, and the exertion of self-control is unpleasant and depleting. Unlike mental load, ego depletion is in part a loss of motivation; it is not the same as being cognitively busy.

Baumeister also found that mental energy is more than a metaphor. The nervous system is one of the most glucose-consuming parts of the body, especially when you are carrying out demanding mental tasks. Performing a cognitive activity that requires self-control results in a lower blood glucose level. This effect of ego depletion can be reversed by ingesting glucose: only the participants who got a glucose drink before starting the second task were not depleted. Intuitive mistakes are more frequent among ego-depleted individuals.

A recent study showed the effects of depletion on judgment. Judges had to review parole applications. The researchers found a spike in approved requests after every food break; in the period until the next break, the approval rate steadily declined, reaching nearly zero just before the next eating moment. The best explanation is that hungry and fatigued judges had the urge to go for the easier default decision: denial of parole.

Monitoring and controlling the actions and thoughts suggested by System 1 is one of the most important functions of System 2: it allows, suppresses or modifies them. Read the following puzzle; what does your intuition tell you? “An ice cream and chocolate dip cost € 1.10. The ice cream costs one euro more than the dip. What is the price of the dip?” The automatic answer is € 0.10, which is wrong: if the dip cost € 0.10, the total price would be € 1.20 (€ 0.10 for the dip and € 1.10 for the ice cream). The right answer is € 0.05. Answering € 0.10 means that you did not actively review your intuitive answer, and your System 2 endorsed a wrong answer that it could have prevented with little effort. Here we see the 'law of least effort' at work. Several thousand students at leading universities in the United States answered the puzzle, and more than half of them gave the wrong answer; at lower-ranked universities, 80% of the participating students failed to give the right answer. The ice cream – dip puzzle demonstrates that most people are overconfident: they are prone to put too much trust in their intuitions and to avoid cognitive effort.
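
The arithmetic behind the puzzle can be checked in a few lines. The sketch below is illustrative (the variable names are mine, not Kahneman's): it derives the correct price from the two constraints and shows why the intuitive answer fails.

```python
# Checking the ice cream - dip puzzle with exact arithmetic.
# Constraints: ice_cream + dip = 1.10 and ice_cream = dip + 1.00,
# so dip + 1.00 + dip = 1.10, hence dip = (1.10 - 1.00) / 2.
from fractions import Fraction  # exact fractions avoid float rounding

total = Fraction(110, 100)       # EUR 1.10 for ice cream plus dip
difference = Fraction(100, 100)  # the ice cream costs EUR 1.00 more

dip = (total - difference) / 2
ice_cream = dip + difference

assert dip == Fraction(5, 100)   # EUR 0.05, the correct answer
assert ice_cream + dip == total  # both constraints are satisfied

# The intuitive answer (EUR 0.10) violates the total:
intuitive_dip = Fraction(10, 100)
assert intuitive_dip + (intuitive_dip + difference) != total  # 1.20, not 1.10
```

Substituting the intuitive € 0.10 back into the total is exactly the cheap check that a lazy System 2 skips.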

Now read the following argument and decide as fast as you can whether the conclusion is correct. “All apples are fruits. Some fruits are pink. Therefore some apples are pink.” Most college students agreed with the conclusion, but it is actually an invalid syllogism: it is possible that no apples are pink. Because a plausible answer comes to mind straight away, few people are willing to put in the effort of thinking it through. This is a troubling finding, because it indicates that when people believe a conclusion is true, they also tend to believe the arguments that appear to support it. System 1 focuses first on the conclusion; the arguments follow later.
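
The invalidity can be made concrete with a counterexample model. This sketch is my own (the set members are invented for illustration): it constructs a world in which both premises hold while the conclusion fails, which is all it takes to show the argument form is invalid.

```python
# A counterexample world for "All apples are fruits; some fruits are pink;
# therefore some apples are pink" (members are made up for illustration).
apples = {"granny smith", "fuji"}
fruits = {"granny smith", "fuji", "pink grapefruit"}
pink = {"pink grapefruit", "flamingo"}

all_apples_are_fruits = apples <= fruits    # premise 1: holds
some_fruits_are_pink = bool(fruits & pink)  # premise 2: holds
some_apples_are_pink = bool(apples & pink)  # conclusion: fails

assert all_apples_are_fruits and some_fruits_are_pink
assert not some_apples_are_pink  # premises true, conclusion false: invalid
```

One such model is enough: a valid syllogism would have to hold in every world where its premises hold.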

Read the following question and rapidly give your best estimate. “How many homicides occur in the state of Tennessee in 12 months?” This question challenges System 2. The trick is whether people will remember that Memphis, a city with a very high crime rate, is in Tennessee: people from the United States know that Memphis is one of the biggest cities of Tennessee, and those who remember that it is in Tennessee give higher estimates. Most respondents who were asked this question did not think of the city when asked about the state, and reported lower guesses than respondents who were asked about the number of homicides in Memphis itself. Failing to think of Memphis can be a flaw of both System 1 and System 2. Whether the city pops up in your mind depends partly on the automatic function of memory, and that is something people differ in. Some people, such as residents of Tennessee, have extensive knowledge about the state and are more likely to remember various facts about it. It also depends on people's interests and intelligence. Intelligence is not solely about reasoning, but also about retrieving relevant facts from memory and deploying attention. While memory function is associated with System 1, taking your time for a deliberate search of memory is a feature of System 2, and the extent of this search varies among people.

The ice cream – dip puzzle, the apples syllogism and the Memphis – Tennessee question have one thing in common: the wrong answers seem to be caused by insufficient motivation, by not making enough effort. Students at highly ranked universities are capable of providing the right answer; without the temptation of accepting a plausible answer that automatically comes to mind, they can solve much harder problems. It is troubling that they are so easily satisfied and stop thinking: their System 2 proved to be lazy. They should be less willing to accept tempting answers, more alert and intellectually active, and less confident in their intuitions.

Shane Frederick used his Cognitive Reflection Test to examine the characteristics of students who had performed poorly and found that they are prone to answer with the first thought that comes to mind and are reluctant to make the effort of checking this intuition. They are also prone to believe other ideas coming from System 1. These students were particularly impatient and impulsive and wanted instant gratification: 63% of them preferred getting a smaller amount of money now over a larger amount after a month, while only 37% of the students who had given the right answers to the puzzles went for the smaller amount. Frederick's findings indicate that System 1 and System 2 have different 'personalities'. System 1 is intuitive and impulsive; System 2 is cautious and capable of reasoning, but can also be lazy. The same goes for people: some are like System 1, others like System 2.

The link between self-control and thinking was also examined by Walter Mischel. He subjected four-year-olds to a dilemma: receiving a small reward (one Oreo) whenever they wanted it, or a bigger reward (two Oreos) after waiting for 15 minutes in a non-distracting room. Half of them succeeded in waiting for 15 minutes, mostly by trying not to pay attention to the cookie. Over a decade later, the children who had managed to resist the temptation showed greater executive control in cognitive tasks, in particular the ability to reallocate attention. They were also less likely to do drugs and had better scores on intelligence tests.

Other researchers examined the connection between intelligence and cognitive control by exposing four- to six-year-olds to computer games specifically designed to engage their attention and control abilities. They discovered that training attention improved not only executive control but also scores on intelligence tests. They also found that parenting techniques affected the children's ability to control attention, and that this ability is closely linked to the ability to control emotions.

Summary per chapter of the 1st edition of Thinking, Fast and Slow by Kahneman

What is the book 'Thinking, fast and slow' by Kahneman about?

Thinking, Fast and Slow by Kahneman

  • Thinking, Fast and Slow is an internationally renowned book written for anyone interested in personal well-being and human development.
  • The book gives a schematic explanation of how people can make better choices by describing how our thinking interprets past, future, and present situations in correct and incorrect ways.

Thinking, Fast and Slow, a New York Times bestseller and multiple prize winner, is a highly praised book that provides many insights into the human mind. It is seen as a guide to detecting and correcting our biased misunderstandings of the world. We see ourselves as rational thinkers and decision makers, but this book demonstrates how we are subject to many systematic errors, which we are not even aware of or tend to ignore.

The author, Daniel Kahneman, won the Nobel Prize in economics in 2002. Kahneman collaborated with the psychologist Amos Tversky until Tversky's death in 1996; together they produced a fair amount of the research mentioned in this book. Kahneman brings in his own research as well as that of other renowned experts (psychologists, economists, statisticians). Kahneman (born 1934) is Eugene Higgins Professor of Psychology Emeritus at Princeton University and Professor of Psychology and Public Affairs Emeritus at Princeton's Woodrow Wilson School of Public and International Affairs. He is considered one of the world's most influential psychologists, providing new insights into the understanding of risk, cognitive psychology, behavioral economics, the study of well-being and happiness, and the analysis of reason and rationality. The book is not only an important read for people who want to learn about human behaviour; its findings are also valuable to investors, businesses, consumers, teachers, physicians, politicians, marketers and many others.

The book focuses mostly on biases of intuition. Kahneman's aim is to improve our ability to identify and understand errors of judgment and choice, particularly in ourselves, by presenting a view of how the mind works that draws on recent developments in social and cognitive psychology (as opposed to many authors who discuss earlier research). One of these developments is a better understanding of the flaws and wonders of intuitive thought.

The book is mainly about the distinction between fast and slow thinking, which Kahneman refers to as 'System 1' and 'System 2'. Thinking, Fast and Slow is divided into five parts. Part 1 discusses the basic elements of this two-system approach to judgment and choice. It demonstrates the distinction between System 1 (automatic operations) and System 2 (controlled operations) and how associative memory, the heart of System 1, consistently creates a coherent interpretation of what is happening around us. This automatic process underlies intuitive thinking and explains the heuristics of judgment. Part 2 is about new insights into judgment heuristics and why it is so hard to think statistically, in contrast to thinking associatively. Part 3 demonstrates a significant limitation of our mind: the excessive confidence we have in what we believe we know and our inability to acknowledge …

What distinguishes fast and slow thinking? - Chapter 1

What is the distinction between System 1 and System 2? – Chapter 1

When looking at a picture of a blonde woman with an angry expression on her face, you immediately notice that she has blonde hair and is angry. You get the feeling she is about to say something negative in a loud voice. This anticipation of her future behaviour arose effortlessly and automatically. This is an example of 'fast thinking'.

When looking at the mathematical problem 14 x 38, you quickly know it involves multiplication and you have a vague intuitive idea about the possible outcome. You quickly know the answer ends with the number 2 and are sure 10,000 is not the right answer. However, the precise answer does not immediately come to mind. Solving the problem requires time and several steps. You start by retrieving your knowledge of multiplication from memory, as learned in the early school years. Then you have to put this knowledge into practice, which takes effort. This is an example of 'slow thinking'. The computation involves both mental and physical work: it leads to a higher blood pressure, an increased heart rate, tensed muscles and dilated pupils.

Psychologists give these modes of thinking various labels; Kahneman refers to 'System 1' and 'System 2'. System 1 operates fast, automatically, involuntarily and with hardly any effort. System 2 involves effortful mental work, which requires concentration, making choices and agency. The feelings and impressions that effortlessly originate from System 1 are the main sources of the deliberate choices and explicit beliefs of System 2. Both systems have their own functions, abilities and limitations.

Examples of events that occur automatically and effortlessly (System 1) are:

  • Answer to 1 + 1 = ?

  • Hearing an unexpected sound: looking in the direction of the source.

  • Reacting to a threat before recognizing it.

The abilities of System 1 include skills we also see in animals, like recognizing things and orienting attention. Other quick and automatic mental activities are the result of prolonged practice. System 1 involves learned skills (how to behave socially, reading) and learned associations (capitals of countries); certain skills are acquired only by specialized professionals. Learned skills require knowledge, which is stored in memory and can be accessed effortlessly and unintentionally. Some responses are entirely involuntary: you cannot stop yourself from knowing that 1 + 1 = 2 or looking in the direction of a sudden sound. Others can be controlled but are usually performed automatically. Controlling attention is an activity that fits both systems. Orienting to a sudden noise happens involuntarily (System 1), but turns quickly into voluntary attention (System 2). Even if you manage to resist looking at the source, your attention was drawn to it.

How do fast and slow thinking deal with effortful tasks? - Chapter 2

How do System 1 and System 2 deal with effortful tasks? - Chapter 2

System 2 is defined by its effortful operations, although it is also lazy: it puts in no more effort than needed. Some crucial tasks can only be performed by System 2, because they require self-control and effort to overcome the impulses and intuitions of System 1.

The ‘Add-1’ task puts System 2 to work and demonstrates the limits of our cognitive abilities within seconds; if you truly want to challenge yourself, try Add-3. Your body also reacts to the mental work. The psychologist Eckhard Hess described pupils as windows to the soul. He found that the pupils indicate the level of mental effort: they dilate more when people have to solve more difficult problems. Kahneman set up an experiment to study the reaction of the pupils while participants performed paced tasks. The pupils got wider as the tasks got more demanding: in the Add-1 task, longer strings of numbers caused bigger pupils, and during the Add-3 task the pupils got 50% bigger and the heart rate increased. This is the maximum of mental effort; people give up if the task gets more demanding than Add-3, at which point the pupils stop dilating. The pupils had a constant, normal size when the participant was chatting with someone during a break. Engaging in small talk and easy tasks is effortless, while tasks like Add-1 and Add-3 are extremely effortful. This can be compared to walking at a natural pace versus sprinting: when we are mentally sprinting, we can become effectively blind, as the Invisible Gorilla experiment demonstrated.
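
The transformation behind Add-1 and Add-3 is simple to state, which is part of the point: the difficulty lies in the paced, in-your-head execution, not in the rule. A minimal sketch of the rule itself (my own; the lab version also imposes a strict metronome rhythm, which is omitted here):

```python
# Add-n: increment every digit of a string by n, wrapping 9 -> 0 (modulo 10).
def add_n(digits: str, n: int) -> str:
    return "".join(str((int(d) + n) % 10) for d in digits)

print(add_n("5294", 1))  # 6305  (the Add-1 task)
print(add_n("5294", 3))  # 8527  (the Add-3 task)
```

Performing this aloud, in rhythm, while holding the digits in working memory is what drives the pupil dilation described above.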

System 2 has limited capacity. It responds to threatened mental overload by protecting the most important activity: that activity gets the attention it needs, and the remaining capacity is divided among other tasks. An experiment in which detecting the letter K was a side task showed that observers failed at it when the main task was highly demanding.

The allocation of attention has always played an important role in our evolution. The ability to orient and respond rapidly to sudden threats or great opportunities was needed to survive, which we also recognize in the animal world. Even now, System 1 is activated when an emergency occurs and fully focuses on self-protection. We respond to a sudden threat before we are fully aware of it.

Brain studies have demonstrated that the degree of activity needed for an action changes as we become more skilled: an increase in skill results in the involvement of fewer brain regions. The same goes for talent: the brain activity and pupil size of highly intelligent people show that they need less effort to complete the same task successfully. Law of least effort: if we have several options for achieving a goal, we choose the least demanding one. It is human nature to be lazy.

How does the 'associative machinery' of fast thinking work? - Chapter 4

Read the following words:

“Mango”                              “Puke”

Within a few seconds you will experience unpleasant images and pull a disgusted face. You automatically responded to the word 'puke' as you would respond to the actual event. Our minds automatically assume causality between the words mango and puke, forming a scenario in which the mango caused nausea. This results in a short-term aversion to mangos. You are also extra ready to recognize and respond to concepts and objects associated with 'puke', such as vomit, sick and nausea, and to words associated with 'mango', such as exotic, fruit and red. Words associated with other causes of puking, such as food poisoning and hangover, are also easier to recognize. You were a little surprised, as your System 1 noticed the uncommon juxtaposition of the words.

This wide range of responses occurred effortlessly, automatically, quickly and could not be stopped. This is an example of your System 1 at work. The visions and thoughts you experienced are the result of the process called ‘associative activation’: ideas that have been formed generate numerous other ideas. A word evokes memories, which triggers emotions, which evoke reactions like facial expressions and an avoidance tendency. These reactions intensify the feelings to which they are connected, and the feelings intensify compatible thoughts. This rapid process of physical, emotional and cognitive response is called ‘associatively coherent’. System 1 tries to make sense of the unusual situation (two random words) by linking them in a logical story. It starts with evaluating the current level of threat and then creates a context for the current situation and future events. System 1 treats the connection between two words as a representation of reality. Your body reacts as it would react to the real event and your emotional reaction is part of the interpretation of that event. As cognitive researchers recently emphasized: you do not merely think with your brain, but also with your body.

The process by which mental events occur in sequence is called 'the association of ideas'. The philosopher Hume identified three principles of association: causality, contiguity in place and time, and resemblance. An idea can be abstract or concrete and can be described as a noun, a verb, an adjective or in a physical way. A psychologist sees an idea as a node in a network, the associative memory, in which it is linked to numerous others. There are several types of links: cause-effect (drinking - hangover), thing-property (carrot - orange), thing-category (tulip - flower). According to the current view of the functioning of associative memory, a lot happens in the mind at once: one idea evokes many other ideas, of which only a few become conscious.

Psychologists discovered in the 1980s that seeing or hearing a word causes instant and measurable changes in the ease with which numerous related words can be evoked.

When is your mind at ease? - Chapter 5

When we are conscious, several assessments take place in our brain, providing answers to important questions: Is something new happening? Are things going alright? Is there a threat? Should I redirect my attention? System 1 carries out these assessments automatically. It determines whether System 2 needs to put in more effort. ‘Cognitive ease’ is one of the variables being measured. On a scale of easy to strained, ‘easy’ means that things are going alright (no news, no threats, no redirecting of attention needed) and ‘strained’ means that a problem occurred and System 2 has some work to do. ‘Cognitive strain’ is affected by the presence of unmet demands and the current level of effort.

Processing a clearly written sentence, or listening to someone while you are happy, induces cognitive ease. Reading a blurry manual, or reading while you are frustrated, induces cognitive strain. The causes of strain or ease have interchangeable effects. When you are in a state of cognitive strain, you are probably vigilant and suspicious, putting in more effort and feeling less comfortable, but also less creative and intuitive. When you feel at ease, you are probably in a positive mood, satisfied, comfortable and rather casual in your thinking.

What are illusions of remembering?

Thinking and memory are susceptible to illusions. The psychologist Jacoby first exemplified the memory illusion in his article 'Becoming famous overnight'. When you are shown a list of made-up names and five minutes later you come across one of those names, you will remember where you saw it and that it is not the name of a famous person. But if three days later you are presented a long list of names, including semi-famous people and new unknown names, and have to identify all the celebrities in the list, it is likely that you will identify the made-up name as a famous person. In the case of very famous people, you have a mental file with extensive information; there is no such information about someone whose name you have heard only twice. You remember seeing the name before, but there is nothing more than a feeling of familiarity. Familiarity has a quality of 'pastness' that suggests it is a direct reflection of a past experience. This quality is an illusion: the made-up name looks familiar because words you have seen earlier become easier to see again and quicker to read. Seeing a word you have seen before induces cognitive ease, which results in the illusion of familiarity.

What are illusions of truth?

“Amsterdam is the capital of The Netherlands”. “You cannot drink seawater, because it is too salty”. “A duck has four legs”. After reading these statements, you instantly retrieved many related facts and quickly knew that the first two are correct and the third is incorrect. However, the statement “A duck has three legs” is more obviously incorrect. The associative machine slows the assessment of the third statement by providing the information that there are …

How does your mind deal with surprises? - Chapter 6

The main function of System 1 is maintaining and updating a model of your personal world, which represents normality. This model is constructed by associations that connect ideas of events, circumstances, outcomes and actions that regularly occur. The formed connections become a pattern of associated ideas, which represents the structure of events in your life. It determines how you interpret the present and your future expectations.

Surprises are a crucial element of our mental life: they are the most sensitive indication of how we understand the world and what we expect from it. Expectations come in two varieties: active, conscious expectations and passive ones. Around dinner time, you actively expect your partner to return home from work; when you hear car doors slamming, you expect to see his face, and it would be a surprise if someone else walked in. A passive expectation concerns an event you do not wait for, but are also not surprised by when it happens: although not actively expected, it is normal in that situation.

One event can make a recurrence less surprising. Imagine you run into your old friend Kim while holidaying far away from home. A few weeks later, you see her at a concert in Amsterdam. The second meeting is the more remarkable coincidence, yet you are less surprised to meet her, because the first meeting changed the idea of Kim in your mind. While System 2 knows it is an absurd idea, System 1 has made it seem normal to meet Kim in unusual places. You would have been more surprised if you had met another old friend at the concert, even though meeting Kim was statistically no more likely than meeting any other old friend.

Passive expectations sometimes turn active. Imagine you are driving on the highway and spot a car with a flat tire. Exactly one week later, you see another person stranded with a flat tire at the same location. You will be less surprised the second time, because you now consider that location 'the place where people get stranded with flat tires'. Because the circumstances were similar, the second event created an active expectation: for a long time you will be reminded of stranded cars every time you pass that location, and you will be prepared to spot another unlucky driver.

How does our mind assess normality?

How incidents come to be perceived as normal or abnormal can be explained by ‘norm theory’. If you witness two abnormal events, the second will retrieve the first from memory, and together the two will start to make sense. An example is the ‘Moses illusion’: “How many animals of each kind did Moses take into the ark?” Very few people notice that it was Noah, not Moses, who took the animals into the ark. The thought of animals in an ark sets up a biblical context in which the name Moses is not abnormal, so hearing or reading it comes as no surprise. The (unconscious) associative


Why do people so often jump to conclusions? - Chapter 7

One of the characteristics of System 1 is jumping to conclusions. Jumping to a conclusion is efficient if the conclusion is likely to be true, the costs of a potential mistake are acceptable, and doing so saves a fair amount of time and effort. It is risky when the stakes are high, the situation is unfamiliar, and there is no time to collect further information. In those circumstances an intuitive error is likely, unless System 2 intervenes.

If you read a list of letters that includes the number ‘13’ drawn in the same shape as the letters, you tend to read it as the letter ‘B’. If the same ambiguous shape appeared in a list of numbers, you would tend to read it as ‘13’. The context affects the interpretation of each character: you jump to a conclusion and fail to detect the ambiguity. When there is no explicit context, System 1 generates a plausible context itself. When the situation is uncertain, System 1 places a bet, guided by experience: the current context and recent events strongly influence the interpretation, and when no recent events are relevant, older memories take over (like singing the alphabet). The B/13 example shows that a definite choice was made without you being aware of it. System 1 did not consider alternatives: it does not know conscious doubt. Doubt and uncertainty are the domain of System 2.

What is the confirmation bias?

Psychologist Gilbert came up with the theory of believing and unbelieving. He argued that understanding an idea starts with attempting to believe it. What would it mean if it were true? The first attempt to believe is an automatic process of System 1, which constructs the most plausible interpretation of the situation. Even a foolish idea (“birds drink wine”) will initially be believed due to the automatic process of associative memory searching for connections between both ideas that would make sense of it.

Unbelieving, according to Gilbert, is an operation of System 2. When System 2 is otherwise engaged, we tend to believe almost anything. This is why we are more easily persuaded by commercials when we are depleted and fatigued.

The operations of associative memory are linked to the ‘confirmation bias’. The question ‘Is Naomi nice?’ evokes different memories than the question ‘Is Naomi rude?’. System 2 tests a hypothesis by deliberately searching for confirming evidence. It is a rule of science to test a hypothesis by trying to refute it, but people (even scientists) tend to search for evidence that supports their beliefs. This confirmation bias of System 1 leads to uncritically accepting suggestions and to exaggerating the probability of unlikely events.

What is the halo effect?

If you like someone’s views and opinions, you are likely to also like his or her appearance and voice. The tendency to like or dislike everything about someone, including things you have not observed, is known as the ‘halo effect’.


How are your judgments formed? – Chapter 8

System 2 deals with questions, whether they come from someone else (“Did you like the food?”) or from your own mind (“Do I really need to buy this?”). In both cases, the answer comes from directing your attention and searching your memory. System 1 operates differently: it continuously monitors what is happening inside and outside the mind and effortlessly, without intention, assesses the main elements of the situation. These ‘basic assessments’ shape intuitive judgment, because they are easily substituted for the answers to harder questions. Two other features of System 1 support the substitution of one judgment for another: the ability to translate values across dimensions (“If Chloe were as heavy as she is smart, how heavy would she be?”) and the mental shotgun, by which an intention to perform one computation triggers others, including basic assessments.

An example of a ‘basic assessment’ is the ability to distinguish friend from foe in the blink of an eye. System 1 rapidly judges whether it is safe or not to interact with a stranger. In a single glance at someone’s face, we evaluate how trustworthy and how dominant (and thus threatening) that person is, and whether we expect his or her intentions to be hostile or friendly. Dominance is assessed from the shape of the face (a square chin) and intentions are predicted from facial expressions. Face reading is not fully reliable: a stranger with a round chin and a (fake) smile can still have bad intentions. This evolutionary ability also influences modern voting behavior. Participants in one study were shown campaign portraits of politicians and asked to rate their competence and likability based on their faces alone. In most races, the winner of the election turned out to be the candidate with the higher competence rating; ratings of likability were much less predictive of the result. Competence was judged by combining perceived strength and trustworthiness. However, facial features are not actually predictive of how well someone will perform in office; voters simply reject candidates who lack the attributes they consider important. Brain studies show that the faces of losing candidates evoked stronger negative emotional responses, which illustrates voting by ‘judgment heuristic’.

The influence of System 1 on voting varies among people. Research shows that politically uninformed voters who watch a lot of television are the most likely to fall back on the automatic, quickly formed preferences of System 1: the effect of facial competence on their voting behavior is three times greater than it is for informed voters who watch less television.

What is intensity matching?

Questions about one’s popularity, happiness or a suitable punishment have one thing in common: they refer to an underlying dimension of amount or intensity, signalled by the word ‘more’ (more popular, more happy, more severe). This reflects another ability of System 1: matching values across different dimensions. For example: “Richard read fluently when he was five years old. What is your prediction of his future GPA?” You will answer by translating from one intensity scale to the other, matching his precocious reading to an equally impressive GPA.


How do you generate an intuitive opinion on a complex problem? – Chapter 9

Our minds have intuitive opinions and feelings about nearly everything; only in rare cases do we face a problem to which no solution comes spontaneously to mind. We like or dislike someone before we know much about him or her, and we distrust a stranger without knowing why. We have answers to questions we do not fully understand, based on evidence we can neither defend nor explain.

How do we generate an intuitive opinion on a complex problem? If System 1 cannot find an adequate answer to a difficult question quickly enough, it looks for an easier, related question and answers that one instead. This operation is called ‘substitution’; the intended question is the ‘target question’ and the easier one the ‘heuristic question’. A ‘heuristic’ is a simple procedure that helps find an adequate, though often imperfect, answer to a difficult question.

Substitution can be a useful strategy for solving difficult problems, and it is sometimes implemented deliberately by System 2. Other heuristic answers are the product of the mental shotgun and are not chosen at all. Consider the questions: “How satisfied are you with your current life?” and “How popular will this politician be three months from now?” Before you could give a well-reasoned answer, you would have to settle other questions first. What counts as satisfaction? Which political developments do you expect in the next three months? Considering all of this would be impractical, so instead of an optimally reasoned answer you settle for the heuristic alternative. Sometimes this works well; sometimes it results in serious error. Easily answered (heuristic) questions here could be: “Am I satisfied right now?” and “How popular is the politician currently?”

The automatic processes of intensity matching and the mental shotgun readily produce answers to easy questions that are related to the target question. The lazy System 2 tends to endorse the heuristic answer, although it could reject or modify it by retrieving more information. You probably will not even notice how difficult the target question was, because an intuitive answer came to mind so easily.

What is the mood heuristic for happiness?

A good example of substitution is an experiment in which participants were asked two questions: “How happy have you been lately?” and “How many dates did you have last month?” Asked in this order, the answers showed no correlation: dating did not immediately come to mind when participants rated their happiness. Other participants got the same questions in reverse order, with a totally different outcome: the correlation was very high. Their lives did not actually revolve around dating (the first group showed no correlation with happiness), but the dating question triggered an emotional response. Participants with many dates were reminded of happy moments, while those who did not date were reminded of sad ones. The emotion induced by the dating question was still lingering when the happiness question was asked. The last


When should researchers be more suspicious of their statistical intuitions? – Chapter 10

A study of cancer diagnoses in the United States showed a pattern: the rates were lowest in sparsely populated, rural, Republican areas. What you make of this information, by searching memory and formulating hypotheses, is an operation of System 2; but System 1 is also involved, because System 2 depends on the suggestions and facts retrieved from associative memory. You probably focused on the areas being rural and did not attribute the low rates to Republican policies. It makes sense to credit the rural environment, with its fresher food and cleaner air. However, the areas with the highest rates were also rural and sparsely populated. You might explain that by poverty, limited access to good healthcare or smoking, but living in a rural area cannot explain both the lowest and the highest rates. The key factor was not that the areas were Republican or rural, but that they had small populations. This example shows the complex relation between statistics and our mind.

System 1 excels in one form of thinking: it effortlessly and automatically detects causal links between events. System 1 fails to deal with merely statistical information: facts that change the probability of outcomes but do not cause them.

Imagine a jar filled with balls: half of them black, half yellow. Without looking, you draw 4 balls from the jar, record the number of black balls and throw them back, repeating this many times. If you summarize the outcomes, you will find that the result ‘2 black, 2 yellow’ occurs six times as often as ‘4 black’ or ‘4 yellow’. This is a mathematical fact. Now imagine the US population as balls in a jar, with some balls marked CD (cancer diagnosis). You draw samples of balls and populate each area; the rural samples are the smallest. Extreme outcomes (very many or very few diagnoses) are most likely to be found in the sparsely populated areas. The fact is that the rate of cancer diagnoses varies across areas; the explanation is that extreme outcomes are more likely in small samples. This explanation is statistical, not causal: the small population of an area neither prevents nor causes cancer. There is actually nothing to explain. The rate of diagnoses in those areas is not truly higher or lower than normal; it only looks that way because of a sampling accident. The differences in sample size produced ‘artifacts’: observations generated entirely by an aspect of the research method. Outcomes based on large samples deserve more trust, which is known as the law of large numbers. The ‘sparsely populated’ part probably did not seem relevant to you, and it took some effort to realize that large samples are more precise and that extreme outcomes are found more often in small samples. Even researchers have a poor understanding of sampling effects.
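The jar arithmetic can be checked directly. A minimal simulation (treating each ball as an independent 50/50 draw, i.e. a very large jar):

```python
import random
from collections import Counter

rng = random.Random(42)
counts = Counter()
for _ in range(100_000):
    # Draw 4 balls from a half-black, half-yellow jar (the balls are
    # thrown back each round, so the rounds are independent).
    black = sum(rng.random() < 0.5 for _ in range(4))
    counts[black] += 1

# Binomial fact: P(2 black) = C(4,2)/2**4 = 6/16, while P(4 black) = 1/16,
# so '2 black, 2 yellow' should occur about six times as often as '4 black'.
print(counts[2] / counts[4])  # close to 6
```

Rerunning the loop with more balls per draw shows the same law of large numbers at work: the bigger the sample, the rarer the all-one-color extremes.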

Research psychologists see sampling variation as an unpleasant obstacle in their research project. If the hypothesis to be tested is “The vocabulary of seven-year-old girls is greater than the vocabulary of seven-year-old boys”, you must use a large enough sample


How do unknown quantities enhance bias in your mind? – Chapter 11

The ‘anchoring effect’ occurs when you consider a particular value for an unknown quantity before estimating that quantity. Imagine you are asked to spin a wheel marked 0-100, write down the number it lands on, and then answer the question “What do you think is the percentage of…?” Your estimate will stay close to the number you were made to consider: if the wheel stopped at 15, your answer might be 25%. Likewise, if you are asked “Was Mother Theresa 112 years old when she died?”, your guess will be significantly higher than if the anchoring question had mentioned the age of 40. Another example is the asking price of a house influencing what you consider a reasonable price. The anchoring effect is important and common in everyday life: our judgments are influenced by plainly uninformative numbers.

Anchoring effects are produced by two mechanisms. One is an operation of System 2: deliberate adjustment. The other is an automatic operation of System 1: priming. The anchoring-and-adjustment heuristic is a reasonable strategy for estimating uncertain quantities: start from the anchoring number, assess whether it is too high or too low, and gradually adjust your estimate. The adjustment typically ends as soon as people are no longer sure they should move farther, which is usually too soon. An example of insufficient adjustment is a 16-year-old playing loud music and annoyed parents: the obliging teenager will turn the music down from extremely loud (a high anchor) to a ‘normal’ volume, but will probably fail to adjust it enough. Adjusting means deliberately searching for reasons to move away from the anchor, which requires effort; a mentally depleted person adjusts less and stays nearer to the anchor. Insufficient adjustment is a failure of a lazy or weak System 2.

The priming effect of anchoring is explained by the same automatic operation of System 1 as suggestion: suggestion is a priming effect that selectively evokes compatible evidence. Low and high numbers activate different sets of ideas in memory. A high anchor for temperature makes you retrieve summery memories, which biases your estimate of the annual temperature. The question “Was Mandela younger or older than 134 when he died?” makes your associative machinery generate the impression of a very old man, even though you immediately know that Mandela did not live that long. System 1 tries to make sense of statements by attempting to believe them: it tries to construct a world in which the anchor is the true value.

What is meant by the anchoring index?

Anchoring is one of the few psychological phenomena that can be precisely measured. One group of participants is asked questions with a high anchor (135 years) and another group the same questions with a low anchor (30 years). The difference


How do unknown frequencies enhance bias in your mind? – Chapter 12

What do people do when they want to estimate the frequency of a category (for example, divorces among professors, or poisonous snakes)? Relying on the ease with which instances come to mind during a memory search is called the ‘availability heuristic’. The heuristic is both an automatic operation of System 1 and a deliberate problem-solving strategy of System 2.

The availability heuristic substitutes one question for another, which results in biases (systematic errors). Factors that are potential sources of bias include:

  • Conspicuous events attract attention and are easy to retrieve from memory. Instances of divorces among famous actors will come easily to mind, making you prone to exaggerate the frequency of divorces among famous actors.

  • Dramatic events temporarily increase the availability of their category. Hearing about a fatal bicycle accident on the news temporarily influences your opinion about the safety of cycling.

  • A personal experience is more available than an incident that happened to someone else. Pictures are more available than words and vivid examples are more available than statistics.

Resisting this large set of potential availability biases requires a fair amount of effort: you must reconsider your impressions and intuitions by asking yourself questions such as “Is my impression that house robbery is a major national problem due to my neighbors’ houses recently being robbed?” or “Is it possible that I feel no need to eat healthily because none of my friends got sick last year?”

A well-known study of availability indicates that awareness of our own biases can contribute to a peaceful marriage, and probably to other joint projects. In surveys that asked spouses how large their own contribution to housekeeping, or to causing arguments, was, the self-estimated contributions added up to more than 100%: people remember their own contributions much more clearly than those of the other. The same goes for people working on group projects: most members feel they have done more than their share.

An experiment carried out by psychologist Schwarz has given us a better understanding of the availability heuristic. He wanted to know how people’s impressions of the frequency of a category are affected by being asked to list a specified number of instances of it. The task pits two cues against each other: the ease with which instances come to mind and the number of instances retrieved. The first few instances come easily, but the fluency of retrieval drops for the last ones. People who list eight instances of their own indecisive behavior rate themselves as less indecisive than people who list only three, and people asked to list eight instances of decisive behavior end up thinking of themselves as rather indecisive. Self-rating is dominated by the ease with which instances come to mind: the fluency of retrieval counts for more than the number of instances retrieved. Numerous experiments have yielded similarly paradoxical results. Other examples are: people who are asked to report more


How do risk and availability enhance bias in your mind? - Chapter 13

Economist Kunreuther found that availability effects help explain the pattern of insurance purchases and protective action after disasters. Right after a disaster, people are worried, which makes them eager to purchase insurance and adopt measures of prevention. The effect is temporary: as the memories fade, so does the worry. These recurrent cycles of disaster, concern and growing complacency can be explained by the dynamics of memory.

A classic example of an availability bias is a survey carried out to analyse public perceptions of risk. Participants were asked to consider pairs of causes of death, such as accidents and strokes, or asthma and diabetes. For each pair, they had to indicate the more frequent cause and estimate the ratio of the two frequencies. Their judgments were then compared to the statistics. Some of the findings:

  • 80% of participants judged accidental death more likely than death by stroke, although strokes cause nearly twice as many deaths.

  • Death by tornado was judged more likely than death by asthma, although asthma kills 20 times more people.

  • Death by accident and death by disease were judged equally likely, although death by disease is 18 times more frequent.

It was clear that media coverage influenced the estimates of causes of death, and coverage is itself biased toward sensationalism and novelty. The media shape what the public is interested in and are shaped by it in turn. Unusual causes of death receive disproportionate attention and are therefore perceived as less unusual than they really are. The world in our mind is not an exact replica of the real world: our expectations about the frequency of events are warped by the prevalence and emotional intensity of the information we are exposed to.

The estimates of causes of death represent the activated ideas in associative memory and are an example of substitution. Research also shows that the ease with which ideas of several risks come to mind and the emotional responses to these risks are connected. Terrifying images and thoughts easily come to mind, and vivid thoughts of danger induce fear. Psychologist Slovic introduced the affect heuristic: people rely on their emotions when making decisions and judgments. Do I hate or love it? In many aspects of life, our choices and opinions express our feelings. The affect heuristic is an example of substitution: the difficult question (What do I think about this?) is replaced by the easier question (How do I feel about this?). Slovic relates his finding to the finding of neuroscientist Damasio: when making decisions, our emotional evaluations of outcomes, the bodily state and the avoidance and approach tendencies connected to them all play a key role. Someone who does not show the appropriate emotions before making a decision also has an impaired ability to make reasonable decisions.

Slovic also asked participants for their opinions of several technologies, having them list the risks and the benefits of each. They found an extremely


How do you prevent false intuitive judgement? - Chapter 14

Imagine drawing one ball from a jar. To determine whether the ball is more likely to be black or yellow, you need to know how many balls of each color there are in the jar. The proportion of balls of a specific color is called a ‘base rate’.

If you have to guess the graduate specialization of university student Patrick by ranking a list of fields in order of likelihood, you quickly realize that the relative sizes of enrollment in these fields matter. In the absence of other information about Patrick, you go by the base rates and predict that he is more likely to be enrolled in law than in medicine or art history, simply because law has more students overall. We use base-rate information when we have nothing else to go on.
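Prediction by base rates alone is a one-line computation. A sketch with invented enrollment figures (the numbers are hypothetical; only the ordering logic matters):

```python
# Hypothetical enrollment counts per field; absent any information about
# Patrick himself, the base rates alone decide the ranking.
enrollment = {"law": 12_000, "medicine": 3_000, "art history": 800}
total = sum(enrollment.values())
base_rates = {field: n / total for field, n in enrollment.items()}

# Rank the fields by base rate, most likely first.
ranking = sorted(base_rates, key=base_rates.get, reverse=True)
print(ranking)  # ['law', 'medicine', 'art history']
```

The representativeness error described next amounts to discarding `base_rates` entirely once a vivid personal description is available.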

Now you are asked to read a description about Patrick’s personality. “He is smart, socially skilled, a great listener and he cares for others. He is very capable of taking decisions under pressure and working in a team. He is responsible, disciplined and committed. His handwriting is terrible.” Again, you are asked to guess the specialization of Patrick from a list of fields. This question requires retrieving a stereotype of graduate students in the fields mentioned. The description fits the stereotype of a medical student.

The task of ranking the fields is hard and requires organization and discipline, which are operations of System 2. But the clues in the description activated an association with a stereotype, which is an automatic operation of System 1. When you judge by that association, the accuracy of the description and the base rates of the fields are ignored as irrelevant. Focusing exclusively on the similarity of someone’s description to stereotypes is called ‘representativeness’.

The description of Patrick causes a clash between base rates and representativeness. When a statistician was asked to carry out a similar task and guess Patrick’s profession, he answered ‘medical doctor’: after reading the description, even he neglected the relevance of the base rates. An experiment among psychology students likewise showed substitution: the easier question about similarity (a judgment of representativeness) substituted for the difficult question about probability. Ignoring base rates and the accuracy of the evidence in probability tasks will certainly lead to serious mistakes.

Statisticians and logicians disagree about the meaning of ‘probability’, and each camp has formulated its own precise, competing definition; many treat it as a measure of subjective degree of belief. For non-experts it is a vague notion, close to ‘likelihood’, associated with plausibility, surprise and uncertainty. We roughly know what we mean when we use the word, so the vagueness is not normally troublesome. But questions about likelihood or probability trigger a mental shotgun: they evoke answers to easier questions. One such easy answer is the automatic assessment of representativeness, since System 1 produces impressions of similarity without intending to.

Other examples of the representativeness


How is fallacy formed in you mind? - Chapter 15

A famous and controversial experiment is known as the ‘Linda problem’, designed by Amos and Kahneman to demonstrate the role of heuristics in judgment and their incompatibility with logic. Participants read a description of Linda: single, outspoken, very bright, a former philosophy student who was deeply concerned with discrimination and social justice. People who read it instantly knew which university she had attended, one known for its politically engaged, radical students. Participants were then asked to rank a list of possible scenarios for Linda by representativeness and by probability. They agreed that one scenario (“She is a feminist bank teller”) resembles Linda more than another (“She is a bank teller”): the stereotype of bank tellers does not include feminism, and adding that detail makes for a more coherent story. The twist lies in the judgments of probability, because the two scenarios are logically related. Every feminist bank teller is a bank teller, so the probability that Linda is a feminist bank teller cannot be higher than the probability that she is merely a bank teller: specifying a possible event in greater detail can only lower its probability. This sets the logic of probability against the intuition of representativeness, and the participants ranked ‘feminist bank teller’ above ‘bank teller’ both by resemblance and by probability.

The scenarios ‘bank teller’ and ‘feminist bank teller’ were placed sixth and eighth on the list, close to each other. Kahneman and Amos expected that participants would notice the connection between them and that their rankings would follow logic, but surprisingly, ‘feminist bank teller’ was ranked as more likely. Representativeness had won the battle, which counts as a failure of System 2: nearly 90% of the participants violated the logical rule. Failing to apply an obviously relevant logical rule is called a ‘fallacy’. It is a ‘conjunction fallacy’ when people judge a conjunction of two events (here: feminist and bank teller) to be more probable than one of those events (bank teller) in a direct comparison.
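The logical rule being violated fits in two lines. The probabilities below are invented purely for illustration; the inequality holds for any values:

```python
# Whatever probabilities you assign, a conjunction can never beat its own
# component: P(A and B) = P(A) * P(B | A) <= P(A), since P(B | A) <= 1.
p_bank_teller = 0.02            # hypothetical: P(Linda is a bank teller)
p_feminist_given_teller = 0.30  # hypothetical: P(feminist | bank teller)

p_feminist_bank_teller = p_bank_teller * p_feminist_given_teller
assert p_feminist_bank_teller <= p_bank_teller  # true for any inputs in [0, 1]
print(round(p_feminist_bank_teller, 3))  # 0.006
```

Ranking ‘feminist bank teller’ above ‘bank teller’ amounts to asserting that multiplying a probability by a number no greater than 1 can increase it.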

In the short version of the Linda problem, participants simply had to say which alternative is more probable: “She is a bank teller” or “She is a bank teller and a feminist”. Unlike the long version, where the two outcomes were separated by the intervening seventh scenario and judged independently, with no comparison made between them, this version invites a direct comparison that should mobilize System 2. Remarkably, the fallacy survived even here: the large majority of respondents still judged the conjunction more probable.

Hsee’s dinnerware study provided a striking demonstration of the less-is-more pattern. He asked participants to price dinnerware sets. One group was shown a display that allowed a comparison between two sets (set X: 40 pieces, of which 9 are broken, and set Z: 24 intact pieces); this is called a ‘joint evaluation’. The two other groups were each shown only one of the sets, making it a ‘single evaluation’.


How does causally connected storytelling enhance bias in you mind? - Chapter 16

Read the following problem. What is your intuitive answer?

“Last night, a bus was involved in a traffic accident. Two bus companies, the Yellow and the Brown, operate in the area. 80% of the buses are Yellow and 20% are Brown. A witness identified the bus as Brown. The reliability of the witness under similar circumstances was tested: the witness correctly identified each of the two colors 75% of the time and failed 25% of the time. What is the probability that the bus was Brown?”

There are two pieces of information: a base rate and the imperfectly reliable testimony of the witness. Without the witness, the probability that the bus was Brown is 20%: the base rate. If the two companies had been equally large, the base rate would be uninformative (50/50) and only the reliability of the witness would matter. Most people ignore the base rate and answer 75%. The two sources of information should instead be combined by Bayes’ rule of probability.
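Combining the base rate with the witness's reliability is a short computation; a sketch using the numbers from the problem:

```python
# Prior from the base rate of Brown buses.
p_brown, p_yellow = 0.20, 0.80
# Witness accuracy: identifies either color correctly 75% of the time.
p_say_brown_if_brown = 0.75
p_say_brown_if_yellow = 0.25

# Bayes' rule: P(Brown | witness says Brown).
numerator = p_brown * p_say_brown_if_brown               # 0.15
evidence = numerator + p_yellow * p_say_brown_if_yellow  # 0.15 + 0.20 = 0.35
posterior = numerator / evidence
print(round(posterior, 2))  # 0.43, far below the intuitive answer of 0.75
```

Setting `p_brown = p_yellow = 0.5` makes the same formula return 0.75, confirming that with equal fleets only the witness's reliability matters.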

Now consider a version in which the base rate is presented differently: “The two companies operate the same number of buses, but Yellow buses are involved in 80% of the accidents.” The two versions are mathematically the same, but psychologically different. People who read the first version tend to ignore the base rate because they do not know what to do with it: the number of Yellow and Brown buses in the area says nothing about why this driver got into an accident. People who read the second version give the base rate considerable weight, and their judgments come closer to the correct answer. Yellow drivers cause four times as many accidents, which leads to the instant conclusion that Yellow drivers are dangerous drivers. You have formed a stereotype of Yellow drivers, and that stereotype fits easily into a causal story.

The bus example demonstrates two types of base rates: statistical base rates (facts about a population that are not relevant to the individual case) and causal base rates. Statistical base rates are often underweighted or neglected entirely when specific information about the case at hand is available. Causal base rates are treated as information about the concrete case and are easily combined with other relevant facts. The causal version of the bus problem works like a stereotype: a statement about a group that is applied as a fact about individual members. System 1 represents categories by prototypical exemplars and norms; our memory holds a representation of one or more typical members of every category (cats, blenders). When the category is social, such a representation is called a stereotype. In the bus problem, stereotyping the Yellow drivers improved the accuracy of judgment. In other contexts, such as profiling or hiring, stereotyping is considered morally (and legally) wrong, and causal base rates are deliberately rejected. However, rejecting valid stereotypes results in judgments that are not optimal. It might be politically correct, but it is

How does causal interpretation enhance bias in your mind? - Chapter 17

A key principle of skill training is that rewarding improvement works better than punishing mistakes. An experienced instructor doubted this: he stated that his students performed worse after receiving a compliment and did better after being shouted at. He was both right and wrong. A praised performance is indeed likely to be followed by a poor performance, and punishment is normally followed by an improved performance, but the conclusion he drew about the efficacy of punishment and reward was wrong. What he observed is known as ‘regression to the mean’, and it was due to random fluctuations in the quality of performance. He praised only a student who performed much better than average, but that one performance was partly a matter of luck, which is why the next performance was likely to be of lower quality. The praise did not cause the poor performance. The instructor’s mistake was attaching a causal interpretation to random fluctuations.

Imagine two golf players competing in a tournament. One had a great performance on the first day, which makes you think he is more talented than the average competitor and also had better luck than others. The other player performed poorly, so he is probably less talented and unlucky. If you had to guess their scores on the second day, you would predict that the first player will again score above average (he is still more talented) and the other player below average. Luck, however, is unpredictable, so your best guess is that each player's luck will be average on the second day. Conclusion: player 1 will perform well, but not as well as on the first day, since he won’t be that lucky again, and player 2 will perform below average but better than on the first day, since he won’t be that unlucky again. The gap between the two players will shrink: the performances on the second day will be more moderate, closer to the average than the scores on the first day. This is another example of regression to the mean.
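The two-day pattern can be simulated. A sketch in Python, with hypothetical numbers (stable talent around 100 with a small spread, luck with a larger spread), showing that the best day-1 performers fall back toward their talent level on day 2:

```python
import random

# Sketch of regression to the mean with hypothetical numbers: a player's daily
# performance is stable talent plus random luck. Players who did extremely well
# on day 1 were, on average, also lucky, so their day-2 performance tends to
# drop back toward their talent level.

random.seed(42)

def performance(talent: float) -> float:
    return talent + random.gauss(0, 5)   # luck: mean 0, standard deviation 5

talents = [random.gauss(100, 2) for _ in range(10_000)]
day1 = [performance(t) for t in talents]
day2 = [performance(t) for t in talents]

# Average day-2 performance of the 100 best day-1 performers
best = sorted(range(len(day1)), key=lambda i: day1[i], reverse=True)[:100]
avg_day1 = sum(day1[i] for i in best) / len(best)
avg_day2 = sum(day2[i] for i in best) / len(best)
print(avg_day1 > avg_day2)   # True: the top performers regress toward the mean
```

No causal story is needed: the drop follows from selecting on a score that mixes talent with luck.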

A famous example is the ‘Sports Illustrated jinx’: after gracing the cover of this magazine, a sportsperson is expected to perform worse in the next season. This is often explained by increased pressure or overconfidence. However, the explanation is simpler: a sportsperson who makes it onto the cover has performed extremely well in the last season, most likely with the help of good luck, and luck fluctuates.

Conclusion: the difference between a first and a second performance does not need a causal explanation; it is a mathematical consequence of luck.

The notion of regression to the mean was introduced by Sir Francis Galton in the late 19th century. He compared the height of children to the height of their parents and found that the children's height was not similar to that of their parents but was closer to the average: tall parents had shorter children, and very short parents had taller children. The study also demonstrated

How can you tame and correct your intuitive predictions? - Chapter 18

Forecasting is a major part of our professional and private lives. A number of predictive judgments are based on analyses or computations, but most involve System 1 and intuition. Some intuitions draw on expertise and skill, gained through experience. The automatic and quick judgments and decisions of physicians, chess masters and fire chiefs are examples of skilled intuitions: they quickly come up with solutions, because they recognize familiar cues. Other intuitions are the result of (substitution) heuristics. Numerous judgments arise from a combination of intuition and analysis.

What are nonregressive intuitions?

A question regarding a current situation and a prediction activates System 1. “Mark is currently a bachelor student. He could count to 30 when he was two years old. What is his GPA?” People who have knowledge about the educational system provide quick answers thanks to the operations of System 1:

  • Seeking a causal connection between the evidence (Mark’s counting) and the target of the prediction (his GPA), which in this case is academic talent. The associative memory then automatically and rapidly forms the best possible story from the available information (WYSIATI).

  • Evaluating the evidence in relation to the relevant norm. How precocious is a child who counts to 30 at the age of two?

  • Substitution and intensity matching. The evaluation of the evidence of cognitive ability at an early age is substituted as an answer to the question about his GPA in college. Mark will be assigned the same percentile score for his GPA and for his achievements as an early counter.

  • Intensity matching: from the general impression of Mark’s academic achievements to a GPA score that matches the evidence for his talent. This impression needs to be translated into a corresponding GPA score.

The task required evaluating the evidence and predicting an outcome. This example clearly shows the role of substitution: when asked for a prediction, people substitute an evaluation of the evidence, without being aware that the question they answer is not the question they were asked. This leads to systematically biased predictions, as regression to the mean is fully ignored.
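The substitution and intensity-matching steps can be sketched numerically. The following Python fragment is purely illustrative: the 90th-percentile judgment of Mark's precociousness and the GPA distribution (mean 3.0, standard deviation 0.5) are assumptions, not figures from the text:

```python
from statistics import NormalDist

# Hypothetical sketch of the nonregressive (substitution) prediction: the
# percentile assigned to the evidence is simply carried over to the GPA scale.
# The percentile and the GPA distribution are illustrative assumptions.

evidence_percentile = 0.90   # "counting to 30 at age two" judged very precocious
gpa = NormalDist(mu=3.0, sigma=0.5)

intuitive_gpa = gpa.inv_cdf(evidence_percentile)   # same percentile, GPA scale
print(round(intuitive_gpa, 2))  # 3.64 -> an extreme, nonregressive prediction
```

The prediction is as extreme as the evidence itself, which is exactly what regression to the mean forbids.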

How can intuitive predictions be corrected?

The right way to predict Mark’s GPA is by using a formula for the factors that determine college grades and counting age:

GPA = factors specific to GPA + shared factors = 100%

Counting age = factors specific to counting age + shared factors = 100%

The shared factors are the degree to which family supports academic interests, genetically determined aptitude and other factors that would cause the same people to be precocious counters as toddlers and academic talents as adults. The correlation between the two measures (GPA and counting age) equals the proportion of shared factors among their determinants. Assume that proportion is 30%.
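Given the 30% assumption, the corrective recipe can be written out: start from the average GPA and move toward the intuitive (matched) prediction in proportion to the correlation. A sketch in Python, with the baseline of 3.0 and the intuitive prediction of 3.64 as hypothetical values:

```python
def corrected_prediction(baseline: float, intuitive: float, r: float) -> float:
    """Move from the baseline (average outcome) toward the intuitive
    prediction in proportion to the correlation r between evidence and
    outcome. r = 0 gives pure base rate; r = 1 gives pure intuition."""
    return baseline + r * (intuitive - baseline)

# Hypothetical numbers: average GPA 3.0, intuitive (matched) prediction 3.64,
# correlation between counting age and GPA assumed to be 0.30 as in the text.
print(corrected_prediction(3.0, 3.64, 0.30))  # about 3.19, far less extreme
```

With a correlation of only 0.30, the corrected prediction stays much closer to the average than the intuitive one, which is the point of taming intuition.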

Why is every success story you read or hear often wrong? - Chapter 19

The concept of a ‘narrative fallacy’ was introduced by Nassim Taleb and describes how flawed stories of the past shape our current views and future expectations. An explanation is considered more appealing if it is concrete, assigns a significant role to talent, intentions or ignorance (instead of luck) and focuses on a few conspicuous events that happened rather than on the numerous events that did not happen.

People are prone to interpret someone’s behavior as a reflection of personality traits and general propensities, which are easy to match to effects. The halo effect contributes to coherence: our judgment of one significant attribute influences how we view all qualities. If you consider a soccer player to be strong and attractive, you are likely to think of him as an excellent player as well. If you find him unattractive, you will probably underrate his soccer skills. The halo effect exaggerates the consistency of judgments: bad people are all bad and nice people do only nice things. Reading ‘Hitler liked cats and toddlers’ causes a shock, because such a bad person having a good side violates our expectations.

When you read a story about the founders of a highly successful company, with almost every choice they made having a good outcome, you get the sense that you understand what made the company succeed. You get the feeling that you learned what it takes to found successful companies. It is, however, very likely that your sense of understanding and learning from the story is mostly illusory. An explanation can be tested by determining whether it would have made the event predictable in advance. The story about the successful company won’t meet that test, because no story can include all the events that would have caused a divergent outcome. Our minds can’t handle events that did not happen. The fact that most significant events involved choices makes you exaggerate the role of skill and underestimate the influence of luck. Although the founders were skilled, luck had a big influence on the great outcome. This demonstrates the power of the WYSIATI-rule. You deal with the restricted information you received as if it were all there is to know. You construct the best possible story from the available information and if it’s a nice one, you believe it. The less you know, the easier it is to form a coherent story.

People who say “I knew well before the economic crisis happened that it was inevitable” are wrong: they thought it would happen, they did not ‘know’ it. They say ‘knew’ afterwards only because it did happen.

It is an illusion to believe that we understand the past, because we understand it less than we believe we do. The words ‘know’, ‘premonition’ and ‘intuition’ refer to past thoughts that turned out to be true. They need to be avoided in order to think clearly about future events.

What are the costs of hindsight?

How does the illusion of validity make you overconfident in your ability to predict the future? - Chapter 20

System 1 is known for jumping to conclusions from limited evidence (WYSIATI). The coherence of the story created by System 1 and System 2 makes us confident about our opinions. The quality and amount of the evidence matter less, because even poor evidence can create a good story. We even hold beliefs without any evidence for them, simply because our partner or friend holds them. It is remarkable how confident we are in our beliefs when we know so little.

The illusion of validity can be explained by the following experiment. Two psychology students watched soldiers in officer training trying to solve a problem, and evaluated them: who would make a great army leader? The exercise was exhausting and required several attempts. Some soldiers seemed arrogant, persistent, submissive, hot-tempered, stubborn, patient or quitters. Some were expected to become great leaders; others were ruled out for the officer rank. The students were convinced the exercise revealed the true nature of the soldiers and were very confident in their evaluations. Their impressions were coherent and did not conflict. After some time, they learned how the soldiers were doing and compared their evaluations against the reports by the soldiers’ commanders. This showed that the predictions about performance in officer training were hardly better than blind guesses. The students then repeated the exercise with a new group of soldiers. The shocking truth about the quality of their previous predictions had no effect on how they evaluated the soldiers and very little effect on how confident they felt in their judgments and predictions. The evidence of their prior failure did not damage their confidence and did not cause a moderation of the predictions, which it should have. They still felt their predictions were valid. This persistence resembles the Müller-Lyer illusion: knowing that an impression is an illusion does not make it go away.

The predictions of the future performances of the soldiers are instances of the representativeness heuristic and of substitution. The predictions were based on 60 minutes of the soldiers’ behavior in artificial circumstances; they were totally nonregressive and supported by weak evidence (WYSIATI). The experiment demonstrates that confidence reflects the coherence of the information and the cognitive ease of processing it. Remember that a very confident person has formed a coherent story in his mind, which does not necessarily mean the story is true.

What is the illusion of stock-picking skill?

Each day, billions of shares are traded, which involves many buyers and sellers. They have access to the same information; their exchange is based on a difference of opinion. A buyer thinks the price will rise, while the seller expects it to drop. Something makes them believe they know more about the future price than others, but for many, that belief is merely an illusion. The price of a stock is based on the available information about the company’s value and the best prediction about the future of the stock. If someone believes

How can you use statistics to correct intuitions? - Chapter 21

Psychologist Meehl reviewed the results of studies that had assessed whether ‘clinical predictions’ based on the subjective impressions of trained professionals were more accurate than ‘statistical predictions’ made by combining ratings or scores according to a rule. In one study, trained counselors were asked to predict the grades of students at the end of their first school year. The counselors interviewed the students and had access to personal statements, aptitude tests and high school grades. The statistical formula used only one aptitude test and high school grades, yet it was more accurate than 11 of the 14 counselors. Reviews of other studies showed similar results for a variety of predictions: criminal recidivism, parole violations, success in pilot training.

The outcome shocked clinical psychologists and led to many more studies, but fifty years later, algorithms still score better than humans. About 60% of the studies show that algorithms are more accurate; the others resulted in a tie. Domains that involve a fair amount of unpredictability and uncertainty are called ‘low-validity environments’. Examples are medical variables (longevity of patients, diagnoses of diseases, length of hospital stay), economic measures (prospects of success, assessments of credit risk) and governmental interests (odds of recidivism, likelihood of criminal behavior). In all these cases, the accuracy of the algorithms was as good as or better than that of the experts.

Simple statistics beat the predictions of world-renowned professionals. Meehl’s explanation is that experts try to be smart, consider complex combinations of features and think outside the box; complexity usually reduces validity. Research has shown that human experts are inferior to formulas even when they are handed the score predicted by the formula: they believe they can do better than the formula because they have more information about the case. Another explanation is that people are inconsistent in making summary judgments of complex information. Two evaluations of the same information often result in two different answers. This inconsistency is probably caused by the context dependence of System 1: unnoticed stimuli in our environment influence our actions and thoughts.

Meehl’s research indicates that final decisions should be made by formulas, particularly in low-validity environments. The final selection of students for medical schools is often determined by interviewing the candidates, which reduces the accuracy of the selection procedure. Interviewers have too much confidence in their intuitions and favor their impressions over other information sources, which reduces validity.

The dominant statistical practice in the social sciences is assigning weights to several predictors through the formula of ‘multiple regression’. Robyn Dawes argued that this complex statistical algorithm adds little value. Studies show that formulas assigning equal weight to all predictors are often superior, because they are not affected by accidents of sampling. Equal weighting has a major advantage: useful algorithms can be developed without any previous statistical research. Simple equally weighted formulas based on common sense or on existing statistics are excellent predictors of significant outcomes.
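A minimal sketch of such an equally weighted formula, using made-up scores for two predictors (an aptitude test and high school grades): standardize each predictor, then simply sum the z-scores, with no fitted weights at all:

```python
from statistics import mean, stdev

# Sketch of Dawes-style equal weighting with hypothetical data: each predictor
# is standardized (z-scored) so the scales are comparable, then the z-scores
# are summed. No regression fitting, hence no sensitivity to sampling accidents.

def zscores(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# Two predictors for five hypothetical candidates
aptitude = [520, 610, 480, 700, 590]
grades   = [3.1, 3.6, 2.8, 3.9, 3.3]

composite = [a + g for a, g in zip(zscores(aptitude), zscores(grades))]
ranking = sorted(range(5), key=lambda i: composite[i], reverse=True)
print(ranking)  # [3, 1, 4, 0, 2]: best combined standing first
```

The composite can be built from common sense alone: pick sensible predictors, standardize, and add them up.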

Clinical psychologists received Meehl’s finding with disbelief and hostility, due

When do your judgments reflect true expertise? – Chapter 22

Gary Klein is the intellectual leader of the students of Naturalistic Decision Making (NDM), who study real people in natural situations. He rejects the focus on heuristics and biases and on artificial experiments, and is highly skeptical about choosing algorithms over human judgment. Klein is known for his studies of expertise in firefighters and of the development of intuitive skills in experienced experts. Despite their differences, Kahneman worked together with Klein on a joint project in order to answer the question “When can you trust an experienced professional who claims to have an intuition?” They agreed in their assessment of Gladwell’s bestselling book ‘Blink’, which describes art experts who had a gut feeling that an object was a fake but could not tell what exactly made them think it was not the real deal. They knew it was a fake without knowing how they knew: a perfect example of intuition.

While Kahneman’s views of intuition were shaped by observing the illusion of validity in himself and by reading Meehl’s review of clinical predictions, Klein’s thinking was shaped by his studies of fireground commanders, whom he observed and interviewed. He introduced the ‘recognition-primed decision’ (RPD) model, which applies to many kinds of experts (from fire commanders to chess masters). System 1 and System 2 are both involved in this process: a tentative plan automatically comes to mind (System 1) and then gets mentally tested (System 2). This model of intuitive decision making revolves around recognition: the situation provides a cue, the cue retrieves information from memory, and the information provides the solution. Intuition is merely recognition.

Information gets stored in memory through emotional learning, as with fear: a scary experience stays with you for a long time. Fear can be learned from experience and from words; soldiers are trained to identify dangerous situations and firefighters discuss all types of fires with colleagues. Emotional learning is quick, but developing expertise takes a long time. Chess masters need more than 10.000 hours of practice to reach the top. During these hours, players become familiar with all the possible moves and learn to read a situation quickly.

Kahneman and Klein agreed that the confidence of people in their intuitions is not a reliable source of validity. But if subjective confidence cannot be trusted, how do we know when judgments reflect true expertise? The answer lies in the two conditions for acquiring a skill:

  • The environment must be sufficiently regular to be predictable.

  • There must be an opportunity to learn the regularities through prolonged practice.

An intuition is normally skilled when both conditions are met. Chess players, nurses, physicians, firefighters and athletes are active in regular, orderly situations. Political scientists and stock pickers are not: they operate in a less regular (low-validity) environment.

What is the importance of the 'outside view' versus the 'inside view' for your judgements? – Chapter 23

Kahneman was asked to write a textbook about decision making and judgments. After one year, a number of chapters and the syllabus had been written, which was considered good progress. Kahneman asked his team members to separately estimate how long it would take to finish the textbook. The average estimate was two years. He then asked an expert in developing curricula, who was part of the team, how long it had taken similar teams to finish a textbook. The expert answered that about 40% of such teams never managed to complete one, and that the teams that did had taken seven to ten years. He also rated the resources and skills of Kahneman’s team as slightly below average. Kahneman had never considered the possibility of failing. Even the expert himself was surprised, as his own previous estimate had been two years. Before the question was asked, his mind had not made the connection between his knowledge of the progress of other teams and his prediction for the team he was in. While everybody now ‘knew’ that a 40% chance of failure and a minimum of seven years was more realistic than the prediction of two years, they did not act on this information. It seemed unreal, because it was impossible to imagine the project taking so long. The base-rate information should have led to the conclusion that writing a textbook is much harder than previously thought, but that conflicted with the direct experience of making good progress. It would have been wise to cancel the project, because no one wanted to spend six more years on a project with a fairly high chance of failure, but they continued. The textbook was finished eight years later, after numerous unpredictable events. Three lessons can be learned from this story:

  1. There is a distinction between two very different approaches to predicting, called the inside view and the outside view.

  2. The initial predictions exhibited a planning fallacy. The predictions were closer to a best-case scenario, lacking realism.

  3. Irrational perseverance: the project was not cancelled. When faced with the choice, they gave up rationality rather than give up the project.

The inside view was adopted to assess the future of the project. The team focused on their specific circumstances and searched for evidence in their own experiences. They knew how many chapters they were going to write and how long it had taken to write the chapters that were already finished. Only a few less optimistic members of the team added some months to their estimates as a margin of error. The predictions were based on the available information (WYSIATI), but the chapters already written were probably the easiest and the motivation was at its peak. The biggest problem was failing to take the ‘unknown unknowns’ into account. On that day,

What is the best remedy for overconfident optimism? – Chapter 24

The planning fallacy is one of many manifestations of the optimistic bias. Many people view their attributes as more favorable than they probably are and consider their goals as more achievable than they probably are. Optimistic bias can be a risk as well as a blessing, which is why you should be cautious when you feel optimistic.

Some people are more optimistic than others. They are usually happy, popular and resilient, and they play a disproportionate role in shaping society. Their decisions affect others: they are leaders, inventors and entrepreneurs. They seek challenges, take risks, and are talented and lucky. Their successes and the admiration of others make them even more confident. This description leads to the following hypothesis: the most influential people are likely to be overconfident and optimistic, and to take more risks than they realize. The evidence indeed indicates that an optimistic bias causes people and institutions to take on risks.

The chances that a small company will survive for five years in the US are slightly over 33%, but people who start a company tend to believe that these statistics do not apply to them. Research shows that American entrepreneurs regard their own company as an exception: their estimated chance of success was almost twice the base rate, around 60%. Would they still have invested time and money if they had known the odds? They never considered the outside view.

Persistence in the face of obstacles is considered one of the benefits of optimism, but persistence can be costly. Studies show that almost half of the people who were told their project would not succeed continued it anyway, and their initial losses doubled.

According to psychologists, the majority of people genuinely believe that they are better than others; they would even bet money on it. This belief has significant consequences in the market. Misguided acquisitions by large businesses in the stock market are explained by the ‘hubris hypothesis’: the leaders of acquiring firms are less competent than they think they are. The optimistic risk taking of entrepreneurs contributes to the economic dynamism of our capitalistic society, but it also raises policy issues. Should founders of small companies be financially supported by the government when they are very likely to fail? There is no satisfying answer to this question.

Entrepreneurial optimism is not merely explained by wishful thinking; emotions and cognitive biases also play a significant role, especially the WYSIATI rule of System 1. Focusing on the goal and neglecting relevant base rates can result in the planning fallacy. Focusing on the causal role of skill and neglecting the role of luck can result in the illusion of control. Focusing on what is known and neglecting what is not known leads to overconfidence.

Many founders believe that the success of their company depends to a great extent on their effort. They think their fate is almost completely in their own hands.

How does your valuing relate to actual value? – Chapter 25

Economists and psychologists have very different views of people. The former think of them as rational and selfish beings; the latter argue that people are neither completely rational nor completely selfish. Kahneman and Amos Tversky studied people’s attitudes to risky options in order to answer the question “What rules govern choices between different simple gambles and between sure things and gambles?”

A simple gamble is, for instance, “a 45% chance to win € 500”. In a gamble, the consequences of the choice are always uncertain. Choices between simple gambles provide a model that shares its main features with more complex decisions. The ‘expected utility theory’ was the basis of the rational-agent model and is still the most important theory in the social sciences.

Consider the following simple decision problem. Which do you prefer? 

  1. Toss a coin. Heads: you win € 105. Tails: you win nothing.

  2. Get € 50 for sure.

The intuitive choice of most people would be the second option. The study of Kahneman and Amos resulted in the ‘prospect theory’, a descriptive model constructed to explain systematic violations of the axioms of rationality in choices between gambles. Their article about the theory is one of the most cited in the field. A few years later they published an essay about framing effects: the significant changes in preferences that are sometimes caused by inconsequential variations in the way a choice problem is worded.

Daniel Bernoulli introduced a theory about the relationship between the psychological desirability or value of money (now called utility) and the actual amount of money. According to Bernoulli, a gift of 10 euros has the same value to someone who already has 100 euros as a gift of 20 euros to someone who already has 200 euros. This fits the way we talk about income changes as percentages: a 20% raise evokes a similar psychological response in the poor and in the rich, which a fixed amount would not do. Psychological responses to a change in wealth depend on the initial amount of wealth, which implies that utility is a logarithmic function of wealth. Bernoulli used this insight to introduce a new approach to the evaluation of gambles. He argued that the majority of people dislike risk and want to avoid the poorest outcome, so people will choose a sure thing even if it is worth less than the expected value of the gamble. His theory holds that the psychological value of a gamble is the average of the utilities of its outcomes, each weighted by its probability, and not the weighted average of the possible monetary outcomes. The theory explains why poor people buy insurance and wealthy people sell it to them.
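Bernoulli's argument can be checked on the earlier coin-toss problem (heads: win € 105, tails: nothing, versus € 50 for sure). A sketch in Python, with the initial wealth of € 100 as an assumption:

```python
from math import log

# Sketch of Bernoulli's idea with a hypothetical wealth level: utility is the
# logarithm of total wealth, so a risk-averse agent can prefer EUR 50 for sure
# over a coin flip for EUR 105, despite the gamble's higher expected value.

wealth = 100  # assumed initial wealth in euros

u_sure = log(wealth + 50)
u_gamble = 0.5 * log(wealth + 105) + 0.5 * log(wealth)  # heads 105 / tails 0

print(u_sure > u_gamble)   # True: the sure thing has higher expected utility
print(0.5 * 105 > 50)      # True: yet the gamble's expected value is higher
```

The concave (logarithmic) utility curve is doing all the work: the extra euros from a lucky flip add less utility than the euros lost on an unlucky one.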

300 years later, his theory of risk attitudes and the preference for wealth is still being used in economic analysis. This is quite surprising, as it is fairly flawed, which is illustrated by the following example:

Why is 'Prospect theory' better than 'Utility theory' in understanding the evaluation of financial outcomes? – Chapter 26

In utility theory, the utility of a gain is determined by comparing the utilities of two states of wealth. The utility of receiving an extra € 400 when your wealth is € 2 million is the difference between the utility of € 2.000.400 and the utility of € 2 million. If you lose € 400, the disutility is again the difference between the utilities of the two states of wealth. It was assumed that the distinction between losses and gains did not matter, so it was never examined: a case of theory-induced blindness.

Kahneman and Amos had focused on differences between gambles with low or high probabilities of winning, until Amos casually mentioned losses. For losses, risk aversion turned out to be replaced by risk seeking. Consider the following problems:

  1. Which do you prefer? Get € 800 for sure or a 90% chance to get € 900.

  2. Which do you prefer? Lose € 800 for sure or a 90% chance to lose € 900.

The first problem induces risk aversion: you would go for the sure € 800. In the second problem, you will probably go for the gamble (the risk). When all options are bad, people tend to become risk seeking.

The comparison between the problems emphasizes the importance of the reference point: the previous state relative to which losses and gains are evaluated. People are usually not aware of reference points, and Bernoulli’s theory lacks them; prospect theory takes them into account. Prospect theory involves three cognitive features (associated with System 1), which play a crucial role in the evaluation of financial outcomes and are common to many automatic processes of emotion, judgment and perception:

  • Evaluation is relative to a neutral reference point (the ‘adaptation level’).

  • A principle of diminishing sensitivity applies to the evaluation of changes of wealth, just as it does to sensory dimensions.

  • Loss aversion: losses loom larger than corresponding gains.
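These three features are often summarized in a value function over gains and losses relative to the reference point. A sketch in Python; the exponent 0.88 and the loss-aversion coefficient 2.25 are the commonly cited Tversky-Kahneman estimates, used here purely as illustrative assumptions:

```python
# Sketch of a prospect-theory value function over gains and losses relative to
# a reference point. The curvature (0.88) and loss-aversion coefficient (2.25)
# are the often-cited Tversky-Kahneman estimates, assumed here for illustration.

ALPHA = 0.88        # diminishing sensitivity for both gains and losses
LAMBDA = 2.25       # loss aversion: losses loom larger than gains

def value(x: float) -> float:
    """Subjective value of a change x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print(value(100))                     # about 57.5: a gain of 100
print(value(-100))                    # about -129: the matching loss
print(abs(value(-100)) / value(100))  # 2.25: the loss hurts 2.25x more
```

Note how the function encodes all three features: it is defined on changes (reference point), it is concave for gains and convex for losses (diminishing sensitivity), and it is steeper for losses (loss aversion).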

Why is 'Prospect theory' better than 'Utility theory' in understanding the endowment effect of valuing valuables? – Chapter 27

Imagine looking at a graph displaying someone’s ‘indifference map’ for two goods: income and vacation days. Each curve on the map connects combinations of the two goods that are equally desirable: they have the same utility. The convex shape reflects diminishing marginal utility: the more vacation days you have, the less you care for one more, and each added day is worth less than the previous one. Likewise, the more income you have, the less you care for an extra euro, and the amount of money you are willing to give up for an extra vacation day increases. All locations on an indifference curve are equally appealing.

All economics textbooks contain images of indifference curves, but few students have noticed that something is missing: an indication of the person’s current income and vacation days, also known as the reference point. This is another instance of Bernoulli’s error: utility is not completely determined by your current situation, because the past is also relevant. The missing reference point is another example of theory-induced blindness.

Richard Thaler introduced the ‘endowment effect’: owning a good increases its value, especially if the good is not regularly traded. Imagine you bought a ticket for a major soccer match for the normal price of € 300. You are a big fan of the participating team and would have been willing to pay a maximum of € 600. You read online that all tickets are sold out and people are now offering € 3.500. Would you sell your ticket? Probably not: your lowest selling price is above € 3.500, while your maximum buying price is € 600. The endowment effect can be explained by prospect theory. The willingness to sell or buy depends on the reference point: whether or not the person currently owns the good. If he is the owner, he considers the pain of giving up the good; if he is not, he considers the pleasure of getting it. These values are not equal because of loss aversion: giving up a good is more painful than getting a similar good is enjoyable. The reaction to a loss is stronger than the reaction to a corresponding gain.


    How is your decision-making affected by avoiding a loss and achieving a gain? – Chapter 28

    In an experiment, people lying in a brain scanner were shown several images, among them pictures of the eyes of a happy person and of a terrified person. The images appeared for a fraction of a second: the participants never consciously knew they had seen them. One part of their brain did know: the amygdala, the ‘threat center’. Brain images showed an intense reaction to the threatening picture. The same process makes us process angry faces (a possible threat) faster and more efficiently than happy faces: an angry face in a happy crowd gets noticed faster than the reverse. Our brains are equipped with a mechanism that gives priority to bad news.

    Our brains also respond faster to merely symbolic threats. Bad words (war, murder), emotionally loaded words and opinions with which you strongly disagree attract attention more quickly than their opposites. Loss aversion is another manifestation of this negativity dominance. Bad feedback and bad parenting prove to have more impact than their good counterparts, and bad impressions and stereotypes are formed faster. As Gottman argues, the long-term success of a marriage depends more on avoiding negatives than on seeking positives. One bad action can ruin a long-term relationship. The boundary between good and bad is a reference point that changes over time and depends on the current situation.

    People are driven more strongly to avoid a loss than to achieve a gain. A reference point can be a future goal or the status quo. These two motives have different strengths: the aversion to falling short of a goal is a lot stronger than the wish to exceed it. This explains why many people set short-term goals.

    The different intensities of the motives to achieve gains and avoid losses show up in many situations. They are often visible in negotiations, in particular renegotiations of an existing contract, where the reference point is the existing terms. Any proposed change is considered a concession (a loss) by one of the parties, and loss aversion makes reaching an agreement difficult.

    A study of what the public considers unfair behavior by employers, landlords and merchants showed that the opprobrium attached to unfairness imposes constraints on profit seeking.

    The reference point here is the existing rent, wage or price. The participants deemed it unfair for stores to impose losses on customers, even when the stores behaved according to the standard economic model, in which increased demand leads to a higher price. That price increase is experienced as a loss, and exploiting market power to impose losses on others is considered unfair. On the other hand, a firm is seen as entitled to protect its current profit when it faces a loss, even by transferring that loss to customers or workers. Research shows that merchants who set unfair prices are likely to lose sales and that employers who are considered unfair have to deal with reduced productivity.


    How is your decision-making affected by the value you attribute to losses, gains and wealth? – Chapter 29

    When we evaluate complex objects (mother-in-law, gadgets), we assign weights to their characteristics: some have a bigger influence than others, which we might not be aware of. When we evaluate an uncertain situation, we assign weights to the possible outcomes. These weights are correlated with the probabilities of the outcomes: a 40% chance of winning the jackpot is more appealing than a 2% chance. Assigning weights sometimes happens deliberately, but often it is an automatic process of System 1.

    Decision making in gambling provides a natural rule for assigning weights to outcomes: the more probable an outcome, the more weight it gets. The expected value of a gamble is the average of its outcomes, each weighted by its probability. This is called the ‘expectation principle’. Bernoulli applied this principle to the psychological value of the outcomes: the utility of a gamble is the average of the utilities of its outcomes, each weighted by its probability.
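
A minimal sketch of both rules; the gamble and the square-root utility function are made-up, illustrative assumptions:

```python
import math

def expected_value(outcomes):
    # Expectation principle: probability-weighted average of the payoffs.
    # outcomes is a list of (probability, payoff) pairs.
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, u=math.sqrt):
    # Bernoulli's refinement: weight the *utilities* of the payoffs
    # instead (sqrt is an assumed, illustrative utility function).
    return sum(p * u(x) for p, x in outcomes)

# An 80% chance of winning 100 euros, otherwise nothing:
gamble = [(0.8, 100), (0.2, 0)]
print(expected_value(gamble))    # 80.0
print(expected_utility(gamble))  # 8.0
```

Note that under the concave utility function the gamble's expected utility (8.0) is below the utility of its expected value (sqrt(80) ≈ 8.94), which is Bernoulli's explanation of risk aversion.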

    The expectation principle is flawed, because it does not describe how we actually think about the probabilities attached to risky prospects. Consider the following example. The chance of winning the jackpot improves by 5 percentage points. Is every option equally good?

    1. From 0% to 5%

    2. From 5% to 10%

    3. From 50% to 55%

    4. From 95% to 100%

    According to the expectation principle, utility increases equally in each option, but that is not how you experience them. The changes from 0% to 5% and from 95% to 100% appear more impressive than the other two. The first option creates a previously non-existent possibility, which gives hope and is therefore a qualitative change. This impact is known as the ‘possibility effect’: highly unlikely outcomes are weighted disproportionately more than their probability justifies. The change from 5% to 10% is merely a quantitative one: it doubles your chance of winning, but it does not double the psychological value of the prospect. The change from 95% to 100% is another qualitative change, which induces the ‘certainty effect’: outcomes that are almost certain are assigned less weight than their probability justifies.
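
Both effects can be sketched with the probability-weighting function of prospect theory. The functional form and the parameter gamma = 0.61 are the Tversky-Kahneman (1992) estimates for gains, used here as assumptions:

```python
def decision_weight(p, gamma=0.61):
    # Maps a stated probability to the psychological weight it receives.
    num = p ** gamma
    den = (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return num / den

# Possibility effect: a 5% chance gets far more weight than 0.05.
print(round(decision_weight(0.05), 3))  # roughly 0.13

# Certainty effect: a 95% chance gets noticeably less weight than 0.95.
print(round(decision_weight(0.95), 3))  # roughly 0.79
```

Under this assumed curve, small probabilities are overweighted and near-certain probabilities are underweighted, matching the two qualitative changes described above.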

    Certainty and possibility both have powerful effects when it comes to losses. When your mother needs surgery with a 5% risk of dying, that risk feels far worse than half as bad as a 10% risk. The possibility effect causes us to overweight small risks and makes us willing to pay far more than the expected value to avoid them. The psychological difference between a 95% risk of a bad event and certainty (100%) seems even bigger: a tiny bit of hope looms large. The overweighting of small probabilities increases the appeal of both insurance policies and gambling.

    What is the ‘Paradox of Allais’?

    Maurice Allais demonstrated that people are susceptible to a certainty effect and thus violate expected utility theory and the axioms of rational choice. Consider the following puzzles.

    X. 62% chance of


    How is your decision-making affected by rare events? – Chapter 30

    Try to remember a time when terrorist attacks on public transport were relatively common. The attacks were fairly rare in absolute numbers and the risk to individual travellers very small, but that is not how people felt about them: they avoided public transport or were very cautious. Kahneman himself disliked being near buses, even though he knew the risk of an attack was negligible. People assigned an absurdly high decision weight to a very small probability because of the experience of the moment: being near a bus triggered unpleasant thoughts, so they avoided buses. Terrorism is effective because it evokes an availability cascade. Vivid images of victims, constantly repeated by the media and the topic of many conversations, become highly accessible, especially when they are tied to a specific situation (seeing a bus). The emotional response is automatic, uncontrolled and associative, and it generates an impulse for protective behavior. System 2 may know that the probability is low, but System 1 cannot be switched off.

    The same goes for big lotteries. The exciting possibility of winning the jackpot is shared by the community and reinforced by conversations with others. Buying a ticket instantly produces appealing fantasies, just as avoiding public transport was an immediate response to fear. In both cases the mere possibility matters, not the actual probability. According to prospect theory, highly unlikely events are either overweighted or ignored entirely. Kahneman’s current view of decision weights has been shaped by research on the role of vividness and emotion in decision making. Vividness and emotion influence judgments of probability through availability and fluency, and therefore help explain disproportionate responses to rare events.

    Consider the following questions:

    1. What is your judgment of the probability that the national football team of Saudi Arabia wins the next world cup?

    2. How much will you pay for a bet in which you receive € 500 if the next world cup winner is Saudi Arabia, and no money otherwise?

    The first question asks for an assessment of the probability of an unlikely event. The second question requires assigning a decision weight to the same event. People tend both to overestimate the probability of an unlikely event and to overweight it in their decisions. Overweighting and overestimation are distinct notions, but the psychological mechanisms behind them are the same: cognitive ease, confirmation bias and focused attention. The associative machinery of System 1 is triggered by specific descriptions: when you think about an unlikely win by Saudi Arabia, your associative machinery selectively retrieves evidence, images and instances that would make the statement true. The judgment of probability is then determined by the cognitive ease with which a credible scenario comes to mind. As a result, the probability of a rare event will be overestimated whenever the alternative is not fully specified.

    Research demonstrates that the valuation of a gamble is much less sensitive to probability when the outcomes are emotional (kissing, getting electric shocks) than when the outcomes


    How can you remedy the exaggerated caution evoked by loss aversion and the exaggerated optimism of the planning fallacy? – Chapter 31

    What are risk policies? – Chapter 31

    The emotional evaluation of a ‘sure loss’ and a ‘sure gain’ is an automatic response of System 1, which occurs before the expected values of the gambles are computed. When choices involve high or moderate probabilities, people tend to be risk seeking for losses and risk averse for gains, which can be costly: these tendencies make you willing to pay a premium to lock in a sure gain rather than face a gamble, and equally willing to pay a premium (by gambling) to avoid a sure loss.

    Decisions can be construed in two ways:

    • Broad framing: a single comprehensive decision, with four options.

    • Narrow framing: a sequence of two simple decisions, considered apart from each other.

    Logical consistency cannot be achieved by our mind. We tend to avoid mental effort and are susceptible to WYSIATI, so we tend to make decisions as problems arise, even when they should be considered jointly. Broad framing blunts the emotional response to losses and increases the willingness to take risks; financial traders shield themselves from the pain of losses with this type of framing. The combination of narrow framing and loss aversion should be avoided. Individual investors can avoid it by checking less often how their investments are doing: constant checking is unwise, because the pain of frequent small losses exceeds the joy of equally frequent small gains. Deliberately avoiding exposure to short-term outcomes improves the quality of both decisions and outcomes, since the short-term reaction to bad news is usually increased loss aversion.
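
The benefit of checking less often can be sketched with a small simulation. The return parameters (mean 0.03% and standard deviation 1% per day) are invented but broadly plausible numbers, not taken from the text:

```python
import random

random.seed(42)

def fraction_of_losing_looks(days_between_looks, n_looks=2_000):
    # Simulate the cumulative return over each look-window and count
    # how often the investor is confronted with a loss.
    losing = 0
    for _ in range(n_looks):
        total = sum(random.gauss(0.0003, 0.01)
                    for _ in range(days_between_looks))
        if total < 0:
            losing += 1
    return losing / n_looks

daily = fraction_of_losing_looks(1)     # checking every day
yearly = fraction_of_losing_looks(250)  # checking once a trading year

# Narrow frames show losses far more often, feeding loss aversion.
print(daily, yearly)
```

With a small positive drift, daily checks show a loss on nearly half of all looks, while yearly checks show one markedly less often: the broad frame exposes the investor to far fewer painful moments.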

    A decision maker who is prone to narrow framing should have a ‘risk policy’ that he applies whenever a relevant problem arises. Examples of risk policies are "never buy extended warranties" and "always take the highest possible deductible when purchasing insurance". A risk policy is a broad frame that embeds a certain risky choice in a set of similar choices. The risk policy and the outside view are remedies against two opposite biases that influence a lot of decisions: the exaggerated caution evoked by loss aversion and the exaggerated optimism of the planning fallacy.


    How do you keep mental account of gains, losses and regret? – Chapter 32

    For most people, gaining money reflects achievement and self-regard. We keep score in our minds when we gain or lose money, treating the outcomes as rewards and punishments, promises and threats. These ‘scores’ motivate our actions and influence our preferences. Cutting our losses feels like admitting failure, so we refuse to do it.

    We hold money in both physical and mental accounts. We have general savings, special savings for our kids or for emergencies, and money to spend, and we are more willing to draw on some of these accounts than on others to cover current needs. Self-control also plays a role: some accounts are reserved for household expenses, and they limit how much we can spend. Mental accounts are a form of narrow framing: they keep things manageable and under control. They are also used for keeping score: golfers, for instance, keep a separate account for each hole, not just one for their overall score.

    Consider the following example of mental accounting. Two Rihanna fans have to travel 50 miles to see her concert. One of them bought a ticket, the other got one as a gift. A heavy snowstorm is announced for the day of the concert. Which of the two fans is more likely to brave the storm? The fan who paid for the ticket. Both of them set up a mental account for the concert they hoped to attend, and missing it will close that account with a negative balance. Both will be disappointed, but the closing balance is more negative for the fan who paid. System 1 performs these calculations of emotional balance. For System 2 to respond rationally, it would have to consider the counterfactual question: “Would I still travel in this blizzard if I had gotten the ticket as a gift?” That requires a disciplined and active mind.

    Imagine an individual investor who has to sell stock because he needs money. He still remembers the price at which he bought each stock and can label each one a winner or a loser. Fafsung is a winner: selling it now yields a gain of €8,000. He holds an equal investment in Fapple, which is now worth €8,000 less than he paid for it. The value of both stocks has been stable over the past month. Which is he more likely to sell? Closing the Fafsung account records a positive score; closing the Fapple account would record a loss. Choosing between pleasure and pain, he will likely sell Fafsung. Financial research confirms a strong preference for selling winners. This bias is called the ‘disposition effect’, an instance of narrow framing: the state of the mental account is treated as a valid reason to sell. If you cared only about your wealth, you would sell the loser.

    Imagine a firm


    When do preference reversals occur? - Chapter 33

    Consider the following task. A woman lost the use of her left leg as a result of a stabbing during a robbery in a grocery store in her neighborhood. There are two grocery stores near the woman’s home, at one of which she is a regular customer. Two scenarios:

    A. The robbery happened in the woman’s regular grocery store.

    B. The woman’s regular grocery store was closed, so she went to the other grocery store, where she was stabbed.

    Should the grocery store in which the victim was stabbed make a difference to her compensation?

    Your judgment was made in joint evaluation: both scenarios were considered and compared at the same time. Most people agree that the compensation should be the same in both scenarios, because the location is not a relevant factor in victim compensation. This evaluation involved System 2 (thinking slow).

    In an experiment, participants saw only one scenario and were asked to assign it a dollar value (single evaluation). The victim was awarded much greater compensation if she was stabbed in the store she does not usually visit. Poignancy, a counterfactual feeling related to regret (“if only she had shopped at her regular grocery store”), drives this. The mechanisms of intensity matching and substitution (System 1) translate the strength of the emotional response into a monetary value, which explains the big difference in sums.

    Comparing these experiments reveals a sharp contrast. People who see the scenarios together (within-subject) endorse the principle that poignancy is not a legitimate consideration. But that principle can only be applied when the scenarios are seen together, which is rarely the case: life is usually experienced in between-subjects mode. Because of WYSIATI and the absence of contrasting alternatives that could change your mind, your (moral) beliefs do not necessarily govern your emotional responses. The discrepancy between joint and single evaluation of the stabbing scenario belongs to a broad family of reversals of choice and judgment: preference reversals.

    Preference reversals occur because joint evaluation focuses attention on an aspect of the case that is less salient in single evaluation. Single evaluation is mostly determined by the emotional responses of System 1; joint evaluation involves an effortful and more careful assessment by System 2.


    How is your decision-making affected by words that induce emotion? - Chapter 34

    France and Argentina competed in the 2022 World Cup final. The following sentences both describe the outcome: “Argentina won.” “France lost.” Whether these statements have the same meaning or not depends on your idea of ‘meaning’.

    The truth conditions of the two interchangeable descriptions are identical: if one is true, the other is true as well. Economists consider people’s preferences and beliefs to be reality-bound, unaffected by the wording of the descriptions. But there is another sense of ‘meaning’ in which the two sentences differ: they induce different associations (System 1). “Argentina won” evokes thoughts of what the Argentinian team did, while “France lost” evokes thoughts of what the French team did that made it lose. In terms of induced associations, the sentences mean different things.

    Most people do not have reality-bound preferences, because System 1 is not reality-bound. Many people are influenced by the formulation of a problem. For instance, a negative outcome is more acceptable when framed as the cost of a lottery ticket than as a lost gamble: losses evoke stronger negative feelings than costs. The same goes for discounts and surcharges: they are economically equivalent, but emotionally they are not.

    Neuroscientists studied framing effects by recording the activity of several brain areas. Participants were asked to imagine receiving £50 and then had to choose between a sure outcome and a gamble on a wheel: if the wheel stops on white they receive the full amount, if it stops on black they get nothing. The sure outcome equals the expected value of the gamble, a gain of £20, but it can be framed either as keeping £20 or as losing £30. These words evoke tendencies to approach or avoid, and System 1 is biased in favor of the option framed as ‘keep’. Participants demonstrated the framing effect: they chose the sure thing in the keep frame and accepted the gamble in the lose frame. There were clear differences among participants, though: some were very susceptible to the framing, others largely reality-bound. The participants were ranked on a ‘rationality index’.

    The study produced three significant findings:

    • The amygdala (a region associated with emotional arousal) was most likely to be active when participants’ choices conformed to the frame. Emotional stimuli reach this region very quickly (System 1).

    • The anterior cingulate (a region associated with self-control and conflict) was more active when participants did not act naturally, for instance choosing the sure thing despite the ‘lose’ label. Resisting the suggestion of System 1 appears to cause conflict.

    • The most rational participants showed enhanced activity in the frontal area that is known for combining reasoning and emotion.

    This study illustrates how words that induce emotion influence our decision making.


    How can our memory affect our judgments of experiences? - Chapter 35

    The notion ‘utility’ has two different meanings. Jeremy Bentham argued that people are under the governance of two masters: pleasure and pain. They determine what we shall do and what we ought to do. Kahneman refers to this idea as ‘experienced utility’. When economists use the term, they mean ‘wantability’, which Kahneman refers to as ‘decision utility’. Expected utility theory concerns the rationality rules that should govern decision utilities.

    Both concepts of utility coincide when people want what they will like and like what they chose, but discrepancies between them are possible. Imagine you have to receive a series of daily painful injections, each equally painful. Would you attach the same value to reducing the number of remaining injections from 19 to 17 as from 5 to 3? Most people would pay more for the second reduction: the decision utility of avoiding two injections is higher when fewer injections remain, which is remarkable because the reduction in pain is identical. The example shows that experienced utility can be measured (by the number of injections) and that experienced utility is the criterion by which decisions should be assessed.

    Economist Edgeworth argued that experienced utility could be measured with a ‘hedonimeter’: an imaginary instrument that records the level of pleasure or pain someone experiences from moment to moment. Time is an important factor in his theory.

    In a study of two patients undergoing a painful medical procedure, each patient rated his pain every 60 seconds, from 0 (no pain) to 10 (intolerable pain). The procedure lasted 8 minutes for patient Y and 24 minutes for patient Z. Which patient suffered more? Intuitively patient Z, whose procedure lasted three times as long. Afterwards, both patients were asked to rate the total amount of pain they had experienced. The two main findings were:

    • Duration neglect: the duration of the procedure did not influence the ratings of total pain at all.

    • Peak-end rule: the global retrospective rating was best predicted by the average of the pain reported at the worst moment (the peak) and at the end of the experience.

    Patient Y retained a much worse memory of the experience than patient Z. It was bad luck that his procedure ended at a painful moment, which left him with a distressing memory.
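
The two findings can be sketched with made-up per-minute pain ratings for the two patients; the text gives the durations but not the full series, so the numbers below are illustrative assumptions:

```python
def hedonimeter_total(pain):
    # Duration-weighted measure: every moment counts equally.
    return sum(pain)

def peak_end(pain):
    # Retrospective memory: average of the worst moment and the end,
    # ignoring how long the experience lasted (duration neglect).
    return (max(pain) + pain[-1]) / 2

# Patient Y: 8 ratings, the procedure ends at high pain.
patient_y = [1, 3, 5, 6, 7, 8, 8, 7]

# Patient Z: same first 8 minutes, then 16 more that taper off gently.
patient_z = patient_y + [6, 5, 5, 4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1]

# Z accumulates more total pain, yet Y retains the worse memory.
print(hedonimeter_total(patient_y), peak_end(patient_y))  # 45 7.5
print(hedonimeter_total(patient_z), peak_end(patient_z))  # 87 4.5
```

Under these assumed ratings, patient Z's duration-weighted total is almost twice Y's, while the peak-end score, and hence the remembered pain, is worse for Y.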

    The two measures of experienced utility, the hedonimeter total and the retrospective assessment, behave differently. The hedonimeter total is duration-weighted: it assigns equal weight to every moment. The retrospective assessment is insensitive to duration and weights two singular moments: the peak and the end. Which measure should guide, for instance, medical practice? It depends:

    • If the physician wants to reduce the memory of pain, minimizing the peak intensity of


    How does our memory affect our choices? - Chapter 36

    When people are about to die, their loved ones rush to be with them for one last moment. Why do we care so much about that final moment? Why does it matter so much that, in a movie, someone arrives in time for it, while the length of the shared life receives little weight? Learning that the couple actually had several days together instead of five minutes hardly changes anything, yet someone arriving too late would be a big deal. Stories are about memorable moments and significant events, not about the passing of time. In a story, duration neglect is normal, and the ending often defines whether it is a good or a bad story.

    Caring for someone usually means being concerned for the quality of his/her story, not for his/her feelings. We also deeply care for the narrative of our own life story.

    Psychologist Diener examined whether the peak-end rule and duration neglect also govern the evaluation of an entire life, and found that they do. Doubling the duration of a fictitious woman’s life had no effect on judgments of her total happiness or on the desirability of her life. In addition, a less-is-more effect was found: adding ‘slightly happy’ years to a very happy life lowered the evaluations of total happiness: the extra years made the whole life seem worse.

    Imagine you are making vacation plans. Would you return to the beautiful place you enjoyed with your family last summer, or visit a whole new location to enrich your store of memories? The tourism industry helps people collect memories and construct stories. The goal of storing memories shapes both the vacation plans and the experience itself; the word ‘memorable’ is frequently used to describe a vacation’s highlights. It is the remembering self that chooses vacations. One study shows that the final evaluation of a vacation entirely determines intentions for future trips, even though that evaluation did not accurately reflect the quality of the whole experience as described in a diary. We choose by memory when deciding whether or not to repeat an experience, and eliminating the memories would greatly reduce the value of the experience.


    What does research about experienced well-being teach us? – Chapter 37

    Research on well-being has long revolved around one survey question, which was treated as a measure of happiness and is addressed to the remembering self: “All things considered, how satisfied are you with your life as a whole these days?” In his experiments, Kahneman found that the remembering self is not a reliable witness, so he decided to focus on the well-being of the experiencing self.

    There are numerous experiences we would prefer to continue rather than stop, including physical and mental pleasures. Examples are being in a ‘flow’ (absorbed in a task) and playing with toys. Resistance to interruption is an indicator of having a good time.

    Together with other specialists, Kahneman developed a measure of the well-being of the experiencing self. Experience sampling seemed a good option, but it is burdensome and expensive, which led to the development of the ‘Day Reconstruction Method’ (DRM). Most moments in life can be classified as predominantly positive or negative. The American participants experienced negative feelings approximately 19% of the time. This percentage is called the U-index: the proportion of time spent in an unpleasant state. Its advantage is that it is based not on a rating scale but on an objective measure of time, and it can also be computed per activity.
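
A minimal sketch of computing a U-index from reconstructed episodes. The episodes and their classification are invented; only the method (share of time spent in a predominantly unpleasant state) follows the text:

```python
def u_index(episodes):
    # episodes: list of (minutes, dominant_feeling) pairs, with the
    # dominant feeling classified as "negative" or "positive".
    total = sum(minutes for minutes, _ in episodes)
    unpleasant = sum(minutes for minutes, feeling in episodes
                     if feeling == "negative")
    return unpleasant / total

# A made-up reconstructed day of 16 waking hours:
day = [
    (60, "negative"),   # stressful commute
    (480, "positive"),  # work, mostly fine
    (30, "negative"),   # argument with a colleague
    (390, "positive"),  # evening and dinner
]

print(round(u_index(day), 2))  # 0.09 -> unpleasant about 9% of the time
```

The same function applied to the episodes of a single activity (say, only the commutes) gives the per-activity U-index mentioned above.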

    A remarkable finding was the extent of inequality in the distribution of emotional pain. Half of the participants went through a whole day without a single unpleasant episode, while a significant minority experienced negative feelings for a large part of the day. This suggests that a small fraction of the population does most of the emotional suffering.

    A person's mood at any moment depends on overall happiness and temperament, but emotional well-being also fluctuates over the day and the week. The mood of the moment depends mostly on the current situation: situational factors are the most important. We are usually focused on our current activity and environment, although sometimes the quality of subjective experience is dominated by recurrent thoughts (being in love, grieving). In most cases, however, we draw pain and pleasure from what is happening right now.

    The findings have implications for individuals and for society. People have some control over their use of time: many could arrange their lives to spend more time doing things they like and less time doing things that make them unhappy. Some aspects of life, such as educational attainment, affect the evaluation of someone’s life more than the experience of living it. Bad health and living with children have a stronger adverse effect on experienced well-being than on life evaluation, whereas religious participation has a relatively stronger favorable effect on experienced well-being. Does money make us happy? Being poor is depressing and being rich can be satisfying, but beyond a certain point having more money does not improve experienced well-being.

    Access: 
    Public
    How does your thinking affect your experience of happiness? – Chapter 38

    How does your thinking affect your experience of happiness? – Chapter 38

    The decision to marry often reflects a huge error of ‘affective forecasting’: on their big day, bride and groom know that the divorce rate is high, but they believe that those numbers do not apply to them.

    Studies of life satisfaction from the day people marry show a gradual decline. One explanation is that the honeymoon phase fades and married life becomes routine. Another explanation is plausible: heuristics of judgment. A mood heuristic is one way of answering questions about life satisfaction. In addition to their current mood, people are likely to think of significant recent events; only a few relevant ideas come to mind, and most do not. The rating of life satisfaction is therefore heavily influenced by a small number of highly available ideas, not by a careful weighting of all life domains. People who recently married will retrieve that happy event when asked a general question about their life, and as time passes the salience of that thought diminishes. This explains the remarkably high level of life satisfaction in the first years after marriage. On average, experienced well-being is not affected by marriage: not because marriage brings no happiness, but because it changes some aspects of life for the better and others for the worse.

    One reason for the low correlations between individuals’ circumstances and their life satisfaction is that both life satisfaction and experienced happiness are largely determined by the genetics of temperament: a disposition for well-being is heritable. In other cases, such as marriage, the correlations with well-being are low because of balancing effects. The goals people set, financial or otherwise, also prove to have lifelong effects.

    People tend to answer life questions fairly quickly. That speed, and the effect of current mood on the answers, show that they skip a careful assessment and rely on heuristics instead, which are instances of WYSIATI and substitution. When attention is directed to a specific aspect of life, that aspect greatly affects the overall evaluation. This is known as the ‘focusing illusion’: nothing in life seems as important as the thing you are thinking about at the moment. The essence of this illusion is WYSIATI. The focusing illusion produces a bias in favor of goods and experiences that are initially appealing but will eventually lose their charm.

    Statistics
    1184
    Comments, Compliments & Kudos:

    Add new contribution

    CAPTCHA
    This question is for testing whether or not you are a human visitor and to prevent automated spam submissions.
    Image CAPTCHA
    Enter the characters shown in the image.
    Promotions
    Image
    The JoHo Insurances Foundation is specialized in insurances for travel, work, study, volunteer, internships an long stay abroad
    Check the options on joho.org (international insurances) or go direct to JoHo's https://www.expatinsurances.org