How is a fallacy formed in your mind? - Chapter 15
A famous and controversial experiment is known as the ‘Linda problem’. It was devised by Amos Tversky and Kahneman to demonstrate the role of heuristics in judgment and their incompatibility with logic. People who read the description of Linda instantly knew which university she had attended, because it was known for its politically engaged, radical students. Participants were asked to read a list of possible scenarios regarding Linda and rank them by representativeness and by probability. They agreed that one scenario (“She is a feminist bank teller”) fits Linda better than another one (“She is a bank teller”). The stereotype of bank tellers does not include feminism, but adding that detail to the description results in a more coherent story. The twist lies in the judgments of probability, because there is a logical relation between the two scenarios. Since every feminist bank teller is a bank teller, the probability that Linda is a feminist bank teller must be lower than the probability that she is merely a bank teller. Specifying a possible event in greater detail can only lower its probability. This creates a conflict between the logic of probability and the intuition of representativeness. Yet the participants ranked ‘feminist bank teller’ above ‘bank teller’ not only in the ranking by resemblance but also in the ranking by probability.
The scenarios ‘bank teller’ and ‘feminist bank teller’ were placed on the list as numbers 6 and 8, close to each other. Kahneman and Tversky expected that participants would notice the relation between them and that their rankings would follow logic. But surprisingly, participants ranked ‘feminist bank teller’ as more likely. Representativeness had won the battle, which counts as a failure of System 2: nearly 90% of the participants violated the logical rule. Failing to apply an obviously relevant logical rule is called a ‘fallacy’. It is a ‘conjunction fallacy’ when people judge a conjunction of two events (in this case: feminist and bank teller) to be more probable than one of the events alone (bank teller) in a direct comparison.
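The conjunction rule behind this can be made concrete with a tiny numerical sketch. The probabilities below are made-up illustrations, not data from the experiment:

```python
# Made-up illustrative probabilities, not figures from the Linda study.
p_bank_teller = 0.05           # P(Linda is a bank teller)
p_feminist_given_teller = 0.8  # P(feminist | bank teller), even if very high...

# Conjunction rule: P(A and B) = P(A) * P(B | A), and since P(B | A) <= 1,
# the conjunction can never be more probable than either event alone.
p_feminist_teller = p_bank_teller * p_feminist_given_teller

assert p_feminist_teller <= p_bank_teller
print(f"P(bank teller)          = {p_bank_teller:.2f}")
print(f"P(feminist bank teller) = {p_feminist_teller:.2f}")
```

However representative ‘feminist’ feels, multiplying by a conditional probability of at most 1 can only shrink the number.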
In the short version of the Linda problem, participants only had to answer which of two alternatives is more probable: “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement”. Unlike the long version, where the two outcomes were separated by the intervening seventh scenario on the list and therefore judged independently, this version forces a direct comparison. Kahneman and Tversky expected the transparent comparison to mobilize System 2 and prevent the fallacy, but 85 to 90 percent of participants still chose ‘feminist bank teller’, contrary to logic.
Hsee’s dinnerware study demonstrated the same ‘less is more’ pattern. He asked participants to price dinnerware sets. One group was shown a display that allowed a comparison between two sets (set X: 40 pieces, of which 9 are broken; set Z: 24 pieces, all intact). This is called a ‘joint evaluation’. Two other groups were each shown only one of the sets, a ‘single evaluation’. Which set is worth more? Set X contains all the pieces of set Z plus seven additional intact pieces (and nine broken ones), so it must be worth at least as much. The participants in the joint evaluation group indeed priced set X higher, but the participants in the single evaluation groups valued set Z much higher. This happened because the broken pieces pull down the average value of a piece in set X, and single evaluation is dominated by the average, not the total. Hsee calls this pattern ‘less is more’: removing the broken pieces from set X would raise its perceived value, just as adding a low-valued item to a set can lower it.
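The arithmetic behind joint versus single evaluation can be sketched as follows. The per-piece values are assumptions for illustration; only the piece counts follow the study's setup:

```python
# Hypothetical per-piece values (assumptions, not from Hsee's paper).
INTACT_VALUE = 2.0   # assumed worth of one intact piece
BROKEN_VALUE = 0.0   # broken pieces are worthless

set_z = {"intact": 24, "broken": 0}   # 24 pieces, all intact
set_x = {"intact": 31, "broken": 9}   # the same 24, plus 7 intact and 9 broken

def total_value(s):
    return s["intact"] * INTACT_VALUE + s["broken"] * BROKEN_VALUE

def average_value(s):
    return total_value(s) / (s["intact"] + s["broken"])

# Joint evaluation compares totals: set X strictly dominates set Z.
assert total_value(set_x) > total_value(set_z)       # 62.0 > 48.0
# Single evaluation is driven by the average: set Z looks better on its own.
assert average_value(set_z) > average_value(set_x)   # 2.0 > 1.55
```

The same objects yield opposite rankings depending on whether the judge sees totals side by side or averages in isolation.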
The incidence of the conjunction fallacy can be reduced by formulating the question in terms of frequencies. The question “What percentage of the participants…?” is much harder than “How many of the 100 participants…?”: 100 individual people are easy to imagine, while a percentage does not bring individuals to mind.
These experiments teach us that System 2 is not very alert; it is lazy.