Introduction
Classical statistics tells us something about the long-run relative frequency of different types of errors; this is called objective probability. However, classical statistics cannot tell us the probability of a hypothesis being true, even though many people erroneously believe that significance values tell them exactly that. This chapter explains subjective probability, defined as someone's subjective degree of confidence in a hypothesis. Bayesians state that people want statistics to tell them the probability of their hypothesis being right, and that this is a natural intuition.
Subjective probability
In our everyday lives, we often say things such as: "There is a high chance that it will snow tomorrow", or "England is most likely to win the match". These statements have nothing to do with objective probability (long-run relative frequency). Instead, they refer to subjective probability: the degree of confidence we have in a hypothesis. So, according to the authors, subjective (or personal) probability is in the mind and not in the world. If you say "There is a high chance that it will snow tomorrow", you are actually making a statement about how confident you are that it will snow tomorrow. A weather expert cannot tell you that your statement is wrong. Even though this expert knows a lot about weather patterns, he or she does not know any better than you how confident you are (that is, in what state your mind is). The expert might change your mind by providing you with more information, but how strongly you believe that it will snow tomorrow is up to you.
The authors state that the problem with subjective probability is how to assign a number to how confident you are. Suppose we choose a number between 0 and 1. Zero means there is no chance that the statement is true; one means you are a hundred percent certain that the statement is true. But if you are neither certain that the statement is false (0) nor certain that it is true (1), what number should you choose? The authors suggest that one solution is to determine how much money you would be willing to bet on the statement. However, it is sometimes more useful to express your personal confidence in terms of odds rather than probabilities:
odds(theory is true) = probability(theory is true) / probability(theory is false)
So, if your personal probability is 0.5, then your odds are 0.5/(1-0.5) = 1, or 1:1 (which is called even odds). If your personal probability is 0.75, then your odds are 0.75/(1-0.75) = 3, or 3:1.
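To make the conversion concrete, here is a minimal Python sketch (the function name probability_to_odds is illustrative, not from the text) that reproduces the two examples above:

```python
def probability_to_odds(p):
    """Odds in favour of a statement: p / (1 - p)."""
    return p / (1 - p)

for p in (0.5, 0.75, 0.9):
    print(f"personal probability {p} -> odds {probability_to_odds(p):.2f} : 1")
```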
The authors state that people do not follow the rules (axioms) of probability with regard to their subjective probabilities: they often fail to update their probabilities correctly when they are provided with new and useful information. Statistics can help here, because it forces them to be disciplined.
The authors note that there are only a few axioms (rules), and that they are reasonable. For example, two axioms determine what values probabilities can take. First, all probabilities lie between 0 and 1. Second, P(A or B) = P(A) + P(B) if A and B are mutually exclusive. For example, when you roll a die, it can come up as 1, 2, 3, 4, 5, or 6. These possibilities are mutually exclusive: if the die shows a 1, it does not show a 2, 3, and so on. So, if your personal probability of a 1 coming up is 1/6 and of a 2 coming up is 1/6, the axiom asserts that your personal probability of getting either a 1 or a 2 on a roll should be P(getting '1') + P(getting '2') = 1/6 + 1/6 = 1/3. A final axiom states that P(A and B) = P(A) × P(B/A), where P(B/A) is the probability of B given A. In other words, we ask ourselves: given that A is the case, what is the probability of B?
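As a small illustration, the following sketch spells out the die example numerically; the assumption of two independent rolls in the last step is added purely for illustration:

```python
# Personal probabilities for each face of a fair die (from the text: 1/6 each).
faces = {face: 1 / 6 for face in range(1, 7)}

# Axiom 1: every probability lies between 0 and 1.
assert all(0 <= p <= 1 for p in faces.values())

# Axiom 2 (addition rule for mutually exclusive events):
# P(getting '1' or getting '2') = P(getting '1') + P(getting '2') = 1/3.
p_one_or_two = faces[1] + faces[2]
print(f"P(1 or 2) = {p_one_or_two:.4f}")

# Final axiom (multiplication rule): P(A and B) = P(A) * P(B given A).
# For two independent rolls, P(B given A) = P(B), so:
p_one_then_two = faces[1] * faces[2]
print(f"P(1 on the first roll and 2 on the second) = {p_one_then_two:.4f}")
```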
Bayes' theorem
Bayes worked on the problem of how one can obtain the probability of a hypothesis given some data, that is, P(H/D); his answer is known as Bayes' theorem. Consider a hypothesis H and data D. Applying the final axiom in two ways gives:
P(H and D) = P(D) × P(H/D) and P(H and D) = P(H) × P(D/H)
Both are instances of the final axiom described above. Setting the two right-hand sides equal and dividing by P(D) gives Bayes' theorem: P(H/D) = P(D/H) × P(H) / P(D). So, Bayes' theorem tells you how to move from one conditional probability to its inverse. When you are interested in comparing the probabilities of different hypotheses given the same data D, P(D) is a constant across all the comparisons. So:
P(H/D) is proportional to P(D/H) × P(H)
P(H) is called the prior. It reflects how probable you thought the hypothesis was prior to collecting data; in other words, it is your personal, subjective probability, so its value is entirely up to you. P(H/D), the probability of the hypothesis given the data, is called the posterior ('posterior' means 'coming after'): it tells you how probable your hypothesis is to you after you have collected the data. P(D/H), the probability of obtaining the data given your hypothesis, is called the likelihood of the hypothesis. In words, the equation states that your posterior is proportional to the likelihood times the prior. This statement is also referred to as 'the mantra of Bayesian statistics'. It tells us how to update our prior probability of a hypothesis when we are provided with data: the prior is up to you, but the posterior is determined by the axioms (rules) of probability.
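A minimal sketch of this updating rule over a handful of discrete hypotheses may help; the hypothesis labels, prior values, and likelihood values below are invented purely for illustration:

```python
# Three made-up hypotheses with made-up priors P(H) and likelihoods P(D given H).
priors = {"H1": 0.2, "H2": 0.5, "H3": 0.3}
likelihoods = {"H1": 0.10, "H2": 0.05, "H3": 0.02}

# The mantra: posterior is proportional to likelihood times prior.
unnormalised = {h: likelihoods[h] * priors[h] for h in priors}

# Dividing by the sum of the unnormalised values plays the role of dividing
# by P(D): it simply rescales the posteriors so that they sum to 1.
total = sum(unnormalised.values())
posteriors = {h: value / total for h, value in unnormalised.items()}

print(posteriors)
```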
The likelihood
So, Bayes' theorem tells you that if you want to update your personal probability in a hypothesis, the likelihood contains everything you need to know about the data. This is because all the support that the data provide for a hypothesis is captured by the likelihood. The likelihood principle states that all the information relevant to inference contained in the data is provided by the likelihood.
To understand likelihood, consider the following example. We are interested in how men respond to people telling them about their problems. A man can respond in two ways: he can offer a solution (the man as a solver) or he can provide empathy (the man as an empathizer). Gray (2002) suggested that men often offer solutions when women describe problems, while women are actually only looking for empathy. The research question is: what proportion of men in the population are solvers? To examine this, we tell our problems to five different men. All five suggest solutions to our problems, so all five are solvers. Our data, then, is that five out of five men were solvers. The likelihood is the probability of obtaining these data given a hypothesis. One hypothesis is that the proportion of men who are solvers in the population is 0.1; the likelihood is then P(D/H) = P(obtaining 5 solvers / proportion of solvers = 0.1) = 0.1^5 = 0.00001. Another hypothesis is that the proportion of men who are solvers is 0.5; the likelihood is then P(obtaining 5 solvers / proportion of solvers = 0.5) = 0.5^5 = 0.03125. So, the latter hypothesis has the higher likelihood.
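The calculation itself is a one-liner; the sketch below repeats it for the two hypotheses from the text plus two extra hypothesised proportions added for comparison:

```python
# Likelihood of observing 5 solvers out of 5 men, for several hypothesised
# population proportions of solvers (0.1 and 0.5 are from the text; 0.9 and
# 1.0 are added for comparison).
for proportion in (0.1, 0.5, 0.9, 1.0):
    likelihood = proportion ** 5  # probability that all 5 men are solvers
    print(f"proportion of solvers = {proportion}: likelihood = {likelihood:.5f}")
```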
In everyday language, saying that a hypothesis has the highest likelihood is the same as saying that it has the highest probability. In statistics, however, these are not the same thing. The probability of the hypothesis is the posterior, P(H/D); the likelihood of the hypothesis is the probability of the data given the hypothesis, P(D/H). One can use the likelihood to obtain the posterior, but the two are not the same!
When we say that a hypothesis has the highest likelihood, this means that the data supports that hypothesis the most. When the prior probabilities for each hypothesis are the same, then the hypothesis with the highest likelihood will have the highest posterior probability.
Bayesian analysis
So, Bayes' theorem states that the posterior is proportional to the likelihood times the prior. The theorem can be used in two ways when dealing with psychological data. First, we can calculate a credibility interval, the Bayesian equivalent of a confidence interval. Second, we can calculate how to adjust our odds in favour of the theory we are testing over the null hypothesis, in the light of our experimental data (the Bayes factor); this is the Bayesian equivalent of null hypothesis testing.
Credibility intervals
Suppose we want to determine the degree to which one gram of a new drug changes blood pressure. Each possible value of the population change in blood pressure is a hypothesis, and you need to decide what your prior probability density is for each of these hypotheses. Assume that your prior is a normal distribution: certain values are reasonably probable, while more extreme values are less probable in a symmetric way. The most probable values lie at the centre (mean) of the distribution, and the spread of values is defined by the standard deviation. The interval from one standard deviation below to one standard deviation above the mean has a 68% probability of including the actual population value; plus or minus two standard deviations has a 95% probability; and with plus or minus three standard deviations you can be virtually certain that the interval includes the true population value. When the standard deviation is infinite, all population values are equally likely; this is called a 'flat prior' or 'uniform prior', and it means you have no idea what the population value is likely to be.
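As a sketch of how such a prior is combined with data, the following assumes a normal prior and a normal likelihood with a known standard error (the standard conjugate normal-normal update); all the numbers are invented for illustration:

```python
# Invented numbers: a normal prior on the population change in blood pressure
# (in mmHg) and an observed sample mean change with its standard error.
prior_mean, prior_sd = 0.0, 10.0
sample_mean, sample_se = -5.0, 2.0

# Conjugate normal-normal update: precisions (1 / variance) add.
prior_precision = 1 / prior_sd ** 2
data_precision = 1 / sample_se ** 2
posterior_precision = prior_precision + data_precision

posterior_mean = (prior_mean * prior_precision +
                  sample_mean * data_precision) / posterior_precision
posterior_sd = posterior_precision ** -0.5

# Approximate 95% credibility interval: posterior mean +/- 1.96 posterior SDs.
low = posterior_mean - 1.96 * posterior_sd
high = posterior_mean + 1.96 * posterior_sd
print(f"posterior mean = {posterior_mean:.2f}, posterior sd = {posterior_sd:.2f}")
print(f"95% credibility interval: [{low:.2f}, {high:.2f}]")
```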
The Bayes factor
In Bayesian statistics, there is no significance testing; a Bayesian statistician merely determines posterior distributions. Sometimes, however, researchers want to compare the probability of their experimental theory to the probability of the null hypothesis. This can be done with the 'Bayes factor', the Bayesian equivalent of null hypothesis significance testing.
Bayes' theorem states that P(H/D) is proportional to P(D/H) × P(H). For two hypotheses, the experimental hypothesis H1 and the null hypothesis H0, we have:
P(H1/D) is proportional to P(D/H1) × P(H1)
P(H0/D) is proportional to P(D/H0) × P(H0)
Dividing the first expression by the second gives: P(H1/D) / P(H0/D) = [P(D/H1) / P(D/H0)] × [P(H1) / P(H0)].
In words: the posterior odds = the likelihood ratio × the prior odds.
The likelihood ratio in this expression is called the Bayes factor, B. So, after data collection you multiply your prior odds by B to obtain your posterior odds. If B is greater than 1, your data support the experimental hypothesis over the null; if B is less than 1, your data support the null hypothesis over the experimental hypothesis. If B is close to 1, your experiment was not sensitive: typically you did not collect enough data (for example, too few participants), so the data do not distinguish the experimental hypothesis from the null hypothesis.
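A minimal sketch, reusing the 'five solvers out of five' data from earlier; the hypothesised proportions (0.9 for the experimental theory, 0.5 for the null) and the even prior odds are invented for illustration:

```python
# Likelihood of the "5 solvers out of 5" data under a hypothesised proportion.
def likelihood(proportion, solvers=5, n=5):
    return proportion ** solvers * (1 - proportion) ** (n - solvers)

# Invented hypotheses: the experimental theory says the proportion of solvers
# is 0.9, the null says it is 0.5; prior odds of 1 (even odds) are also invented.
bayes_factor = likelihood(0.9) / likelihood(0.5)  # likelihood ratio B
prior_odds = 1.0
posterior_odds = bayes_factor * prior_odds

print(f"Bayes factor B = {bayes_factor:.2f}")
print(f"posterior odds in favour of the experimental theory = {posterior_odds:.2f}")
```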