Funder et al. (2014). Improving the dependability of research in personality and social psychology: Recommendations for research and educational practice. – Article summary
There appears to be a high rate of false positives in science. The p-value is the conditional probability of observing data as extreme as, or more extreme than, the present data in a random sample, given that the null hypothesis is true. Critiques of the p-value include that it is often interpreted improperly (1) and that it varies with the sample size (2).
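As a minimal illustration (not from the article), the simulation below shows what the p-value's definition implies: when the null hypothesis is true, about 5% of studies will still produce p &lt; .05, which is one source of false positives. The group sizes and number of simulated studies are arbitrary choices for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate many studies in which the null hypothesis is true:
# both groups are drawn from the same normal distribution.
n_studies, n_per_group = 2000, 30
false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# Under a true null, p < .05 occurs in roughly 5% of samples.
print(false_positives / n_studies)
```

With a true null, the long-run false-positive rate equals the chosen alpha level, regardless of sample size.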
The effect size refers to the magnitude of the observed statistical relationship. Unstandardized effect sizes are preferred when there is clear consensus that the measurement unit is at the interval level (e.g. seconds, blood pressure). When this is not the case (e.g. in psychology), standardized effect sizes should be used. The effect size can be interpreted easily if the sample sizes are equal and the total sample size is moderate to large; interpretation becomes more complex if the sample sizes differ greatly.
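A common standardized effect size is Cohen's d, the mean difference divided by the pooled standard deviation. As a sketch (the article does not prescribe a specific measure), it can be computed as follows:

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference between two samples,
    using the pooled standard deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_var = (
        (nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)
    ) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)
```

Because d is expressed in standard-deviation units, it allows comparison across studies that used different measurement scales.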
Power refers to the probability that a true effect of a precisely specified size in the population will be detected using significance testing: the probability of finding an effect, given that an effect of the specified size actually exists. Statistical power is one minus the type II error rate. The type II error rate is the probability that a true effect will not be detected using significance testing, i.e. the probability that the null hypothesis is wrongfully retained.
Power should be maximized in a study. Power is affected by the sample size (1), the measurement error (2) and the homogeneity of the participants (3). According to Cohen, research should use a power of at least 0.8.
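Power can be estimated by simulation: generate many samples in which a true effect of the specified size exists and count how often the test reaches significance. The sketch below (my illustration, not the article's) shows that for a medium effect (d = 0.5), roughly 64 participants per group are needed to reach Cohen's 0.8 threshold.

```python
import numpy as np
from scipy import stats

def simulated_power(effect_size, n_per_group, n_sims=2000, alpha=0.05, seed=0):
    """Estimate the power of a two-sample t-test by simulation:
    the proportion of samples with a true effect that reach significance."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(effect_size, 1, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

# For d = 0.5, n = 64 per group gives roughly .8 power,
# while n = 20 per group leaves the study badly underpowered.
print(simulated_power(0.5, 64))
print(simulated_power(0.5, 20))
```

The same logic underlies a priori power analysis: fix the smallest effect size of interest and the desired power, then solve for the required sample size.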
It is problematic to focus solely on the observed p-value, because findings with equivalent p-values can have very different implications. With large sample sizes, very small effect sizes can be significant, and with small sample sizes, a significant effect may correspond to an implausibly large effect size. There is therefore a big difference between practical and statistical significance. Much research with small sample sizes is underpowered.
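The point that statistical significance does not imply practical significance can be made concrete with a quick sketch (again my illustration): a trivially small true effect becomes highly significant once the sample is large enough.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A tiny true effect (d = 0.05) in a very large sample.
a = rng.normal(0.00, 1, 50_000)
b = rng.normal(0.05, 1, 50_000)
p = stats.ttest_ind(a, b).pvalue
print(p)  # highly significant despite a practically trivial effect
```

This is why reporting the effect size alongside the p-value is essential: the p-value alone cannot distinguish a meaningful effect from a trivial one detected by sheer sample size.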
There are several recommendations to improve research:
- Describe and address the choice of N and the consequent issues of statistical power
Researchers should design studies with sufficient power to detect the key effects of interest, and the sample size should be justified based on the smallest effect size of interest. When more studies have adequate power, more true effects of interest reach significance.
- Report effect sizes and 95% CIs for reported findings
P-values should be supplemented with effect sizes, which provide information on the magnitude of the finding.
- Avoid questionable research practices
Research practices that tweak the results afterwards undermine the researcher's ability to draw valid conclusions and should be avoided. Tweaked data yield a larger apparent effect size than the true effect size.
- Include in an appendix the verbatim wording of all independent and dependent variable instructions, manipulations and measures
This allows other researchers to replicate the study, as the precise procedures are made public.
- Use open access data
All data should be shared, provided that the anonymity of the participants can be protected (1), sharing is not illegal (2) and the researcher requesting the data agrees with the previous two points (3). This makes it easier to replicate studies and confirm findings.
- Encourage and improve the availability of publication outlets for replication studies
Replicating more studies provides a better foundation on which to base novel research.
- Maintain flexibility and openness to alternative standards and methods when evaluating research
The rules should not be rigid and should be open to change when a proper argument is presented.
Questionable research practices include: conducting multiple tests of significance on a dataset without statistical correction (1), running participants until significant results are obtained (2), dropping observations, measures, items, conditions or participants after looking at their effects on the outcomes of interest (3) and running multiple experiments with similar procedures while only reporting those with significant results (4).
There are several recommendations for educational practice:
- Encourage a culture of ‘getting it right’ rather than finding significant results
Accurate knowledge should be valued over significant results. The contribution of a piece of research should be evaluated in terms of whether it is carefully designed to address important questions and whether it uses an appropriate sample.
- Teach and encourage transparency of data reporting, including imperfect results
The focus should be on the informativeness of the data. All information should be made available, not only the information that helps the flow of the paper or the career of the researcher.
- Improve methodological instruction on topics such as effect size, confidence intervals, statistical power, meta-analysis, replication and the effects of questionable research practices
Training on the issues described in these recommendations should be part of (under)graduate courses.
- Model sound science and support junior researchers who seek to get it right
Current research practice should be modelled on the improved standards, and new researchers who seek to get it right should be supported.
Replication studies might need to use a lower effect size than the one originally reported (1), establish a minimum value below which an effect size is deemed too small to matter (2) and use a value higher than 0.80 for statistical power (3). If this is not done, the probability of finding a significant result in the replication decreases steadily. These steps enhance the credibility and usefulness of a replication study.