MVDA - analysis of covariance
Week 3: Analysis of covariance (ANCOVA)
ANCOVA can be used when one of the independent variables (the factor, X) is measured at the nominal level,
while the other (the covariate, C) is measured at the interval level.
The goal of ANCOVA is the reduction of error variance and removal of systematic bias.
The research question is: What is the effect of X on Y after correction for C?
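For reference, one common way to write the ANCOVA model behind this question is (Y is the outcome, \mu the grand mean, \alpha_j the effect of group j, C the covariate and \beta the common within-group regression slope; this notation is an addition for clarification, not taken from the course material):

Y_{ij} = \mu + \alpha_j + \beta (C_{ij} - \bar{C}) + \varepsilon_{ij}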
Is there a significant effect of factor X (e.g. teaching method) on posttest? (Report the test statistic, df, p value and a suitable measure of effect size.) If yes, interpret that effect.
Here we look at the Tests of Between-Subjects Effects table. We need the eta squared value, the F statistic and the p value.
Eta squared can be calculated by dividing the Sum of Squares (SS) of the method by the total SS.
The F statistic is the Mean Square (MS) of the method divided by the MS of the error; this value is also given directly in the table. The two degrees of freedom to report are the df for method (in the example below: 2) and the df for error (in the example: 177). The p value is found in the table under Sig.
- An example of how the test statistic should be reported: F(2, 177) = 13.371, p < .001.
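As an illustration, the same quantities can also be computed outside SPSS. Below is a minimal Python sketch using statsmodels; the data file teaching_methods.csv and the column names method, pretest and posttest are assumptions for the example, not the actual course data.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("teaching_methods.csv")  # hypothetical file with columns method, pretest, posttest

# ANCOVA: posttest as dependent variable, method as factor, pretest as covariate
model = smf.ols("posttest ~ C(method) + pretest", data=df).fit()
table = anova_lm(model, typ=3)  # Type III sums of squares, as in the SPSS table

# F, df and p for the method effect
f_value = table.loc["C(method)", "F"]
df_method = table.loc["C(method)", "df"]
df_error = table.loc["Residual", "df"]
p_value = table.loc["C(method)", "PR(>F)"]

# Eta squared = SS of method divided by the total (corrected) SS
ss_method = table.loc["C(method)", "sum_sq"]
ss_total = ((df["posttest"] - df["posttest"].mean()) ** 2).sum()
eta_squared = ss_method / ss_total

print(f"F({df_method:.0f}, {df_error:.0f}) = {f_value:.3f}, p = {p_value:.3f}, eta squared = {eta_squared:.3f}")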
If there is indeed a significant effect of method on posttest, we then look at the Multiple Comparisons table. There we inspect the mean differences between the methods and check which comparisons are significant. For example, if the mean differences of method B with all other methods are large and significant, while the differences that do not involve method B are not, we can say that B differs significantly from the other methods.
Is there a significant correlation between pretest and posttest in all groups?
Here we look at the within-group correlations table and check whether all correlations are significant (in the example, all p < .001).
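A short Python sketch of how such within-group correlations could be checked, using the same hypothetical file and column names as in the sketch above:

import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("teaching_methods.csv")  # hypothetical file with columns method, pretest, posttest

# Pearson correlation between pretest and posttest within each method group
for method, group in df.groupby("method"):
    r, p = pearsonr(group["pretest"], group["posttest"])
    print(f"Method {method}: r = {r:.3f}, p = {p:.4f}")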
Are there significant differences between the groups at pretest? If yes, interpret these differences.
Here we look at the Tests of Between-Subjects Effects table with pretest as the dependent variable. If the F test is significant, there are differences between the groups. Then we look at the mean differences in the Multiple Comparisons table.
Example of reporting what we found:
Significant group differences at pretest, F(2, 177) = 10.221, p < .001.
No difference between methods A and B (Mdiff = -2.67, p = .339), but method C differs from both method A (Mdiff = -5.72, p = .008) and method B (Mdiff = -8.38, p < .001).
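A corresponding Python sketch for this pretest check; note that Tukey's HSD is used here as one possible post-hoc procedure, which need not match the correction SPSS applies in its Multiple Comparisons table, and the file and column names are again assumptions:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("teaching_methods.csv")  # hypothetical file with columns method, pretest, posttest

# One-way ANOVA on the pretest: were the groups already different before the intervention?
pre_model = smf.ols("pretest ~ C(method)", data=df).fit()
print(anova_lm(pre_model, typ=3))

# Pairwise mean differences between the methods (Tukey HSD post-hoc)
print(pairwise_tukeyhsd(df["pretest"], df["method"]))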
Do you think that adding the covariate might lead to reduction of error, reduction of bias, neither, or both?
If there are large and significant within-group correlations → possible reduction of error.
If there are significant differences between the group means → possible elimination of bias.