Comparing means adjusted for other predictors (analysis of covariance)
The linear model to compare means can be extended to include one or more continuous variables that predict the outcome (or dependent variable).
Covariates: the additional predictors.
ANCOVA: analysis of covariance.
Reasons to include covariates in ANOVA:
- To reduce within-group error variance
- Elimination of confounds
Happinessi = b0 + b1Longi + b2Shorti + b3Covariatei + εi
We can add a covariate as a predictor to the model to test the difference between group means adjusted for the covariate.
With a covariate present, the b-values represent the differences between the means of each group and the control adjusted for the covariate(s).
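The model above can be sketched as an ordinary linear model with dummy-coded groups plus the covariate. This is a minimal illustration with simulated, hypothetical data (the group names, coefficients, and sample sizes are all illustrative, not from the text):

```python
import numpy as np

# Hypothetical data: happiness for three groups (control, Long, Short)
# plus a continuous covariate; all values are made up for illustration.
rng = np.random.default_rng(0)
n = 10  # participants per group
group = np.repeat([0, 1, 2], n)          # 0 = control, 1 = Long, 2 = Short
covariate = rng.normal(5, 1, 3 * n)
happiness = (2 + 1.5 * (group == 1) + 0.8 * (group == 2)
             + 0.6 * covariate + rng.normal(0, 1, 3 * n))

# Design matrix: intercept, two group dummies, covariate
X = np.column_stack([
    np.ones(3 * n),
    (group == 1).astype(float),  # b1: Long vs control, adjusted for covariate
    (group == 2).astype(float),  # b2: Short vs control, adjusted for covariate
    covariate,                   # b3: slope of the covariate
])
b, *_ = np.linalg.lstsq(X, happiness, rcond=None)
print("b0, b1, b2, b3:", np.round(b, 2))
```

Because the covariate sits in the same design matrix as the group dummies, b1 and b2 are the group-versus-control differences adjusted for the covariate.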
Independence of the covariate and treatment effect
When the covariate and the experimental effect are not independent, the treatment effect is obscured, spurious treatment effects can arise, and at the very least the interpretation of the ANCOVA is seriously compromised.
When treatment groups differ on the covariate, putting the covariate into the analysis will not ‘control for’ or ‘balance out’ those differences.
This problem can be avoided by randomizing participants to experimental groups, or by matching experimental groups on the covariate.
We can see whether this problem is likely to be an issue by checking whether experimental groups differ on the covariate before fitting the model.
If they do not significantly differ then we might consider it reasonable to use it as a covariate.
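One simple way to run that check is a one-way ANOVA with the covariate as the outcome and group as the predictor. A sketch, assuming randomized groups and made-up covariate values:

```python
import numpy as np
from scipy import stats

# Hypothetical covariate measurements in three randomized groups
rng = np.random.default_rng(42)
cov_control = rng.normal(5, 1, 20)
cov_long = rng.normal(5, 1, 20)
cov_short = rng.normal(5, 1, 20)

# One-way ANOVA on the covariate itself: a non-significant result
# suggests the groups do not differ on the covariate.
F, p = stats.f_oneway(cov_control, cov_long, cov_short)
print(f"F = {F:.2f}, p = {p:.3f}")
```

If this test is significant, the covariate and treatment effect are not independent, and adjusting for the covariate will not rescue the design.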
Homogeneity of regression slopes
When a covariate is used we look at its overall relationship with the outcome variable: we ignore the group to which a person belongs.
We assume that this relationship between covariate and outcome variable holds true for all groups of participants: homogeneity of regression slopes.
There are situations where you might expect regression slopes to differ across groups and that variability may be interesting.
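A quick informal check of this assumption is to estimate the covariate-outcome slope separately within each group and compare them (the formal test is the group × covariate interaction). A sketch with hypothetical data in which the slopes really are equal:

```python
import numpy as np

# Hypothetical data: same covariate-outcome slope (0.6) in every group,
# but different intercepts; all values are illustrative.
rng = np.random.default_rng(1)
n = 30
slopes = {}
for g, intercept in enumerate([2.0, 3.5, 2.8]):
    cov = rng.normal(5, 1, n)
    outcome = intercept + 0.6 * cov + rng.normal(0, 0.5, n)
    slope, _ = np.polyfit(cov, outcome, 1)  # within-group regression slope
    slopes[g] = slope
print({g: round(s, 2) for g, s in slopes.items()})
# Similar within-group slopes are consistent with homogeneity of
# regression slopes; markedly different slopes would violate it.
```
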
What to do when assumptions are violated
- bootstrap for the model parameters
- post hoc tests
But the bootstrap won’t help for the F-tests.
There is a robust variant of ANCOVA.
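A case-resampling bootstrap of the model parameters can be sketched as follows (hypothetical two-group data; the true coefficients are made up for illustration):

```python
import numpy as np

# Hypothetical data: one treatment dummy plus a covariate
rng = np.random.default_rng(7)
n = 60
covariate = rng.normal(5, 1, n)
group = rng.integers(0, 2, n)  # 0 = control, 1 = treatment
outcome = 2 + 1.0 * group + 0.5 * covariate + rng.standard_normal(n)

X = np.column_stack([np.ones(n), group, covariate])

n_boot = 2000
boot = np.empty((n_boot, 3))
for i in range(n_boot):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    b, *_ = np.linalg.lstsq(X[idx], outcome[idx], rcond=None)
    boot[i] = b

# Percentile confidence interval for the adjusted group effect (b1)
lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"95% bootstrap CI for b1: [{lo:.2f}, {hi:.2f}]")
```

Because it resamples cases rather than assuming normal errors, this gives robust interval estimates for the b-values, though, as noted above, it does not fix the F-tests.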
The main analysis
The format of the ANOVA table is largely the same as without the covariate, except that there is an additional row of information about the covariate.
- Looking first at the significance values, the covariate significantly predicts the dependent variable if p < 0.05.
Covariates can help us to exert stricter experimental control by taking account of confounding variables, giving us a ‘purer’ measure of the effect of the experimental manipulation.
The significance values of the t-tests tell us whether the adjusted group means differ significantly.
The degrees of freedom for the t-tests of the b-values are N − k − 1, where N is the total sample size and k is the number of predictors.
- When the linear model is used to compare several means adjusted for the effect of one or more other variables (covariates) it can be referred to as analysis of covariance (ANCOVA)
- Before the analysis check that the covariates are independent of any independent variables by seeing whether those independent variables predict the covariate (the covariate should not differ across groups)
- In the table labelled Tests of between-subjects effects, assuming you’re using an alpha of 0.05, look to see whether the value in the column called Sig. is below 0.05 for both the covariate and the independent variable. If it is for the covariate, then this variable has a significant relationship to the outcome variable. If it is for the independent variable, then the means (adjusted for the effect of the covariate) are significantly different across categories of this variable.
- If you have generated specific hypotheses before the experiment, use planned contrasts; if not, use post hoc tests.
- For parameters and post hoc tests, look at the columns called Sig. to discover whether your comparisons are significant. Use bootstrapping to get robust versions of these tests.
- In addition to the assumptions in Chapter 6, test for homogeneity of regression slopes by customizing the model to look at the independent variable × covariate interaction.
When we include a covariate we have more than one effect and we could calculate eta squared for each effect.
Partial eta squared: it looks at the proportion of variance that a variable explains that is not explained by other variables in the analysis.
η2 = SSeffect / SStotal
Partial η2 = SSeffect / (SSeffect + SSresidual)
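A worked check of the two effect sizes, using hypothetical sums of squares (the values are made up for illustration; note that partial eta squared divides the effect by effect plus residual, not by the total):

```python
# Hypothetical sums of squares from an ANCOVA table
SS_effect = 25.0      # experimental effect
SS_covariate = 30.0
SS_residual = 45.0
SS_total = SS_effect + SS_covariate + SS_residual  # 100.0

eta_sq = SS_effect / SS_total                           # eta squared
partial_eta_sq = SS_effect / (SS_effect + SS_residual)  # partial eta squared

print(eta_sq)                     # 0.25
print(round(partial_eta_sq, 4))   # 0.3571
```

Partial eta squared is larger because the variance explained by the covariate is removed from the denominator.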
For the covariate and the experimental effect, report the F-statistic and the degrees of freedom from which it was calculated, along with the p-value and an effect size.