
# Chapter 13: Comparing means adjusted for other predictors (analysis of covariance)

## What is ANCOVA?

The linear model to compare means can be extended to include one or more continuous variables that predict the outcome (or dependent variable).

These additional predictors are called covariates, and the resulting analysis is called analysis of covariance (ANCOVA).

Reasons to include covariates in ANOVA:

- To reduce within-group error variance
- Elimination of confounds

## ANCOVA and the general linear model

For example:

Happiness_{i} = b_{0} + b_{1}Long_{i} + b_{2}Short_{i} + b_{3}Covariate_{i} + ε_{i}

We can add a covariate as a predictor to the model to test the difference between group means adjusted for the covariate.

With a covariate present, the b-values represent the differences between the means of each group and the control adjusted for the covariate(s).
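The model above can be sketched as an ordinary linear model fitted by least squares. The book works in SPSS, so the following Python/numpy version is only an illustration; the data, group sizes, and coefficient values are all hypothetical.

```python
import numpy as np

# Minimal sketch of the ANCOVA model as a linear model, using hypothetical
# data (the book uses SPSS; this numpy version is purely illustrative).
rng = np.random.default_rng(42)
n = 30                                    # participants per group (assumed)
group = np.repeat([0, 1, 2], n)           # 0 = control, 1 = long, 2 = short
covariate = rng.normal(5, 1, 3 * n)       # a continuous covariate
happiness = (3 + 0.5 * (group == 1) + 0.2 * (group == 2)
             + 0.8 * covariate + rng.normal(0, 1, 3 * n))

# Design matrix: intercept, dummy for 'long', dummy for 'short', covariate
X = np.column_stack([
    np.ones(3 * n),
    (group == 1).astype(float),
    (group == 2).astype(float),
    covariate,
])
b, *_ = np.linalg.lstsq(X, happiness, rcond=None)
# b[1] and b[2] are the long-control and short-control mean differences
# adjusted for the covariate; b[3] is the covariate's slope.
print(np.round(b, 2))
```

Because the covariate is in the design matrix, the dummy-variable coefficients are the group differences after the covariate has been taken into account.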

## Assumptions and issues in ANCOVA

**Independence of the covariate and treatment effect**

When the covariate and the experimental effect are not independent, the treatment effect is obscured, spurious treatment effects can arise, and at the very least the interpretation of the ANCOVA is seriously compromised.

When treatment groups differ on the covariate, putting the covariate into the analysis will not ‘control for’ or ‘balance out’ those differences.

This problem can be avoided by randomizing participants to experimental groups, or by matching experimental groups on the covariate.

We can see whether this problem is likely to be an issue by checking whether experimental groups differ on the covariate before fitting the model.

If they do not differ significantly, then it might be reasonable to use the variable as a covariate.
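The check described above amounts to running a one-way ANOVA with the covariate as the outcome and the experimental groups as the predictor. A minimal hand-rolled sketch, on hypothetical data where participants were randomized to groups:

```python
import numpy as np

def anova_f(values, labels):
    """One-way ANOVA F-statistic and its degrees of freedom."""
    groups = [values[labels == g] for g in np.unique(labels)]
    grand = values.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(values) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 20)         # three groups of 20 (assumed)
covariate = rng.normal(5, 1, 60)          # randomized, so no group effect
f, df_b, df_w = anova_f(covariate, labels)
print(f, df_b, df_w)   # a small F suggests groups do not differ on it
```

A large, significant F here would be a warning that the covariate and the treatment are not independent.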

**Homogeneity of regression slopes**

When a covariate is used we look at its overall relationship with the outcome variable; we ignore the group to which a person belongs.

We assume that this relationship between covariate and outcome variable holds true for all groups of participants: homogeneity of regression slopes.

There are situations where you might expect regression slopes to differ across groups and that variability may be interesting.
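One informal way to get a feel for this assumption is to fit the covariate-outcome regression separately within each group and compare the slopes. A sketch on hypothetical data where the assumption holds (the groups, slope, and noise levels are all assumed):

```python
import numpy as np

# Fit the covariate-outcome line within each group; similar slopes
# make homogeneity of regression slopes plausible.
rng = np.random.default_rng(7)
slopes = []
for true_slope in (0.8, 0.8, 0.8):        # equal by construction here
    x = rng.normal(5, 1, 40)              # covariate for one group
    y = 2 + true_slope * x + rng.normal(0, 0.5, 40)
    b1 = np.polyfit(x, y, 1)[0]           # fitted within-group slope
    slopes.append(b1)
print(np.round(slopes, 2))  # three similar values
```

The formal test is to add the group × covariate interaction to the model, as described in the SPSS notes below; clearly diverging within-group slopes point to a violation.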

**What to do when assumptions are violated**

- bootstrap for the model parameters
- post hoc tests

But bootstrap won’t help for the F-tests.

There is a robust variant of ANCOVA.
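Bootstrapping the model parameters means resampling cases with replacement and refitting the model many times to get an empirical confidence interval. A minimal sketch for a single slope, on hypothetical data (sample size, seed, and number of resamples are all assumed):

```python
import numpy as np

# Percentile bootstrap for a regression slope: resample cases with
# replacement, refit, and take percentiles of the resampled estimates.
rng = np.random.default_rng(0)
x = rng.normal(5, 1, 50)
y = 2 + 0.8 * x + rng.normal(0, 1, 50)

boot_b = []
for _ in range(1000):
    idx = rng.integers(0, len(x), len(x))   # resample case indices
    boot_b.append(np.polyfit(x[idx], y[idx], 1)[0])
lo, hi = np.percentile(boot_b, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))           # percentile 95% CI for the slope
```

The same idea extends to the b-values of the ANCOVA model; as noted above, it does not rescue the F-tests.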

## Interpreting ANCOVA

**The main analysis**

The format of the ANOVA table is largely the same as without the covariate, except that there is an additional row of information about the covariate.

- Looking first at the significance values: the covariate significantly predicts the dependent variable if p < 0.05.

Covariates can help us to exert stricter experimental control by taking account of confounding variables, giving us a ‘purer’ measure of the effect of the experimental manipulation.

The significance values of the t-tests tell us whether the adjusted group means differ significantly.

The degrees of freedom for the t-tests of the b-values are N − k − 1, where N is the sample size and k is the number of predictors (including the covariate).
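As a quick worked instance of the degrees-of-freedom formula, using the happiness model above (two dummy variables plus one covariate, so k = 3) with a hypothetical sample of 90 participants:

```python
# df for the t-tests of the b-values: N - k - 1, where N is the sample
# size and k the number of predictors (2 dummies + 1 covariate = 3).
N, k = 90, 3
df = N - k - 1
print(df)  # 86
```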

**SPSS**

- When the linear model is used to compare several means adjusted for the effect of one or more other variables (covariates) it can be referred to as analysis of covariance (ANCOVA)
- Before the analysis check that the covariates are independent of any independent variables by seeing whether those independent variables predict the covariate (the covariate should not differ across groups)
- In the table labelled Tests of between-subjects effects, assuming you’re using an alpha of 0.05, look to see if the value in the column called Sig. is below 0.05 for both the covariate and the independent variable. If it is for the covariate, then this variable has a significant relationship to the outcome variable. If it is for the independent variable, then the means (adjusted for the effect of the covariate) are significantly different across categories of this variable.
- If you have generated specific hypotheses before the experiment, use planned contrasts; if not, use post hoc tests.
- For parameters and post hoc tests, look at the columns called Sig. to discover whether your comparisons are significant. Use bootstrapping to get robust versions of these tests.
- In addition to the assumptions in Chapter 6, test for homogeneity of regression slopes by customizing the model to look at the independent variable × covariate interaction.

## Calculating the effect size

When we include a covariate we have more than one effect and we could calculate eta squared for each effect.

Partial eta squared: the proportion of variance that a variable explains which is not explained by the other variables in the analysis.

η^{2} = SS_{effect} / SS_{total}

Partial η^{2} = SS_{effect} / (SS_{effect} + SS_{residual})
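The two effect sizes can be computed directly from the sums of squares in the ANCOVA table: eta squared divides the effect SS by the total SS, while partial eta squared divides it by effect SS plus residual SS only. The SS values below are hypothetical.

```python
# Effect sizes from sums of squares (values here are made up).
def eta_squared(ss_effect, ss_total):
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_residual):
    return ss_effect / (ss_effect + ss_residual)

ss_effect, ss_residual, ss_total = 25.0, 75.0, 150.0
print(eta_squared(ss_effect, ss_total))             # ≈ 0.167
print(partial_eta_squared(ss_effect, ss_residual))  # 0.25
```

Partial eta squared is at least as large as eta squared, because its denominator excludes variance attributed to the other predictors.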

## Reporting results

For the covariate and the experimental effect, give details of the F-statistic and the degrees of freedom from which it was calculated, along with the p-value and an effect size.



# Discovering statistics using IBM SPSS statistics by A. Field (5th edition) a summary

This is a summary of the book "Discovering statistics using IBM SPSS statistics" by A. Field. This summary contains everything students in the second year of psychology at the UvA will need. The content needed in the first three blocks is already online, and the rest will be uploaded soon.
