Factorial designs are used when there is more than one independent variable. There are several types of factorial design:
- Independent factorial design (between groups)
Several independent variables are measured using different entities in each condition.
- Repeated-measures (related) factorial design
Several independent variables are measured using the same entities in all conditions.
- Mixed design
Several independent variables are measured; some conditions use the same entities and some conditions use different entities.
INDEPENDENT FACTORIAL DESIGNS AND THE LINEAR MODEL
Factorial designs are analysed in much the same way as a one-way ANOVA, but the explained variance (the between-groups variance) now comes from more than one independent variable. The model sum of squares (the between-groups variance) is made up of the variance due to the first variable, the variance due to the second variable, and the variance due to the interaction between the two.
It uses the following formula:

SS_M = \sum_{g} n_g (\bar{x}_g - \bar{x}_{\text{grand}})^2

where the sum runs over the groups (cells) of the design, n_g is the number of scores in group g, \bar{x}_g is the mean of group g and \bar{x}_{\text{grand}} is the grand mean of all scores.
This is the model sum of squares; it shows how much variance the independent variables explain together. It can also be useful to see how much of the total variance each independent variable explains on its own. This is done with the same formula, but applied to one independent variable at a time: the data are collapsed across the levels of the other variable, so all entities sharing a level of the variable of interest are treated as one group (this normally increases n per group, because several cells are pooled into one larger group).
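To make the partition concrete, here is a minimal sketch in Python (using pandas) on an invented, balanced 2×2 dataset; the factor names A and B, the outcome y and all the values are made up for illustration. It computes the model sum of squares from the cell means and splits it into the two main effects and the interaction.

```python
import pandas as pd

# Invented, balanced 2x2 design: factors A and B with 3 scores per cell.
# Factor names, the outcome "y" and all values are made up for illustration.
data = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "y": [3, 4, 5, 6, 7, 8, 5, 6, 7, 9, 10, 11],
})

grand_mean = data["y"].mean()

def ss_between(grouped):
    # Sum over groups of n_g * (group mean - grand mean)^2
    return sum(len(g) * (g.mean() - grand_mean) ** 2 for _, g in grouped)

# Model sum of squares: the groups are the individual cells of the design.
ss_model = ss_between(data.groupby(["A", "B"])["y"])

# Main effects: collapse across the other factor, so each "group" pools
# all scores that share one level of the factor of interest.
ss_a = ss_between(data.groupby("A")["y"])
ss_b = ss_between(data.groupby("B")["y"])

# Interaction: the model variance not accounted for by the two main effects.
ss_axb = ss_model - ss_a - ss_b

print(ss_model, ss_a, ss_b, ss_axb)   # 56.25, 18.75, 36.75, 0.75
```

Because this toy design is balanced, the interaction sum of squares can simply be obtained by subtraction; with unbalanced data the partition is less clear-cut and different types of sums of squares handle it differently.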
The residual sum of squares, the error variance (SS_R), shows how much variance cannot be explained by the independent variables. It uses the following formula:

SS_R = \sum_{g} s_g^2 (n_g - 1)

where s_g^2 is the variance of group (cell) g and n_g is the number of scores in that group.
It is the variance of each group multiplied by that group's number of participants minus one, summed over all groups. The residual degrees of freedom are summed in the same way: df_R = \sum_{g}(n_g - 1). In a two-way design, an F-statistic is computed for each of the two main effects and for the interaction, by dividing the effect's mean square (its sum of squares divided by its degrees of freedom) by the residual mean square.
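Continuing with the same invented data, the sketch below computes the residual term and one F-test by hand; the value SS_A = 18.75 is carried over from the previous sketch, and scipy is assumed to be available for the F distribution.

```python
import pandas as pd
from scipy import stats

# Same invented 2x2 data as in the previous sketch (all values made up).
data = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "y": [3, 4, 5, 6, 7, 8, 5, 6, 7, 9, 10, 11],
})

# Residual sum of squares: each cell's variance times (n_g - 1), summed over cells.
cells = data.groupby(["A", "B"])["y"]
ss_r = sum(g.var(ddof=1) * (len(g) - 1) for _, g in cells)
df_r = sum(len(g) - 1 for _, g in cells)

# F-test for the main effect of A, using SS_A = 18.75 from the previous sketch.
# A has two levels, so df_A = 2 - 1 = 1.
ss_a, df_a = 18.75, 1
ms_a = ss_a / df_a          # mean square for the effect
ms_r = ss_r / df_r          # residual mean square
f_a = ms_a / ms_r
p_a = stats.f.sf(f_a, df_a, df_r)   # right-tail probability of the F distribution
print(f"SS_R = {ss_r:.2f}, df_R = {df_r}, F_A = {f_a:.2f}, p = {p_a:.4f}")
```

In practice the full table (both main effects, the interaction and the residual term) would usually come from a library routine such as anova_lm in statsmodels rather than being computed by hand.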
OUTPUT FROM FACTORIAL DESIGNS
A main effect should not be interpreted in the presence of a significant interaction involving that variable: the interaction indicates that the effect of the variable differs across the levels of the other variable, so the main effect on its own can be misleading.
Simple effects analysis looks at the effect of one independent variable at individual levels of the other independent variable. When judging interaction graphs, there are two general rules:
- Non-parallel lines on an interaction graph indicate some degree of interaction, but how strong and whether the interaction is significant depends on how non-parallel the lines are.
- Lines on an interaction graph that cross are clearly non-parallel, which hints at a possible significant interaction, but crossing lines do not by themselves mean that the interaction is significant.
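The sketch below illustrates both ideas on the same invented data: the simple effect of A is tested at each level of B, and an interaction graph is drawn through the cell means. For brevity the simple effects are run as separate independent t-tests, which is a simplification; a full simple effects analysis would normally test each effect against the pooled residual term of the overall model.

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

# Same invented 2x2 data as in the earlier sketches (all values made up).
data = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "y": [3, 4, 5, 6, 7, 8, 5, 6, 7, 9, 10, 11],
})

# Simple effects: test the effect of A separately at each level of B.
for b_level, sub in data.groupby("B"):
    scores_a1 = sub.loc[sub["A"] == "a1", "y"]
    scores_a2 = sub.loc[sub["A"] == "a2", "y"]
    t, p = stats.ttest_ind(scores_a1, scores_a2)
    print(f"Effect of A at B = {b_level}: t = {t:.2f}, p = {p:.4f}")

# Interaction graph: one line per level of B, drawn through the cell means.
# Non-parallel (or crossing) lines suggest a possible interaction.
cell_means = data.groupby(["B", "A"])["y"].mean().unstack("A")
for b_level, means in cell_means.iterrows():
    plt.plot(means.index, means.values, marker="o", label=f"B = {b_level}")
plt.xlabel("Level of A")
plt.ylabel("Mean of y")
plt.legend()
plt.show()
```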