One way of avoiding the null hypothesis testing ritual in science is to increase the precision of theories by casting them as formal models. Rituals are characterized by repetition of the same action (1), fixation on special features (2), anxiety about punishment for rule violations (3) and wishful thinking (4). The null hypothesis testing ritual persists mainly because many psychological theories are too weak to make precise predictions beyond the direction of an effect.
A model is a simplified representation of the world that aims to explain observed data; it specifies a theory’s predictions. Modelling is especially well suited to basic and applied research on the cognitive system. There are four advantages to formally specifying theories as models:
- Designing strong tests of theories
Specifying a theory as a model makes it possible to derive quantitative predictions from that theory. Competing theories then yield comparable, competing predictions, which allows the theories to be compared and tested against each other (a minimal sketch follows this list).
- Sharpening research questions
Null hypothesis testing allows theories to remain vaguely described, and such vague descriptions make theories difficult to test. Specifying a theory as a model forces a more precise research question, which makes the theory easier to test.
- Going beyond linear theories
Null hypothesis testing is best suited to simple hypotheses, and the available statistical tools push researchers towards mostly linear theories. Specifying a theory as a model removes this restriction.
- Using more externally valid designs to study real-world questions
Modelling can lead to more externally valid designs, because confounds are not eliminated in the analysis but built into the model.
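As an illustration of the first advantage, the following Python sketch (not taken from the source text) pits two formal models of forgetting against each other: retention as an exponential versus a power function of delay. Both predict the same direction of effect (retention declines over time), but once cast as models they make different quantitative predictions that can be compared on the same data. The delays and retention values used here are hypothetical and serve only to illustrate the comparison.

```python
# Illustrative sketch: two formal models of forgetting making competing
# quantitative predictions. Data below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def exponential_model(t, a, b):
    # Retention decays exponentially with delay t
    return a * np.exp(-b * t)

def power_model(t, a, b):
    # Retention decays as a power function of delay t (shifted to avoid t = 0)
    return a * (t + 1.0) ** (-b)

delays = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # hypothetical delays (days)
retention = np.array([0.95, 0.70, 0.58, 0.45, 0.37, 0.30, 0.25])  # hypothetical proportion recalled

for name, model in [("exponential", exponential_model), ("power", power_model)]:
    params, _ = curve_fit(model, delays, retention, p0=[1.0, 0.1])
    rmse = np.sqrt(np.mean((retention - model(delays, *params)) ** 2))
    print(f"{name:12s} best-fit parameters: {np.round(params, 3)}, RMSE: {rmse:.3f}")
```

Both models "predict a decline", but only when they are formally specified can their quantitative fits be compared and one theory be tested against the other.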
Goodness-of-fit measures cannot distinguish between variation in the data that results from noise and variation that results from the psychological process of interest. A model can therefore end up overfitting the data, capturing not only the variance due to the psychological process of interest but also variance produced by random error. A model’s generalizability is its ability to predict new data. A model’s complexity is its inherent flexibility, which enables it to fit diverse patterns of data, and it is related to the degree to which the model is susceptible to overfitting. Both the number of free parameters (1) and the way the parameters are combined in the model (2) contribute to a model’s complexity.
Increased complexity makes a model more likely to overfit, which decreases its generalizability to new data. Up to a point, added complexity improves generalizability, because the model can capture more of the psychological process of interest; beyond that point, the additional flexibility is spent on fitting noise and generalizability declines again. A good fit to the current data therefore does not guarantee a good fit to other data.
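The following Python sketch illustrates this trade-off under simple assumptions that are not part of the source text: the psychological process of interest is taken to be a linear trend, random error is added to it, and polynomials of increasing degree stand in for models of increasing complexity. The more complex models typically fit the current data more closely, while their prediction error for new data eventually grows again.

```python
# Illustrative sketch of overfitting: complexity vs. generalizability.
# The "psychological process of interest" is assumed to be a linear trend;
# the data are simulated, for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)

def true_process(x):
    # Assumed underlying process: a simple linear trend
    return 2.0 * x + 0.5

x_current = np.linspace(0.0, 1.0, 15)   # current (small) data set used to fit the models
x_new = np.linspace(0.0, 1.0, 200)      # new data the models should generalize to
y_current = true_process(x_current) + rng.normal(0.0, 0.2, x_current.size)
y_new = true_process(x_new) + rng.normal(0.0, 0.2, x_new.size)

for degree in (1, 3, 6, 10):            # polynomial degree as a proxy for model complexity
    coefs = np.polyfit(x_current, y_current, degree)
    fit_error = np.sqrt(np.mean((y_current - np.polyval(coefs, x_current)) ** 2))
    gen_error = np.sqrt(np.mean((y_new - np.polyval(coefs, x_new)) ** 2))
    print(f"degree {degree:2d}: fit to current data {fit_error:.3f}, "
          f"prediction error on new data {gen_error:.3f}")
```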
The irrelevant specification problem refers to the difficulty of bridging the gap between verbal descriptions of theories and their formal implementations, which can lead to unintended discrepancies between theories and their formal counterparts. The Bonini paradox refers to the observation that as models become more complex and realistic, they also become less understandable. The identification problem refers to the fact that numerous models can predict the data for a single psychological process, in which case it is not clear which model is the ‘best’.