Applied Cognitive Psychology: Article summaries 21/22

Article summaries for the Leiden University bachelor course Applied Cognitive Psychology, 21/22

Information Processing (Chapter 4) - Wickens & Carswell - 2012 - Article


In many situations, humans interact with systems. During these interactions, the operator must perceive information and transform it into different forms. Sometimes these transformations lead to errors. Understanding these transformations, and thus information processing, is important for predicting and modeling human-system interaction.

Three approaches to information processing

There are three distinct approaches to information processing: the classic stage-based approach, the ecological approach, and the cognitive engineering (or ergonomics) approach.

The classic stage-based approach

In this approach, the digital computer is used as a metaphor for human behavior: information is seen as passing through a number of discrete stages. There is thus a distinction between a perceptual stage and a stage of response execution and action, which parallels the morphological distinction between perceptual and motor cortex. Evidence for this approach comes from the finding that different tasks and environmental factors influence different stages differently. Within this approach, processing does not always start at stage 1: sometimes it starts when someone gives a response.

The ecological approach

This approach places more emphasis on the integrated flow of information through the human rather than making a distinction between stages. It also emphasizes the interaction between humans and the environment. This approach is most relevant to describing human behavior in interaction with the natural environment, so it is used most when designing controls and displays that mimic characteristics of the natural environment. 

The cognitive engineering approach

This approach is a hybrid of the stage-based and ecological approach. On the one hand, it is based on a very careful understanding of the environment and tasks constraints within which an operator works. On the other hand, this approach places great emphasis on modeling and understanding the knowledge structures that expert operators have of the domain.

Selecting information

Since Broadbent's book, human information processing has been viewed as involving a filtering process. This filtering happens through the mechanisms of human attention. Attention has three different modes: selective attention, focused attention, and divided attention.

Selective attention

Selective attention refers to how attention is focused on a particular object in the environment for a certain period of time. It is influenced by four factors: salience, effort, expectancy, and value. Selective attention thus dictates where attention is directed.

Focused attention

Focused attention is used to maintain processing of the desired sources and avoid the distracting influence of potentially competing stimuli.

Divided attention

This is the ability to process more than one attribute or element of the environment at a given time.

Visual search

When people are looking for something in a cluttered environment (for instance when they are looking for a sign by the roadway), they use selective and focused attention as well as discrimination. Visual search models are used to predict the time that is required to find a target. These predictions can be very important for safety and productivity.

The simplest model of visual search is the 'serial self-terminating model'. It assumes that the search space is filled with items, most of which are nontargets (distractors). The mean time to find a target is modeled as RT = NT/2, where N is the number of items in the space and T is the time needed to examine each item and determine that it is not a target before moving on to the next. Search is influenced by three factors: bottom-up parallel processing, top-down processing, and target familiarity. Bottom-up processing means that, for example, all targets are 'highlighted', so that searching for them is easier. Top-down processing describes how the operator's knowledge or expectations influence the search: location expectancy, for example, creates search strategies that scan the most likely locations first. Another influence is the expectancy of whether a target will be present at all, called the 'target prevalence rate'. The third factor, target familiarity, means that repeated exposure to the same consistent target can speed the search for that target and reduce the likelihood that it is missed.
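The serial self-terminating model can be written in a few lines of code. A minimal sketch, in which the function name and the example values are illustrative and not taken from the chapter:

```python
def mean_search_time(n_items: int, t_per_item: float) -> float:
    """Mean time to find a target under the serial self-terminating
    model: RT = N * T / 2 (on average, half the items are examined
    before the target is found)."""
    return n_items * t_per_item / 2

# Example: 20 items in the search space, 0.5 s to examine each
print(mean_search_time(20, 0.5))  # 5.0
```

Doubling the number of distractors doubles the predicted mean search time, which is why clutter reduction and target highlighting matter for safety-critical search tasks.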

Perception and data interpretation

The Signal Detection Theory 

When designing displays, it is very important that critical targets are detectable in the environment. However, assuring this detectability can be difficult: changes in a scene are often missed. Sometimes people also respond as if they saw something while there was no target; this is called a false alarm. Signal detection theory (SDT) provides a framework for describing the processes that can lead to both types of errors.
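The chapter does not give the SDT formulas, but in the standard equal-variance formulation, sensitivity is computed as d' = z(hit rate) − z(false-alarm rate). A minimal sketch under that assumption:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance SDT sensitivity: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# An observer with 90% hits and 20% false alarms:
print(round(d_prime(0.90, 0.20), 2))  # ~2.12
# An observer whose false-alarm rate equals the hit rate cannot
# distinguish signal from noise:
print(round(d_prime(0.50, 0.50), 2))  # 0.0
```

A high d' means the observer separates signal from noise well; misses and false alarms can then be traded off by shifting the response criterion rather than by improving sensitivity.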

Expectancy, context and identification

Prior knowledge also influences the ability to identify objects. Objects and attributes are recognized more quickly when they are embedded in consistent contexts than when they are presented alone or in different, inconsistent contexts. For example, words are more easily identified when they are embedded in sentences than when they are presented alone.

Judgments of Two-Dimensional Position and Extent

Spatial judgements that are required to read even everyday graphs are prone to systematic distortions. A few examples of these distortions:

  • people overestimate the values represented in bar graphs;
  • perceptual flattening of line graphs with respect to the y-axis, which results in larger underestimations of the represented data as the reader follows the line from its origin;
  • cyclic patterns of bias in estimations of part-whole relationships that depend on the number of available reference points on the graph;
  • distance distortions between cursor locations and target/icon locations induced by the shape of the cursor.

Judgments of Distance and Size in Three-Dimensional Space

When making judgements in space, human perception depends on different cues that provide information about the relative or absolute distance of objects from the viewer. Many of these cues are called pictorial cues, because they can be used to generate the impression of depth in 2D pictures.

Next to pictorial cues, there are cues that have to do with characteristics of the viewer:

  • Motion parallax: objects moving at a constant speed across the frame appear to move a greater amount when they are closer to the observer than when they are at a greater distance.
  • Binocular disparity: the difference in viewpoint between the two eyes.
  • Stereopsis: the use of binocular disparity to perceive depth (think of 3D displays).
  • Accommodation and binocular convergence: cues that result from the natural adjustment of the eyes needed to focus on objects at different distances.

Comprehension and cognition

Working Memory limitations

There is a limited number of ideas, sounds, and images that we can maintain and use in our mind; this limit is called the memory span. Items in working memory are lost when they are not rehearsed. Baddeley developed a four-part model of working memory, which includes two temporary storage systems: the phonological loop and the visuospatial sketchpad. These subsystems are used by a central executive, which manipulates the information in these stores and creates multimodal representations of coherent objects. These representations are then held in an episodic buffer.

Knowledge about the working memory has some implications for design:

  • Whenever possible, avoid codes that are too long for memory capacity; when codes must exceed this limit, use methods such as parsing material into smaller units (chunking).
  • Because information in working memory is lost after a few seconds, systems should be designed so that users can act on the information immediately (think of voice menu systems: users should be able to make a choice right away).
  • The need to scan should be minimized if a person must hold spatial information in the sketchpad.
  • Avoid the need to transfer information from one subsystem to the other before further transformations or integrations can be made.
  • If working memory subsystems are updated too rapidly, old information may interfere with new.
  • Interference in working memory is most likely when to-be-remembered information is similar in either meaning or sound.
  • The capacity of working memory varies between people.
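The chunking recommendation can be illustrated with a small helper. A minimal sketch; the function name, group size, and example codes are illustrative, not from the chapter:

```python
def chunk(code: str, size: int = 3) -> str:
    """Break a long code into smaller groups to ease working memory
    load, e.g. a 10-digit code into groups of three."""
    return " ".join(code[i:i + size] for i in range(0, len(code), size))

print(chunk("4927551268"))      # 492 755 126 8
print(chunk("31415926535", 4))  # 3141 5926 535
```

Grouping a long code into a few chunks keeps the number of units to be held in working memory within the memory span, even when the raw code exceeds it.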

Planning and problem solving

In contrast with cognitive activities that are heavily driven by information in the environment, the information-processing tasks of planning and problem solving depend more on the interplay between information available in long-term memory and information-processing transformations carried out in working memory.


Planning can depend on two types of cognitive operations: planners may depend on scripts stored in long-term memory, based on past experience; or planning may involve guesswork and some level of mental simulation of future activities.

Problem solving, diagnosis and troubleshooting

These three activities are related to each other: they all have in common that there is a goal to be attained by the human operator, that information needed to achieve that goal is currently missing, and that some physical action or mental operation must be taken to obtain it.


The term metacognition refers to a person's knowledge about his or her own cognitive processes and the use of this information to regulate performance. Education is the most active area of research on metacognition: researchers have looked at how students' beliefs about their own information-processing capabilities influence learning strategies and ultimate academic success.

Most researchers make a distinction between metacognitive knowledge and metacognitive control processes. 

Action selection

In the stage of action selection and execution, it is important to look at the speed with which information is processed from perception to action. This speed is described as 'bandwidth': the amount of information processed per unit of time. Information is measured in bits.

Findings related to action selection

Response times for either rule- or skill-based behavior are longer when there are more possible choices.

When people do not expect certain stimuli, they may respond more slowly. 

With practice, frequent events are responded to more rapidly. However, expertise (and thus practice) may also lead to slower processing of rare events, compared to novices.

Spatial compatibility

The spatial compatibility of a display influences the speed and accuracy of control responses. Two aspects are involved: the location of the control relative to the display, and how the display reflects control movement.


Voice control

Machine systems are increasingly being operated by voice. Three characteristics of voice control are relevant in the context of information processing: voice options allow more possible responses to be given in a shorter period of time than hand control; voice options represent a more compatible way of transmitting symbolic or verbal information; and voice options are valuable in environments where the eyes or hands are otherwise engaged.

Multiple-task performance

In multiple-task environments, there is a distinction between three different modes of multiple-task behavior: perfect parallel processing, degraded concurrent processing, and strict serial processing. Perfect parallel processing means that two (or more) tasks are performed concurrently as well as either is performed alone; degraded concurrent processing means that both tasks are performed concurrently but one or both suffer; and strict serial processing means that only one task is performed at a time.

When two tasks are similar, this may lead to confusion. Easier tasks are more likely to be performed perfectly than more difficult tasks. Also, when the two tasks are different, performance may be better.

Interaction Design, beyond Human Computer Interaction (Chapter 1) - Preece et al. - 2015 - Article

What is interaction design?

In everyday life, products that humans interact with, such as smartphones, coffee machines, printers, e-readers, and game consoles, are very common. Some of these are easy and enjoyable to use, while others are harder to use and can lead to annoyance and frustration.

Interaction design is exactly about this, and answers: how can we help users interact positively with products, and how can we reduce the negative aspects of the user experience? It is thus about designing interactive products that are easy, effective, and pleasurable to use from a user's perspective.

What is the difference between good and poor design?

The authors start by describing two examples of poorly designed products: a voice mail system used in hotels and the remote control device.

Voice Mail System

Imagine that you are in a hotel for a week on a business trip. You have left your cell phone at home, so you use the hotel's facilities. In each room of the hotel, there is a voice mail system. You can find out if you have a message by picking up the handset and listening to the tone. If you hear 'beep beep beep', this means that you have a message. Then, to find out how to access the message, you need to read the instructions next to the phone. The instructions say: touch 41. You touch 41 and hear that you need to enter a room number to leave a message. However, there are no instructions on how to hear your own messages. You look at the instructions again, and see that you have to touch *, then dial your room number and end with #. You do so, and the system replies: "You have reached the mailbox for room 106. To leave a message, type in your password." However, you do not have a password. You call the reception for help, and the person at the desk explains the correct procedure. However, this is a very lengthy procedure… Therefore, you decide to go and get your own phone.

This is thus an example of a poor design, but why is it poor? Well, there are multiple reasons for why we would call this a poor design: it is infuriating, confusing, inefficient, difficult to use, it does not let you know whether there are any messages or how many there are, and the instructions are unclear.

The marble answering machine

The marble answering machine is a bit different from the voice mail system. In this machine, familiar physical objects are used that indicate how many messages have been left. It looks fun and is enjoyable to use. It also only requires one-step actions to perform core tasks. It is simple, but elegant. Lastly, it offers functionality and allows anyone to listen to any of the messages.

This design was created by Durrell Bishop. His goal was to design a messaging system that was enjoyable to use and would also be efficient. However, even though this marble answering machine is elegant and usable, it would not be practical in a hotel setting. For example, it is not robust enough to be used in public places: the marbles could get easily lost or be taken as souvenirs. Also, in hotels, it is important to identify the user before allowing the messages to be played. Therefore, when considering the design of an interactive product, it is important to take into account where it is going to be used and who is going to use it. The marble answering machine is better suited in a home setting than at a hotel, even though at home children could also be tempted to play with the marbles.

Remote Control Device

Unfortunately, remote devices are often poorly designed. Many users find it difficult to locate the right buttons, even for the simplest tasks, like pausing or finding the main menu. For some users, it is even more difficult, because they have to put their reading glasses on each time to read the buttons.

However, one type of remote, the TiVo remote control is better designed. The buttons are large, clearly labelled, and logically arranged. This makes the buttons easy to locate and use in conjunction with the menu interface which appears on the TV monitor. The remote was also designed to fit into the palm of a hand, and has a peanut shape. Furthermore, it has a playful look and feel: colourful buttons and cartoon icons were used that are very distinctive, which makes them easy to identify in the dark and without having to put glasses on.

But why have so many other creators of remote devices failed? The answer is that TiVo invested a lot of time and effort to follow a user-centered design process. For example, TiVo involved potential users in the design process and got their feedback on everything: how does the device feel in the hand? Where should the batteries be placed? They also restricted the number of control buttons and included only the essential ones. The other functions were then represented as menu options and dialog boxes displayed on the TV screen.

How can we know what to design?

When designing interactive products, it is important to consider who is going to use them, how they are going to be used, and where they are going to be used. It is also important to understand what kind of activities people are doing when they use the products. For example, when people are banking online, then the interface should look secure, trustworthy, and needs to be easy to navigate.

Technologies keep advancing, and the world is becoming suffused with technologies for diverse activities. There are many interfaces and interactive devices available, and they are very diverse. Interfaces that used to be physical, such as cameras, microwaves, and washing machines, are becoming digital and require interaction design; this is the domain of consumer electronics. There is also another type of customer interaction: self-checkouts at stores, and libraries in which customers have to check in their own books. Often, these interfaces are not friendly. They are more cost-effective and require less personnel, but they can lead to frustration for users: accidentally pressing the wrong button can be frustrating.

The key question is: how do you optimize the users' experience of a system, environment, or product, so that it supports and extends the users' activities in effective, useful, and usable ways? The authors list some principles that can help in deciding which choices to make:

  1. Take into account what people are good and bad at;
  2. Consider what might help people with the way they currently do things;
  3. Think through what might provide quality user experiences;
  4. Listen to what people want and get them involved in the design;
  5. Use tried and tested user-based techniques during the design process.

What does Interaction Design entail?

Interaction design is defined as "designing interactive products to support the way people communicate and interact in their everyday and working lives." It is thus very concerned with practice: how to design user experiences. It differs from other approaches to designing computer-based systems, such as software engineering. For example, think of people who create buildings. There are architects and there are civil engineers. Architects think about people and their interactions with each other and with the house: are there enough family and private spaces? Are the spaces for eating and cooking in close proximity? In contrast, engineers are interested in the issues of realizing the ideas. They think about costs, durability, structural aspects, environmental aspects, and so forth. Just as there is this difference between architects and engineers, there is a difference between designing an interactive product (architects) and engineering the software for it (engineers).

What are the components of interaction design?

Interaction design is fundamental to all disciplines, fields, and approaches that are concerned with researching and designing computer-based systems for people. In the book, the figure shows which disciplines and fields these are. The differences between interaction design and these approaches lie mainly in the methods, philosophies, and lenses that they use to study, analyze, and design computers. Another difference is in terms of the scope and problems that they address. For example, the Information Systems approach is about computing technology in domains like business, health, and education. The Computer-Supported Cooperative Work (CSCW) is about finding ways to support multiple people to work together, using computer systems.

Who is involved in interaction design?

Effective designers need to know about users, technologies, and interactions between them in order to create effective user experiences. They need to understand how people act and react to events, and how they communicate and interact with each other. They also need to understand how emotions work, what is meant by aesthetics, desirability, and what the role of narrative in human experience is. Furthermore, they need to understand the business side, the technical side, the manufacturing side, and the marketing side. Thus, it is not surprising that interaction design is often carried out by multidisciplinary teams, in which engineers, designers, programmers, psychologists, artists, and so forth are part of the team. A benefit of this is that many more ideas can be generated. A downside is that it can be difficult to work together, because there are a lot of different perspectives involved.

What are interaction design consultants?

The importance of good interaction design is acknowledged by many companies. Therefore, there are now many interaction design consultancies, including companies such as Cooper, NielsenNorman Group, and IDEO, as well as more recent ones. IDEO is a big company that has developed thousands of products, for example the first mouse used by Apple.

What is the user experience?

The user experience (UX) is very important in interaction design. UX refers to how a product behaves and is used by people in the real world. Every product that people use has a user experience: newspapers, ketchup bottles, and so forth. It is about how people feel about a product and how much pleasure and satisfaction they derive from it, ranging from the overall impression of how good it is to use to how nice it feels in the hand.

An important point is that the user experience cannot be designed: one can only design for a user experience. Some designers therefore say UXD instead of UX. The 'D' refers to encouraging design thinking that focuses on the quality of the user experience rather than on the set of design methods to use.

There are thus many different factors that interaction designers need to take into account. Unfortunately, there is no unifying framework that can be used, but there are conceptual frameworks, tested methods, guidelines and other relevant research findings, which will be described.

According to McCarthy and Wright, there are four core threads that make up our holistic experiences: sensual, emotional, compositional, and spatio-temporal:

  1. The sensual thread. This refers to our sensory engagement with a situation. It involves the level of absorption people experience with technological devices and applications. Think of computer games, smartphones, and chat rooms, in which people are highly absorbed in their interactions at a sensory level: they feel thrill, fear, pain, joy and comfort.
  2. The emotional thread. Research on this thread is about how emotions are intertwined with the situation. For example, a person may become angry with a computer because it does not work properly. Emotions also refer to judgments about value: when someone purchases a new phone, one might be drawn to the ones that are cool-looking. However, they may experience turmoil because these are the most expensive phones.
  3. The compositional thread. This thread refers to the narrative part of the experience, and the way a person makes sense of it. For example, when shopping online, sometimes the options are very clear to people, but sometimes they can lead to frustration; people might ask themselves: “What is this about?” or “What happened?”. This thread is thus about the internal thinking that occurs during experiences.
  4. The spatio-temporal thread. This refers to the space and time in which our experiences take place and their effect on these experiences. There are different ways to talk about this, for example we talk of time speeding up, slowing down, and we talk about space in terms of public and personal space, and for example needing one’s own space.

These threads can be used as ideas to help designers think and talk more clearly about the relationship between technology and experience.

What is the process of interaction design?

The process of interaction design involves four basic activities:

  1. Establishing requirements
  2. Designing alternatives
  3. Prototyping
  4. Evaluating

These activities inform one another and are meant to be repeated. For example, one can measure the usability of what has been built by looking at how easy it is to use. This can provide feedback that some things need to be changed or that certain requirements are not yet met. It can also help to elicit responses from potential users about what they think and how they feel about what has been designed. Such evaluation is really important in interaction design, and is needed to make sure that a product is appropriate.

As important as it is to involve users, it is also important to understand people's behaviour. This knowledge can help designers create better interactive products, and learning about people can also correct incorrect assumptions that designers hold. For example, it is often assumed that older people want bigger text because of poorer vision. However, studies have shown that many people in their 70s, 80s, and older have good vision and are capable of interacting with standard-size information. It is also important to be aware of cultural differences, for example in how the time and date are written in different countries: in the USA, the date is written as month/day/year (05/26/1998), while in other countries it is often written as day/month/year (26/05/1998). Designers can also use contrasting designs, in which different colors, images, and structures are provided to appeal to people in different countries.
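The date-format difference mentioned above is easy to get wrong in software. A minimal sketch of formatting the same date both ways; the formatting codes are standard, and the date is the chapter's own example:

```python
from datetime import date

d = date(1998, 5, 26)
print(d.strftime("%m/%d/%Y"))  # US convention: 05/26/1998
print(d.strftime("%d/%m/%Y"))  # day-first convention: 26/05/1998
```

Because 05/26/1998 and 26/05/1998 can denote the same day, a design that shows dates to an international audience should either label the fields or use an unambiguous form such as the month name.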

What about interaction design and the user experience?

Before developing an interactive product, it is important to understand what the goal of the product will be. Is the goal to make users more productive? Is the goal to create a learning tool that is challenging and motivating? The authors suggest classifying the goals in terms of usability goals and user experience goals.

What are usability goals?

Usability is defined as making sure that the interactive products are easy to learn, effective to use, and enjoyable to use from a user’s perspective. It is broken down into these goals:

  • Effective to use (effectiveness);
  • Efficient to use (efficiency);
  • Safe to use (safety);
  • Having good utility (utility);
  • Easy to learn (learnability);
  • Easy to remember (memorability).

These goals are often operationalized as questions. By answering these questions, designers can be alerted very early in the design process to potential problems and conflicts. An example of a good question is: “How long will it take a user to figure out how to use the most basic functions of a new smartwatch; how much can they capitalize on their prior experience; and how long would it take a user to learn the whole set of functions?”. Simply asking “Is the system easy to learn?” is not a good question.

What are user experience goals?

There are many different experience goals. These goals include emotions and felt experiences, and are divided into desirable and undesirable ones, as shown in Table 1.1. Examples of desirable aspects are: satisfying, helpful, fun; examples of undesirable aspects are: boring, unpleasant, frustrating.

What are design principles?

Design principles are generalizable abstractions which are intended to orient designers toward thinking about the different aspects of their designs. A common example is feedback: a product should incorporate adequate feedback to the users to ensure that they know what to do next in their tasks. Another one is ‘findability’, which refers to the degree to which a particular object is easy to discover or locate (navigating a website, finding the delete image option on a digital camera).

These principles are derived from theories, experience, and common sense. They are prescriptive: they prescribe what designers should provide and what they should avoid (dos and don'ts).

The authors describe the most common design principles: visibility, feedback, constraints, consistency, and affordance.


Visibility

Think back to the voice mail system: it made it unclear how many messages there were, while the marble answering machine made this very clear. Norman (1988) uses the example of the controls of a car: the controls for the different operations are clearly visible (indicators, headlights, horn), which makes it easy for the driver to find the appropriate control. When things are not visible, it is harder for users to use them. Nowadays, a lot of products score poorly on this principle: think, for example, of the sensor technology used in bathrooms. When you have washed your hands and a sensor-operated dryer is provided, it can sometimes be difficult to know where to place your hands.


Feedback

Feedback is defined as sending back information about what action has been done and what has been accomplished. There are different types of feedback for interaction design: audio, tactile, verbal, visual, and combinations of these. It is important to decide which combinations are appropriate for different kinds of activities.


Constraints

Constraining users refers to determining ways to restrict the user interaction that can take place at a given moment. There are different ways to achieve this; a common example is to deactivate certain menu options, thereby restricting the user to actions permissible at that stage of the activity. This prevents users from selecting incorrect options and reduces the chance of making a mistake. Constraints can also be incorporated in the physical design of a device: the external slots of a computer are designed to allow only a specific cable or card to be inserted.


Consistency

A consistent interface is one that follows rules, such as using the same operation to select all objects, for example always clicking the left mouse button to highlight a graphical object. Consistent interfaces are easier to learn and use. Consistency can be more difficult to achieve in complex interfaces.


Affordance

Affordance means that an object gives people hints about how to use it. For example, a mouse button invites pushing (clicking) by the way it is constrained in its plastic shell. Other examples are a door handle that affords pulling, and a cup handle that affords grasping. When products make their affordances clear, they are easier to interact with. Norman (1999) distinguishes two kinds of affordance: perceived and real. Physical objects have real affordances (a cup has an actual handle), which do not have to be learned. Screen-based interfaces do not have real affordances; instead, they are said to have perceived affordances, which can be learned.

An Introduction to Human Factors Engineering (Chapter 8) - Wickens et al. - 2004 - Article


What are the principles of response selection?

The difficulty and speed of selecting a response or an action is influenced by different factors. Five of them are important for system design: decision complexity, expectancy, compatibility, the speed-accuracy tradeoff, and feedback.

Decision complexity

How fast an action is selected is influenced by how many alternative actions are possible. This is referred to as the complexity of the decision. For example, each action of the Morse code operator has only two alternatives (dit or dah) and is simpler than the choice of a typist, who has to choose among 26 letters. Thus the Morse code operator can select each action faster and generate a greater number of keystrokes per minute, because there are only two options.

This relationship between response-selection time and decision complexity is described by the Hick-Hyman law of reaction time (RT): RT = a + b log2(N), where N is the number of possible alternatives and a and b are constants. When reaction time is plotted as a function of log2(N) instead of N, the function is linear. According to the Hick-Hyman law, humans process information at a constant rate.

This law does not imply that systems designed around simpler decisions are better. If a user has to transmit a given amount of information, it is usually more efficient to do so with a small number of complex decisions than with a large number of simple decisions. This is called the 'decision complexity advantage'. As an example, a typist can convey the same message faster than the Morse code operator: even though each keystroke is made more slowly, far fewer keystrokes are needed. The same principle implies that 'shallow' menus with many items per level are better than 'deep' menus with just a few items per level.
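
The Hick-Hyman law and the decision complexity advantage can be put into a rough sketch in code. The constants `a` and `b` below are illustrative assumptions, not empirical values from the chapter:

```python
import math

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted reaction time (s) for choosing among N equally likely
    alternatives: RT = a + b * log2(N).
    a (base time) and b (seconds per bit) are made-up illustrative values."""
    return a + b * math.log2(n_alternatives)

# Decision complexity advantage: transmitting ~4.7 bits (one letter out
# of 26) in one complex decision vs. a series of binary decisions.
one_complex = hick_hyman_rt(26)         # one choice among 26 alternatives
bits = math.log2(26)                    # ~4.7 bits of information
many_simple = bits * hick_hyman_rt(2)   # ~4.7 separate binary choices
print(one_complex < many_simple)        # fewer, more complex decisions win
```

Note that the advantage holds for any positive base time `a`: each extra simple decision pays the fixed cost `a` again, while the single complex decision pays it only once.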

Response expectancy

When we expect information, we perceive it more rapidly, and we select actions more rapidly and accurately when we expect to carry them out. For example, we do not expect the car in front of us to stop suddenly, so we are slower to apply the brake in that case than when a light turns yellow or red at an intersection, where we expect to have to stop.


Stimulus-response compatibility

Stimulus-response compatibility describes the relationship between the location of a control or movement of a control response and the location or movement of the stimulus or display to which the control is related. There are two subprinciples that characterize a compatible mapping:

  1. Location compatibility. This means that the control location should be close to the entity being controlled or the display of that entity.
  2. Movement compatibility. This means that the direction of movement of a control should be congruent with the direction both of movement of the feedback indicator and of the system movement itself.

The speed-accuracy tradeoff

Factors that make the selection of a response slower (complex decisions, unexpected actions, incompatible responses) also tend to make it more error-prone. Across such factors there is thus a positive relationship between response time and error rate, and hence between speed and accuracy; in this relationship there is no tradeoff. However, within a given task, if we try to execute actions quickly, we are more likely to make errors, and if we are very cautious, we will be slow. There is then a negative correlation: the speed-accuracy tradeoff.


Feedback

Most of the controls and actions that we take are associated with visual feedback indicating the system's response to the control input. In a car, the speedometer offers visual feedback from control of the accelerator. Good control design should also include more direct feedback, for example the resistance of a stick as it is moved. Feedback can be auditory (the beep of a phone) or visual (a light next to a switch). Feedback is good when it is direct and nearly instantaneous. Delayed feedback, even by as little as 100 msec, can be harmful, especially when the operator is less skilled or when the feedback cannot be filtered out by selective attention mechanisms. One example of harmful delayed feedback occurs while talking on a radio or telephone.

What is discrete control activation?

One way to make controls less susceptible to errors and delays is to make them easily visible. In addition, there are other design features that also make the activation of controls less susceptible to errors and delays.

Physical feel

As noted, feedback is a positive feature of controls, and some controls provide more feedback than others. The toggle switch, for example, has good feedback: its state changes in a clear visual manner, and there is an auditory click and a tactile snap (a loss of resistance) as it moves into the new position. For other types of discrete controls, designers should consider how to provide such feedback. Touch screens often provide it poorly, as do push-button phones that lack an auditory beep following each keypress. Feedback lights should be complemented with other indications of state change, and visual feedback should be immediate.

Size. Smaller keys are problematic: they can lead to errors when people accidentally press multiple keys at the same time.

Confusion and Labeling. Keypresses can also lead to errors when the identification of a key is not well specified or when it is new to the user (when someone does not know which location to touch). These errors are more likely to happen when large sets of identically appearing controls are unlabeled or poorly labelled, and when labels are physically displaced from their associated controls.

What about positioning control devices?

Positioning or pointing refers to, for example, moving a cursor to a point on a screen, reaching with a robot arm to grab an object, or tuning a radio to a new frequency. Control devices such as the mouse and the joystick exist for these purposes. The authors describe the relationship between the movement of an entity (the cursor) to a destination (a target) and present a model that accounts for the time needed to make such movements.

Movement time

Control requires two types of movements:

  1. Movement required for the hands or fingers to reach the control (grab the mouse)
  2. Moving the control in some direction (positioning the cursor)

These movements take time, and these times can be predicted by a model called Fitts's law: MT = a + b log2(2A/W). In this formula, A = the amplitude (distance) of the movement, W = the width of the target (or the desired precision with which the cursor must land), and a and b are empirically determined constants. Movement time is thus linearly related to the logarithm of the term (2A/W), which is called the index of difficulty of the movement.
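
A minimal sketch of Fitts's law in code; the constants `a` and `b` are hypothetical device-specific values, not numbers from the text:

```python
import math

def fitts_mt(A, W, a=0.1, b=0.1):
    """Predicted movement time (s): MT = a + b * log2(2A / W).
    A = movement amplitude, W = target width; a and b are
    illustrative constants that would be fit per device."""
    index_of_difficulty = math.log2(2 * A / W)  # difficulty in bits
    return a + b * index_of_difficulty

# Doubling the distance (or halving the target width) adds exactly one
# bit of difficulty, so predicted movement time grows by the increment b.
print(fitts_mt(A=16, W=2))  # index of difficulty = log2(16) = 4 bits
print(fitts_mt(A=32, W=2))  # 5 bits: one more bit, b more seconds
```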

Device characteristics

There are four categories of control devices used for positioning or pointing:

  1. Direct position controls (light pen and touch screen), in which the position of the human hand or finger corresponds directly with the desired location of the cursor.
  2. Indirect position controls (the mouse), in which the hands are in a different location than the screen but move a device that positions the cursor.
  3. Indirect velocity controls, such as the joystick and the cursor keys, in which control activity in a given direction produces velocity of cursor movement in that direction. For cursor keys, this means repeated presses or holding a key down for a longer period of time. Joysticks can be of three types: isotonic (moved freely, resting wherever they are positioned), isometric (rigid, but producing movement proportional to the force applied), and spring-loaded (offering resistance proportional to the force applied and the amount of displacement, springing back to the neutral position when pressure is released).
  4. Voice control.

There are two important variables which affect usability of controls for pointing:

  1. Feedback of the current state of the cursor should be salient, visible, and immediate.
  2. Performance is affected in a more complex way by the system 'gain'.

Gain is described by: G = (change of cursor position) / (change of control position).

A high-gain device is then one in which a small displacement of the control leads to a large movement of the cursor or produces a fast movement in the case of a velocity control device. The gain of direct position controls (touch screen and light pens) will be 1.0. It seems that the ideal gain for indirect control devices should be in the range of 1.0 to 3.0.
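
As a small worked example of the gain formula (the numbers are made up for illustration):

```python
def gain(cursor_change, control_change):
    """System gain G = (change of cursor position) / (change of control position)."""
    return cursor_change / control_change

# A high-gain indirect device: 2 cm of mouse travel moves the cursor 6 cm.
assert gain(6, 2) == 3.0   # within the suggested 1.0-3.0 range
# A direct position control (touch screen, light pen): finger and cursor
# move together, so the gain is necessarily 1.0.
assert gain(5, 5) == 1.0
```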

What is task performance dependence?

It seems that the two best control devices are the two direct position controls and the mouse. For more complex spatial activities such as drawing or handwriting, indirect positioning devices seem best at providing natural feedback. Cursor keys are adequate for some tasks, but they are poor at producing the long movements required in tasks such as text editing.

What is the work space environment?

Devices are often used within a broader workspace, and display size matters. Larger displays call for high-gain devices, while smaller displays require precise manipulation and therefore lower gain. Display orientation is also of influence: vertically mounted displays impose greater costs on direct positioning devices.

What are verbal and symbolic input devices?

For symbolic, numerical, or verbal information that is involved in system interaction, keyboards or voice control are often the interfaces of choice.

Numerical data entry

For numerical data, numerical keypads or voice control are best. Voice control is the most compatible and natural, but technological problems slow the rate of possible input. Keypads come in three forms. The linear array (the number row above the keyboard) is least preferred, because moving from key to key costs time. The 3x3 square array reduces movement time. Some research suggests that the layout with 123 on top (as on the telephone) is better than the layout with 789 on top (calculator, laptop keypad), but not so much better that 789-on-top keypads should be redesigned.

Linguistic data entry

The computer keyboard is the commonly used device for linguistic data. An alternative to the standard QWERTY layout is the chording keyboard. In a chording keyboard, individual items of information are entered by simultaneous depression of combinations of keys. This has three advantages:

  1. The hands never need to leave the chord keyboard, and there is no requirement for visual feedback to monitor the correct placement of a thumb or finger.
  2. The chording board is less susceptible to repetitive stress injury or carpal tunnel syndrome.
  3. After extensive practice, chording keyboards support more rapid word transcription than the standard typewriter keyboard, because no movement time between keys is required.

However, before one can use a chording keyboard, extensive learning is required.

What about voice input?

The benefits of voice control

Voice is a natural form of communication, and this naturalness can be exploited in many control interfaces. The benefits are especially clear in dual-task situations: when the hands are busy with other tasks (such as driving a car), it is useful to be able to talk to the interface, for example to dial a phone by voice command.

Costs of voice control

There are four distinct costs of voice control:

Confusion and Limited Vocabulary Size. Voice recognition systems are prone to make errors, because they could classify similar-sounding utterances as the same (cleared to vs. cleared through).

Constraints on Speed. The natural flow of speech does not necessarily place pauses between words, so the computer does not know when to 'stop counting syllables' and mark the end of a word. This may require the speaker to talk slowly, pausing between each word. A lot of time is also needed to 'train' voice systems to understand an individual speaker's voice.

Acoustic Quality and Noise and Stress. A noisy environment hinders the voice control system. Also, under conditions of stress, one’s voice can change. Therefore, there should be great caution when designing voice control systems that are used as a part of emergency procedures.

Compatibility. Voice control is less suited for controlling continuous movement than most of the available manual devices. For example, it is easier to steer a car by manually controlling the steering wheel compared to saying ‘a little left, now a little more left’.

What about continuous control and tracking?

Sometimes we need to make a cursor or a vehicle follow a ‘track’ or ‘continuously moving dynamic target’.

The Tracking Loop: Basic Elements

In a tracking task, there are basic elements. Each element receives a time-varying input and produces a corresponding time-varying output, and every signal in the tracking loop is represented as a function of time, f(t). When driving a car, the human operator perceives a discrepancy, or error e(t), between the desired state of the vehicle and its actual state, and tries to reduce this error function of time. To do so, he or she applies a force f(t) that rotates the steering wheel; the resulting rotation u(t) is the control output. The relationship between the force and the rotation of the steering wheel is called the 'control dynamics'. The response of the vehicle to a given steering-wheel time function u(t) is the system output, o(t); when presented on a screen, this output position is called the cursor. The relationship between control output u(t) and system response o(t) is defined as the system dynamics.

What about the input?

Examples of tracking tasks are drawing a straight line on a piece of paper, or driving a car down a straight road on a windless day. In both cases there is a command target input and a system output, but the input does not vary: after the original course is set, there is nothing to do but move forward, fast or slow. When the road is curvy, however, corrections must be made and there is uncertainty; error and workload increase if you try to move faster. The frequency with which corrections must be made is called the 'bandwidth of the input'. In tracking tasks, this bandwidth is expressed in terms of the cycles per second (Hz) of the highest input frequency present in the command or disturbance input. When the bandwidth is above 1 Hz, it is hard for people to perform tracking tasks; in most systems the bandwidth is about 0.5 Hz. Higher bandwidth means higher complexity, and this complexity also depends on the order of the control system.

Control order

Position control. The order of a control system refers to whether a change in the position of the control device leads to a change in the position (zero-order), velocity (first-order), or acceleration (second-order) of the system output. For example, moving a pen across a paper leads to a new position of the system output. If you hold your pen still, the system output is also still. This is called zero-order control.

Velocity control. Think of a digital car radio: when you depress the tuning button, this creates a constant rate of change (velocity) of the frequency setting, and on some devices pressing the button harder produces a proportionally greater velocity. This is called first-order control. Most pointing devices use velocity control: the more a joystick is deflected, the faster the cursor moves. Another example of first-order control is the relationship between the position of the steering wheel (input) and the rate of change (velocity) of the heading of your car (output).

Acceleration control. In a spacecraft there is inertia, and each rocket thrust produces an acceleration of the craft for as long as the engine is firing. This is called a second-order (acceleration) control system. Another example is rolling a pop can to a new target position on a board. Second-order systems are often difficult to control because they are sluggish and unstable, so they are rarely designed into systems deliberately. Second-order systems can be controlled successfully only if the tracker anticipates, inputting a control now for an error that is predicted to occur in the future.
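
The distinction between control orders can be sketched with a toy discrete-time simulation (an illustration, not a model from the chapter): the same constant control input sets the output's position, velocity, or acceleration depending on the order.

```python
def simulate(order, u=1.0, steps=5, dt=1.0):
    """Trajectory of the system output under a constant control input u,
    for a zero-, first-, or second-order system (toy illustration)."""
    pos, vel = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        if order == 0:      # zero order: input sets position (pen on paper)
            pos = u
        elif order == 1:    # first order: input sets velocity (joystick cursor)
            pos += u * dt
        elif order == 2:    # second order: input sets acceleration (spacecraft)
            vel += u * dt
            pos += vel * dt
        trajectory.append(pos)
    return trajectory

print(simulate(0))  # [1.0, 1.0, 1.0, 1.0, 1.0]: output holds where the input holds
print(simulate(1))  # [1.0, 2.0, 3.0, 4.0, 5.0]: output drifts at a constant rate
print(simulate(2))  # [1.0, 3.0, 6.0, 10.0, 15.0]: output accelerates; sluggish to
                    # start, hard to stop, hence the need for anticipation
```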

What about time delays and transport lags?

Higher-order systems (such as second-order systems) have lags. For example, when navigating through virtual environments there is often a delay between movement of the control device and the resulting change in displayed position. These delays are called transport lags; they too require anticipation, which raises human workload and can lead to system errors.

What is stability?

Next to lag, gain, and bandwidth, stability is an important property of control systems. Instability of the overall control loop is called closed-loop instability, or negative feedback instability. Closed-loop instability results from three factors:

  1. There is a lag in the total control loop from the system lag or from the human operator’s response time.
  2. The gain is too high.
  3. The human is trying to correct an error too fast and is not waiting until the lagged system stabilizes before applying another input.
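
The three factors above can be illustrated with a toy simulation (an assumption-laden sketch, not a model from the chapter): the operator proportionally corrects the error seen `lag` steps ago. With no lag and moderate gain the output settles on the target; with lag and high gain each correction overshoots the last and the error grows.

```python
def track(gain, lag, target=1.0, steps=30):
    """Closed-loop tracking toward a fixed target with proportional
    corrections based on delayed feedback (toy illustration)."""
    outputs = [0.0]
    for t in range(steps):
        seen = outputs[max(0, t - lag)]   # feedback delayed by `lag` steps
        error = target - seen
        outputs.append(outputs[-1] + gain * error)
    return outputs

stable = track(gain=0.5, lag=0)
unstable = track(gain=1.5, lag=2)
print(max(abs(1.0 - o) for o in stable[-5:]))    # tiny: settled on the target
print(max(abs(1.0 - o) for o in unstable[-5:]))  # huge: oscillations have grown
```

Lowering the gain or reducing the lag (solutions 1 and 2 in the list that follows) makes the same loop stable again.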

Human factor engineers can offer five solutions which can be implemented to reduce closed-loop instability:

  1. Lower the gain
  2. Reduce the lags
  3. Caution the operator to change strategy so that he or she does not try to correct every input but filters out the high-frequency ones, thus reducing the bandwidth
  4. Change strategy to seek input that can anticipate and predict
  5. Change strategy to go ‘open loop’

What is open loop?

When the operator perceives an error and tries to correct it, the loop is called ‘closed’. However, it can also be the case that the operator knows where the system output needed to be and responded with precise correction to the control device to produce the goal. Then, the loop is ‘open’: the operator does not need to perceive the error and will not be looking at the system output. However, open-loop behavior depends on the operator’s knowledge of:

  1. Where the target will be and;
  2. How the system output will respond to his or her control input.

What is remote manipulation / telerobotics?

Sometimes direct human control is desirable but not feasible. One example is remote manipulation, as when operators control an undersea explorer or an unmanned air vehicle (UAV). A second is hazardous manipulation, such as working with highly radioactive material. This is called 'telerobotics'. Telerobotics comes with challenges because of the absence of direct viewing. The goal of the designer of such systems is to create a sense of 'telepresence': a sense that the operator is actually immersed in the environment and is directly controlling the manipulation as an extension of his or her own arms and hands. Several factors prevent this goal from being achieved; these are discussed last.

Time delay

Systems often involve time delays between manipulation and visual feedback for the controller. These delays can present challenges for effective control.

Depth perception and Image Quality

Teleoperation requires tracking or manipulating in three dimensions. However, human depth perception in 3-D displays is often less adequate for precise judgment. One solution for this may be the implementation of stereo. However, a problem with stereo implementation might be that two cameras must be mounted and two separate dynamic images must be transmitted over what may be a limited bandwidth channel.

Proprioceptive Feedback

In addition to visual feedback, proprioceptive and tactile feedback are also important. Consider, for example, what happens when a remote manipulator punctures a container of radioactive material by squeezing too hard. To prevent such accidents, designers would like to present the same tactile and proprioceptive sensations of touch, feel, pressure, and resistance that we experience when our hands grasp and manipulate objects directly. However, presenting such feedback effectively is challenging.

What are the solutions?

The biggest problem in teleoperator systems is the time delay. An effective solution would then be to reduce the delay. Sometimes this involves reducing complexity. A second solution might be to develop predictive displays that are able to anticipate future motion and position of the manipulator on the basis of present state and the operator’s current control actions and future intentions. These tools are useful, but they are only as effective as the quality of the control laws of system dynamics that they embody. Furthermore, the system cannot achieve effective predictions of a randomly moving target. A third solution to avoid delayed feedback is by implementing a computer model of the system dynamics, allowing the operator to implement the required manipulation in ‘fast time’.

An Introduction to Human Factors Engineering (Chapter 15) - Wickens et al. - 2004 - Article


What is automation?

Automation refers to a machine performing a task that was normally performed by a human operator. It has some ironies. For example, when it works well, we trust it; however, sometimes it fails, and those failures can be catastrophic. Think of airplane crashes, of which the consequences are severe.

Often, these failures are not solely related to software or hardware components. Instead, the problem is often in human issues of attention, perception, and cognition in managing the automated system. The performance of most automation depends on the interaction between people with the technology.

Why do people automate?

There are four different categories of reasons for why designers develop machines to replace or aid human performance:

  1. Impossible or hazardous. Sometimes processes are automated because it is impossible or dangerous for humans to perform the task. For example, think of teleoperation or robotic handling of hazardous material. Sometimes there are also special populations who have disabilities that lead them to be unable to carry out skills without assistance. Examples of these are automatic guidance systems for the quadriplegic or automatic readers for the visually impaired. Thus, automation often enables people to do what would otherwise be impossible.
  2. Difficult or unpleasant. Sometimes tasks are not impossible but very difficult for humans. For example, humans can do arithmetic by hand, but it is more effortful and error-prone than using an automatic calculator.
  3. Extend human capability. Sometimes automated functions do not replace but simply aid humans in doing things. For example, human working memory is vulnerable to forgetting, so automated aids that supplement memory are useful, such as automated telephone operators that directly print the desired phone number on a small display on your telephone. Automation is particularly useful in extending humans' multitasking capabilities: pilots report that autopilots can be useful in temporarily relieving them of aircraft-control duties when other task demands make their workload extremely high.
  4. Technically possible. Sometimes automated functions exist simply because they CAN exist: the technology is available and inexpensive. This does not always add value for the human user. For example, when we call a service, we often go through a menu that redirects us to a specific help desk. It would save callers time if a person picked up the phone directly, but for the company the menu system is much cheaper than an actual person. According to the authors of the book, automation should focus on supporting system performance and humans' tasks rather than on technical sophistication.

What are the stages and levels of automation?

To explain what automation is, it can be useful to talk about the stages of human information processing that it replaces, and also the amount of cognitive or motor work that automation replaces (the level of automation). There are four stages of automation, with different levels in each stage:

  1. Information acquisition, selection and filtering. Automation is a replacement for many cognitive processes of human selective attention. Examples are spell-checker systems in Word which redline misspelled words. Other, more aggressive examples of stage 1 automation are those that filter or delete information which is assumed to be irrelevant.
  2. Information integration. Automation serves as a replacement for many of the cognitive processes of perception and working memory, in order to provide the operator with a situation assessment, inference, diagnosis, or easy-to-interpret picture. Examples of stage 2 automation are visual graphics configured in a way that makes perceptual data easier to integrate. At higher levels are automatic pattern recognizers, predictor displays, and diagnostic expert systems. Many intelligent warning systems that guide attention (stage 1) also include integration.
  3. Action selection and choice. The stage 2 aids that diagnose a situation are different from aids that recommend a particular course of action. In stage 3, an automated agent assumes a certain set of values for the operator who depends on its advice. For example, think of the airborne traffic alert and collision avoidance system (TCAS), which advises the pilot of a maneuver to avoid colliding with another aircraft.
  4. Control and action execution. Examples of control automation are autopilots in aircraft, cruise control in driving, and robots in industrial processing.

At stages 3 and 4, where the automation selects and executes actions, the choice of the level of automation becomes critically important.

What are possible problems in automation?

As noted, there are shortcomings in automation. However, when discussing the shortcomings, one should not forget the benefits of automation. For example, the ground proximity warning system in aircraft has helped save many lives by alerting pilots to possible crashes.

What is automation reliability?

Automation is reliable when it does what the human operator wants it to do. For human interaction, however, perceived reliability is more important than actual reliability. There are four reasons why automation may be perceived as unreliable:

  1. It is indeed unreliable. This is the case when a component fails or when the design has flaws. Note that automated systems are often complex, consisting of many components, and are therefore more prone to errors in their creation.
  2. There may be situations in which the automation is not designed to operate or in which it does not operate well. All automation is created for a limited range of operation, and using it for other purposes may lower reliability. For example, cruise control is designed to maintain a constant speed on a highway; it does not slow the car when going down a steep hill.
  3. The human operator may incorrectly 'set up' the automation. For example, nurses sometimes make errors when programming systems that allow patients to self-administer periodic doses of painkillers. If a wrong dose is entered, the system will administer it anyway.
  4. Sometimes the automated system does exactly what it is designed to do, but it is so complex that the human operator does not fully understand it. The automation then seems to the operator to be acting erroneously.

According to the authors, it is better to say ‘imperfect’ automation than ‘unreliable automation’, because automation is often used for tasks which are impossible to do perfectly (weather forecasting).

What about trust?

Trust is related to perceived reliability. We trust others when we know that they do what is expected, and the same goes for automated systems. Trust should be well calibrated: our trust in an agent should be in direct proportion to its reliability. As reliability decreases, our trust should go down, and we should be prepared to act ourselves and be receptive to other sources of advice or information. Mistrust refers to trust that is not calibrated to actual reliability.

Studies have shown that human trust in automation is not always well calibrated: sometimes there is distrust (too little trust) and sometimes overtrust (too much trust). As an example of distrust, think of circumstances in which people prefer manual over automatic control, such as automation that enhances perception: people still want to see for themselves. There is also distrust of alarm systems with many false alarms, and distrust can result from a failure to understand the system. The consequences of distrust are not always severe, but distrust leads to inefficiency when it makes people reject the good assistance that automation can offer.

What about overtrust and complacency?

Overtrust is sometimes called 'complacency': people trust the automation more than they should. This can have severe negative consequences; for example, if an airline pilot trusts the automation too much, this can contribute to a crash. Overtrust results from positive experiences: most of the time, automated systems work well, and sometimes there are no failures at all. This does not mean, however, that the automated system is perfect. When people overtrust automated systems, they may stop monitoring them, which is a problem when the system does fail.

There are three distinct implications of this:

  1. Detection. When there is overtrust, the operator will be slower to detect a real failure.
  2. Situation awareness. When people are active participants in a process, they are more aware of its dynamic state and better at selecting and executing actions than when they are passive monitors. When people are distracted, or do not fully understand the system, they are less likely to intervene correctly and appropriately.
  3. Skill loss. When operators become passive monitors, this leads to 'deskilling', or gradual skill loss. This has two consequences: first, the operator becomes less confident in his or her own performance and therefore more likely to keep relying on the automation; second, the skill loss may hinder appropriate action when the system fails.

Another irony is that the circumstances in which automation devices fail, are the same circumstances that are most challenging to humans. This thus means that automation fails, when humans need it the most. However, as humans have learned to trust the automated systems, they might be unable to perform the task themselves.

What about workload and situation awareness?

One goal of automation is to reduce operator workload. For example, an automated device for lane keeping can help reduce the driver's workload. However, sometimes automation reduces workload in situations in which workload was already very low, where loss of arousal, rather than workload, is the real problem. Reduced workload can also mean lower situation awareness: as the level of automation goes up, both workload and situation awareness go down. Thus automation sometimes reduces workload during already low-workload phases and increases it during high-workload phases. This is called clumsy automation: easy tasks become easier, and hard tasks become harder.

What about training and certification?

Automation can also lead to complex tasks being perceived as easy, which in turn can lead to reduced training. For example, on ships there was a lot of misunderstanding about the new radar and collision avoidance systems, which has contributed to accidents.

What about human cooperation?

In nonautomated systems, there are many circumstances in which communications are important. Sometimes this negotiation between humans is eliminated with automation, and this can be frustrating when humans try to interact with an uncaring, automated phone menu.

What about job satisfaction?

This book does not discuss the moral implications of replacing workers by automation. Many operators are highly satisfied with and proud of their job; asking such a person to remain in position merely as a backup in case the automation fails can lead to negative and unpleasant situations.

What about function allocation between the person and automation?

Automation can be designed to avoid problems with operators. This can be done by systematically allocating functions to the human and to the automation, based on the capabilities of each: for example, a function can be allocated depending on whether the automation or the human performs it better. This begins with a task and function analysis. Functions are considered in terms of the demands that they place on the human and the automation, which guides the decision to automate each function. As an example, think of maritime navigation, where the position and velocity of surrounding ships must be determined from radar signals. This function involves complex operations, which automation performs better. In contrast, course selection involves judgement about how to interpret the rules of the road; humans are better at exercising judgement, so this task should be allocated to the human. In Table 2 of the book, you can see what humans are better at compared to what automation is better at. I suggest you take a look at this, since this can be asked on the exam.

What about human-centered automation?

It seems better to think about how automation can support and complement humans, instead of limiting function allocation to one of the two. It is best that the automation design focuses on creating a human-automation partnership by incorporating the principles of human-centered automation. This could mean that the human has more authority over the automation, that a level of human involvement is chosen that leads to the best performance or that the worker’s satisfaction with the workplace is enhanced. There are six human-centered automation features that the authors believe will achieve the goal of maximum harmony between human, system, and automation:

  1. Keeping the human informed. It is important for the operator to be informed about what the automation is doing and why. This can be done via displays. Thus, the pilot should be able to see the amount of thrust delivered by an engine as well as the amount of compensation that the autopilot might have to make to keep the plane flying straight.
  2. Keeping the human trained. Automation often changes the task, and therefore operators should perform more abstract reasoning and judgment. Training for the automation-related demands is needed. Also, in case of failure, the operator’s skills should be as high as possible to avoid problems.
  3. Keep the operator in the loop. This is one of the hardest challenges of human-centered automation. The question is: how can we keep operators in the control loop, so that awareness remains high? It seems that as long as the human maintains some involvement in decision making regarding whether to accept the automation suggestions or not, there are adequate levels of situation awareness. This is true even when workload was reduced.
  4. Selecting appropriate stages and levels when automation is imperfect. Designers have to choose the stage and level of automation to incorporate into a system. To the extent that automation is imperfect, late-stage imperfection is more harmful than early-stage imperfection. When implementing the recommendation for levels and stages of automation for high-risk decisions, it is therefore important to consider the effect of time pressure: if a decision has to be made in a time-critical situation, later stages of automation can usually act faster than human operators.
  5. Make the automation flexible and adaptive. The amount of automation needed for any task varies from person to person and within a person over time. A flexible automation system in which the level can vary is thus preferable over one that is fixed and rigid. Flexible automation means that there are different levels of automation: one driver may choose to use cruise control, the other may not. Adaptive automation is a bit different. In adaptive automation, the level of automation is based on particular characteristics of the environment, user, and task. For example, an adaptive automation system would be one in which the level of automation increases as the workload increases, or as the operator’s capacity decreases (fatigue). For example, when a system detects a high workload, the degree of automation can be increased.
  6. Maintain a positive management philosophy. The management philosophy influences a worker’s acceptance and appreciation of automation. If they feel like automation is ‘imposed’ because it does the job better than that they do, they might have negative attitudes. However, if they see it as a complement on their performance, it will be accepted more.
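Feature 5, adaptive automation, can be sketched as a simple decision rule. The sketch below is a hypothetical illustration, not from the article: the function name, the thresholds, and the assumption that workload and fatigue come as normalized scores are all invented.

```python
# Hypothetical sketch of adaptive automation (feature 5): the level of
# automation rises when measured workload is high or operator capacity
# is low (e.g. due to fatigue). All names and thresholds are invented.

def choose_automation_level(workload: float, fatigue: float) -> int:
    """Return an automation level from 0 (manual) to 3 (fully automated).

    workload and fatigue are assumed to be normalized to [0, 1],
    e.g. derived from physiological or performance measures.
    """
    demand = max(workload, fatigue)  # whichever limits the operator most
    if demand < 0.25:
        return 0  # keep the operator fully in the loop
    elif demand < 0.5:
        return 1  # automation suggests, the human decides
    elif demand < 0.75:
        return 2  # automation acts unless the human vetoes
    else:
        return 3  # automation acts autonomously

print(choose_automation_level(workload=0.1, fatigue=0.1))  # prints 0
print(choose_automation_level(workload=0.9, fatigue=0.2))  # prints 3
```

A real system would of course need validated workload measures; the point is only that the mapping from operator state to automation level is explicit and adjustable.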

What about supervisory control and automation-based complex systems?

Process control

In process control, such as the manufacturing of petrochemicals or the generation of nuclear or conventional energy, the systems are so complex that high levels of automation are needed. The question then becomes how to support supervisors in times of failures and fault management. This can be achieved using interfaces with two important features:

  1. They are highly graphical. They use configural displays which represent the constraints on the system, in ways that these constraints can be easily perceived without heavy computations.
  2. They allow the supervisor to think flexibly at different levels of abstraction, ranging from physical concerns to abstract concerns.

In robotics control, automation is desirable because of the repetitious, fatiguing, and hazardous mechanical operations involved. Here the issue of ‘agile manufacturing’ can emerge, in which manufacturers are able to respond quickly to the need for high-quality customized products. Sometimes, remote operators have to supervise the behaviour of a group, not directly but by ‘encouraging or exhorting’ the desired behavior of the group. This is called ‘hortatory control’, in which the systems being controlled have a high degree of autonomy. An example is road traffic controllers, who try to influence the flow of traffic in the area around a city by informing travellers of current and expected road conditions and encouraging them to take certain actions. The biggest challenge here is to provide the information that is most effective in persuading users to adopt certain behavior.

What can be concluded?

Automation has been beneficial to safety, comfort, and job satisfaction, but it has also led to problems. Therefore, automation should be carefully designed with consideration of the role of the human.

Safety science - Hudson, 2010 - Article


This course will start with a short review of different models within safety science. Let us first examine the history of thinking about the causes of accidents and how they happen.

“Act of God”

A long time ago people used to think that accidents were caused by gods and other spirits. Their strategy was to try and prevent accidents by performing rituals and sacrifices and pleasing these gods and spirits. This kind of thinking comes to life when two events take place very close together and people attribute causal power to the first event. The attribution of this power is based on classical or operant conditioning. People can break through this false attribution by conceptualization and rationalization.

Chain of events

Some people see accidents as the consequence of a chain of events. The simplest models have a single chain, one example being Herbert Heinrich’s Domino Theory. This theory contains five ‘dominos’, each labeled with accident causes. These causes have the following order:

  • Social environment and ancestry. Undesirable personality traits can be passed along through inheritance or develop because of a person’s social environment.

  • Fault of person. These traits can cause character flaws, such as ignorance and recklessness. Heinrich calls these ‘secondary personal defects’. They contribute to the next domino.

  • Unsafe act and/or unsafe condition. The faults of a person can lead to unsafe acts and/or conditions. These are caused by careless people, poorly designed or badly maintained equipment. Think of starting machines without precisely following the safety instructions or checking if it is in a good state.

  • Accident.

  • Injury.

Even though this chain works in a straight line, it can be stopped by reinforcing or taking away certain dominos. An employer can try to eliminate unsafe acts by stronger control and regulation, or provide training to change the faults of the employees.

The Domino Theory and its criticism

First of all, there usually is not one single cause of an accident. Reality contains many different small factors (or dominos) that all contribute (and fall at the same time) and eventually cause the accident to happen. By investigating the process you can see how the different causes are woven together.

Secondly, it may be that an event or condition on its own cannot cause the accident, but only in combination with other events or conditions. None of the lines individually can be seen as the primary cause of the accident.

An alternative interpretation of the Domino Theory states that combinations may or may not become causes with a degree of probability. This is a more realistic interpretation, because it doesn’t state that A and B automatically cause C, but only that they might. Unfortunately this diminishes the causal power of any event or condition and the requirement for linearity.

Latent conditions and non-linear thinking

Real accidents have several causes and arise through a number of events and conditions. An example of such a model is called Tripod. This model has defensive barriers that are in between the hazard and the accident. These barriers can have holes though, and through them the hazard can cause an accident. These holes can be caused by all kinds of factors. This model is also known as the Swiss Cheese model.

Newtonian and Einsteinian universes

With a linear and deterministic model you can look at it from above and predict what will happen. This is also called a Newtonian universe. However, there might be forces at work that slightly influence parts of the models and change the outcomes. This is called a relativistic or Einsteinian universe. This has attractions and repulsions that turn this linear model into a three-dimensional one.

Non-linear and non-deterministic models

All these models ignore the common effects of higher-order causes on lower-order barriers. Cultural factors have serious effects on many levels of organizations, as well as on the immediate defenses. The models should be expanded to allow the holes to be altered by common organizational factors. These factors are probabilities, which are in turn influenced by other higher-order factors.

Causal effects are non-linear and non-deterministic. The only conclusion to be drawn is that the relationships between causes are probabilistic and are themselves being influenced by higher order factors. Sometimes, small variations in the starting conditions can explain the accident.

Common mode failure

Common mode failure means that the failure of one defense may increase the probability that another defense fails as well.
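A small simulation can make this concrete. The sketch below is my own illustration with assumed probabilities, not figures from the article: it compares independent defences, where an accident requires all holes to line up by chance, with a common mode in which one shared factor degrades every defence at once.

```python
# Illustrative sketch (assumed probabilities): with independent defences,
# the chance that all of them fail at once is the product of their
# individual failure probabilities. A common mode failure breaks this:
# one shared factor (e.g. a poor safety culture) raises the failure
# probability of every defence simultaneously, so accidents become far
# more likely than the independence calculation suggests.
import random

random.seed(42)

def accident_rate(trials: int, common_mode: bool) -> float:
    """Fraction of trials in which all three defences fail together."""
    p_fail = 0.1  # baseline failure probability per defence (assumed)
    accidents = 0
    for _ in range(trials):
        if common_mode and random.random() < 0.2:
            p = 0.8  # a shared factor degrades every defence at once
        else:
            p = p_fail
        if all(random.random() < p for _ in range(3)):
            accidents += 1
    return accidents / trials

print(accident_rate(100_000, common_mode=False))  # near 0.1**3 = 0.001
print(accident_rate(100_000, common_mode=True))   # roughly 100x higher
```

With independent defences the accident rate is roughly the product of the individual failure probabilities; the common mode raises it by about two orders of magnitude, which is why common mode failures undermine defence-in-depth reasoning.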

Why keep using old models?

The Fundamental Attribution Error

Under the fundamental attribution error, people attribute the behavior of others to dispositional factors, whereas they attribute their own behavior to external factors. People tend to attribute failures in others to personal weaknesses, while their own failures are allegedly caused by the environment.

This also works the other way around. When a result is good, we tend to give ourselves credit for that. Other people might not think that you personally created that success, and think that you are profiting from situational forces.

Managers and people in charge like to use simple linear models, because they can attribute the causes of accidents to personal failings in certain people. At the same time, they can make it seem that they would not have made that mistake. The models are thus attractive because they make it seem the accident was predictable, even though really that is hindsight.


Hindsight bias is the tendency to exaggerate in hindsight what one knew in foresight. People tend to pretend that they knew all along what was going to happen. They also claim that others should have known it, and the fact that the accident happened is then blamed on those others' incompetence.

The attribution error and hindsight together make it more appealing for managers to choose a linear model of accident causation. The manager believes that the employee knew what was happening, he could do something about it, but was apparently incapable of doing the right thing. This way, the individual gets the blame and the manager goes free. The reality is of course far from this.

Human factors & adverse events - Reason - 1995 - Article


Accidents in certain areas can be catastrophic. Think about accidents in nuclear power generation or air transport. Between the forties and the eighties, this became a main concern of the human factors specialists.

Sparking interest in human factors in the medical profession

After the eighties, several scientists started investigating the reliability of medical provisions. This concern arose on two levels. At the doctor-patient interface (also called the ‘sharp end’), common features include uncertain and dynamic environments, multiple sources of concurrent information, changing and ill-defined goals, actions having immediate and/or multiple consequences, the combination of time stress and routine activities, advanced but limited new technologies, confusing human-machine interfaces, and multiple players with different, interfering goals. There is, however, also a second, organizational level, where these activities are carried out within complex institutional settings and entail many interactions between different professional groups.

There is a growing concern for human factors in health care. Fortunately, the models of accident causation that were developed for other domains, can be applied to the medical domain as well.

Human errors

Since the sixties, the share of accidents with hazardous technologies attributed to human error has increased. A possible explanation is that equipment has become more reliable. Accident investigators have also become more aware that errors are not restricted to the ‘sharp end’. They now realize that human errors cannot all be put in the same group: they can be very different and have very different causes and consequences.

Human errors can be classified based upon their causes or their consequences. Classifications based on consequences describe the error in terms of the proximal actions that contributed to the mishap (e.g. wrong intubation). Classifications based on causes make assumptions about the psychological mechanisms implicated in creating the error.

Slips, lapses and mistakes

An error is defined here as the failure of planned actions to achieve their desired goal. There are two ways in which that can occur. Either the plan is adequate, but the actions do not go as planned; these failures are called slips and lapses. Slips are attentional failures and relate to observable actions; lapses are failures of memory and relate to internal events. Or the actions do go as planned, but the plan itself is not good and does not achieve the intended outcome; these failures of intention are called mistakes. There are rule-based mistakes and knowledge-based mistakes. Rule-based mistakes relate to problems for which the person has some solution, obtained through training, experience, or the availability of appropriate procedures; the mistake can be the misuse of a good rule, the use of a bad rule, or the non-use of a good rule. Knowledge-based mistakes occur in novel situations where the solution to a problem has to be worked out on the spot, without the help of ready-made solutions.

Errors and violations

Violations are defined as deviations from safe operating practices, procedures, standards, or rules. Violations can be divided in three categories. Routine violations entail cutting corners whenever the opportunity arises. Optimising violations are actions that are being taken to further personal goals instead of only task-related goals. Situational violations occur when the violation itself seems to be the only way of reaching the intended goal, and the rules and procedures that are in place in that moment seem to be inappropriate.

Differences between errors and deliberate violations

First of all, errors arise mostly from informational problems, such as forgetting and not paying attention. Violations are more associated with motivational problems, such as a lack of motivation or poor supervision. Secondly, violations occur in a regulated social context. Errors on the other hand occur by what goes on in the mind of the person. Finally, violations can be reduced by motivational and organizational solutions. Errors require quality improvement and the delivery of necessary information.

Active and latent human failures

The difference between active and latent human failures is based on the amount of time that passes before the failure has a negative effect on safety. With active failures, the negative effect is nearly instantaneous; with latent failures, it can take a very long time before the negative effect shows.

Another difference between active and latent human failures is the level in which the failures are made. Active failures are made by those at the ‘sharp end’. They are the people at the human-system interface, and their actions can have immediate negative consequences. Latent failures arise as the result of decisions taken at the higher organizational level. It may take some time before the effects of their decisions become visible, for instance because the effects don’t occur unless in combination with certain factors that arise after a long time.

Stages of development of organizational accidents

The accident starts with the negative consequences of organizational processes (e.g. bad planning, scheduling, designing, communicating). This is how the latent failure is created. It is then transmitted along various organizational pathways to the workplace, where it creates local conditions (for instance a high workload or understaffing) that in turn promote the commission of errors and violations.

Risk management

Risk management usually focuses on introducing new procedures, sanctions, guidelines, and increased automation. There are, however, serious problems with this strategy. People do not make errors intentionally, so it is hard for others to control what the employees cannot control themselves. Even harder to control are the psychological precursors of errors, such as stress and fatigue. Also, accidents are hardly ever the consequence of a single unsafe action; as we have seen, they are the product of many different factors. Finally, the countermeasures can create a false sense of security.

Effective risk management should focus on enhancing human performance in all the levels of the system, and not just on minimizing certain errors.

Team, task, situational and organizational factors

  • Team factors. Improvements in team management and communication can seriously improve human performance. Improving institutional performance is expensive, whereas team performance can be improved much more cheaply and easily through training.

  • Task factors. It is very important to identify and modify tasks that might cause failures.

  • Situational factors. Certain conditions increase error probabilities, such as a high workload, inadequate knowledge, experience, bad interface design, bad instructions or supervision, stress, mental state and change. Conditions that increase violation probabilities are a lack of safety culture, a lack of concern, a poor morale, having norms that condone violation, a can-do-attitude and meaningless/ambiguous rules.

  • Organizational factors. The core of the organization should be checked regularly, to improve proactive accident prevention (instead of reactive little repairs). The health of the organization can be investigated by looking at the organizational factors that played a role during accidents, and by looking at the core processes that are common in all technological organizations.

Human errors and education - Reason - 2000 - Article


There are two approaches when it comes to the human error problem. They differ in their opinion on causation and error management.

Human error approaches

The person approach

The person approach is very similar to what has earlier been called the ‘sharp end’. It focuses on the unsafe acts, errors, and procedural violations of people such as nurses, physicians, and surgeons. These are attributed to aberrant mental processes such as forgetfulness, lack of motivation, and recklessness. Error management is aimed at reducing unwanted variability in human behavior: think of posters, procedures, punishments, and blaming. Errors are often seen as moral issues, where bad things happen to those who are not good at their job. This is sometimes called ‘the just world hypothesis’.

This approach is still dominant in the medical field. There are several reasons why people choose this approach. First of all, it is much more satisfying to blame a person, than to blame an entire institution. Secondly, taking distance from the error and blaming another individual is in the best interest of those in charge.

The person approach has serious shortcomings, however. There are many errors for which no identifiable individual can be blamed. Also, it can be the best people who make mistakes; it is not only the bad, lazy, and unmotivated workers. Finally, work conditions can provoke mistakes: they can be such that it does not matter who is working, because the mistake is nearly inevitable or very likely to be made.

The system approach

This approach does not ‘blame’ the person, but seeks a cause within the system. Error management is based on the assumption that we cannot change humans, but that we can change the conditions they work in. There is a big focus on defenses: when an error occurs, they do not look for which human to blame, but investigate why the defenses were not sufficient.

The Swiss cheese model

The Swiss cheese model shows how defenses can be penetrated by an accident trajectory. The system approach focuses on creating defenses. In reality, these defensive layers are not completely intact and have holes in them. A single hole in one of the defensive layers does not immediately cause an error, but if the holes align and overlap, they create an accident opportunity.

There are two reasons these holes exist. Active failures are the unsafe acts committed by people who are in direct contact with the system; examples are slips, lapses, mistakes, fumbles, and procedural violations. They usually have a direct and short-lived impact on the defenses. The person approach usually does not look any further than this. In the Swiss cheese model, however, there is a second cause of holes in the defenses that therefore helps create the error. Latent conditions are the inevitable ‘resident pathogens’ within the system. They are the consequence of decisions by top-level management and have two kinds of adverse effects: they can create error-provoking conditions in the workplace (e.g. time pressure), and they can create weaknesses in the defenses (e.g. bad material).

These latent conditions can exist for years without causing problems. The error arises when the latent conditions align with active failures. When that happens, however, it is not only the active failures that can be blamed.

Managing errors and high reliability

Error management focuses on two aspects. It tries to prevent dangerous errors and tries to create systems that are better able to deal with errors and their effects. A system has ‘safety health’ when it is able to deal with the operational dangers and still achieve its goals.

High reliability?

Reliability is defined as a dynamic non-event. It is dynamic because safety is guarded by human adjustments. It is a non-event because successful outcomes rarely call attention to themselves.

High reliability organizations are systems that operate in hazardous conditions yet have fewer than their fair share of adverse events. They can adapt themselves to match local circumstances. At their core, they are managed in the conventional hierarchical manner, but in emergency situations control shifts to the employee on the spot.

Complexity theory - Dekker - 2011 - Article


According to complexity theory, performance is the result of complex interactions and relationships; it is seen as an emergent property. This view, however, does not match what stakeholders like to see in accident investigations: they prefer to blame individuals when their system fails. It is this narrow-minded and easy way of thinking that is criticized here.

Newtonian science

The Newtonian way of thinking is very appealing, because it is simple, coherent, seems complete, and is consistent with common sense. Its most famous principle is that of analysis, or reductionism: the entire system can be explained by combining all the separate elements. To find the cause of an error, investigators rely on the defenses-in-depth metaphor, which breaks the system down in a linear way to find the broken layers or parts. The goal is to analyze the basic components and find out where the system is failing.

The following are important aspects of the Newtonian science:

  • Error causation. According to Newtonian science, everything has a definitive and identifiable cause and a definitive effect. Finding what caused a failure is the goal of the accident investigation. It assumes that physical effects have physical causes.

  • Newton also said that the future can be predicted with complete certainty, if its state at any point was known in all details. So if somebody can be shown to have known, or should have known, the initial positions and movement of the components, then this person could have predicted the failure in the future.

  • Newtonian science can also be used to investigate a trajectory backwards. Evolution can be reversed to reconstruct an earlier state.

  • All the laws of the world can be discovered and all knowledge can be gathered. The more facts are collected, the more realistic is the representation of what is being investigated. It is therefore also possible to just have one version of the truth, which will be the ultimate, complete, perfect truth. This is called ‘the true story’. ‘The truest story’ is the one with the smallest gap between external events and internal representations.

Complexity theory

Complex behavior is the consequence of interactions between the components of a system. The focus in this approach lies not on the individual components, but on their interactions. The complexity is therefore not embedded within a particular component, but is generated by the interactions in the system as it reacts to changing conditions in the environment. Because the knowledge of each component is limited and lies within that component, there cannot be a single component with enough capacity to represent the complexity of the entire system.

Complex systems are formed by separated local relationships. Not one of the components knows the behavior of the system as a whole. The components respond locally to the information that they are given. Complexity is created by huge webs of relationships and interactions, vague boundaries and interdependencies.

Errors, failures and accidents happen as a consequence of relationships between components and not because of individual components. It is the interactive complexity of the system which gives rise to conditions that help cause an accident. Think about a slow loss of control or automation leading to carelessness.

Asymmetry and foreseeability

Asymmetry, or non-linearity, means that a tiny change in the starting conditions can lead to massive differences later on. This plays a big role in the blaming debate. Decisions made at a certain time can be completely rational given the circumstances in which they were made, with all the goals, knowledge, and attention of the decision maker. However, the interactive complexity of the system makes it impossible to predict the outcome: the relationship of a single decision to the outcome is complex and non-linear.
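This sensitivity to starting conditions can be demonstrated with the logistic map, a standard toy example of a non-linear system (my choice of illustration; it does not appear in the article).

```python
# Sketch of non-linearity: two starting conditions that differ by one
# part in a billion end up in very different states, even though the
# update rule is simple and fully deterministic.

def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)  # deterministic non-linear update
    return x

a = logistic_trajectory(0.200000000, 60)
b = logistic_trajectory(0.200000001, 60)  # differs by 1e-9 at the start
print(abs(a - b))  # the tiny initial gap has grown enormously
```

Both runs use exactly the same rule; only the ninth decimal of the starting value differs, yet after a few dozen steps the trajectories no longer resemble each other. Predicting the outcome from the decision point is impossible in practice.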

Irreversibility and incompleteness of knowledge

Unlike the Newtonian approach, complexity theory holds that it is not possible to reconstruct the past, because the system after the failure is not the same as the system before the failure. Complex systems are constantly changing because of evolving relationships; they constantly have to change with their changing environment. The ‘causes’ of the failure are embedded in many relationships, unwritten routines, implicit expectations, and professional judgements.

Also unlike Newtonian science, complexity theory holds that we can never obtain all the knowledge and arrive at one ‘truth’, because the observer creates the truth based on the inputs he receives. Different observers will interpret these inputs differently, or might even notice completely different inputs. It is impossible to determine whose view is right.

How to analyze failures?

The writers introduce a post-Newtonian analysis of failure in complex systems. An investigator should try to gather as much information as he can, even though he could never gather all the information possible. This also means that the investigator cannot uncover or discover the one truth. Finally, the investigator should be sure that any conclusion can be revised at any time, if it may become clear that it has flaws.

Executive functions and frontal lobe tasks - Miyake et al., - 2000 - Article



People with frontal lobe damage seem to have serious problems controlling their behavior and have problems functioning on a daily basis. They seem to have impairments on several complex frontal lobe tasks. Executive functions (or frontal lobe tasks) are general-purpose control mechanisms that modulate the operation of various cognitive sub-processes and thereby regulate the dynamics of human cognition. Unfortunately, there is not yet a decent theory that can explain how these are organized and what their role is in complex cognition. The scientific world lacks research on how specific cognitive processes are being controlled and coordinated during the performance of complex cognitive tasks.

In this research the focus is on three executive functions, namely shifting between mental sets, monitoring and updating of working memory representations, and inhibition of prepotent responses.

Baddeley’s multicomponent model of working memory

Baddeley’s model has three components. One component is specialized in maintaining speech-based, phonological information, also called the phonological loop. A second component maintains visual and spatial information, also called the visuospatial sketchpad. The third component is the central executive, which controls and regulates cognitive processes. It is this third component which is often linked to frontal lobe functioning.

This research

This study examines to what extent different functions that are often attributed to the frontal lobes or to the central executive can be considered unitary, in the sense that they reflect the same underlying mechanism or ability. Many correlational, factor-analytic studies theorize about the organization of executive functions, but these studies suffer from serious problems. A newer and more promising approach is latent variable analysis.

This research

The focus lies on these three executive functions, and the extent of their unity or diversity will be examined at the level of latent variables. The researchers statistically extract what the tasks tapping each function have in common, in order to capture a putative underlying executive function, and then use these latent variable factors to examine how the different executive functions relate to one another.

The study is meant to provide a stronger assessment of the relationships between the three frontal lobe tasks. The two goals are specifying the extent to which the three executive functions are unitary or separable, and specifying their relative contributions to more complex tests that are often used to evaluate executive functioning.

Shifting between tasks or mental sets

This function concerns shifting back and forth between multiple tasks, operations or mental sets. A very simplistic interpretation of this function is the disengagement of an irrelevant task and changing to active engagement of a relevant task. This function will be referred to as “shifting”.

Updating and monitoring of working memory representations

This function has been chosen due to its supposed association with the working memory and therefore its connection to the prefrontal cortex. The function concerns monitoring and coding incoming information for relevance to the task at hand, and then correctly revising the items that are being held in the working memory by replacing the old and no longer relevant information with newer, more relevant information. It is not passively storing information, but actively manipulating relevant information in the working memory. This function will be referred to as “updating”.

Inhibition of dominant or prepotent responses

This function concerns someone’s ability to deliberately inhibit dominant, automatic or prepotent responses when necessary.


First goal: the extent to which the three executive functions are unitary or separable

The results are very clear: the full three-factor model, in which the correlations among the three executive functions were allowed to vary freely, fit the data significantly better than any other model that assumed total unity among two or all three of the latent variables. However, the functions do have something in common, because they are not completely independent. They are separable, but also moderately correlated, constructs. This result points to both unity and diversity.

Second goal: specifying the relative contributions of the frontal lobe tasks to more complex tests that are often used to evaluate executive functioning

The results for the second goal are less clear. The complex frontal lobe tests that are often used in scientific research are not completely homogeneous, in the sense that the three executive functions contribute differentially to performance on these tests.


Both unity and diversity of frontal lobe tasks should be taken into consideration when developing a theory of executive functions. The results have shown that the tasks are separable, but they also share some underlying commonality. What that commonality might be has to be focused upon in future research. Here, two possible explanations are given.

Common task requirements

It is possible that the different tasks shared some common task requirements, for instance the maintenance of goal and context information in working memory.

Inhibitory operation processes

A second possibility is that the tasks all involve some sort of inhibitory processes to operate properly. In one task for instance, one has to ignore irrelevant incoming information. In another task, one has to forget information which is no longer relevant. Even though these are not conceptually the same, this type of inhibition may be related to more controlled and deliberate inhibition of the prepotent responses. This could possibly explain the moderate correlations between the tasks. Further research is necessary to find out what this inhibition really entails.

Dopamine and working memory - Cools - 2008 - Article


Dopamine plays a role in cognitive processes, including working memory. Working memory refers to the active maintenance and manipulation of information over a short interval of time. It has often been associated with the prefrontal cortex.

Research suggests an inverted U-shaped relationship between dopamine and working memory capacity: both low and high levels of dopamine in the prefrontal cortex impair working memory. Working memory capacity is measured here with the listening span test.
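The inverted U can be sketched as a toy quadratic model (purely illustrative; the optimum level, units and scaling below are invented for the sketch, not taken from the article):

```python
def wm_performance(dopamine_level, optimum=5.0, scale=1.0):
    """Toy inverted-U: working memory performance peaks at an optimal
    dopamine level and falls off on both sides (illustrative numbers only)."""
    return 100.0 - scale * (dopamine_level - optimum) ** 2

# Both too little and too much dopamine yield worse performance than the optimum.
performance_low = wm_performance(1.0)
performance_opt = wm_performance(5.0)
performance_high = wm_performance(9.0)
```

Under this sketch, performance at the hypothetical optimum (5.0) exceeds performance at both the low (1.0) and high (9.0) levels, mirroring the claim that both dopamine extremes impair working memory.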


The results confirm that working memory capacity, as measured with the listening span test, is associated with dopamine synthesis capacity in the striatum. A significant positive correlation was found between dopamine synthesis capacity in the left caudate nucleus and listening span. This means that people with a low working memory capacity have low dopamine synthesis rates, and that people with a high working memory capacity have high dopamine synthesis rates.

The results support the hypothesis that the dependency of dopaminergic drug effects on baseline working memory capacity reflects differential baseline levels of dopamine function.

Cognitive training decreases motor vehicle collision involvement of older drivers - Ball et al. - 2010 - Article



Older people get into car accidents more often. Research has exposed several risk factors for the increased accidents among older drivers, such as age, being male, bad vision, decreased processing speed, decline in physical abilities, and dementia-related cognitive impairments. Cognitive training can improve the cognitive abilities of older adults and prevent accidents.

Research has shown a strong relationship between processing speed and car accidents among older adults. Because of this it is hypothesized that cognitive speed of processing training will cause a decreased rate of car accidents.

A relationship has also been established between driving outcomes and cognitive reasoning and memory performance. For this reason, training in these areas might also decrease the amount of car accidents among older people.


The results show that cognitive training improves cognitive function of older adults for up to five years. These results were found for processing speed, and reasoning and memory training. This means that improvements in cognitive abilities through cognitive training transfer to everyday functioning.


Even though there is transfer to everyday functioning, the effects were modest and did not emerge until five years after the intervention. One possible explanation is that there is a lag between cognitive decline and decline in everyday functioning: because of the training, participants maintained their cognitive abilities and therefore showed no large decline in everyday functioning. Another explanation is that participants with suspected cognitive decline were excluded from the training, so the participants who did train started with better cognitive abilities, which delayed the onset of decline in functional ability.

Performance of bookkeeping tasks: Computerized cognitive training programs - Lampit et al. - 2014 - Article


One method for improving cognitive performance is computerized cognitive training. In this chapter the researchers use a classic example of a mid-level skilled occupational task (in this case, bookkeeping) to explore the effectiveness of a cognitive training program on work-related task performance. By effectiveness they mean the speed and accuracy of work-related task productivity.


The hypothesis was confirmed: the computerized cognitive training improved effectiveness. The observed effects were caused by the training and cannot be explained by test-retest or Hawthorne effects. Also, because performance in the study is very likely similar to the participants’ actual bookkeeping skills, transfer of the learnt skills is very likely. Because computerized cognitive training programs can increase workforce productivity, it is important that this study be repeated in the workplace environment. Improving workforce cognition across the lifespan is very important for maintaining economic prosperity.

Even though the relationship has been established, the underlying working mechanisms stay unclear.

The popular policeman and other cases: Causal Reasoning - Wagenaar & Crombag - 2005 - Article


Causes can never be seen; they can only be inferred. A causal explanation is always a matter of interpretation of events. Every event has a cause; there is no such thing as an uncaused event. However, there are events for which we cannot think of a cause. People first look for a probable cause inside the causal field that they think is the appropriate one (such as animal behavior or politics). If we cannot find an acceptable cause within that field, it might lie in another field that we would not immediately choose. The real cause then has such a low subjective probability that we call it a coincidence.

Important points

  • Every event has a cause. Coincidences are events for which we cannot think of a probable cause.

  • The more an event meets our expectations, the less we worry about its cause.

  • Causal explanation critically depends on our presumed knowledge of the physical and social world.

  • Causal reasoning often proceeds regressively: we tend to reason backwards, from effects to probable causes. We usually see an event, and then try to find out what caused it.

The causal field of intuitive mechanics

Major-event major-cause heuristic

Albert Michotte investigated the causal field of intuitive mechanics. The core of the intuitive theory of mechanics is that big effects must have big causes: in the minds of witnesses, bigger effects require bigger causes.

Robert Zajonc

He argued that people observing an event first have a nearly instant emotional evaluation of it, and only then do they reason about it. Affective judgements are fairly independent of (and happen before) the sorts of perceptions and cognitive operations that are commonly assumed to underlie them. Attribution of blame is one of these affective judgements.

‘Cues to causality’

There are five ‘cues to causality’. These are criteria that, if met in a given situation, will lead people to choose something as a cause of an event. The five cues are the following:

  • Precedence (a chosen cause precedes the effect which it causes).

  • Covariation (causes and effects occur together).

  • Contiguity in time and space.

  • Congruity (causes and effects are similar in magnitude and duration).

  • Few alternative explanations are available.

Multiple causation

Events may not be the result of one single cause; they can result from a coincidence of several causes that come together under certain conditions. This is often the case with accidents. Accidents differ from premeditated crimes in that they are usually characterized by a complex interaction of different causes.

Even though a single individual may not be responsible for one entire event, he can still be held responsible for his own actions. Three criteria help determine whether or not a person is guilty, and of what:

  • Fatal blow criterion (or causa proxima). This criterion is related to the time order of events: the last contribution, the one that delivers the fatal blow, is the decisive cause, and it is the one for which the responsible person should be held liable.

  • Discarded insight. The individual who had the most complete insight into the causal forces, but still continued to contribute to the event, has to be held responsible.

  • Culpability. This holds responsible any person in the event who had a clear intention to do harm, even when he is not the sole cause.

Intelligence gathering post 9/11 - Loftus - 2011 - Article


Information can be gathered for intelligence purposes by interviewing many different individuals. Not all of these individuals want to cooperate, though; think of suspects and prisoners. But information can also be gathered from other individuals. While gathering information, investigators need to be aware of memory distortion and interrogation influences, and they need to be able to detect deception.

Interviews and interrogations

At the end of the nineties a distinction was made between interviews and interrogations. Interviews are usually nonaccusatory. The investigator needs to evaluate the accuracy and completeness of the stories. Interrogations are more coercive and can use strategies such as confrontation and minimization. Here, the investigator needs to be aware not to lead to false confessions or erroneous inferences about lying and truth telling.

Three important areas of research that can help maximize the accurate information and minimize inaccurate information are memory distortion, false confessions, and detecting deception.

Memory distortion

After people have experienced an event, they are often exposed to new information. This information can supplement or alter their memory, leading to errors when they try to report accurately what happened. Typical experiments have shown that misinformation can cause very large deficits in memory, such as seeing non-existent items and recalling incorrect details.

Some important questions, such as when people are especially prone to being influenced by misinformation, and if we are all susceptible to misinformation, can be answered much better based upon recent research.

Factors that influence the power of misinformation

  • People are more vulnerable to the influence of misinformation as time passes. The more time there is between the event and the misinformation, the higher the chance that the misinformation will be incorporated into the memory.

  • Also important is the method by which the misinformation is delivered. People are more likely to pick up the information if they get it from another person.

  • Young children and the elderly are more susceptible to misinformation.

  • People with dissociative experiences are more susceptible to misinformation, because they distrust their own memories.

The cognitive interview

The cognitive interview was developed in the mid-eighties. It incorporates different techniques derived from basic principles of cognitive and social psychology, and it is supposed to help getting better information about past experiences. This type of interview can bring out a lot more information than more conventional strategies.

To keep in mind during an interview or interrogation

People are constantly exposed to new information after the event has passed. The researcher should look for instances in which this exposure may have influenced the individual’s memory. Especially when the individual doesn’t have a good memory of the event, it is important to remember that that person is more susceptible to misinformation.

If a person makes a claim it is important to explore possible sources of suggestion. Think of the media, films, interrogations, and even self-generated misinformation.

Another important point to keep in mind is that confidence is irrelevant. Even if a person is very detailed and absolutely sure of his story, this does not make it true.

Finally, it is useful to be aware that manuals exist for training police in how to gather information.

False confessions

One paradigm to investigate false confessions is the cheating paradigm. Here, the participants are accused of giving help to another person who is solving a problem, after a clear instruction that the two must not work together. Many participants eventually confess falsely. Another paradigm to study false confessions used tampered video evidence to make people admit to an act they didn’t commit.

There are many different reasons as to why someone would confess to something they didn’t do. Some people confess for attention, or to protect someone. Also, bluffing can increase the likelihood of a false confession. In a coerced-compliant false confession, a person confesses even though they know that they didn’t do it, but they think it will lead to less negative outcomes than not confessing. In a coerced-internalized false confession a person confesses after false evidence is supplied (like saying someone failed the polygraph).

Vulnerable groups

Especially children, juveniles, and the mentally challenged are vulnerable to making false confessions.


One important paper suggested that all interrogations should be videotaped. This way, potential suggestion or coercion can be documented.


There are several misconceptions about false confessions: that they do not happen very often, that only vulnerable people falsely confess, that the study of police interrogation is still in its infancy, and that suspects are sufficiently protected by their rights.

How to verify a confession

Confessions should always be verified, and several things contribute to this verification. First, the conditions under which the confession was made have to be recorded (e.g. was there coercion?). Second, the details in the confession need to be compared to what is known about the event; the confession is more valuable if the person supplies details that only he could have known and that have not been reported elsewhere. Finally, it needs to be investigated whether there were conditions that could have made the person falsely confess (e.g. fatigue, isolation, false evidence).

Detecting deception

Results from experimental studies have shown that many people can tell an untrue story without showing any of the supposedly obvious clues to lying, such as gaze aversion or fidgeting. The popular belief that such cues reveal deception can have very negative consequences for cultural and ethnic groups that engage more in gaze aversion in their everyday lives.


Several strategies can be used to detect deception:

  • By using a particular interviewing approach, such as the information-gathering style of interviewing, witnesses are asked open-ended questions. The focus is on gathering information, and not on accusing the witness. This is a very good approach, because it gives the investigator a lot of information which can be compared to the other data.

  • Another good approach is to ask unexpected questions, such as “Who finished their dinner first?”. Liars will more often come up with an answer, because they fear that if they do not know the answer, they look guilty.

  • Withholding event facts from a suspect can be used to trap the suspect in inconsistencies (e.g. not telling the suspect that his fingerprints were found until after he has claimed never to have been at the crime scene).

  • Increasing the intensity of the interview can make lying more difficult, because lying takes a lot of cognitive effort.


It has become public that coercive interrogation techniques are often used on individuals suspected of terrorism. These are sometimes called enhanced interrogation techniques, and include the repeated induction of shock, stress and anxiety, and torture. However, there is a lack of evidence that these methods actually work and reveal information that would otherwise not have been revealed; they may even do the opposite of what they are intended to do.

Cognitive performance, lifestyle and aging - Sternberg et al. - 2013 - Article



Understanding which factors influence cognitive functioning has implications for health and policy. Laboratory experiments usually include only a small number of participants, but to be able to say something about the entire population it is necessary to recruit a large number of participants across a wide range of demographic backgrounds.

Lumosity is a web-based cognitive training platform, and its records form the largest dataset of human cognitive performance. On this platform users can do cognitive training exercises and assessments; they can also voluntarily provide demographic data and participate in surveys about health and lifestyle.

Two questions are examined using this dataset. First, the researchers search for relationships between lifestyle factors and cognitive performance. Second, they examine how learning ability for different types of cognitive tasks changes with age, and how these age-related changes differ between tasks that depend on different cognitive abilities.

Lifestyle and cognitive performance

The focus here lies on two very important lifestyle habits, namely sleep and alcohol consumption. Regarding sleeping habits, about seven hours of sleep is associated with the highest cognitive performance. Regarding alcohol consumption, the results differ from a previous study, which found that alcohol intake reduces the likelihood of poor cognitive function; this was not replicated here at higher levels of consumption.

Aging and cognitive performance

Little is known about how the ability to learn different kinds of skills changes over the lifespan. This is where Lumosity is very handy, because the users are very interested in the cognitive training and can train as often as they would like over the course of months or years. To give a first look into this relationship, they examined how a user’s age influences how much he or she improves over the course of the first twenty-five sessions of a cognitive task. Then they compared tasks that rely on abilities linked to fluid intelligence to those that rely more on crystallized knowledge.

The results show that performance decreased in all exercises with increasing age, but to a greater extent for exercises that rely on fluid intelligence than for those that rely on crystallized knowledge.

Insights into the ageing mind: a view from cognitive neuroscience - Hedden & Gabrieli - 2004 - Article



Cross-sectional and longitudinal studies find robust declines in abilities such as encoding new memories of episodes or facts, working memory and information processing speed. By contrast, short-term memory, autobiographical memory, semantic knowledge and emotional processing remain relatively stable.

Behavioral research

Life-long declines. Processing speed, working memory and episodic memory showed linear life-long declines with little or no evidence for accelerated decline in the later decades. When performance is plotted as a function of time to mortality, there is an acceleration of cognitive decline that begins 3–6 years before death.

Late-life declines. Most of the adult lifespan is characterized by slight declines in well-practiced tasks or tasks that involve knowledge (such as vocabulary and semantic knowledge), with sharper declines observed after the age of 70. One possibility is that older adults use preserved knowledge and experience to form more efficient or effective strategies for tasks in which younger adults rely on processing ability.

Life-long stability. Autobiographical memory, emotional processing (including the attribution of mental states to other individuals, 'theory of mind') and automatic memory processes seem to remain unchanged throughout life. Automatic feelings of familiarity continue to be relied on even when effortful recollection fails with age.

Age-related neural changes

The brains of older adults tend to have lower volumes of grey matter than the brains of younger adults, seemingly due to lower synaptic density, which declines steadily over time. The PFC and medial temporal structures are particularly affected, while the occipital cortex remains relatively unaffected.

Normal and pathological aging. Normal aging involves changes in the frontostriatal system, with decreases in dopamine, noradrenaline and serotonin, and declines in the volume and function of the PFC. Pathological aging involves changes that occur primarily with pathology associated with Alzheimer’s disease, beginning with a loss of volume in the entorhinal cortex, an important relay between the hippocampus and association cortices, and progressively affecting the hippocampus proper.

PFC and striatal circuits. Structures of the PFC undergo the largest age-related volumetric changes in adulthood (declining about 5% per decade after age 20). Declines have also been found in the human striatum (about 3% per decade), an area that has extensive connections to the PFC and is responsible for a large proportion of dopamine production, and that might therefore affect cognitive processes subserved by dopamine-dependent circuits.

Other declines with age: dopamine concentration, dopamine transporter availability, dopamine D2 receptor density and serotonin (5-HT2) receptor density.

PET and fMRI studies show that older adults tend to exhibit less PFC activity during executive processing tasks than do younger adults.

Hippocampus and medial temporal lobes (MTL). These structures are important for declarative memory and show relatively slight age-related changes in the absence of Alzheimer’s disease. Hippocampal volume declines are less apparent during normal aging, although declines in functional activations of the hippocampus and surrounding cortex have been observed in healthy older adults. By contrast, pathological processes, such as those that accompany Alzheimer’s disease, severely affect hippocampal regions. Another memory-related structure in the MTL, the amygdala, is less active in older adults than in younger adults in response to emotionally negative stimuli, but exhibits similar activity across age groups in response to emotionally positive stimuli.

Individual variability. Individual differences might include different life experiences, genetic influences, preferred strategies and susceptibility to neuropathology, and variability within individuals across tasks might change with age. Previous longitudinal studies have found remarkable stability before the age of 60, with increased variability occurring only in later life.

Some elderly people perform as well as or better than younger people. Neuroimaging studies have found that elderly individuals often show greater functional activation of brain regions (usually in the PFC) that are less active in younger adults, and that such additional activations are often seen only in high-performing older adults. These individuals seem to show neural compensation, whereas their low-functioning counterparts, who experience failures of inhibition, show decreased activations or non-selective recruitment.

Potential ways to protect against accelerated cognitive decline in older adults:

  1. Stay intellectually engaged

  2. Stay physically active

  3. Minimize chronic stressors

  4. Maintain a brain-healthy diet (high in poly- and mono-unsaturated fatty acids, vitamin E and polyphenols and antioxidants).


Many questions remain unanswered:

  1. Are age-related declines due to normal or pathological processes?

  2. Do normal age-related differences occur throughout adulthood, or only after some critical age?

  3. To what extent does individual variability in behavioral, genetic and neurobiological markers of cognitive ageing reflect normal and pathological ageing?

  4. What neural mechanisms do age-related differences in anatomy and functional activations represent?

  5. To what extent are strategy changes in older adults responsible for, or a response to, neural changes?

Explaining Neurocognitive Aging: Is One Factor Enough? - Band et al. - 2002 - Article


Because of developments in the literature on ageing, the classic distinction between generalized and process-specific cognitive changes with old age has changed into a distinction between the frontal lobe hypothesis and more differentiated views of neurocognitive ageing.

There is a very heterogeneous pattern of age effects. The cognitive changes that are caused by old age vary in frequency, direction and extent.

The literature on ageing has seen some big changes, caused by a revolution in neuroscience, increased methodological sophistication and new perspectives on cognitive ageing. Three of these new distinctions are discussed below.

Computation versus control processes

This is the distinction between attributing age effects to computational versus control processes. Performing well on a task requires a combination of low-level processes, supportive processes, and higher-level control processes. Age effects on low-level computational processes would affect performance in only one task domain, while age effects on executive control processes affect performance in almost all domains. As soon as a task becomes more difficult, deficient control processes become visible. It is therefore possible that executive control, perhaps together with functional limitations of working memory, is the underlying mechanism that explains age-related performance decrease. Most results indeed show that older adults have a control deficiency rather than a computational deficiency.

Behavior versus brain

For the same task, older brains can use a different balance between control processes and subordinate processes than the brains of younger adults. They might do this to compensate for some loss, by raising the pressure at the processing level or by stronger recruitment of frontal coordination.

Qualitative versus quantitative changes

It is becoming less common to think of age effects purely as quantitative changes in an invariant cognitive network; the changing efficacy instead arises from several qualitative differences. Speed in older adults is lower, but accuracy is higher; episodic memory in information processing is exchanged for semantic memory; and the loss of prospective memory is compensated for by the use of mnemonics. Younger adults may have brains that can endure more and are more flexible, but the brains of older adults have a lot of experience and work more accurately.

Old and new models

There is a consensus that there is a dominant age effect across processes that is bigger than local effects. Because of this, it is now standard to examine age-related effects only to the extent that they exceed the general effect. To find these specific age-related effects, the a priori differences between age groups must be nullified. There are a few ways to do this: by separating out general effects in a covariance model, by equating task difficulty between subjects, or by transforming the data in such a way that multiplicative effects turn into additive effects.
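The last of these transformations is typically a logarithm: if older adults' reaction times are a constant multiple of younger adults' times (a multiplicative effect), taking logs turns that factor into a constant additive offset. A minimal sketch with invented numbers (the 1.5 slowing factor and the RT values are assumptions for illustration):

```python
import math

# Hypothetical multiplicative slowing: older RTs are 1.5x younger RTs.
young_rts = [400.0, 550.0, 700.0]              # ms, invented values
old_rts = [1.5 * rt for rt in young_rts]

# In log space the group difference becomes a constant additive offset:
# log(1.5 * rt) - log(rt) == log(1.5), regardless of task difficulty.
offsets = [math.log(o) - math.log(y) for o, y in zip(old_rts, young_rts)]
```

Because the offset is the same for every task, an additive (e.g. ANOVA-style) model in log space can then separate the general slowing factor from any task-specific age effects.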

Newer models see frontally mediated executive control as the general mechanism that causes changes across cognitive domains.

The frontal lobe hypothesis

This hypothesis states that, because neural tissue in the prefrontal cortex declines differentially in old age, cognitive functions supported by these areas are more susceptible to age effects than functions that rely more on posterior and subcortical areas. However, a direct link between changes at the cell level and changes in performance has not been found, and to really qualify as a mechanism for cognitive ageing, the biological changes have to be reflected in functional decrements. Research so far has not been very promising: it has often been misleading, lacked construct validity, and produced little more than some weak correlations.

Executive control

The most important functions of the prefrontal cortex include exerting executive control and adaptively supporting the content of working memory. Many believe that executive control plays the main role in cognitive ageing. However, executive control comprises many different skills, and these functions are not all completely intertwined or equally sensitive to ageing effects.


Not all cognitive ageing is caused by executive problems, and not all executive functions are responsible for cognitive ageing. Executive control depends on the functioning of the subordinate systems: if one of these is not intact, there will be more pressure on coordination, and the chance that coordination fails increases. Even if control sometimes fails in older adults because of difficulties in maintaining a rule, this can still be compatible with the idea that the control mechanism itself is intact.

Aging, executive control, and attention - Verhaeghen & Cerella - 2002 - Article



It has long been known that the reaction times of older adults can be well described by a linear transformation of the reaction times of younger adults. This effect is called general slowing and is identifiable as the speed-of-processing deficit. General slowing has consequences for conclusions regarding the age sensitivity of specific processes.
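The linear-transformation claim can be illustrated with an ordinary least-squares fit of older adults' reaction times onto younger adults' reaction times (a Brinley-plot-style analysis); the condition means below are invented purely for the sketch.

```python
# Hypothetical condition means (ms); old = 1.4 * young - 10 by construction.
young = [350.0, 500.0, 700.0, 950.0]
old = [480.0, 690.0, 970.0, 1320.0]

# Ordinary least squares for RT_old = a + b * RT_young.
n = len(young)
my = sum(young) / n
mo = sum(old) / n
b = sum((y - my) * (o - mo) for y, o in zip(young, old)) / sum(
    (y - my) ** 2 for y in young
)
a = mo - b * my

print(f"RT_old = {a:.1f} + {b:.2f} * RT_young")
```

A slope greater than 1 captures general slowing; a specific age-sensitive process would show up as conditions departing from this single line.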


The results of a series of meta-analyses are reviewed, examining age-related differences in selective attention (a Stroop-task survey and a negative-priming-task survey) and in divided attention (a dual-task survey and a task-switching survey). All four task families lent themselves to state-trace analysis, in which performance in baseline conditions was contrasted with performance in experimental conditions, separately for college-aged and elderly subjects.


Reviewing the five meta-analyses, a pattern in the age outcomes is apparent. At a concrete level, specific deficits did not emerge in tasks that involved actively selecting relevant information, such as determining the ink color of words (Stroop), actively ignoring or inhibiting a stimulus (negative priming), or relinquishing attention from one aspect of the stimulus to reattach it to a different aspect (local task-switching). The selection requirement (Stroop) inflated central processing, but the degree of inflation was not greater in older adults than in younger adults.

Age deficits were found for dual-task performance and global task-switching. Unlike selective attention and local task-switching costs, dual-task and global task-switching costs were found to be additive in both young and old subjects, unmodulated by task difficulty. The switching added one or more additional processing stages to the processing stream. The cost was greater in older adults, but was limited to those experimental conditions that activated multiple task sets.

Maintaining Older Brain Functionality - Ballesteros et al - 2015 - Article


There are more and more older people in today's societies, and elderly people suffer from cognitive decline. It is beneficial for governments and businesses to prevent this, because cognitive decline creates costs and risks. Brain plasticity, and the role it plays in the brain's adaptation to ageing, is influenced by comorbidities, environmental factors, personality traits, and genetic and epigenetic factors. The ways in which age-related cognitive decline can be prevented are reviewed here.


The brains of older people still adapt to physical, cognitive and social environments, while at the same time dealing with a decline in sensory-motor and cognitive abilities. Neuroplasticity is the brain's response to decreasing physical and cognitive abilities in ageing: the ability of the brain to adapt to environmental change by modifying neural connectivity and brain function. In ageing, neuroplasticity does not refer to improvements, but to preservation, or a reduction in the rate of decline.

Theories of ageing on neuroplasticity

Ageing should not be seen as a declining process that eventually leads to cognitive deficits. Instead, it should be seen as a dynamic set of gains and losses. The brain has a certain plasticity and adapts to the losses and makes up for it with certain gains.

Scaffolding theory

The scaffolding theory of ageing and cognition states that the increased frontal activity with age is an indicator of brain adaptation. It is a reaction of the brain to some of the declines that are happening. Through training, ‘scaffolding’ can be strengthened. There is also a STAC-revisited theory, which incorporates lifestyle factors. It introduces two new constructs. Neural resource enrichment takes into account the influences that improve brain structure or functioning (e.g. intellectual and social activities). Neural resource depletion refers to negative influences on the brain structure or functioning (e.g. smoking, diabetes).

Cognitive reserve theory

The cognitive reserve hypothesis explains why individuals engaged in higher levels of mental and physical activity are at a lower risk for developing dementia. Cognitive reserve refers to a shift of task processing. The brain starts using a different part of the brain which is normally not used for that task.

Social engagement theory

Social engagement refers to participation in social activities that maintain and create social ties in real-life activities and reinforce meaningful social roles. It is about the person making social and emotional connections. The model includes 'upstream factors' (culture, socioeconomic and political factors, and the structure of social network ties, their density and direction) and 'downstream factors' (social support, access to resources, social influence, health behaviors, and psychological and physiological pathways).

A lot of research shows how important social networking and 'belonging' are for brain development. Wanting to belong gives rise to a set of neuronal networks that are genetically transmitted as a specific phenotype of social cognition. Social cognition is very important for social functioning and communication.

How to improve cognition

Research shows that training aimed at physical activity, cognitive training, and social engagement in older adults is effective when it comes to improving performance on the trained task. However, these effects have only been shown immediately after training. More research is needed to see whether the improvements transfer to untrained tasks and whether they last over time.

Important results

Looking at the different kinds of interventions, the following can be considered to be important results:

  • With regards to exercise training, such as aerobics, the results indicate improvements in cognitive functioning, the ageing brain's functional efficiency in cognitive networks, memory processing, executive control, controlled processing and processing speed.

  • With regards to complex activities that include a physical component, such as dancing, the results show improvements in cognitive performance, posture, balance parameters, reaction time, and working memory. This kind of training can also help preserve cognitive, motor and perceptual abilities.

  • With regards to physical activity in a sportive environment, such as martial arts, the results indicate improvements in physical function, postural control, visual-spatial attention, dynamic visual acuity, working memory and information processing speed. They also indicate a reduced risk of falls, depression and anxiety.

  • With regards to computerized brain training the results are mixed. There is evidence showing transfer, and evidence showing a lack of transfer. Memory-training interventions can improve long-term episodic memory in healthy older people, but more research is necessary to investigate transfer of learnt skills.

  • With regards to videogame training the results are mixed. There seem to be small improvements in some cognitive domains, but no changes in executive functions. More research needs to be done to investigate transfer and maintainability.

Human Factors and the Older Adult: Professional Diversity Brings Success - Fiske - 1998 - Article


There are more and more older people, and fewer and fewer younger people. This causes many challenges in the public and private sectors. Human factors and ergonomics is a field well suited to help solve these age-related problems in safety, mobility and well-being.

One of the main goals of design should be to enhance the daily lives of older individuals. Independence is what older people try to maintain. Task analysis can be used to examine which problems older people encounter, why these limitations exist, and how they can be dealt with.

Research on human factors and older adults

A previous study showed that more than half of the problems that older people reported could be improved by human factors intervention. Also, almost all of the cognitive challenges could be improved by human factors intervention. For skills not involving complex devices, training alone seemed to be a good approach. For motor or sensory difficulties, redesign seemed best. For learning more complex devices, such as computers, combined redesign and training seemed most appropriate.

Safety and mobility

Transportation problems greatly limit the activities of daily living for older people. They often have problems with bus steps, escalators, and not knowing their way around. A lot of research has already been done, for instance on visual attention, eye diseases and environmental design, and some very important and useful results have been found. For example, one study showed that highway environments can be modified through human factors to suit the diverse populations of drivers that use the highway. Another example is the development of training programs that are able to reverse the decrease of the useful field of view.

Movement control and computers

Research indicates that as people get older, their movement control gets worse. This seems to be especially the case for computer use, in particular controlling the computer mouse. Human factors can contribute to a solution by designing easier-to-use interfaces.

The future

Next steps for the field of human factors:

  • More research to gather information about the needs of older adults and the problems they have when interacting with products, devices, and other systems.

  • More data on specific aspects or demands of tasks that are problematic for older adults.

  • More research that specifies capabilities and limitations of older adults in terms of implications for system design parameters.

  • A principled approach to technology evaluation from the perspective of the older adult.

  • Specification of design of training programs to ensure that older adults can get the necessary skills to use systems.

Improving the safety of aging road users: A mini-review - Boot et al. - 2013 - Article


Many countries are dealing with population ageing. Older people have a higher chance of getting into a car accident, and they often have difficulty turning across opposing traffic, driving at night, detecting road hazards, and seeing and reading signs. However, driving cessation is not an acceptable solution for several reasons: it places a burden on family and caregivers, and increases the risk of depression, isolation, and decreased quality of life and health. For this reason, alternative means to improve road safety are investigated.

Person-environment fit framework

Perceptual and cognitive abilities decline with age. This makes driving more difficult for older drivers compared to younger drivers. There are two ways this can be dealt with. Either the abilities or strategies of the person need to be changed, or the characteristics of the environment need to be changed.

Impairments for older people


Vision

Driving is mostly visual. Not being able to see well is associated with driving discomfort, difficulty, and crash risk. Older people often lose the ability to focus their eyes for near vision, their eyes let less light through, and eye diseases can seriously impair visual acuity.


Hearing

Older people often show hearing declines, yet hearing can be very important while driving: other drivers may warn you with their horn, or a siren may signal that an emergency vehicle needs to pass. Another very important sound is the click feedback from the lane-change signal.


Attention

While driving it is important to be able to scan the visual field for important objects and take the right action. It is often necessary to move the eyes or the head to find all the relevant information. Older people show declines in visual search efficiency, the ability to divide attention, and the ability to rapidly switch attention.

Processing speed

Older adults need a lot more time to process information. They often respond more slowly, which can cause dangerous situations.

Disease processes

Older people have more diseases. These diseases and their medications can impair functioning and have been found to be associated with more car accidents.



Possible solutions

Self-regulation can be a possible solution: older adults themselves avoid certain driving conditions, such as driving at night or in bad weather. They can also drive more slowly so that they have more time to assess a situation and make a decision. Another way to facilitate safe driving is to design better vehicles and road systems.

Other possibilities

Other possible solutions are offset turn lanes, improving nighttime visibility, introducing advance street name signs, increasing text size, and revising perception-reaction time estimates to account for older drivers.


We can also improve the abilities of older drivers to help them cope with the demands of driving despite age-related changes. Research has shown that the following types of training seem promising: perceptual training, eye-scanning training, physical training, older-driver education programs, and education combined with on-road training.

Training-induced compensation versus magnification of individual differences in memory performance - Lövdén et al. - 2012- Article


There is an ongoing debate about whether intelligence equals learning efficiency. The question is whether people with higher intelligence benefit more from training. There are two competing sides in this debate.

Two approaches

Magnification view

This approach looks at the increase in adult age differences after mnemonic training, such as instructions and practice. When people get older, their cognitive abilities decline, and so do their gains from mnemonic training. Also, cognitive abilities are positively related to gains from mnemonic training. These results suggest that individual and age-related differences in gains from cognitive training can be explained by initial differences in the cognitive resources that are available.

This approach makes the following three predictions:

  • Group differences will be magnified after training (groups starting out higher will gain more).

  • Within groups, gains from cognitive training should correlate positively with cognitive abilities, as well as with initial performance.

  • The magnitude of interindividual differences increases as a function of training (because differences between the high- and low-performing individuals should be greater after training than at baseline assessment).

Compensation account

According to the compensation account, individuals with good assets are already functioning at optimal levels, and therefore have less room for improvement. They already use efficient mnemonic strategies, and they won't benefit much from being taught another efficient strategy.

This approach makes the following predictions:

  • Gains from cognitive training are negatively correlated with cognitive abilities and initial performance.

  • Age differences and other interindividual differences are reduced after training.

Flexibility and plasticity

Neither of these two approaches says anything about the conditions under which they may or may not work. This is where we make a distinction between flexibility and plasticity.


Flexibility refers to the capacity to optimize performance within the limits of the brain’s currently imposed structural constraints. It is about the adaptation of a pre-existing behavioral repertoire. The cognitive system has a lot of representational states available, and the brain needs to constantly adapt to environmental demands by assuming such states.


Plasticity refers to the capacity for changes in the possible range of cognitive performance that is enabled by flexibility. It is about the expansion of the existing behavioral repertoire following structural cerebral change.

Based upon this distinction, new predictions can be made concerning the empirical conditions under which compensation or magnification are more likely to occur:

  • Performance gains primarily acquired by making use of flexibility are likely to have a pattern that is consistent with the compensation model.

  • If extensive training pushes individuals beyond the current range of performance, it induces plasticity. The pattern should then be consistent with magnification, because individual differences in baseline levels of performance and cognitive resources are in part a reflection of past manifestations of plasticity. Under these conditions, baseline performance will be positively related to training gains.


The most important results:

  • Between-person differences in associative memory performance decrease after mnemonic instructions.

  • Baseline performance within age groups is negatively correlated with instruction gains.

  • Age-group differences and between-person differences among children and younger adults increase as a function of extended adaptive practicing.

  • Baseline performance and cognitive abilities are positively related to practice gains for children.

This means that the compensation approach matched the pattern of instruction gains, and the magnification approach matched the interindividual differences in practice gains.

Flexibility and plasticity

The results confirm the distinction between flexibility and plasticity. Flexibility is the capacity to optimize the brain’s performance within current structural constraint by using the available range of behavioral states. Plasticity is the capacity for changes in the possible range of cognitive performance enabled by flexibility.

This can also explain why older adults gained more from instructions than children, and why children gained more from practice than older adults: older adults have a larger knowledge base and are better at shifting to a more effective mnemonic strategy, whereas children have a more plastic associative memory system.

More creative through positive mood? Not everyone! - Akbari Chermahini et al. - 2012 - Article


Research has shown that positive affect influences cognitive processing by increasing cognitive flexibility, by increasing the number of cognitive elements available for association, and by defocusing attention so as to increase the breadth of the elements treated as relevant to the problem.

The underlying mechanisms are however poorly understood. Some say that the neurotransmitter dopamine may play a big role. There seems to be a strong relationship between phasic changes in dopamine levels, mood changes, and changes in creativity. Improved mood states come with an increase in dopamine. This increase may stimulate switching between tasks and increases cognitive flexibility.

This research

In this research the following three hypotheses are tested:

  • Eye blink rates and cognitive flexibility are impacted more by positive than by negative mood.

  • The magnitude of changes in mood and eye blink rate is systematically linked to the degree of change in cognitive flexibility.

  • The impact of increasing (or decreasing) the individual dopamine level on flexibility depends on the basic level of the corresponding individual.


The relationships between mood, flexibility in divergent thinking, and eye blink rates have been investigated. Eye blink rates are a marker of individual dopamine levels. The following are the most important results:

  • Eye blink rates and mood changes were correlated. Positive mood changes increased eye blink rates. A negative mood had no impact. These results suggest that eye blink rates are a measure of some of the neural processes that underlie mood changes, and probably changes in the dopamine level.

  • Induction of positive mood improved flexibility. Flexibility was not affected by the induction of a negative mood.

  • Eye blink rates increased through the induction of positive mood, but were not affected by negative mood.

  • Positive changes in eye blink rates predicted the increase of flexibility (meaning that cognitive flexibility is systematically affected and possibly driven by changes in dopamine).

  • Mood-induced improvement of flexibility was only found in individuals with a pre-experimentally low eye blink rate (and with probably a low dopamine level).

The results suggest that phasic changes in dopamine levels may underlie the relationship between mood and creativity.

Video game training enhances cognitive control in older adults - Anguera et al. - 2013 - Article



Cognitive control refers to a set of neural processes that allow us to interact with our complex environment in a goal-directed manner. Sometimes humans push their cognitive control to its limit, for instance when multitasking. In today's society, people are required to multitask more and more, but when people get older, they become worse at multitasking. This research confirms that: participants played a videogame so that their multitasking performance could be measured, and the results indicate a linear age-related decline from twenty to seventy-nine years of age.

Playing the videogame reduced multitasking costs in older adults (sixty to eighty-five years old), with gains persisting for six months. Also, age-related deficits in neural signatures of cognitive control were remediated by the training. The training caused an increase in performance that extended even to untrained cognitive control abilities (in this case enhanced sustained attention and working memory).


This study shows the positive effects that videogame training can have on cognitive control abilities of older adults. The results indicate an improvement that puts them on the same level as younger adults who play videogames often (with regards to interference resolution, sustained attention and working memory). There is even transfer to untrained cognitive tasks.

These results provide optimism for using a videogame as a therapeutic tool for the people who suffer from cognitive control deficits.

Do action video games improve perception and cognition? - Boot et al. - 2011 - Article



Scientific research indicates that playing videogames can improve cognitive performance on tasks other than those specific to the game. When comparing gamers and non-gamers, however, a few things have to be kept in mind. Gamers might be good at games not because of experience but because of pre-existing abilities: because they possess abilities that make them good at gaming, they started gaming. Another aspect to remember is that gamers might perform better on the tasks because of differential expectations for experts: because they were recruited for their gaming expertise, they might try harder and perform better.

A training experiment: videogame training

To really examine whether gaming causes cognitive improvements, an experimental design should be used: a training experiment. So far, no training experiment has been set up properly; all previous studies have possible placebo effects across training conditions and outcome measures.

Placebo effects

Training experiments allow causal inferences because participants are randomly allocated to treatment and control groups. A training effect can only be trusted if participants don't know whether they are in the experimental or the control group. Good placebo control is not easy, however, because participants in videogame training studies know which training intervention they received.

Another big problem arises when the treatment and control groups produce different placebo effects. If the two groups receive training in two different games, you would expect each group to improve on the game it was trained in. However, the perception of what each game should improve might drive the group differences, which is also a placebo effect. So far, no study has explicitly measured differences in the perceived relatedness of the training to the outcome measures.

Strategy changes

Videogame training effects might just reflect shifts in strategy instead of changes in cognitive capabilities. Scientific research has confirmed this possibility.

Anomalous baseline

In most studies the control group does not perform the task again after the training. But usually when people perform the same task twice, they perform better the second time, even without training. For this reason, a difference between the experimental group and the control group may reflect a lack of improvement in the control group rather than an exceptional improvement in the experimental group. To draw strong conclusions about videogame training, an adequate baseline is needed.

Future recommendations

Future research should pay attention to the following. First, recruiting should be covert: videogame players should not suspect that they were selected because they are good at gaming. Second, researchers should ask whether participants are familiar with research on the benefits of gaming, to verify whether that knowledge influences their performance. Third, experimental and control groups should be equally likely to expect improvements on each outcome measure. Finally, all method details, including recruiting strategies and the outcome measures included in the study, should be fully reported.

Putting brain training to the test - Owen et al. - 2010 - Article



The widely held belief that commercially available computerized brain-training programs improve general cognitive function in the wider population lacks empirical support.


The central question is not whether performance on cognitive tests can be improved by training, but rather, whether those benefits transfer to other untrained tasks or lead to any general improvement in the level of cognitive functioning.


Six-week online study with 11,430 participants (Age: 18-60). Participants were randomly assigned to one of the two experimental groups or to the control group.


Four tests that are sensitive to changes in cognitive function in health and disease were used for baseline measures: reasoning, verbal short-term memory (VSTM), spatial working memory (SWM) and paired-associates learning (PAL).

The participants practiced six training tasks for a minimum of 10 minutes a day, three times a week. In experimental group 1, the six training tasks emphasized reasoning, planning and problem-solving abilities. In experimental group 2, a broader range of cognitive functions was trained using tests of short-term memory, attention, visuospatial processing and mathematics, similar to those commonly found in commercially available brain-training devices. The difficulty of the training tasks increased as the participants improved, to continuously challenge their cognitive performance and maximize any benefits of training. The control group did not formally practice any specific cognitive tasks during their 'training' sessions, but answered obscure questions from six different categories using any available online resource. At six weeks, the baseline tests were repeated so the results could be compared.


When the three groups were compared directly, effect sizes across all four tests were very small. The improvement on the tasks that were actually trained was convincing across all tasks in both experimental groups and the control group. Whether these improvements reflected the simple effects of task repetition, the adoption of new task strategies, or a combination of the two is unclear in all three groups, but whatever the process effecting change, it did not generalize to the untrained, but cognitively closely related, four tests.


  1. It is unlikely that the four tests were insensitive to the generalized effects, because these tests were chosen for their known sensitivity to small changes in cognitive function.

  2. The possibility that a more extensive training regime may have produced an effect cannot be excluded.

  3. It cannot be excluded that more focused approaches, such as face-to-face cognitive training, may be beneficial in some circumstances.

Brain Plasticity Through the Life Span: Learning to Learn and Action Video Games - Bavelier et al. - 2012 - Article


Humans are excellent learners and through training they can acquire new skills and alter existing behaviors. However, research shows that learning that emerges through training often does not transfer to other contexts and tasks. Researchers then started focusing on which conditions were necessary to stimulate general learning. These trainings are usually more complex and more similar to real-life situations. Recent research shows that videogame training might be promising in general cognitive training.

Videogame training

Recent research has shown many improvements after videogame training. One reason why videogame training seems to promote general learning is that playing videogames incorporates many different tasks and domains that laboratory training studies have kept separate. Playing action videogames, primarily first-person shooters, enhances the spatial and temporal resolution of vision, as well as its sensitivity. Other improvements were found in visual short-term memory, spatial cognition, multitasking, some aspects of executive function, reaction time, speed-accuracy trade-off, selective attention, divided attention and sustained attention.

A causal relationship

Some fear that the relationship between videogame training and improvements in cognitive control is actually caused by a population bias: action videogaming may simply attract people with inherently superior skills in those games. The only way to confirm that the relationship is causal is through well-controlled training studies. In such a study, participants who do not usually play action videogames are pretested, and half of them are then randomly assigned to train on an action videogame while the other half train on a nonaction videogame. Comparing these two groups rules out population bias, and the control group also accounts for test-retest effects.

Learning to learn as the common cause

A wide range of tasks seem to improve after videogame training. Researchers wonder what it is in action videogames that improves performance. According to these authors, the common cause is learning to learn. All the trained tasks share the same principle: the participant must make a decision based on a limited amount of noisy data. This resembles most everyday decisions.

Posterior distribution over choices

The posterior distribution is the probability distribution of an unknown quantity, treated as a random variable, conditional on the evidence that has been obtained from an experiment or a survey.

In the posterior distribution over choices, the probability is denoted p(c|e), where c is the choice and e the evidence. The most accurate possible posterior distribution needs to be calculated, so that the best decision can be made. The main goal of learning is to improve the precision of this probabilistic inference.

Research has shown that game experience led to more accurate knowledge of the statistics of the evidence for the task (that is, more accurate knowledge of the distribution of evidence given each choice, p(e|c)). Also, most of the tasks that videogame training enhances can be formalized as instances of probabilistic inference, including attentional, cognitive, and perceptual tasks.
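The probabilistic inference described above can be sketched in a few lines of code. This is a minimal illustration, not part of the article: the task, priors, and likelihood numbers are invented, and Bayes’ rule, p(c|e) ∝ p(e|c)·p(c), is applied to a hypothetical two-choice task with a single noisy piece of evidence.

```python
# Minimal sketch of a posterior over choices, p(c|e), via Bayes' rule.
# All numbers and task names are hypothetical, for illustration only.

def posterior_over_choices(prior, likelihood, evidence):
    """Return p(c|e) for each choice c, given p(c) and p(e|c)."""
    unnormalized = {c: likelihood[c][evidence] * prior[c] for c in prior}
    total = sum(unnormalized.values())
    return {c: v / total for c, v in unnormalized.items()}

# Hypothetical noisy two-choice task: the true direction is "left" or
# "right", and one noisy evidence sample is observed.
prior = {"left": 0.5, "right": 0.5}          # p(c)
likelihood = {
    "left":  {"left": 0.7, "right": 0.3},    # p(e | c = "left")
    "right": {"left": 0.4, "right": 0.6},    # p(e | c = "right")
}

posterior = posterior_over_choices(prior, likelihood, "left")
# p("left" | e = "left") = 0.7*0.5 / (0.7*0.5 + 0.4*0.5) ≈ 0.636
best_choice = max(posterior, key=posterior.get)
```

On this account, training sharpens the learner’s knowledge of the likelihood terms p(e|c), which in turn sharpens the posterior and the quality of the resulting decisions.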

Resources and knowledge

Players of videogames have increased attentional resources in multiple-object-tracking tasks. This allows for more accurate representation of motion and features, which leads to more accurate tracking and identification. Having more resources may enable the learners to learn faster, because critical distinctions will be more accessible to them.

However, having more resources alone is not enough to ensure faster learning: resource allocation needs to be guided by structured knowledge that helps determine where the useful information lies. Here, knowledge refers to the representational structure that is used to guide behavior.

Hierarchical behavior models

Hierarchical behavior models divide tasks into subtasks, which are themselves decomposed into component actions. These hierarchical structures allow computations to be decomposed into multiple layers of increasing abstraction. Shallow architectures abstain from abstraction and simply focus on finding the right set of diagnostic features. The distinction between shallow and deep hierarchies can be expressed as the difference between learning a rich generative model that captures hidden structure in the data, versus learning a discriminative model specific to a classification problem.
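The generative-versus-discriminative contrast can be made concrete with a toy sketch (the data and numbers below are invented for illustration and are not from the article). A generative learner models how each class produces data, p(x|c), and classifies through Bayes’ rule; a discriminative learner learns only the decision boundary itself.

```python
import statistics
from math import exp, pi, sqrt

# Toy one-dimensional data for two classes (invented numbers).
class_a = [1.0, 1.2, 0.8, 1.1]
class_b = [3.0, 2.8, 3.2, 3.1]

def gaussian_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Generative: model p(x|c) as a Gaussian per class, classify via Bayes'
# rule (equal priors, so comparing likelihoods suffices).
mu_a, sd_a = statistics.mean(class_a), statistics.stdev(class_a)
mu_b, sd_b = statistics.mean(class_b), statistics.stdev(class_b)

def generative_classify(x):
    return "a" if gaussian_pdf(x, mu_a, sd_a) > gaussian_pdf(x, mu_b, sd_b) else "b"

# Discriminative: learn only what separates the classes -- here simply
# the midpoint between the two class means.
boundary = (mu_a + mu_b) / 2

def discriminative_classify(x):
    return "a" if x < boundary else "b"
```

On this simple data the two approaches give the same labels; the difference is that only the generative model captures hidden structure (how each class generates data), which is the kind of knowledge that could transfer to new tasks sharing that structure.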

For action videogames to provide the players with knowledge that they can use in their laboratory task, the games and the laboratory tasks need to share structure at some level of abstraction. Otherwise it won’t have any effect on performance in real-life tasks.

Learning rules

There is some evidence pointing to changes in knowledge or learning rules as a result of playing videogames. First, playing action videogames seems to lead to more accurate probabilistic inference, which suggests the development of new connectivity and knowledge that enable a more efficient hierarchy for the task. Second, a few experiments show improvement despite little demand on resources. Third, an attentional explanation is not always in line with the observed changes.

How does playing action videogames lead to general learning?

A possible explanation is that playing action videogames enables more generalizable knowledge through various abstractions. These include the extent to which nontask-relevant information should be suppressed, how performance needs to be modified to maximize the reward rate, how data need to be combined across feature dimensions, and how to set a proper learning rate.

According to the authors, playing action videogames does not teach any particular skill on its own. Instead, it increases the ability to extract patterns or regularities in the environment. Players of action videogames are better at exploiting task-relevant information efficiently and at suppressing irrelevant information, possibly because they are better at discovering the underlying structure of the task they need to perform. Because they perform more accurate statistical inference over the data they are experiencing, they perform better on a variety of tasks. This is how playing action videogames stimulates learning to learn.

Exercising your brain: a review of human brain plasticity and training-induced learning - Green & Bavelier - 2008 - Article

Exercising your brain: a review of human brain plasticity and training-induced learning - Green & Bavelier - 2008 - Article

Humans have a great capacity to learn new skills. Skill learning is a change, usually an improvement, in perceptual, cognitive, or motor performance. It has to be the result of training and persist for several weeks or months (to distinguish it from adaptation and other short-lived effects). With the right training, humans can improve on basically any task.

Types of learning

Researchers make a distinction between two types of learning. Early, fast learning happens within minutes, while the participant is becoming familiar with the task and stimulus set. Slow learning arises through practice and requires many hours or days to become effective.

General learning refers to learning effects that, at the time of retention testing, not only show high savings on the trained task but also transfer to new tasks and contexts.

Problems with learning

Often, participants improve only on the trained task, with no transfer to other tasks. Such specificity has been found in perceptual learning, in the motor domain, and in cognitive training. Moreover, the tasks are often boring and do not elicit the best performance. Finally, task improvement is not always due to the training itself: motivation, mood, and wanting to please the researcher also influence task scores.

Because of these problems, the researchers have formulated a few questions. They want to identify training regimens that lead to performance improvements that generalize beyond the training context and persist over time. They also want to find out which factors contribute to a more general learning outcome.

Training regimens

In some training regimens, learning seems to be more general. These trainings are usually more complex than simple laboratory experiments and more closely related to real-life activities (think of videogame training or athletic training). However, it is important to evaluate how the causal link between the training and the improved performance is established. This is done through a training study in which non-game players are trained on an action videogame, with the skill assessed before and after training, and their results compared with those of a control group that played a non-action game for the same amount of time. Without such a design, an apparent relationship might instead be caused by population bias or test-retest effects.

Natural training and brain training

Forms of natural training are playing videogames or doing sports. Brain training needs to be distinguished from natural training, because it is specifically designed to train certain skills. Natural training taps many different aspects of cognitive control, whereas brain training separates them: the training is usually broken down into subdomains, and the different subdomains are trained individually. Research indicates that this approach leads to faster initial learning, but it can be detrimental during the retention phase, often resulting in less robust retention and less transfer across tasks.

Learning mechanisms

Depending on the cognitive domain, the learning mechanisms may vary. There are however some mechanisms of learning that seem to be shared across domains.

The reverse hierarchy theory

This theory states that information flows in a feed-forward manner through hierarchically organized structures, and that information at the lower levels of processing decays as information flows upward. However, merely having the information at the higher level is not enough to maintain task performance: feedback searches go down the hierarchical structure to find the most informative level of representation. Learning is thus a top-down process, and only tasks that are handled at high levels of the hierarchy will show transfer of learning. Tasks handled by lower-level processing will show less generalization of learning than tasks handled by higher-level processing.

Learning determinants

There are some characteristics that different complex trainings contain that are responsible for the improvement in learning and that make transfer possible. The following characteristics have been thus far identified:

  • Task difficulty. This is about manipulating the task difficulty in the appropriate manner. Participants will learn new skills and techniques by going through different levels. This way they will have learnt something at the end, which they could not have done at the beginning. Learning rate is at a maximum when the task is challenging, but doable.

  • Arousal. Trainings with extremely low or extremely high levels of arousal tend to lead to low amounts of learning. Between these extremes there is a level of arousal which leads to a maximum amount of learning.

  • Variability. Low input variability leads to learning at levels of representation that are specific to the items being learned but too rigid to generalize to new stimuli. High variability ensures that the newly learned information is encoded at flexible levels of representation.

Action video game modifies visual selective attention - Green & Bavelier - 2003 - Article

Action video game modifies visual selective attention - Green & Bavelier - 2003 - Article


Playing videogames can influence perceptual and motor skills. This happens because when a person is exposed to a different visual environment, the visual system automatically adapts. There is a lot of scientific research supporting this claim. However, perceptual learning tends to be specific to the trained task. Generalization to another task is hardly ever found.

This study shows that playing action videogames is indeed capable of altering a range of visual skills. The results from four experiments indicate differences in several aspects of visual attention between habitual videogame players and non-videogame players. In the last experiment, non-players who were trained on an action videogame improved relative to their pre-training abilities.

Flanker compatibility effect

This experimental paradigm can determine whether playing videogames produces an overall increase in attentional capacity. The task measures the effect that a distractor has on the target task: the size of the distractor effect shows how many attentional resources are left over. Results indicate that the effect of the distractor is large when the target task is easy, but small when the target task is difficult. This may be because when the target task is easy, spare attentional resources spill over to the distractor, whereas when the target task is difficult, no attentional resources are left, so the participant is not distracted and the distractor effect is smaller.

Increased attentional capacity

Flanker compatibility effect

With this task, the authors examined whether playing videogames can increase the capacity of the visual attentional system. If videogame players have more attentional capacity, they should run out of attentional resources more slowly than non-videogame players as the task becomes more difficult.

The results indicate that videogame players have greater attentional capacity, because they showed a distractor effect that remained even when the target task was difficult.

Enumeration task

These findings were also confirmed by the enumeration task, which showed that playing videogames increases the number of visual items that can be remembered.

Useful field of view task

Because it was still unclear whether or not playing videogames also helps processing outside the training range, they performed some extra tests. The results indicate that videogame players have an enhanced allocation of spatial attention over the visual field, even at untrained locations.

Attentional blink task

The fourth experiment examined whether the pressure to act quickly on several visual items changes the ability to process items over time. In particular, the authors examined whether there is a bottleneck effect of attention, which often occurs in temporal processing. The results indicate that videogame training improves task-switching abilities and decreases the attentional blink. Videogame players have reduced visual and amodal bottlenecks, meaning they have an increased ability to process information over time. It is not clear why this is; it might be due to faster target processing or an increased ability to maintain several attentional windows in parallel.

Training experiment

The results may be explained by selection effects: the researchers may have selected videogame players with inherently better attentional skills than non-videogame players. For this reason, the last experiment was set up as a training study. The training was successful: all participants who were trained on the videogame improved on the experimental tasks. This shows that only a few hours of training can increase the capacity of visual attention, its spatial distribution, and its temporal resolution.

Towards understanding the effects of individual gamification elements on intrinsic motivation and performance - Mekler et al. - 2017 - Article

Towards understanding the effects of individual gamification elements on intrinsic motivation and performance - Mekler et al. - 2017 - Article

What is this article about?

Many professionals now use games’ motivational characteristics and want to apply them to non-gaming contexts to stimulate user engagement. This is called “gamification”, which is defined as: “the use of game design elements in non-game contexts”. Studies have shown that some game elements can stimulate user behaviour in different contexts, but others have cautioned against the use of these elements. For the latter, the argument is that it may diminish users’ intrinsic interest and lead them to stop engaging with the application or service.

Psychological studies have shown that certain forms of rewards and feedback can indeed have a detrimental effect on intrinsic motivation, and this may also be true in gamification. However, if game elements are applied in an appropriate way, this may lead to increased intrinsic motivation of the users, by satisfying their psychological needs for autonomy, competence, and relatedness.

Therefore, to better understand the psychological mechanisms underlying gamification, the effects of individual game design elements should be studied in relation to motivation. Only a few studies have examined the effects of individual game elements on motivation and performance. In this article, the self-determination theory (SDT) framework is used to address these research gaps. Specifically, the article describes how points, leaderboards, and levels affect need satisfaction, intrinsic motivation, and performance in an image annotation task. Individual differences are also discussed.

What is the theoretical background?

Intrinsic motivation, cognitive evaluation, and causality orientation

According to SDT, there are two forms of motivation: extrinsic motivation (doing something to gain a reward such as money or praise), and intrinsic motivation (doing something because it is enjoyable). These are the most frequently discussed types of motivation, but they are empirically rarely studied in gamification research. Both types promote performance, but only the latter improves psychological well-being, creativity, and learning outcomes.

According to cognitive evaluation theory, the effects of extrinsic rewards are mediated by a person’s perception of these events as informational or controlling. This determines how the events influence the psychological needs for competence and autonomy. Competence is the perception that one’s own actions cause the desired outcomes. When individuals are given informational (direct and positive) feedback, this satisfies the need for competence. However, feelings of competence drive intrinsic motivation only when there is a sense of autonomy: people must feel that they, and not someone else, determined their own behavior. When people feel controlled by someone else, positive feedback may thwart their need for autonomy and decrease intrinsic motivation. According to causality orientation theory, a sub-theory of SDT, people differ in the degree to which they experience their actions as self-determined, and this influences whether they perceive feedback as informational or controlling. A person’s causality orientation therefore moderates the effects of feedback on need satisfaction. Autonomy-oriented individuals are more likely to act according to their own interests and values, are more likely to interpret external events as informational rather than controlling, and thus experience more satisfaction of their competence needs. In contrast, control-oriented people are more likely to act on external demands, perceive external events as pressuring, and experience lower feelings of autonomy.

Need satisfaction and game design elements

Points, levels, and leaderboards are key elements in gamification, because they are related to digital games and are applicable in different non-game contexts. Zagal and colleagues (2005) define these as game metrics: all three are used to keep track of and provide feedback on player performance in games. They can also function as positive, informational feedback and can increase gamers’ motivation because they satisfy the need for competence. However, the discussed studies all took place in a group-collaboration setting with 50 students per session, and it could be argued that the informational feedback provided in these studies was worded in a manner that some perceived as controlling and others as informational. Also, intrinsic motivation was not measured, so it remains unclear what affects intrinsic motivation and how this in turn relates to performance.

What is the goal of this study?

In the current paper, the aim is to expand upon existing research by studying the effects of points, levels, and leaderboards on participants’ performance and motivation in an image annotation task.

What are their findings?

The researchers looked at how points, leaderboards, and levels affect performance, competence need satisfaction, and intrinsic motivation in an image annotation task, taking participants’ causality orientation into account. They found that these game elements did promote user behaviour; in particular, levels and the leaderboard prompted participants to generate significantly more tags. However, the quality of the tags was not affected. The different conditions (plain, points, leaderboard, levels) did not differ in intrinsic motivation or competence need satisfaction. Participants’ control orientation also did not influence the effects of game elements on performance, need satisfaction, or intrinsic motivation. Autonomy-oriented participants did report more intrinsic motivation than control-oriented participants, and also produced more tags. Intrinsic motivation was also correlated with autonomy and competence need satisfaction and with tag quality.

Game elements and performance

The goal metrics of points, levels, and leaderboards stimulate performance by communicating how many tags have been generated, which sets explicit goals for participants to aspire to. In their experiment, the authors found that these game metrics did lead to more tags (higher tag quantity). However, tag quantity was negatively correlated with tag quality: the more tags participants generated, the lower their quality. Participants in the gamified conditions thus created more tags than in the plain condition, while the quality of the tags was comparable. Overall, participants in the gamified conditions performed better than participants in the plain condition, who were not presented with any game elements. Furthermore, tag quantity was slightly correlated with intrinsic motivation, but participants’ reported intrinsic motivation did not reflect their performance. This could mean that in the current study, the game elements functioned as extrinsic rewards. That is not necessarily a bad thing: it leads to overall better performance, but as noted, only intrinsic motivation increases the extent and quality of effort that people put into a given task.

Effects on competence need satisfaction and intrinsic motivation

None of the game elements affected intrinsic motivation or need satisfaction, and these effects were thus also not moderated by participants’ causality orientation. Contrary to expectations, the game elements were not perceived as informational and did not lead to more feelings of competence or intrinsic motivation compared to the plain condition. This means that points, levels, and leaderboards do not satisfy competence needs, even in a non-controlling setting. The authors suggest this might be because participants did not receive enough meaningful information with which to judge their performance: even though the elements are informational, there was no explicit indication of what counted as a “good” performance. With regard to motivation, the image annotation task was not challenging, so it could be that the game elements only satisfy competence needs for tasks that are experienced as challenging. Furthermore, in this study the game elements were not “juicy”; in games, there is often a lot of juicy feedback in terms of sounds, visuals, and animations. Also, participants were only scored for tag quantity and did not receive feedback on whether a tag was fitting or not. Rewarding tag quality could have increased the challenge of the task.

What can be concluded?

In this study, points, levels, and leaderboards increased tag quantity, but did not affect intrinsic motivation, need satisfaction, or tag quality. This suggests that they acted as extrinsic motivators. However, they also did not impair intrinsic motivation, which shows that these game elements may be effective means of promoting performance quantity. More research is needed on why particular game elements act as extrinsic or intrinsic motivators in different contexts, and how this shapes user enjoyment and behavior.

Gamification of task performance with leaderboards: A goal setting experiment - Landers et al. - 2015 - Article

Gamification of task performance with leaderboards: A goal setting experiment - Landers et al. - 2015 - Article

What is this article about?

Gamification is becoming increasingly popular, and people are looking to see whether it can help increase employee performance. For example, by directing and rewarding employee attention to tasks through goal setting, performance could be improved. There is, however, little research in this area. There are differences between traditional goal-setting efforts and gamification. In traditional settings, a single goal is set for an employee to achieve; ‘SMART’ (specific, measurable, attainable, realistic, and time-bound) goals appear to be most motivating. In gamification, points and leaderboards are the goals. However, these are seen as non-optimal: when only points are given, there is no specific goal to pursue, and a leaderboard presents many possible goals, representing the prior performance of others. Both points and leaderboards thus require employees to set their own goals. In this paper, the authors analyze the effectiveness of goal-setting theory to explain changes in task performance resulting from a leaderboard intervention.

What about effects of gamifying with leaderboards?

It is difficult to draw conclusions about the effects of leaderboards from previous studies, because leaderboards are rarely experimentally isolated as a gamification technique. When leaderboards are included in an experimental condition together with other game elements, such as badges or narrative, the presence of these additional elements may interact with the leaderboards, and this interaction may produce the observed differences.

What about goal-setting theory?

According to Locke (1968), people are motivated to strive towards goals. This results from a process called self-regulation, which mediates between set goals and performance and is defined as “the modification of thought, affect, and behaviour”. Goal-setting interventions are considered among the most powerful motivational interventions. Leaderboards can function similarly to classic goal-setting interventions, because they provide the user with several potential goals. The authors therefore expect the leaderboard to be effective because it serves as a difficult goal. They hypothesize:

“The leaderboard will function similarly to a difficult goal. Specifically, participants in the leaderboard condition should outperform participants in an easy or do-your-best goal condition.”

What is the role of goal commitment in goal-setting theory?

Goal commitment is another moderator of the relationship between goals and performance. For those with high commitment, the linear relationship between goal difficulty and performance is observed; for those with low commitment, there is no relationship between goal level and performance. People with lower goal commitment are more likely to reject difficult goals and replace them with easier ones. For people with high commitment, performance remains high even under impossible goal conditions, probably because they continue to strive towards the impossible goals rather than revise them downward. The authors expect that goal commitment will moderate the leaderboard-performance relationship, just as it moderates traditional goal setting. They hypothesize:

“Goal commitment moderates the relationship between the use of leaderboards and task performance. Specifically, greater goal commitment will strengthen the effect of more difficult goals and the leaderboard.”

What can be concluded?

The authors found that goal setting can be an effective theoretical framework to explain the success of leaderboards. However, gamification using leaderboards may be more effective for relatively simple tasks: a leaderboard tracking sales performance is likely to be more effective than a leaderboard tracking managerial success. Goal commitment also moderates the success of leaderboards, as goal-setting theory would predict: if people do not believe a leaderboard provides worthwhile goals, leaderboards will not be successful at altering employee behavior. If employees do not believe that the leaderboard is appropriate, it is also unlikely to affect performance. A third finding is that leaderboards are approximately as effective as difficult-to-impossible goals at increasing task performance. When individuals are faced with a leaderboard, they are likely to target the top or near-top goals presented on that leaderboard, even when there are no specific instructions to target those goals. However, it remains unknown whether the social component of leaderboards is more motivating than simple goal setting. Future research should examine the role of goal-setting theory in explaining the success of leaderboards in applied contexts.

The effect of uncertainty on learning in game-like environments - Ozcelik & Cagiltay - 2013 - Article

The effect of uncertainty on learning in game-like environments - Ozcelik & Cagiltay - 2013 - Article

What is this article about?

In schools, one of the biggest challenges is to motivate students. Learning tasks are often rated as too boring, too easy, or decontextualized. To motivate students, learning games have been designed. Computer and video games are also suggested to increase the motivation and engagement of players, because they include elements such as play, fantasy, curiosity, challenge, competition, cooperation, and learner control. However, there is insufficient research examining the effect of these individual elements on motivation and learning. One feature that has not been studied is uncertainty. Uncertainty seems to affect the level of engagement, but no study has looked at how uncertainty impacts learning or what the causal relationship between uncertainty and learning outcomes is. Therefore, in this paper, the authors try to understand the effect of uncertainty on learning in game-based environments.

What does the literature tell us?

In games, meaningful learning occurs when the relationship between the actions of a player and the outcomes of the system in a game are ‘discernible and integrated into the larger context of the game’. Games have been shown to increase motivation and engagement. They are said to produce more effective learning, because they bring about more fun, appeal, and learner-centered environments. But why is this? There are several reasons suggested. First, in order to move to higher levels in the game, the gamers need to use prior knowledge, transfer that information into new situations, apply information in correct contexts, and learn from immediate feedback. Games can help learners to apply, synthesize, and think critically about what they learn through active and social participation.

Games can also provide flow experiences, making individuals feel absorbed in the game. Flow is defined as ‘a state of consciousness that is sometimes experienced by individuals who are deeply involved in an enjoyable activity’. When people are in an optimal flow experience, they are in such a psychological state that they do not care about the environment: they lose track of time, their surroundings, and the actual environment they are in. Several studies suggest that there is a relationship between learning and excitement or flow in games.

To achieve flow and learning in games, the ARCS model can be used. ARCS stands for Attention, Relevance, Confidence, and Satisfaction. Attention should first be drawn to the relevant stimuli and sustained during instruction; interesting visuals can attract attention. The second element, relevance, refers to how well the learning activities relate to the students’ goals, learning styles, and prior experiences; teachers should therefore discover their students’ interests and needs and incorporate them into their instruction. Confidence concerns the students’ confidence and expectancy of success; instruction should be designed so that success is attainable with realistic effort and ability. The last element, satisfaction, refers to students’ anticipation and experience of positive feelings about the outcomes of the current learning task; to accomplish this, intrinsic and extrinsic reinforcements should be provided in learning environments.

Even though different studies have shown that games improve learning, there are few examples of such games in higher education, and the guidelines for developing them are also limited. In this paper, the authors try to better understand the effect of uncertainty on learning through games. They developed a game-like environment to teach concepts of Entity-Relationship Diagrams (ERD), a main database design tool in relational database systems. There are two versions of the learning environment: one includes uncertainty, and the other does not.

What were the results?

The results in this study showed that the uncertainty group outperformed the certainty group. This confirms that the element of uncertainty enhanced learning in the game-like environment. This was the first study to demonstrate that uncertainty improves learning outcomes by using a game-like environment. The results also showed that there is a positive relationship between uncertainty in a game-like environment and motivation among its players. The results can also explain why playing games can become addictive: uncertain events induce an increase in the release of dopamine, which is implicated in addictive behaviour. This means that the addictiveness of games may be related to the release of dopamine when outcomes are uncertain. The effects of uncertainty on learning may be influenced by gender. For example, women are more averse to risk and uncertainty than men in all domains except social risk.

What can be concluded?

The results of this study can have implications for educational designers to create more effective game-like environments. By using design factors such as uncertainty, the positive effect of game-like environments can be improved. This can help to create better game-based learning environments.

What Should Be the Role of Computer Games in Education? Policy Insights from the Behavioral and Brain Sciences - Mayer - 2016 - Article

What Should Be the Role of Computer Games in Education? Policy Insights from the Behavioral and Brain Sciences - Mayer - 2016 - Article

What is this article about?

Computer games for learning are games designed to promote learning. In this paper, the author asks: "Can playing computer games help people to develop knowledge and skills, and if so, how should computer games be used in education?".

Game visionaries would like to see schools in which children play computer games which help them learn academic content and skills. This might lead to students who are motivated to learn, because they like to play computer games for learning.

Most game advocates feel like contemporary schools are failing, and that video games can help to promote student learning. This idea for an ‘educational revolution’ is based on the idea that computer games lead to higher motivation. If this is correct, then this means that school activities should include more computer games.

However, it is always important to check whether claims are supported by evidence. There have been a lot of reviews of scientific research on games for learning. However, their overall conclusions have not supported the claims of game advocates…

For example, Tobias and colleagues (2011) conclude:

“There is considerably more enthusiasm for describing the affordances of games and their motivating properties than for conducting research to demonstrate that those affordances are used to attain instructional aims . . . This would be a good time to shelve the rhetoric about games and divert those energies to conducting needed research. (p. 206)”

In this paper, the author describes what the current state of scientific research evidence is about games for learning, and what the policy implications of this research is for educational practice.

What does the research tell us?

Research on computer learning games is divided into three categories: value-added research, cognitive consequences research, and media comparison research. Value-added research examines which game features improve learning, cognitive consequences research studies whether playing games improves cognitive skills, and media comparison research looks at whether games are better than conventional media at promoting academic learning. Observational studies which describe game playing can also add to the understanding of games for learning, but in this article only experimental studies are discussed, because they allow for the causal conclusions that are necessary for educational policy recommendations.

What is value-added research?

In this type of research, the learning outcomes of people who play the base or 'standard' version of a game (control group) are compared to those of people who play the same game with one feature added (treatment group). This approach is supported by studies showing that adding instructional support to computer games has positive effects, such as helping learners select relevant information, organize it, and integrate it with relevant prior knowledge. Results are often expressed as an effect size, Cohen's d, which shows how many standard deviations of improvement were caused by adding the new feature. In educational research, effect sizes of d = 0.4 or greater are considered important.
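
Cohen's d is simple to compute: the difference between the two group means divided by the pooled standard deviation. A minimal sketch in Python — the posttest scores below are hypothetical illustrations, not data from the studies discussed:

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical posttest scores for a treatment and a control group
treatment = [14, 16, 15, 17, 18, 16]
control = [13, 14, 12, 15, 14, 13]
d = cohens_d(treatment, control)
print(round(d, 2))  # 2.01 for these illustrative data
```

A d of 0.4 means the treatment group scored 0.4 standard deviations higher than the control group, which is the threshold mentioned above for an educationally important effect.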

As an example, the Design-A-Plant game is used to teach environmental science. Students travel to a distant planet that has specific climate patterns (heavy rain, heavy winds). They must then design a plant that will survive there, selecting from eight types of roots, eight types of stems, and eight types of leaves. The students get to see how well their plant survived, and a local character called Herman-the-Bug explains the plant functions. Across nine studies, students did better when Herman-the-Bug's words were spoken (treatment group) instead of printed on the screen (control group). This is an example of a value-added study.

There are five game features that improve student performance on learning outcomes:

  1. Using conversational style (personalization)
  2. Presenting words in spoken form (modality)
  3. Adding prompts to explain (self-explanation)
  4. Adding explanations or advice (coaching)
  5. Adding pregame descriptions of key components (pretraining)

For value-added research, it is also important to determine what does not work. For example, in the Design-A-Plant game, students used VR. They liked it, but it did not improve their performance.

What is cognitive consequences research?

In cognitive consequences research, students’ cognitive skills are measured and are compared using pretest-to-posttest gains. For example, the skills of students who play a normal game versus students who play a different type of game.

Most cognitive research is done using commercial games, but some research is also conducted using specific games that teach cognitive skills.

For example, in first-person shooter games such as Unreal Tournament or Medal of Honor, the players must be alert at all times for attackers. According to Anderson and Bavelier (2011), playing first-person shooter games over extended periods can improve a variety of cognitive skills such as perceptual attention, compared to non-shooter games.

There seem to be two types of games that promote cognitive improvements. The first is first-person shooter games, which promote perceptual attention skills (useful field of view, multiple object tracking). The second is puzzle games such as Tetris, which improve a specific spatial cognitive skill: mental rotation of Tetris-like shapes. The impact of puzzle games is thus more limited than that of first-person shooter games.

What is media comparison research?

In media comparison research, the learning outcomes of students who learn academic content from a game are compared to those of students who learn it from books or from face-to-face lectures. However, media comparison research may be confounded by other differences in presented content and instructional methods.

As an example, to teach about wet-cell batteries, players play Cache 17, a game in which they have to find lost artwork in an old World War II bunker system. Other students had to learn the same information about the batteries from a PowerPoint slideshow. The study showed that the conventional group did better than the game group. Thus, when the content was the same in both groups, the game did not improve learning.

What can be concluded?

It seems that many games for learning are ineffective. Therefore, it is very important to choose games based on appropriate criteria. The selection of educational games should depend on the available evidence. It is also important to select games based on an understanding of how learning works, which means that games must have features to maintain motivation and provide sufficient instructions for example in the form of feedback.

There are several policy implications, namely:

  1. Put the revolution on hold. Based on the available research, educational practices should not be revolutionized into practices based on computer games.
  2. Use games for targeted learning objectives. Even though education does not need to be revolutionized, adding small, targeted games could be positive.
  3. Align games with classroom programs and activities. Targeted games should fit within the existing educational program. This means that targeted games should be used to supplement and complement instruction rather than replace it.
  4. Do not confuse liking with learning. It is important to focus on games that improve learning outcomes, not only on how much students like playing the game. Liking is not necessarily learning! There are many examples in which students liked one version of a game best but did not learn best from it.
  5. Adapt instructional activities to maintain challenge. Game features that adapt to the player's current level of competence are very important, because they maintain motivation. Well-designed games should therefore be used to create the appropriate level of challenge for each student.
Tryptophan supplementation induces a positive bias in the processing of emotional material in healthy female volunteers - Murphy et al. - 2006 - Article

Tryptophan supplementation induces a positive bias in the processing of emotional material in healthy female volunteers - Murphy et al. - 2006 - Article

Tryptophan is prescribed as an antidepressant in some countries. Studies have shown a reappearance of depressive symptoms after acute tryptophan depletion and a mood-lowering effect in healthy, non-depressed participants. However, there is research indicating that it may work only as an adjunct to other antidepressant treatments, not as a primary treatment.

There is evidence suggesting that tryptophan is involved in emotional biases in the perception of socially relevant stimuli. This points to the induction of a negative perceptual bias in the processing of emotional material.

This research

Previous research has shown that serotonergic antidepressants have the opposite effect on information processing and induce a positive bias on several emotion-related tasks. What is not known, however, is whether increasing serotonin synthesis has the same effects on emotional processing as those seen after the inhibition of serotonin reuptake. This is researched here. The hypothesis is that tryptophan will induce cognitive changes and emotional biases opposite to those found in depression and characteristic of those induced by serotonergic antidepressants. The authors also investigate a gender difference, since previous research has indicated that women are more affected by acute tryptophan depletion than men.


The results show that repeated administration of tryptophan induces a positive bias in the processing of emotional material in women, but not in men. In women, tryptophan increased recognition of positive facial expressions and decreased recognition of negative facial expressions. It also lowered attentional vigilance towards negative words and decreased baseline startle responsivity. The results suggest that, just like other serotonergic antidepressants, tryptophan can directly modulate the processing of emotional material. However, the results also indicate that it may be a milder manipulation of the serotonergic system than selective serotonin reuptake inhibitors.

The effects of tryptophan were only seen in females. The mood-lowering effects were also more consistent in women than in men. This may be because acute tryptophan depletion has a greater biochemical effect in women than in men, or because women show a better clinical response to serotonin reuptake inhibitors than men.

The results indicate that tryptophan supplementation has larger effects on emotional processing in women than in men. This should be taken into account when tryptophan is considered as a therapeutic option.

Working memory reloaded - Colzato et al. - 2013 - Article

Working memory reloaded - Colzato et al. - 2013 - Article


Tyrosine is an amino acid and the precursor of two important neurotransmitters, norepinephrine and dopamine. Taking in more tyrosine stimulates the release of norepinephrine and dopamine. Previous research has focused primarily on the role of tyrosine in counteracting conditions that cause a depletion of norepinephrine and dopamine, such as stress. The results indicate that tyrosine may replete cognitive resources, but only under certain demanding conditions.

Executive control and dopamine

Executive control emerges from both cognitive stability and cognitive flexibility. These two functions are related to the prefrontal cortex, which is modulated by dopamine. Research indicates that high levels of dopamine are good for the stability of representations, but they may also reduce the ability to flexibly change cognitive representations. On the other hand, low levels of dopamine may be good for flexibly changing cognitive representations, but they may lessen the ability to maintain representations.

This research

This research focuses on the acute effect of tyrosine supplementation on the updating and monitoring of working memory representations. Working memory updating was measured with the N-back task, in which participants are required to decide whether each stimulus in a sequence matches the one that appeared n items earlier. When n is two or higher, this requires online monitoring and updating of working memory content; this is called the N-2 condition. The N-1 condition is the control condition, because the participant can rely on immediate perceptual priming, which places little demand on working memory.

The hypothesis is that the depletion of cognitive resources affects performance in the N-2 condition more than performance in the N-1 condition. If the repleting effect of tyrosine really is restricted to cognitively challenging conditions, the positive effect of tyrosine should be stronger in the N-2 condition than in the N-1 condition.


Tyrosine supplementation promotes working memory updating. The N-2 condition was more sensitive to the effect of tyrosine. This reinforces the idea that only tasks with high cognitive demands profit from tyrosine. This may be because more demanding cognitive operations are more likely to use all the available cognitive resources, which can then be repleted by tyrosine.

(Practical) implications

In the short term, consuming tyrosine-rich food is a safe and healthy way to improve cognitive processes. It can be an alternative to cognitive-enhancing drugs such as Ritalin, which come with many side effects.

Effect of tyrosine supplementation on clinical populations and healthy populations under stress or cognitive demands - Jongkees et al. - 2015 - Article

Effect of tyrosine supplementation on clinical populations and healthy populations under stress or cognitive demands - Jongkees et al. - 2015 - Article


Levels of the amino acid tyrosine (TYR) peak between 1 and 2 hours after consumption and can remain significantly elevated for up to 8 hours. Once it has passed the blood-brain barrier (BBB) and has been taken up by the appropriate brain cells, TYR is converted into L-DOPA, which in turn is converted into dopamine (DA), resulting in an increase in DA levels. TYR supplementation seems to have a beneficial effect only in situations that stimulate neurotransmitter synthesis, i.e., situations that are sufficiently stressful or challenging. Once that threshold has been passed, TYR is metabolized rather than converted into L-DOPA. Also of influence is that TYR shares a transporter across the BBB with several other large neutral amino acids, such as phenylalanine and tryptophan.

TYR supplementation might be preferable to L-DOPA administration: given the characteristic inverted-U profile of DA, it would be easy for L-DOPA administration to push individuals to the lower right end of the curve, whereas the subtle increase from TYR is far less likely to do so.

Consuming TYR, the precursor of dopamine (DA) and norepinephrine (NE), may counteract decrements in neurotransmitter function and cognitive performance. However, reports on the effectiveness of TYR supplementation vary considerably, with some studies finding beneficial effects and others not. In this article, the authors review the available cognitive/behavioral studies on TYR to elucidate whether and when TYR supplementation can be beneficial for performance.

Review clinical setting


TYR supplementation has not always been successful. The authors speculate that depressed individuals experiencing a lack of motivation, which may result from a DA deficiency, are the ones who could benefit most from TYR supplementation. On the other hand, individuals with psychotic depression may have excess DA. Such individuals would be unlikely to benefit from a further boost in DA activity, and therefore TYR supplementation may not be recommendable for them.


AMP might be beneficial, but research is scarce. It is important to distinguish between striatal areas, which often demonstrate a hyperdopaminergic state in schizophrenia, and extrastriatal, prefrontal regions, which in contrast show a marked reduction in DA activity. More studies are needed in which samples are larger and heterogeneity is kept as small as possible, or in which response to TYR is distinguished between patients. The moment of application (remission, psychosis, etc.) should also be considered.


The results are not straightforward; sometimes an effect is found. This might be due to the several different risk factors, which need to be taken into consideration in future research.


Parkinson's disease is characterized by decreased DA levels in many brain areas. Administering TYR to Parkinson's patients raised levels of DA's metabolite homovanillic acid, suggesting that TYR effectively promoted DA function. However, TYR stimulates neurotransmitter production only in already active neurons, and Parkinson's is associated with a loss of dopaminergic neurons, which reduces TYR's site of action. It is therefore unlikely that TYR administration would be very successful.


Stress induces increased catecholamine activity and turnover rates in the brain, leading to depletion of neurotransmitter levels as well as behavioral depression. However, studies that administered TYR to rats prior to stress exposure have shown that neurotransmitter depletion and decrements in performance can be reversed. The researchers speculate that consistent evidence has not been found because physical performance only benefits from TYR supplementation when it places high enough cognitive demands on the individual to induce catecholamine depletion.

Healthy individuals

TYR can enhance working memory performance even in the absence of overt exposure to stress. However, improvements were only found under particularly challenging conditions, such as when other tasks were performed simultaneously or in a task-switching paradigm.


The potential of using TYR supplementation to treat clinical disorders seems limited. The cognitive changes mediating performance improvements after TYR supplementation remain unknown and, unfortunately, most of the literature focuses on short-term rather than long-term settings. Nevertheless, based on this overview of the literature, the authors conclude that TYR is very promising as an enhancer of cognition and perhaps mood, but only when (healthy) individuals find themselves in stressful or cognitively demanding situations.

Acute effects of cocaine in two models of inhibitory control: Implications of non-linear dose effects - Fillmore et al. - 2006 - Article

Acute effects of cocaine in two models of inhibitory control: Implications of non-linear dose effects - Fillmore et al. - 2006 - Article

What is this article about?

The effects of stimulant drugs on performance have been well known for many years. Stimulants can decrease fatigue, increase vigilance, speed reaction time (RT), prolong effort, and generally increase productivity or work output. Stimulant drugs have also been shown to enhance the ability to inhibit behavioral responses. Stimulants such as methylphenidate (Ritalin) and d-amphetamine have been shown to improve inhibitory control in healthy adults and in children with attention deficit hyperactivity disorder (ADHD).

Tasks that study this effect are based on the stop-signal model. These tasks measure an individual's ability to inhibit behavioral responses: they require quick, accurate responses to go-signals and the inhibition of those responses during stop-signals. The go-signals often consist of letter pairs (for example, O and X), presented visually one at a time on a computer screen. Participants respond to these letters by pressing one of two computer keys. On stop-signal trials, participants must inhibit their response. The stop-signals are often tones that sometimes accompany a go-signal. They occur at variable stimulus onset asynchronies (SOAs) with respect to the letter (50 ms or 300 ms) and on only a portion of the trials (for example, 25%). This means that subjects must overcome their tendency to respond to a go-target when they hear a stop-signal, which indicates their level of inhibitory control. Inhibitory control is often modelled as the mean latency to inhibit responses, called the stop-signal reaction time (SSRT): the time needed to inhibit the pre-potent response when the stop-signal occurs. The time participants need to inhibit is less than the time required to respond. SSRTs are related to the number of successfully inhibited responses, with longer SSRTs associated with less successful response inhibition. Longer SSRTs thus suggest weak inhibitory control, which might be due to a slow inhibitory process.
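
One common way to estimate SSRT is the mean method: subtract the mean stop-signal delay from the mean go-trial RT, which is valid when delays are tracked so that roughly half of the stop trials are successfully inhibited. The article does not specify the estimation procedure used, so the sketch below, with hypothetical reaction times, is illustrative only:

```python
import statistics

def ssrt_mean_method(go_rts_ms, stop_delays_ms):
    """Mean-method SSRT estimate: mean go RT minus mean stop-signal delay.
    Assumes delays are adjusted so ~50% of stop-trial responses are inhibited."""
    return statistics.mean(go_rts_ms) - statistics.mean(stop_delays_ms)

go_rts = [420, 450, 430, 460, 440]   # hypothetical go-trial reaction times (ms)
delays = [150, 200, 250, 200]        # hypothetical stop-signal delays (ms)
print(ssrt_mean_method(go_rts, delays))  # 240 (ms)
```

On this logic, a participant with a 240 ms SSRT needs 240 ms to cancel an already-initiated response; a longer estimate would indicate weaker inhibitory control.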

The evidence with regard to the role of stimulant drugs in inhibitory control is not consistent. Some studies show that stimulants impair inhibitory control in some contexts. The authors suggest that one critical factor in determining facilitation of inhibitory control is dose. Doses that are effective in facilitating one type of behavior may actually be detrimental to other types of behavior. With regard to inhibitory control, some studies report U-shaped dose-response curves after Ritalin.

Research on non-linear dose-response effects could help explain how changes in cognitive functions might maintain or escalate stimulant abuse. Alterations in inhibitory control might be the most likely contributor to abuse potential: impairments in inhibitory control could reduce the ability to stop drug acquisition, which in turn reduces the ability to stop drug taking.

Cocaine and amphetamine users may be motivated to self-medicate attentional deficits and hyperactive/impulsive tendencies. However, whether a stimulant drug facilitates or disrupts a cognitive function may depend on the dose.

In the present study, the authors examine the possibility that a U-shaped dose-effect function on SSRT might also be evident in response to an abused stimulant. They used the stop-signal model to examine the acute effects of four doses of oral cocaine HCl (0, 100, 200, and 300 mg) on SSRT in a group of adults with a history of cocaine use. They also examined whether the dose-response effects generalize to a different model of inhibitory control: the dose effects on the SSRT measure were compared to a measure of response inhibition obtained from another task that is also commonly used to study drug effects on inhibitory control, the go-no-go model.

What were the methods used?


There were 12 adult participants: nine men and three women, with a mean age of 42 years. All had a history of cocaine use. Eight participants were African American and four were Caucasian.

The volunteers had to have at least a grade 8 education, adequate reading ability, normal vision, and no self-reported psychiatric disorders. They also had to score at least 4 on the 14-item self-report Drug Abuse Screening Test (DAST), self-report cocaine use in the past week, and test positive for the presence of cocaine or benzoylecgonine in their urine.

All of the participants smoked cocaine in the form of crack. No volunteer was in treatment for their substance use.


Stop-signal task

This task has been described before.

Cue-dependent go-no-go task

The cued go-no-go RT task is another measure of inhibitory control. Participants are presented with a cue that provides information about the target stimulus that is likely to follow; the cues signal the correct target with high probability. The go-cue condition is the most interesting, since go cues generate a tendency to respond faster to targets. Sometimes, however, subjects must overcome this tendency and inhibit their response, namely when a go cue is followed by a no-go target. Failures to inhibit are therefore most common when a no-go target is displayed after a go cue. This effect of the go-cue condition is sensitive to psychoactive drugs, including stimulants and depressants.
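The trial logic of the cued go/no-go task can be sketched as follows. The 80% cue validity and the trial count below are assumed values for illustration, not the task's actual parameters; the point is that go cues occasionally precede no-go targets, and those are the trials where inhibition is tested.

```python
import random

# Minimal sketch of the cued go/no-go trial logic. The 80% cue validity
# and the trial count are assumed values for illustration, not the
# task's actual parameters.
random.seed(1)
CUE_VALIDITY = 0.8  # probability that the cue signals the correct target

def trial():
    cue = random.choice(["go", "nogo"])
    if random.random() < CUE_VALIDITY:
        target = cue                              # valid trial
    else:
        target = "nogo" if cue == "go" else "go"  # invalid trial
    return cue, target

trials = [trial() for _ in range(1000)]
# The critical case: a go cue followed by a no-go target, where the
# primed response must be inhibited.
critical = sum(1 for cue, target in trials if cue == "go" and target == "nogo")
print(critical)  # roughly 10% of trials (0.5 * 0.2 * 1000)
```

Drug effects on this task are typically expressed as changes in the failure rate on exactly these critical trials.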

Drug effect questionnaire (DEQ)

The DEQ consists of 15 items which are sensitive to cocaine effects. The items were: any effects, active/alert/energetic, bad effects, good effects, high, irregular heart beat/racing, like, anxious/nervous, pay for this drug, rush, shaky/jittery, take this drug again, talkative/friendly, nauseated/queasy, and sluggish/fatigued/lazy. The items were presented on the monitor, and participants rated each item using the computer mouse to select among five responses: not at all, a little bit, moderately, quite a bit, and very much.

What were the findings?

The results in this study show that cocaine improves the ability to inhibit responses, measured by both models (stop-signal and go-no-go task). In the stop-signal model, cocaine reduced the time to inhibit a response. In the cued go-no-go model, there were drug-induced decreases in the number of failures to inhibit responses. The dose-response functions differed depending on the measures.

In the stop-signal task, there was a quadratic dose-response function: 100 mg and 200 mg cocaine produced faster SSRTs compared with placebo. There was no significant speeding effect at the 300 mg dose: participants’ mean SSRT was nearly identical to placebo.
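The U-shaped pattern can be made concrete with a small sketch. The SSRT values below are hypothetical, chosen only to mimic the quadratic shape the study describes (faster SSRTs at intermediate doses, a return toward placebo at the highest dose); they are not the study's data.

```python
# Hypothetical SSRT means (ms) by oral cocaine dose, invented to mimic
# the quadratic (U-shaped) pattern reported: intermediate doses speed
# SSRT, while the highest dose returns toward the placebo value.
ssrt_by_dose = {0: 250, 100: 220, 200: 215, 300: 248}

# The fastest (shortest) SSRT occurs at an intermediate dose...
best_dose = min(ssrt_by_dose, key=ssrt_by_dose.get)
print(best_dose)  # 200

# ...while the highest dose is nearly identical to placebo.
print(ssrt_by_dose[300] - ssrt_by_dose[0])  # -2
```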

In the cued go-no-go task, there was a more orderly, linear improvement as a function of dose.

Neither task showed any cocaine effect on response activation, which was measured by RT to go-targets. This means that the cocaine-induced improvements in response inhibition did not reflect speed-accuracy trade-offs.

The U-shaped dose-response function in SSRT is in line with the studies of methylphenidate in children with ADHD. The facilitating effects of stimulant drugs seem to be limited to intermediate doses: above these doses there is no improvement, and there may even be impairment. These findings imply that intermediate doses of cocaine can help the user restore cognitive functioning, which might lead the user to take the drug repeatedly. As they do, however, their inhibitory control can become impaired, which can lead to impulsivity, perseverative responses and binge use of the drug. In line with this, cocaine abusers show patterns of impulsivity and perseverative behavior. They also show hypoactivity (lower activity) in cingulate and dorsolateral prefrontal cortical regions. These areas are associated with inhibitory control, and the hypoactivity could be due to long-term cocaine use. Cocaine users also show enhanced sensitivity to stimulant drugs, which could likewise lead to impulsive behavior in response to higher drug doses.

It is hard to generalize the findings to populations other than cocaine users. Cocaine users are characterized by poor inhibitory control, and it is possible that the cocaine-induced facilitation of inhibitory control is specific to individuals with poor baseline levels of inhibitory control. Other studies have shown that d-amphetamine can improve response inhibition on a stop-signal task by speeding SSRT, but only in individuals who displayed slow response inhibition at baseline. It therefore remains unclear whether the facilitating effects of stimulants are limited to individuals with poor inhibitory control.

What can be concluded?

Much more research is needed on the relation between drug effects on cognitive functions and abuse potential. A delay-based assessment of control that has received little research interest is the delay-discounting model. Some studies have shown that d-amphetamine decreased discounting of delayed monetary rewards in healthy adults. It would be interesting to see whether a stimulant drug has a similar control-enhancing effect here, as it is still unclear whether inhibitory mechanisms also influence discounting behavior. Drug studies that compare stimulant effects on discounting and other delay-based assessments of control could therefore provide a better understanding of the role of impulsivity in stimulant drug use.

The neurological reaction to amphetamine - Mattay et al. - 2003 - Article



Amphetamine (AMP) and other psychostimulants are among the most effective psychotropic medications. Although it is well known that psychostimulants have dose- and behavior-dependent differential effects, there is also considerable evidence that the response to these drugs varies across individuals, even at fixed doses. These variable effects have been difficult to predict a priori, and to date no neurobiological explanation for them has been established. The effect of AMP and other dopamimetic agents on the prefrontal cortex (PFC) depends on the baseline level of PFC function, which presumably reflects, at least in part, baseline dopaminergic tone (i.e., the relative position on the putative inverted U). Relatively poor performers on prefrontal cognitive tasks tend to improve after stimulants, whereas high performers show no response or get worse.

The COMT val allele, presumably by compromising the postsynaptic impact of the evoked DA response, reduces prefrontal neuronal signal-to-noise ratio and makes processing less efficient.


The val158-met functional polymorphism of the COMT gene would influence the effect of AMP on prefrontal cortical function. After AMP, which increases DA levels in the PFC by blocking extrasynaptic uptake at norepinephrine transporters, normal individuals homozygous for the val allele would be shifted to more optimal DA levels, thereby improving their PFC function. Individuals homozygous for the met allele, who tend to be superior performers on prefrontal cognitive tasks and presumably have baseline synaptic DA levels closer to the peak of the theoretical inverted-U curve, would be more likely to have their DA levels shifted by AMP beyond the optimal range, with a resultant decrement in PFC function.

Design & Method


Double-blind, counterbalanced crossover design during two fMRI sessions. Participants were divided into three groups based on genotype (val/val, val/met, met/met). The final sample of 27 healthy volunteers (age <45, similar educational backgrounds) underwent blood oxygen level-dependent (BOLD) fMRI while performing the N-back task (PFC dependent) with increasing levels of task difficulty. Before the fMRI sessions, subjects took an executive cognition test, the Wisconsin Card Sorting Task (WCST), as prior work has shown that the COMT genotype affects performance on this task. Mood and anxiety scales were obtained after the fMRI scans on each test day.


The main goal of this study was to explore the impact of the COMT val-met polymorphism on the effect of AMP on prefrontal cortical function. The analysis therefore focused on the data from the two extreme genotype groups, i.e., individuals with the high-enzyme-activity val/val and low-enzyme-activity met/met genotypes.


The observations are consistent with the hypothesized inverted-U cortical-response curve to increasing DA signaling in the PFC and suggest that whether a person ends up on the up or down slope of the inverted U after AMP administration depends not only on the environmental demands (e.g., task conditions), but also on the individual’s COMT genotype. Indeed, val/val individuals on AMP appear, in this paradigm, similar to met/met individuals at baseline. Met/met individuals on AMP, however, process the 3-back task more poorly than do val/val individuals at baseline. The researchers suggest that the combined effects of AMP and high WM load on DA levels push individuals with the met/met genotype beyond the critical threshold at which compensation can be made.

D1 receptors in prefrontal cells and circuits - Goldman-Rakic et al. - 2000 - Article


What is this article about?

A goal of systems neuroscience is to dissect the cellular and circuit basis of behavior in animal models, so that insights from studies of normal brain organization can be applied to the understanding of clinical disorders.

For example, using this strategy, the organic basis of schizophrenia can be elucidated. Many features of schizophrenia represent a failure of the neural mechanisms by which the prefrontal cortex stores and processes information in working memory. In this paper, the authors describe the pyramidal and non-pyramidal cells of the prefrontal cortex, the brain area most associated with the working memory functions of the brain. They also review their study of dopamine modulation of working memory circuits, aiming to describe the connection between the disposition of neurotransmitter receptors in individual neurons and behavioral symptoms.

What is there to say about dopamine and cognition?

Huntington’s disease, schizophrenia, depression, drug addiction and Parkinson’s disease are all associated with dysregulations in dopamine systems. Dopamine is linked to motivation, reward, affect and movement, and these can all affect performance on cognitive tasks, without affecting the brain’s information processing systems per se. Studies show that there is a direct association between altered dopamine transmission in the prefrontal cortex and cognitive deficits. The cloning of five distinct dopamine receptors, the development of receptor-specific ligands and antibodies, the anatomical precision of immunohistochemistry and in situ hybridization, and the development of sophisticated behavioral paradigms are a few of the major advances that have made understanding dopamine’s role in cognition a reasonable goal.

What about dopamine modulation of mnemonic function in pre-frontal neurons?

In the study of higher cortical function, the cellular basis of receptive field properties is one of the most challenging issues. Previously, neurotransmitter-specific actions on cortical neurons were studied using in vitro systems. Nowadays, methods have been developed to analyze the pharmacological actions of drugs on neurons as they are engaged in cognitive processes in awake, behaving animals. Using this method, the researchers showed that the ‘memory fields’ of the prefrontal cortex are modulated by neurotransmitters such as dopamine, serotonin, and GABA. The researchers examined the actions of these neurotransmitters with respect to the localization of the relevant receptors within the cortical micro-architecture. They showed that D1 receptors can modulate excitatory transmission in neurons that are involved in the mnemonic component of the task. At moderate levels of D1 occupancy, the spatial tuning of prefrontal neurons is enhanced; at higher levels of D1 occupancy, it is reduced.

This is an important finding for clinical conditions. A PET study has shown that D1 receptor density is decreased in the prefrontal cortex of both medicated and non-medicated schizophrenic patients, and the density of D1 receptors is positively correlated with patients’ performance on the Wisconsin Card Sorting Task. Aging also leads to declines in dopamine levels, in D1 receptor function, and in working memory.

In schizophrenic patients, both the negative symptoms and the cognitive dysfunctions may be related to abnormal D1 functioning. These findings signal the importance of D1-family receptors both for cognitive processes in normal individuals and for the symptoms of schizophrenia.

What about the D1 receptor in pyramidal neurons?

The receptive field of a pyramidal neuron is established by afferent inputs, such as its lateral inhibitory input. The cortical pyramidal neuron can be assumed to integrate thousands of afferent inputs and to control movement and affect through efferent projections. In rhesus monkeys, depletion of dopamine has been shown to impair working memory performance. It has also been shown that D1-family dopamine receptors are 20-fold more abundant than D2-family receptors in the prefrontal cortex. However, the functional effect of dopaminergic neurotransmission in cortical circuits is not yet fully understood. In striatal slices, dopamine can both inhibit and excite striatal neurons. Stimulation of D1 receptors activates a second-messenger cascade that results in a variety of effects, such as enhanced L-type calcium currents, reduced N- and P-type calcium currents, enhanced Na+/K+ ATPase activity, and enhanced NMDA-gated currents. The interaction between D1 receptors and glutamatergic inputs is of special interest, because these receptors are localized adjacent to asymmetric, presumably glutamatergic, synapses.

What about D1 mechanisms in interneurons?

Interneurons must be as integral to the machinery of cognitive function as projection neurons. Interneurons have been shown to have ‘memory fields’, similar to pyramidal neurons, and the memory fields of interneurons mirror those of nearby pyramidal neurons: their preferred direction of firing in a spatial task is often very similar to that of their nearest-neighbour pyramidal neurons. The authors studied the distribution of D1 receptors in prefrontal interneurons. They showed that the D1 receptor is present in GABAergic interneurons and is found in those interneuron subtypes that provide the strongest inhibitory input to the perisomatic region of cortical pyramidal cells: the parvalbumin-containing basket and chandelier cells. The subcellular localization of the D1 receptor in interneurons is analogous to that seen in pyramidal cells: the receptor is located in the distal dendrites of interneurons, adjacent to asymmetric, presumably glutamatergic, synapses, as well as in presynaptic axon terminals. The functional effects of D1 receptors on cortical interneurons are not yet established, but stimulation of D1-family receptors in the striatum and substantia nigra has been shown to increase both the synthesis and the release of GABA.

What about the feedforward inhibition model of dopamine action versus cognitive circuitry?

The essence of the feedforward inhibition model is that D1 receptor stimulation enhances excitatory inputs to both pyramidal cells and interneurons, but this enhancement is more effective on pyramidal cells. Increasing dopamine stimulation of D1 receptors will therefore result in enhanced pyramidal-cell firing and enhanced working memory performance. At some point, however, the D1 effect on pyramidal cells will plateau, and further increases in dopamine levels will result in enhancement of interneuron activity. Pyramidal-cell delay activity will then be limited by D1-mediated feedforward inhibition, resulting in impaired working memory function.
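The feedforward inhibition model can be sketched as a toy calculation. This is our illustration, not the authors' model: the functional forms and constants are assumptions. Pyramidal drive is modeled as saturating with dopamine level, while interneuron-mediated inhibition keeps rising linearly, so the net output rises and then falls, tracing an inverted U.

```python
import math

# Toy sketch of the feedforward inhibition model (illustration only; the
# functional forms and constants are assumptions, not from the article).
# Pyramidal drive saturates with dopamine level, while interneuron-
# mediated feedforward inhibition keeps rising, so net output traces an
# inverted U.
def net_pyramidal_output(da):
    pyramidal_drive = 1.0 - math.exp(-da)  # saturating D1 effect on pyramidal cells
    feedforward_inhibition = 0.25 * da     # later-dominating D1 effect via interneurons
    return pyramidal_drive - feedforward_inhibition

levels = [0.5 * i for i in range(11)]      # dopamine levels from 0.0 to 5.0
outputs = [net_pyramidal_output(d) for d in levels]
peak_level = levels[outputs.index(max(outputs))]
print(peak_level)  # 1.5: an intermediate dopamine level maximizes net output
```

The same qualitative shape underlies the inverted-U accounts in the Mattay et al. summary above: too little or too much dopamine signaling both yield lower net output than an intermediate level.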

There are two lines of evidence for a possible differential effectiveness of dopamine at D1 receptors in pyramidal versus non-pyramidal cells. First, pyramidal-cell dendrites have a higher density of close contacts with dopaminergic axon terminals than interneuron dendrites, and are therefore in closer proximity to dopamine release sites than interneurons. Second, the D1 receptor acts via a cascade of diffusible second messengers. On pyramidal neurons, the spine may act as a diffusion barrier that maintains a high concentration of second messengers at the associated excitatory synapse for maximal effect. On interneurons, D1 receptors and asymmetric synapses are located on the dendritic shaft, which allows more diffusion of second messengers and leads to a reduced effect at the adjacent asymmetric synapse. This model thus explains the relationship between D1 receptor stimulation and working memory, but other aspects of the modulatory control of cognitive function have not been studied yet.
