Summary of Cognitive Neuroscience: The Biology of the Mind - Gazzaniga et al. - 4th edition


How have neurosciences evolved over the years? - Chapter 1

Cognitive neuroscience got its name in the late 1970s. Cognition refers to the process of knowing or understanding, and neuroscience studies the organization and function of the nervous system. The major limitation in the past was that there was no method to examine the psyche systematically and experimentally. Willis was the first to relate specific brain damage to behavioral disruptions, and he developed a theory that described how information is transported through the brain (neuronal conduction). Thus, Willis stands at the beginning of the discipline of cognitive neuroscience.

What was formerly thought about the function of the brain?


Modern science is essentially concerned with the question of whether the brain works as a whole or as a collection of independent parts. In the nineteenth century, Gall and Spurzheim developed phrenology, building on Willis' proposition that isolated brain damage could affect behavior. Proponents of phrenology stated that every part of the brain is involved in a specific function, such as language or a personality trait; a total of 35 specific functions were distinguished. The fundamental idea of phrenology is that if a function is used more often, the corresponding part of the brain grows, creating a bump in the skull. Personality could then be determined by examining these bumps from the outside. Gall called this anatomical personology. He did not test his ideas, but his notion of specific functions in specific brain parts is in line with Willis' idea.

Aggregate field theory

Flourens put Gall's ideas to the test. He damaged the brains of animals and discovered that some parts of the brain were indeed responsible for specific functions, but that other functions, such as memory and cognition, were scattered throughout the brain. Based on this animal research, he concluded that the brain functions as a whole, an idea he called the aggregate field theory.

Hughlings Jackson observed patients with brain damage and arrived at a topographic organization of the cerebral cortex: every physical function in the body was related to activity in a certain part of the brain, which again corresponds to localization theory. For example, he discovered that lesions on the right side of the brain affected visuospatial processes more than lesions on the left. Moreover, he found that the total loss of a function after brain damage was rare, from which he concluded that multiple regions contribute to a given behavior.

The brain and language

After a stroke, one of Broca's patients was almost unable to speak, but could still understand language. The brain damage was located in the left inferior frontal lobe, now known as Broca's area, which is involved in producing language.

Wernicke described a patient who could still speak fluently after a stroke, but whose speech was incomprehensible. Moreover, the patient could not understand spoken or written language. This patient also had damage in the left hemisphere, but in an area posterior to Broca's area. Wernicke's area lies at the point where the temporal and parietal lobes meet.

Cytoarchitecture

At that time, brain research in humans was only possible after the death of a patient, so researchers focused on animals. Fritsch and Hitzig discovered that electrical stimulation of certain areas of a dog's brain led to certain movements. Because the different regions seemed to have different functions, they reasoned that there should be cellular differences between them. Researchers therefore examined cell types under the microscope, visualizing them with stains. On this basis, Brodmann distinguished 52 separate areas in the brain. This form of science became known as cytoarchitectonics.


Golgi is responsible for a major shift in neuroscience. He developed a stain that turned neurons black, making individual neurons visible. However, he continued to see the brain as a continuous mass of tissue sharing the same cytoplasm (a syncytium). Cajal used Golgi's stain and discovered that neurons are separate units (the neuron doctrine) and that electrical signals travel through them in one direction, from dendrite to axon.

Sherrington also saw the neuron as a separate component and introduced the term synapse to describe the connection between two neurons. Yet at the start of the twentieth century, many scientists continued to hold the holistic view that functions were integrated by networks of interacting neurons.

What did psychological research into the brain look like in the past?

Rationalism and empiricism

Donders started research on reaction times in 1869 to determine the differences in cognitive processing. Until the emergence of experimental psychology, philosophers were mainly concerned with issues related to the mind. They held two visions: rationalism and empiricism. According to rationalism, all knowledge can be reached through proper reasoning. Rationalism is often confused with logical thinking, which depends on induction, statistics and probabilities; rationalism, however, is a more complex and comprehensive way of thinking. According to empiricism, all knowledge follows from sensory experience; empiricist philosophers emphasized the role of the senses. The associationist view that followed was the basis for experimental psychology. It stated that simple ideas arise through direct experience, but that by associating them with one another, very complex concepts can arise. Psychological associationists believe that a person's total experience determines his or her mental development.
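Donders' logic was to estimate the duration of a mental stage by subtracting the reaction time of a simpler task from that of a task containing one extra stage. A minimal sketch of that subtraction method (the numbers below are made up for illustration, not data from the text):

```python
# Toy illustration of Donders' subtraction method. The reaction times
# used here are hypothetical, chosen only to show the arithmetic.

def stage_duration(rt_complex_ms: float, rt_simple_ms: float) -> float:
    """Estimate the duration of the extra processing stage (ms)."""
    return rt_complex_ms - rt_simple_ms

simple_rt = 220.0   # detect a light and press a button (hypothetical)
choice_rt = 285.0   # same task, but first decide which of two lights lit up

decision_time = stage_duration(choice_rt, simple_rt)
print(f"Estimated decision stage: {decision_time:.0f} ms")
```

The subtraction only works if the extra stage can be inserted without changing the other stages, an assumption that later chronometric work had to refine.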

Observable behavior

Ebbinghaus was one of the first scientists to measure and analyze internal mental processes, such as memory. Thorndike discovered that if a response is rewarded, it becomes a habit; without reward, the habit fades. According to Watson, psychology could only be objective if it was based on observable behavior. He held that learning was the decisive factor and that, given the right environment, it allowed anyone to develop in any direction.

Montreal procedure and Hebb

Although American psychologists focused on behaviorism, psychologists in Canada and Great Britain did not. Montreal became an important place for new ideas about the influence of biology on cognition and behavior. The Montreal procedure was developed by Penfield, among others, as a treatment for epilepsy: the neurons that caused the epileptic seizures were destroyed. To determine which neurons should be removed, the exposed cortex was stimulated with small electric currents while the surgeons observed how the awake patient reacted. As a by-product, this yielded a kind of sensory and motor map of the brain.

Hebb was convinced that brain mechanisms explain behavior and that the psychology and biology of an organism cannot be viewed separately. Today this idea is accepted, but at the time it was not. Hebb wrote a book in which he argued that neurons combine into processing units that determine the brain's responses to stimuli. He also argued that the brain is always active, and that input from the outside world only modifies the already existing activity. Milner was the first to show that there are multiple memory systems.


Behaviorism, however, continued to dominate until around 1950. The greatest contribution to the turnaround came from Chomsky, who argued that language depends on following grammatical rules and cannot be explained by learning theory. Because this principle is so complex and yet universal, it had to be innate. Miller, for a long time a convinced behaviorist, abandoned this vision in the same period and showed the limits of short-term memory with his 7 ± 2 chunks. Not much later, cognitive neuroscience was born.

Prefrontal cortex

It became clear that cognitive neuroscience depends on many disciplines, such as linguistics, computer science, biochemistry and anatomy. Goldman-Rakic described the networks of the prefrontal cortex and how they relate to working memory. She also discovered individual cells in the prefrontal cortex with specific memory tasks. In addition, she investigated the influence of dopamine on the prefrontal cortex.

Which instruments are used in neuroscience?

Changes in electrical activity, fluctuations in blood flow, and changes in the consumption of oxygen and glucose accompany the brain's work. These are the parameters that the various methods analyze to investigate how mental activities are supported by the functions of the brain.

Measuring electrical activity and blood flow

The spontaneous electrical activity of the cerebral cortex was first measured by Richard Caton, who recorded it in monkeys and dogs. The more modern version was developed by the Dutch physiologist Willem Einthoven, who built the "string galvanometer". This allowed him to make photographic recordings of electrical activity. Recordings of brain activity made this way later became known as the electroencephalogram (EEG).

Later, more researchers began to focus on measuring blood flow in the brain. Angelo Mosso used a device placed directly on the skull and discovered that local changes in pulsation occurred in the brain during mental activity. Seymour Kety developed a method to measure the blood flow and metabolism of the human brain as a whole. His animal studies showed that blood flow is directly related to brain function.

CAT and PET scans

In the 1930s, Vallebona developed tomographic radiography, a technique in which a series of transverse sections is examined. Hounsfield was the first to perform a successful computerized axial tomography (CAT) scan to obtain a three-dimensional image of the brain. However, a CAT scan reveals little about the function of the brain. A new technique was therefore developed: the PET scan (positron emission tomography), a non-invasive technique that could provide information about brain function. Later the PETT, positron emission transaxial tomography, was developed. Radioactive versions of oxygen, nitrogen and carbon were produced and injected into the bloodstream. These are incorporated into molecules in an organ, where the radioactivity then decays. This decay is measured over time, so that conclusions can be drawn about metabolism. Later, a PET method was also developed to measure glucose consumption.

MRI scan

MRI, magnetic resonance imaging, is based on the principle of nuclear magnetic resonance, which was first described and measured by Isidor Rabi in 1938. Protons in water molecules align with a magnetic field. By transmitting radio-frequency pulses, a voltage is induced in a receiving coil. This voltage changes over time as a function of the protons' environment, so the voltages can inform us about the tissue. The MRI scan was born when researchers demonstrated that injecting a contrast material into the bloodstream revealed changes in the blood volume of the brain, and that these changes could be measured with MRI.
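The summary does not spell out the underlying relation. The key physical fact (standard NMR physics, not stated in the text) is that protons precess around the magnetic field at the Larmor frequency, and only radio-frequency pulses at that frequency are absorbed:

```latex
\omega_0 = \gamma B_0, \qquad \frac{\gamma}{2\pi} \approx 42.58~\text{MHz/T} \text{ for the } {}^{1}\mathrm{H}\ \text{proton}
```

A 3 T scanner, for example, therefore operates at roughly 128 MHz.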

fMRI scan

When the PET scan was introduced, the idea existed that increased blood flow to active parts of the brain was driven by the brain's need for more oxygen: an increased supply of oxygen allows more glucose to be metabolized, making more energy available to perform a task. Although this idea sounded reasonable, there was little data to verify it. It was later found that hemoglobin without oxygen behaves differently in a magnetic field than hemoglobin with oxygen, because deoxygenated hemoglobin is slightly magnetic. This contrast was found to depend on the blood oxygen level (BOLD, blood oxygen level dependent), which led to the fMRI technique, functional magnetic resonance imaging. No ionizing radiation is used, the images are related to brain function, and the technique is very sensitive. However, it was difficult to compare the brains of different individuals, since the anatomical locations did not match.

What is the structure and function of the nervous system? - Chapter 2

The nervous system consists of neurons and glial cells. Neurons spread information through the nervous system by adjusting their activity level. Glial cells have various functions in the nervous system, such as support, insulation and modulation of neuronal activity.

What does the structure of neurons look like?

Parts of a neuron

A neuron consists of three important parts: a cell body (soma), dendrites and an axon. The cell body contains the cellular machinery for the production of proteins and other cellular macromolecules. Like other cells, a neuron contains a nucleus, an endoplasmic reticulum, ribosomes, mitochondria, a Golgi apparatus, and other organelles. These structures are suspended in the cytoplasm, which contains potassium, sodium, chloride and calcium ions as well as molecules such as proteins. The same ions are also present in the fluid outside the cell.

Dendrites receive input from other neurons. They can be complex or simple, depending on the type and location of the neuron. On the dendrites are small protrusions with which signals can be received; these are called spines. The axon carries electrical signals away from the cell body toward the axon terminals.

Transmission takes place at a synapse, a specialized structure where two neurons come into contact so that chemical or electrical signals can be transmitted from one cell to the other. Some axons form axon collaterals, branches that can send signals to multiple cells. Many axons are wrapped in layers of a fatty substance called myelin. The gaps in the myelin sheath are called nodes of Ranvier.

How is communication between neurons achieved?

The main purpose of neurons is to analyze and pass on information. The communication between two neurons is called neuronal signaling. There is transmission between neurons, which goes through synapses, and within neurons, which goes through changes in the electrical state of the neuron. Most neurons are both presynaptic and postsynaptic: a neuron is presynaptic when its axon connects to other neurons and postsynaptic when other neurons connect to its dendrites.

The membrane potential

Generating neuronal signals takes energy, and this energy comes from the electrical potential across the neuron's membrane: the voltage difference between the inside and the outside of the neuron. This voltage depends on the concentrations of potassium, sodium and chloride ions and of charged molecules inside and outside the cell. At rest, the neuron is more negatively charged on the inside than on the outside, namely by -70 millivolts (mV). This is called the resting membrane potential.

The membrane of a neuron consists of lipids (fatty molecules) and separates an extracellular from an intracellular space. Because the layer is fatty, it retains its shape in a watery environment, and substances dissolved in water (ions and proteins) cannot easily pass through it.

The membrane also contains transmembrane proteins that transport certain molecules in and out of the neuron. Examples of this are ion channels or ion pumps. Ion channels allow ions to move along with the concentration gradient, while ion pumps transport ions against the concentration gradient through active transport, that is, from low to high concentration.

Ion channels are selective for potassium, sodium, calcium or chloride ions. The degree to which a certain ion can traverse the membrane through a certain ion channel is called the permeability. The membrane is more permeable to some ions than to others: for example, it allows more potassium ions (K+) to pass than sodium (Na+) or chloride ions (Cl-). This property is called selective permeability, and it exists because there are more ion channels selective for potassium than for sodium. Gated ion channels can change their permeability for a certain ion in response to changes in voltage or to a chemical or physical stimulus. Ion channels without this property are called nongated ion channels.

More K+ is present inside the cell, while more Na+ and Cl- are present outside the cell. This distribution is maintained by active transport via an ion pump. An active transporter such as the sodium-potassium pump uses adenosine triphosphate (ATP) as its energy source: the pump is a protein that can release energy from ATP and uses it to pump sodium out of the cell and potassium in. For each ATP molecule, three sodium ions are pumped out and two potassium ions are pumped in.

The relatively impermeable membrane ensures that the ions do not simply diffuse back. When the membrane is at rest (resting potential), there are more sodium ions outside the neuron and more potassium ions inside it. The concentration gradients therefore do their work: sodium tends to flow into the cell (from high to low concentration) and potassium tends to flow out. Because the membrane is more permeable to potassium, some potassium leaks out. This creates a difference in charge across the membrane, an electrical gradient: as more potassium leaves, the outside becomes more positively charged, and this electrical gradient pulls potassium back toward the inside of the neuron. A balance is reached in which the electrical gradient equals the concentration gradient; this is called the electrochemical equilibrium. The result is a difference in charge, namely the resting potential of -70 mV.
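The balance between concentration and electrical gradients can be made concrete with the Nernst equation (for a single ion) and the Goldman equation (for the whole membrane). The text does not give these formulas; the sketch below uses standard textbook concentrations (in mM) and relative permeabilities as illustrative assumptions:

```python
import math

# Sketch of the electrochemistry behind the resting potential, using the
# Nernst and Goldman equations. Concentrations (mM) and relative
# permeabilities below are typical textbook values, assumed for
# illustration, not taken from the summary itself.

RT_F = 26.73  # RT/F in millivolts at body temperature (37 °C)

def nernst(z: int, conc_out: float, conc_in: float) -> float:
    """Equilibrium potential (mV) of a single ion species with valence z."""
    return (RT_F / z) * math.log(conc_out / conc_in)

def goldman(p_k, p_na, p_cl, k_o, k_i, na_o, na_i, cl_o, cl_i) -> float:
    """Resting potential (mV) from relative permeabilities (Goldman equation).
    Chloride, being negatively charged, has its concentrations swapped."""
    num = p_k * k_o + p_na * na_o + p_cl * cl_i
    den = p_k * k_i + p_na * na_i + p_cl * cl_o
    return RT_F * math.log(num / den)

e_k = nernst(+1, conc_out=5, conc_in=140)    # potassium: strongly negative
e_na = nernst(+1, conc_out=145, conc_in=15)  # sodium: positive
v_rest = goldman(p_k=1.0, p_na=0.04, p_cl=0.45,
                 k_o=5, k_i=140, na_o=145, na_i=15, cl_o=110, cl_i=10)
print(f"E_K = {e_k:.0f} mV, E_Na = {e_na:.0f} mV, V_rest = {v_rest:.0f} mV")
```

Because the resting membrane is mostly permeable to K+, the computed resting potential lands close to E_K but slightly less negative, matching the roughly -70 mV stated in the text.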

The action potential

The process of information transport starts when excitatory postsynaptic potentials (EPSPs) at synapses cause a change in charge in the receiving dendrites. With enough strength, these changes reach the axon terminals. However, this rarely happens directly, because electrotonic (or decremental) conduction decreases with distance. A single EPSP usually does not make its own cell fire, let alone send the signal to another cell. This problem is solved by action potentials. An action potential is a rapid depolarization and repolarization of the membrane at a specific location. An action potential maintains its signal strength because it regenerates itself through voltage-gated ion channels in the cell membrane. These are located in the spike-triggering zone at the axon hillock, the zone where an action potential starts. The various EPSPs are summed at the axon hillock and depolarize the membrane in the spike-triggering zone. This depolarization must reach a certain threshold value, namely -55 mV, otherwise no action potential arises.

When the threshold value is reached, Na+ channels open, which causes depolarization, which in turn causes even more Na+ channels to open. This self-reinforcing process is called the Hodgkin-Huxley cycle. After a short delay, voltage-gated potassium channels open, causing potassium ions to flow out of the neuron and repolarize it. This ultimately brings the neuron back to the resting potential.

Due to the outflow of potassium ions, the neuron repolarizes even below the resting potential (hyperpolarization), toward the equilibrium potential of K+. The equilibrium potential is the voltage at which there is no net flux of ions. After hyperpolarization, the K+ channels close and the resting potential returns. This is followed by an absolute refractory period, in which the Na+ channels cannot open and therefore no new action potential can arise, and then by a relative refractory period, in which an action potential is possible but requires a larger depolarizing current. As a result, at most about 200 action potentials can be generated per second, and the action potential can only travel in one direction.

The role of myelin

An important aspect of neuronal communication is the speed at which signals are transmitted. Myelin accelerates the transport of action potentials: it makes axons resistant to voltage loss and provides electrical insulation, so that small currents are carried through the axon over a greater distance. As a result, the action potential needs to be regenerated only at the nodes of Ranvier, so it appears to jump from node to node. This is called saltatory conduction.

An action potential is an all-or-nothing phenomenon: it always has the same amplitude. A more intense stimulus is therefore not signaled by a higher amplitude, but by a higher firing rate.
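The ideas above (a fixed threshold, all-or-nothing spikes, intensity coded as rate, and a refractory cap) can be sketched with a leaky integrate-and-fire model, a deliberately strong simplification of the Hodgkin-Huxley dynamics described in the text. All parameter values except the -70 mV rest and -55 mV threshold are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire sketch. Not the Hodgkin-Huxley model
# itself: channels are replaced by a simple leak term plus a hard
# threshold, reset, and refractory period. Parameters are illustrative.

def spike_count(input_current: float, duration_ms: int = 1000) -> int:
    v_rest, v_thresh = -70.0, -55.0   # mV, values from the text
    tau = 10.0                        # membrane time constant (ms), assumed
    refractory = 5.0                  # ms -> caps the rate near 200 spikes/s
    dt = 0.1                          # integration step (ms)
    v, cooldown, spikes = v_rest, 0.0, 0
    for _ in range(int(duration_ms / dt)):
        if cooldown > 0:              # absolute refractory period
            cooldown -= dt
            continue
        # leak toward rest plus injected current
        v += dt * (-(v - v_rest) / tau + input_current)
        if v >= v_thresh:             # all-or-nothing: every spike is alike
            spikes += 1
            v = v_rest                # reset after the spike
            cooldown = refractory
    return spikes

weak, strong = spike_count(2.0), spike_count(10.0)
print(weak, strong)  # stronger input -> more spikes per second
```

Running it shows the coding principle directly: a sub-threshold input produces no spikes at all, and increasing the input raises the firing rate rather than the spike amplitude, up to the limit set by the refractory period.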

How does synaptic transmission work?

Synaptic transmission is the transmission of signals by neurons in electrical or chemical synapses.

Chemical transmission

Chemical transmission proceeds in a number of steps. First, an action potential must reach the ends of the axon. The terminals depolarize, causing calcium (Ca2+) ions to flow in. Vesicles containing neurotransmitters then fuse with the membrane, and neurotransmitters are released into the synaptic cleft. The neurotransmitter binds to a receptor on the postsynaptic membrane. This can lead to an excitatory or an inhibitory potential; inhibitory potentials are called inhibitory postsynaptic potentials (IPSPs).


Today, more than a hundred different neurotransmitters are known.

There are four criteria that a substance must meet to be a neurotransmitter:

  1. It is synthesized in the presynaptic neuron and stored in its terminal;

  2. It is released when action potentials depolarize the terminal;

  3. The postsynaptic neuron must have receptors for it;

  4. When administered artificially, it must produce the same response as actual stimulation of the presynaptic neuron would.

There are different ways to classify neurotransmitters, for example biochemically. Neurotransmitters can then be subdivided into: amino acids (GABA, glutamate), biogenic amines (dopamine, norepinephrine), acetylcholine and neuropeptides. Neuropeptides are short chains of amino acids and can be divided into five groups:

  • Tachykinins, such as substance P;

  • Neurohypophyseal hormones, such as oxytocin and vasopressin;

  • Hypothalamic releasing hormones, such as corticotropin-releasing hormone and somatostatin;

  • Opioid peptides such as endorphins and enkephalins;

  • Other neuropeptides

A neuron generally releases one or a few neurotransmitters. The effect of a neurotransmitter, excitatory or inhibitory, depends on the postsynaptic receptor and on the neuron's connections. Examples of excitatory neurotransmitters are acetylcholine, glutamate, catecholamines, histamine, serotonin and some neuropeptides. Examples of inhibitory neurotransmitters are GABA, glycine and some peptides. Conditional neurotransmitters only produce an effect in the postsynaptic neuron if their release coincides with other factors.

After release, and after part of the neurotransmitter has bound to the postsynaptic neuron, some neurotransmitter remains in the synaptic cleft. These molecules are removed by being reabsorbed by the presynaptic neuron, by being broken down by enzymes, or by diffusing away. The transmitter level is regulated via autoreceptors.

Electrical transmission

Electrical synapses have no synaptic cleft; the two neurons are in direct contact, and their cytoplasms are continuous through gap junctions. This makes them isopotential (having equal electric potential), which means that electrical change happens almost simultaneously in both cells, although the depolarization is smaller in the postsynaptic neuron. Electrical transmission is fast (useful for fight-or-flight reflexes) and allows groups of neurons to work synchronously, but there are also disadvantages: it is less plastic, and the signal cannot be amplified.

What are glial cells?

Glial cells occur in both the central and peripheral nervous systems and are roughly equal in number to neurons. They have various functions: structural support, forming the blood-brain barrier, accelerating information transport, and modulating neural activity.

Types of glial cells

There are three types of glial cells. Astrocytes are large and symmetrical; they surround neurons and are in contact with blood vessels. This allows them to transport ions through the vascular wall, and they form the blood-brain barrier, the barrier between the central nervous system and the surrounding blood vessels that protects the brain against blood-borne agents and pathogens. They also modulate neural activity. Microglia are small and irregularly shaped; they respond to tissue damage by surrounding damaged cells and disposing of them. The last type of glial cell is the oligodendrocyte, which forms myelin in the central nervous system; in the peripheral nervous system this task is performed by Schwann cells. Myelin is a good insulator because it consists of layers of fat, which are poor electrical conductors.

What are neural circuits?

So far, we have only discussed one or two neurons at a time. However, neurons rarely work in isolation; they almost always operate in networks. Neuronal communication depends on the patterns of connectivity in the nervous system. Extensive interconnectivity means that each neuron is only a few synapses away from every other neuron, and that each neuron contributes only a tiny amount to overall functioning. Connections between two cortical regions are called corticocortical connections; the first part of such a name refers to the source and the second to the target. Inputs that originate in subcortical structures, such as the thalamus, are called thalamocortical connections; projections in the opposite direction are called corticothalamic or corticofugal projections.

Groups of related neurons that process specific kinds of information form neural circuits. These have different forms and goals. What neural circuits have in common is that they take in information (the afferent input), evaluate that input at synapses and within single neurons or groups of neurons (the local circuit neurons), and send the results to other neurons, muscles or glands (the efferent output). A feature of some neural circuits is that they exhibit plasticity, meaning that the patterns of activation within the circuit can change. This forms the basis for learning during development. Neural circuits can be combined into neural systems.

What are the components of the nervous system?

The nervous system consists of the central nervous system (CNS), which comprises the brain and spinal cord, and the peripheral nervous system (PNS), which comprises the nerves and ganglia outside the CNS. The PNS brings sensory information to the CNS and receives motor information from the CNS. Two systems are important here: the somatic motor system for control of the voluntary muscles and the autonomic motor system for control of visceral functions.

The autonomic nervous system

The autonomic nervous system (also known as the autonomic motor system) controls involuntary actions of smooth muscles, the heart and various glands, such as reflexes. It consists of two divisions: the sympathetic and the parasympathetic. The sympathetic division mainly uses norepinephrine (noradrenaline) as its transmitter, the parasympathetic division acetylcholine. The two systems usually have antagonistic functions: the sympathetic nervous system prepares the body for action (fight or flight), while the parasympathetic system handles maintenance functions in the body.

The central nervous system

The CNS consists of the brain and spinal cord, which are protected by bone and cerebrospinal fluid (CSF). It is surrounded by three protective membranes (meninges): the outer membrane is the dura mater, the middle one the arachnoid mater and the inner one the pia mater. The CSF is located between the arachnoid and pia mater (the subarachnoid space), in the ventricles, cisterns and sulci, and in the central canal of the spinal cord. In the CNS, neurons are arranged in two ways: in nuclei or in layers. Nuclei contain cell bodies and their connections and are located in the brain and spinal cord. The outermost layer of the brain is the cerebral cortex, which itself consists of several layers. The gray matter consists of the cell bodies of neurons; the white matter consists of axons and glial cells. These axons are grouped in tracts: association fibers connect regions within a hemisphere, while fibers that cross to the other hemisphere are called commissures. The largest connection between the two hemispheres is formed by the corpus callosum. Projection tracts run from the cerebral cortex to subcortical structures and the spinal cord.

What parts does the brain consist of?


The top of the brain is superior or dorsal; the bottom is inferior or ventral. Rostral refers to the front and caudal to the back; anterior and posterior also refer to the front and back of the brain. In the spinal cord, rostral refers to the direction toward the head. A sagittal section runs from the nose to the tail. If this section is exactly in the middle, it is called a midsagittal or medial section; when it is slightly to the side, it is a lateral section. A coronal section is cut in a vertical plane that divides the brain into a front and a back part. A horizontal cut, separating dorsal from ventral parts, gives an axial, transverse, or horizontal section.

The spinal cord

The spinal cord takes in sensory information from peripheral sensory receptors, transmits it to the brain, and carries the final motor signals from the brain to the muscles. It runs from the brainstem to the cauda equina and is surrounded by the spine, which consists of cervical, thoracic, lumbar, sacral and coccygeal sections. The spinal cord consists of 31 segments; each segment has a spinal nerve on the right and the left side that passes through openings (foramina) in the spine. Each nerve carries sensory and motor axons: an afferent neuron carries sensory input into the spinal cord through the dorsal root, and an efferent neuron carries motor output out through the ventral root.

The peripheral region of the spinal cord consists of white matter. The centrally located gray matter has the shape of a butterfly, in which two horns can be distinguished on each side: the dorsal horn and the ventral horn. The ventral horn contains the larger motor neurons and projects to the muscles. The dorsal horn contains sensory neurons and interneurons. The interneurons project onto motor neurons on the same (ipsilateral) and opposite (contralateral) sides of the spinal cord and help coordinate limb movements. The gray matter surrounds the central canal, which is continuous with the ventricles and filled with CSF. CSF maintains pressure in the brain and cushions the brain and spinal cord against shocks. The ventricles consist of the lateral ventricles, which are connected to the third ventricle; the cerebral aqueduct connects the third and fourth ventricles. The choroid plexus produces the CSF, which contains proteins, glucose and ions. It circulates through the ventricles and cerebral aqueduct to the subarachnoid space, and is reabsorbed by the arachnoid villi in the sagittal sinus.

The brainstem

The brainstem consists of three parts: the midbrain (mesencephalon), the pons and the cerebellum (metencephalon), and the medulla (myelencephalon). The brainstem contains motor and sensory nuclei, nuclei of neurotransmitter systems, and white matter tracts. It controls breathing and consciousness (sleep/wake). Damage to the brainstem can therefore be life-threatening.

The medulla is the most caudal part of the brainstem. It contains cell bodies of several cranial nerves and is therefore involved in motor and sensory control of the face, neck, abdomen and throat, and in motor control of the heart. It is important for vital functions such as heart rate and breathing. Somatosensory information from the spinal cord synapses in the gracile and cuneate nuclei of the medulla and then continues, via a synapse in the thalamus, to the somatosensory cortex. In the pyramidal decussation of the medulla, corticospinal motor axons cross from right to left and vice versa.

The pons is anterior to the medulla. Many cranial nerves synapse in the pons, which is important for eye movements and for movements of the mouth and face. In addition, it is important for relaying auditory information, for modulation of arousal (as part of the reticular formation), and for rapid eye movement (REM) sleep.

The cerebellum overlies the brainstem at the level of the pons and contains a very large number of neurons. It forms the roof of the fourth ventricle and is attached to the brainstem by the cerebellar peduncles, which carry its input and output tracts. The cerebellar cortex receives sensory and motor information for maintaining balance and making controlled movements. From its deep nuclei, the cerebellum sends information to the thalamus, and from there to the motor and premotor cortex, or to the brainstem. The cerebellum integrates information about the body with motor commands and thus ensures coordinated movements. It is, however, also important in cognitive processing.

The midbrain is superior to the pons and surrounds the cerebral aqueduct. It consists of the tectum and tegmentum. The midbrain also contains some ganglia of cranial nerves and the superior colliculi and inferior colliculi. These are respectively important for observing peripheral objects and orienting our gaze, and for locating and orienting to auditory stimuli. The red nucleus is involved in motor coordination. The midbrain also contains the mesencephalic reticular formation.


The diencephalon

The diencephalon consists of the thalamus and hypothalamus. The thalamus has two halves, one in each hemisphere, which are connected by the massa intermedia. Above the thalamus lie the fornix and corpus callosum; lateral to it lies the internal capsule, with ascending and descending axons that connect the cortex with the spinal cord and brainstem. All sensory signals, with the exception of smell, pass through the thalamus on their way to the different sensory areas. The thalamus also receives input from the basal ganglia, cerebellum, neocortex and medial temporal lobe, and sends projections back to them. Motor pathways from the cerebellum and basal ganglia likewise relay through the thalamus on their way to the cortex. The thalamus is divided into a number of nuclei. The lateral geniculate nucleus receives information from the retina and transmits it to the primary visual cortex. The medial geniculate nucleus receives auditory information from the ear and sends it to the primary auditory cortex. The ventral posterior nuclei (medial and lateral) receive somatosensory information and send it to the primary somatosensory cortex. The pulvinar nucleus plays a role in the control of attention and the integration of information from cortical areas.


The hypothalamus connects the endocrine system with the nervous system. The mammillary bodies, nuclei and fiber pathways of the hypothalamus receive input from the limbic system. Together with the reticular formation, amygdala and retina, the hypothalamus is important for circadian rhythms. It sends information to the prefrontal cortex, amygdala and spinal cord. It also maintains homeostasis, the normal state of the body, through the endocrine system and through control of the pituitary gland. By releasing hormones, the hypothalamus influences hormone production elsewhere, and it is in turn influenced by the hormones circulating in the blood.

The limbic system

The limbic system consists of different structures that surround the brainstem: the cingulate gyrus (which runs above the corpus callosum from front to back), the hypothalamus, the anterior thalamic nuclei and the hippocampus. The limbic system is involved in emotional processes. The classical limbic system also contains the amygdala, the orbitofrontal cortex, and parts of the basal ganglia.

Basal ganglia

The basal ganglia consist of the globus pallidus, caudate nucleus, putamen, subthalamic nucleus, and substantia nigra. The caudate nucleus and putamen together are referred to as the striatum. The basal ganglia receive input from sensory and motor cortical areas; the striatum also receives input from the thalamus. They are important for action selection, motor preparation, timing, fatigue, and task switching. Dopamine signals in the basal ganglia encode the discrepancy between predicted and actual reward and are important in motivation and learning; the basal ganglia are therefore also central to reward-based learning and goal-directed behavior. The basal ganglia send the integrated sensory and motor information on to the motor and prefrontal cortex.

What are the different subdivisions of the cerebral cortex?

The cerebral cortex surrounds core structures such as the limbic system, the basal ganglia and the diencephalon. The cortex is folded into sulci (grooves) and gyri (bumps), which can be seen on the surface of the brain; this folding saves space. Without it, our heads would have to be much bigger. Folding also brings neurons closer together, so that neural connections can be shorter and signals can be transmitted more quickly between cortical areas. The cerebral cortex is on average three millimeters thick and contains a relatively large number of cell bodies, giving it a gray color (gray matter). It also contains dendrites, axons, and blood vessels.

The cerebral cortex can be subdivided in various ways. Below we discuss the anatomical subdivision, the cytoarchitectonic subdivision and the functional subdivision.

Anatomical subdivision

The hemispheres have four lobes, named after the skull bones that overlie them. Every hemisphere consists of a frontal, parietal, temporal and occipital lobe. The frontal and parietal lobes are separated by the central sulcus. The temporal lobe is separated from the frontal and parietal lobes by the lateral fissure, also known as the Sylvian fissure. On the dorsal surface, the occipital lobe is separated from the parietal lobe by the parieto-occipital sulcus; ventrolaterally, the pre-occipital notch marks its boundary with the temporal lobe.

The left and right hemispheres are separated by the interhemispheric or longitudinal fissure. The two hemispheres are connected via the corpus callosum, which consists of axons of cortical neurons. The insula lies hidden beneath the temporal and frontal lobes.

Cytoarchitectonic subdivision

Cytoarchitectonics divides the brain on the basis of similarities in the shape and organization of its cells. Brodmann divided the brain into 52 regions in this way. Other anatomists have arrived at as many as 200 different regions, but these are probably not all valid.

In the end, a part of the brain can be identified by the number Brodmann gave it, its cytoarchitectonic name, its gross anatomical name, or its functional name. Given current research techniques, the functional name can change quickly.

The cerebral cortex can also be divided according to the layering patterns of its tissue: the neocortex, the mesocortex and the allocortex. The neocortex is the largest part and consists of six layers, each containing different types of neurons. Layer 4 receives input from the thalamus and from other cortical areas; the deep layers (5 and 6) send output back to the thalamus and other subcortical structures; the superficial layers project within the cortex. Neurons from the different layers form minicolumns and larger circuit columns, and thus function as units. The mesocortex describes the paralimbic system, which consists of the cingulate gyrus, parahippocampal gyrus, insular cortex and orbitofrontal cortex; it is a transitional zone lying between the neocortex and the allocortex. The allocortex has one to four layers and consists of the hippocampal complex and the primary olfactory cortex.

Functional subdivision

The lobes of the cerebral cortex play a role in various neural processes. Several lobes may be involved in a single process, but most of the time a process takes place primarily in one lobe. The most prominent processes per lobe are discussed below.

Frontal lobe

The frontal lobe is primarily involved in the planning and execution of movements. It can be subdivided into the motor cortex and the prefrontal cortex. The axons of neurons in the motor cortex run to the spinal cord and the brainstem and synapse on motor neurons in the spinal cord. The prefrontal cortex is involved in the planning, organization and execution of behavior, which requires integrating information over time. The frontal lobe is therefore important in executive functioning. When it is damaged, people may show less goal-directed behavior and have difficulty motivating themselves to start actions, or modulating and stopping actions once they have started. The prefrontal cortex consists of the dorsolateral prefrontal cortex, the orbitofrontal cortex, the ventrolateral prefrontal cortex and the medial prefrontal areas, including the anterior cingulate cortex.

Parietal lobe

The parietal lobe is primarily involved in the integration of sensory information, which is received from the outside world through the thalamus. The information goes to the primary somatosensory cortex, directly caudal to the central sulcus, and then to the secondary somatosensory cortex.

The somatosensory and motor cortices are topographically organized: adjacent body parts are represented by adjacent patches of cortex. This body map is called the homunculus. Body parts that require finer control are represented by a disproportionately large cortical area.

Occipital lobe

The occipital lobe is primarily involved in visual processing. Visual information travels through the retina, optic nerve and lateral geniculate nucleus to the primary visual cortex (also called the striate cortex). This path is called the retino-geniculo-striate or primary visual pathway. Pathways also run from the retina to subcortical regions such as the superior colliculus, which is important for visuomotor functions. The primary visual cortex consists of six layers, with cells that respond to color, light, spatial frequency, orientation and movement. The extrastriate visual cortex is located around the striate cortex.

Temporal lobe

The temporal lobe is primarily involved in auditory processing. Auditory information travels from the cochlea via the medial geniculate nucleus of the thalamus to Heschl's gyri, the primary auditory cortex, in the superior temporal cortex. The auditory cortex has a tonotopic organization: neighboring frequencies are represented by neighboring regions, with low frequencies at one end of the cortex and high frequencies at the other. The auditory association area surrounds the primary auditory cortex.

The association cortex is the part of the neocortex that does not exclusively deal with either sensory or motor information. The association cortex is involved in higher mental processes and integrates input from sensory and motor parts of the cortex. Every sense has a sensory association area, which is important for the meaning of, for example, visual and auditory information.

How does the development of the nervous system proceed?

Development before birth

After fertilization, the cells of the blastula begin to specialize. The blastula has three main cell lines: ectoderm, mesoderm and endoderm. The ectoderm forms the nervous system, the skin, the lens of the eye, the ear and the hair. The mesoderm forms the skeleton and muscles, and the endoderm forms the digestive system. The brain develops from the neural tube. The tube closes and its cavity forms the ventricles. If the tube does not close properly, this can lead to anencephaly or spina bifida. Eventually the anterior part of the tube develops into the brain and the posterior part into the spinal cord.

How does neuronal development proceed? Neuronal proliferation and migration

Neurons arise from precursor cells in proliferative zones. Cortical neurons arise from the subventricular zone; cells in other parts of the brain arise from the ventricular zone. The precursor cells are not yet differentiated, which means they can still develop into any type of cortical cell. The number of cells in the subventricular zone initially grows exponentially. After about six weeks, cell division becomes asymmetric: one daughter cell migrates away while the other remains in the subventricular zone. The migrating cells travel along radial glial cells and form the layered cortex. The cortex is built up from the inside out: the first neurons settle in the deepest cortical layers, and later neurons migrate past them toward the cortical surface.

Neural determination and differentiation

The timing of neurogenesis determines into which cell type a precursor cell develops. Consequently, anything that influences this timeline of neurogenesis affects the final development of the cells. Cells that were neighbors in the ventricular zone also end up as neighbors in the cortex: the spatial organization is preserved. This is called the radial unit hypothesis.

New neurons

In mammals, new neurons are born in the hippocampus and the olfactory bulb, even in adulthood. Neurogenesis in the hippocampus is important for learning and memory; in particular the dentate gyrus, part of the hippocampus, still produces new neurons in adulthood. The number of new neurons is positively associated with learning and an enriched environment, and negatively associated with stress. Neurogenesis may also occur in other places, but that research is still in its infancy, and not all areas of the brain have been thoroughly investigated. The new neurons appear to integrate into already existing functional networks.

Postnatal brain development

An important aspect of neural development is synaptogenesis (the formation of synapses) and the growth of dendritic connections. After synapses are formed, synaptic elimination, also called pruning, takes place. There is evidence that these processes occur at different times in different regions; for example, they start earlier in the sensory cortex than in the association cortex. Through pruning, connections that are not used are broken, so that others can be strengthened. Brain volume eventually increases through myelination and the proliferation of glial cells. White matter volume increases roughly linearly with age, while gray matter volume decreases non-linearly after adolescence. This decrease in gray matter differs per cortical area.

Which methods are used in cognitive neuroscience? - Chapter 3

In 1979 Francis Crick, co-discoverer of the structure of DNA, made a wish list for neuroscientists. According to him, neuroscientists needed a way to switch neurons on and off both selectively and precisely. This would reveal how neurons are functionally related to each other in controlling behavior.

Gero Miesenböck gave the first demonstration of how photoreceptor proteins could be used to control neural activity. He was the founder of optogenetics, a procedure in which a genetic manipulation results in the expression of a photosensitive protein; the researcher can then activate the neurons by exposing the tissue to light. The genetic manipulation can be designed so that expression of the protein is limited to certain neural regions. Currently, optogenetic methods are used to turn specific cells in various parts of the brain on and off, allowing researchers to manipulate behavior. Cognitive neuroscience is a field that has flourished thanks to the development of new methods. This chapter discusses how these methods work, what information can be obtained with them, and their limitations.

What is cognitive psychology?

Cognitive psychology studies mental activity as information processing. Cognitive psychologists look at observable behavior, but also at internal processing. The assumption is that we do not experience the world directly: perceptions, thoughts and actions arise from internal transformations, or mental processes. In addition, there are two important underlying concepts:

  1. Information processing depends on internal representations;

  2. Mental representations are subjected to transformations.

Mental operations

Cognitive psychology assumes that a task consists of different mental operations in which a representation comes in (input) and is processed into a new representation (output). A letter-matching task introduced by Posner shows that even with simple stimuli the mind derives multiple representations. The results of the experiment suggest that one representation is based on the physical aspects of the stimulus, a second representation corresponds to the identity (name) of the letter, and a third level of abstraction represents the category to which the letter belongs. Physical representations are thus activated first, then phonetic representations, and then category representations.
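
The three levels of representation can be sketched as simple matching rules. This is a toy illustration based on the description above, not Posner's actual stimuli or analysis; the vowel/consonant category and the ordering comments are assumptions.

```python
# Toy sketch of the three representational levels in a Posner-style
# letter-matching task. The vowel/consonant category is an assumed example.

def same_physical(a, b):
    # Physical (visual) identity: "A" matches "A" but not "a"
    return a == b

def same_name(a, b):
    # Name (phonetic) identity: "A" and "a" share a letter name
    return a.lower() == b.lower()

VOWELS = set("aeiou")

def same_category(a, b):
    # Category identity: both vowels, or both consonants
    return (a.lower() in VOWELS) == (b.lower() in VOWELS)

print(same_physical("A", "A"))  # True  -> fastest "same" judgment
print(same_name("A", "a"))      # True  -> slower: physical codes differ
print(same_category("A", "e"))  # True  -> slowest: only category matches
```

Each deeper level subsumes the shallower ones, which mirrors the finding that "same" responses get slower as the match must be made at a more abstract level.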

In the letter-matching task, the primary dependent variable was response time: the speed at which test subjects make their decisions. Response time experiments use chronometric methodology, from the Greek words "chronos" (time) and "metron" (measure). The chronometric study of the mind is of great importance to cognitive psychologists, since mental events often occur quickly and efficiently.


The second criterion of cognitive psychology is that our mental representations are subjected to transformations. To achieve a goal, perceptions often have to be converted into action representations. For example, a sensation can turn into a perception, evoke a memory, and ultimately lead to action; memory in turn also influences how we perceive things. Sternberg designed a task to investigate how well one can compare sensory information with a representation in memory. Test subjects are instructed to remember a series of one, two or four letters. Subsequently, letters are shown one by one on a screen, and the test subject must indicate whether the letter shown corresponds to one of the letters in the memorized series.

According to Sternberg, four primary mental operations are involved in this task:

  1. stimulus identification;

  2. comparison of the stimulus with the items in the memory representation;

  3. decision whether the stimulus is equal to one of the items from the representation;

  4. response.

Each operation can have associated sub-operations. According to Sternberg, there are two possible ways in which this recognition process can work. It can run in parallel, comparing the target with all items in memory at the same time, or it can run serially, one item at a time, which would imply limits on information processing. Research supports the serial account, because response time increases with the number of items held in memory.
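
The serial account can be made concrete with a toy model (an illustration under stated assumptions, not Sternberg's published analysis): an exhaustive serial scan makes one comparison per memorized item, so the predicted comparison cost, and hence response time, grows linearly with set size.

```python
# Toy model of Sternberg's serial, exhaustive memory scan.
# Assumption: each comparison adds a fixed time cost.

def serial_scan(memory_set, probe):
    """Scan the whole memory set; return (match found, number of comparisons)."""
    comparisons = 0
    found = False
    for item in memory_set:       # exhaustive serial scan: no early exit,
        comparisons += 1          # consistent with Sternberg's linear RT data
        if item == probe:
            found = True
    return found, comparisons

for n in (1, 2, 4):               # the set sizes used in the task
    found, cost = serial_scan(list("ABCD")[:n], "A")
    print(n, cost)                # cost equals set size: 1, 2, 4
```

The parallel account would instead predict a comparison cost that is independent of set size, which is not what the response-time data showed.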

Other mental operations do appear to work in parallel, as evidenced by the word superiority effect. Test subjects must respond when a certain letter appears in a string of letters. The reaction time turns out to be shorter when the string forms a word than when it is an arbitrary letter string: the representations of the individual letters and of the whole word are activated in parallel.

Limitations of information processing

Processing limitations are due to a certain set of mental operations associated with a certain task. The limitations of information processing have been investigated with the help of the Stroop task.

First, test subjects are asked to read names of colors as quickly as possible. Subjects are then shown colored areas and are asked to name the color as quickly as possible. Finally, subjects are shown names of colors, but the color of the ink sometimes differs from the name of the color that is described. Test subjects are asked to name the color of the ink as quickly as possible and to ignore the word. So two representations are needed: one for the color and one for the word.

The fact that test subjects react more slowly in the final condition when the word and the ink color do not match shows that the representation of the word is activated automatically. The interference is smaller when responding with a button press than when pronouncing the response, which shows that the word recognition process interferes particularly with spoken responses.
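
The interference pattern can be sketched as a minimal model. The parameters below are hypothetical round numbers for illustration only, not measured data; the point is that an automatically activated word representation adds time whenever it conflicts with the color representation.

```python
# Toy model of Stroop interference with assumed (not measured) parameters.

BASE_RT = 500        # hypothetical baseline color-naming time in ms
INTERFERENCE = 150   # hypothetical extra cost when the word conflicts

def predicted_rt(word, ink_color):
    # The word is read automatically; a mismatch with the ink color
    # creates response conflict that takes extra time to resolve.
    conflict = word != ink_color
    return BASE_RT + (INTERFERENCE if conflict else 0)

print(predicted_rt("red", "red"))    # congruent trial: 500
print(predicted_rt("red", "blue"))   # incongruent trial: 650
```

A fuller model would also make the interference term smaller for manual than for spoken responses, matching the finding described above.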

What disorders can occur with brain damage?

Although the brain is well protected by the skull and supplied with oxygen and nutrients by an extensive network of blood vessels, it is still susceptible to disorders. These are described below.

Vascular disorders

The brain needs a constant supply of oxygen and glucose to produce energy: twenty percent of the oxygen we inhale is used by the brain. A continuous supply is necessary, because an oxygen deficiency of ten minutes can already lead to neural death. With angiography, the vascular system of the brain can be mapped: a dye is injected that makes the main arteries and veins visible.

A cerebral vascular accident (CVA), or stroke, occurs when the blood supply to the brain is interrupted. The most common cause is blockage of blood flow by a foreign substance. Another possibility is the build-up of fatty deposits in the vessels, called atherosclerosis. An embolus is a loose particle of accumulated fatty tissue that can block the blood flow. If the blood supply is blocked for too long, cells die from lack of oxygen and glucose. The symptoms of a stroke vary, depending among other things on the location of the blockage: a blockage close to the brainstem may result in loss of consciousness, while a cortical infarction may lead to loss of speech or comprehension.

There are other types of cerebral vascular disorders, such as ischemia and aneurysm. Ischemia is a partial interruption of blood flow. It may occur because a clot partially blocks a vessel, or because a drop in blood pressure prevents the blood from fully reaching the brain. Excessively high blood pressure can cause vessel walls to rupture and blood to leak out, resulting in a cerebral hemorrhage. Vascular spasms can lead to irregular blood flow and migraine. Other disorders originate in the arteries: cerebral arteriosclerosis is the narrowing and hardening of the blood vessels, which can lead to persistent ischemia. An aneurysm is a balloon-like dilation of a weakened vessel wall; if it bursts, blood flow is disrupted.

Tumors

Brain damage can also be caused by a tumor: tissue with abnormal growth that has no physiological function. Most brain tumors arise in the glia and other supporting white matter; tumors arise less often from gray matter or neurons. Malignant tumors often regrow after removal; benign tumors do not. Clinically, the most important steps are determining the tumor's location and the prognosis.

Degenerative and infection disorders

Degenerative disorders are related to genetic abnormalities, as in Huntington's disease, and to environmental influences, as in Parkinson's and Alzheimer's disease. In Huntington's and Parkinson's disease, the pathology lies primarily in the basal ganglia; in Alzheimer's disease there is mainly atrophy of the cerebral cortex. Progressive neurological disorders can also be caused by a virus such as HIV, which invades subcortical areas and causes diffuse lesions of the white matter. The herpes simplex virus destroys neurons in cortical and limbic structures. A viral infection may also play a role in multiple sclerosis.

Traumatic brain injury

Brain damage can also be caused by trauma (a blow to the head); this is called traumatic brain injury (TBI). Even when the skull is not damaged, brain damage may have occurred at the site where the head was struck (coup). A distinction is made between open and closed head injuries: with an open injury the skull is broken, with a closed injury it is not. Closed head injuries are often caused by car accidents, falls, bomb explosions, and contact sports. Trauma can even cause damage on the opposite side of the brain, where the brain collides with the inside of the skull (countercoup). With diffusion tensor imaging the anatomical damage of TBI can be visualized. Additional brain damage can occur through vascular problems or infections resulting from the trauma, which can disrupt blood flow or increase intracranial pressure. After TBI, people are also more susceptible to seizures.

Epilepsy

Epilepsy is characterized by excessive and abnormal brain activity. The main symptom is the seizure, often accompanied by a temporary loss of consciousness. The severity varies per person, as do the number and duration of the seizures. Epilepsy is easy to visualize with an EEG.

What is the relationship between brain damage and behavior?

Behavioral neurology examines the effect of diseases on the structure and function of the nervous system. The expectation is that damage to a certain part of the brain will cause certain functions to fail or decline. Nowadays this can be mapped with neuroimaging methods and the paradigms of cognitive psychology. We now know that tasks such as vision and language require the integration of different processing operations in separate brain areas; research links these various operations to specific areas of the brain.

Nowadays, research is also done with laboratory animals, for example using pharmacological methods. However, there are many differences between the brain structures and functions of animals and humans. The lesion approach also has limitations, because there is a lot of variation between patients. In addition, not only does the function of the damaged brain area change, but also that of the areas connected to it. Moreover, the consequences of a lesion can be compensated for over time.

Single and double dissociation

With the lesion approach, one can either do research on a population with the same anatomical lesions or on a population with the same behavioral deficits. There are two possible experimental outcomes:

  • Single dissociation: the patient group shows limitations on one task, but not on another.

  • Double dissociation: the patient group is limited on one task, while another patient group is limited on the other task. This provides insight into whether two cognitive functions are independent of each other.
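
The logic of these two outcomes can be made explicit in a small sketch. The task names and the binary "impaired" coding are simplifications for illustration; real studies compare scores against the range of a control group.

```python
# Sketch of single vs. double dissociation logic.
# Each group is a dict mapping task name -> True if impaired on that task.

def classify(group1, group2):
    # Group 1 impaired on task A but spared on task B?
    d1 = group1["task_A"] and not group1["task_B"]
    # Group 2 shows the complementary pattern?
    d2 = group2["task_B"] and not group2["task_A"]
    if d1 and d2:
        return "double dissociation"   # evidence the functions are independent
    if d1 or d2:
        return "single dissociation"   # weaker: could reflect task difficulty
    return "no dissociation"

print(classify({"task_A": True,  "task_B": False},
               {"task_A": False, "task_B": True}))
```

The comment on single dissociation reflects why the double pattern is more informative: one group failing only the harder task could simply mean that task is more demanding, whereas complementary deficits in two groups cannot be explained by difficulty alone.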

A cognitive neuropsychologist designs tasks to test specific hypotheses about brain-function relationships. Such investigations require a carefully chosen control group against which performance can be compared. One must determine whether the limitation on a task is due to damage to a particular mental operation or is the result of a more general disruption; for example, many patients show depressive symptoms after neurological damage.

Split brain procedure and deep brain stimulation

Much research has been done in patients before and after surgery in which fibers of the corpus callosum were broken. As a result, connections between the two hemispheres were broken by this callosotomy or split-brain procedure. The purpose of these operations was to reduce epileptic seizures.

Callosotomy is a disconnection procedure, but nowadays attempts are also made to restore normal function. This uses brain stimulators implanted in the brain, for example in the basal ganglia of Parkinson's patients; this is called deep brain stimulation (DBS). The stimulators deliver electrical signals, and many patients show clear improvements. The effect is thought to lie not in the stimulation of specific cells, but in the way the stimulation changes the interactions between cells.

How are experiments that deliberately cause lesions carried out?

The effects of lesions are difficult to determine because each brain area makes many connections with other areas. In addition, the brain is plastic, so neural function can be adjusted by experience. A lot can be learned from neurological disorders, but the findings remain correlational. Nowadays, research can also be done with healthy test subjects (without brain damage) in whom certain parts of the brain are temporarily disrupted; this allows one to study the influence of temporary disruptions on the brain. One of these methods is the use of drugs. Pharmacological studies involve administering agonists, drugs that resemble a particular neurotransmitter and mimic its action, or antagonists, drugs that bind to receptors and block or reduce the neurotransmission involved.

Research is done on chronic drug users, but drugs are also given to people in a controlled environment to study the effects on cognitive function. A limitation of drug studies in which a drug is injected into the bloodstream is the lack of specificity: we do not know whether the drug reaches the brain area of interest, and it may have effects elsewhere in the body. In animal studies it is possible to inject a drug directly into the brain area of interest. In this way it has been discovered, for example, that dopamine input at D1 receptors in the anterior cingulate cortex is important for making decisions.

Transcranial magnetic stimulation

A brain region can be stimulated with transcranial magnetic stimulation (TMS); in addition, lesions can be simulated. With this technique, neural function in a certain part of the cortex can be temporarily interrupted (a virtual lesion), after which behavior or cognitive functions can be investigated. A magnetic field applied over a part of the cortex causes the underlying neurons to fire. When the coil is placed near the hand area of the motor cortex, this leads to activation of the muscles of the wrist and fingers. TMS is a safe and non-invasive method that produces short-lived changes in neural activity.

Transcranial direct current stimulation

Transcranial direct current stimulation (tDCS) is a brain stimulation procedure in which a weak current is passed between two small electrodes, an anode and a cathode, placed on the skull. Physiological studies show that neurons under the anode are depolarized and therefore more likely to fire, while neurons under the cathode are hyperpolarized and less likely to fire. tDCS can produce changes in behavioral performance: anodal stimulation tends to improve performance and cathodal stimulation tends to reduce it. tDCS can be beneficial for people with neurological disorders. The effects last only a short time, but with repeated application their duration can be extended to weeks. With TMS and tDCS no separate control group is needed, because the stimulation effects are temporary and participants can serve as their own controls.

Genetic manipulation

The discovery and description of the human chromosomes made a major contribution to science. For genetic diseases such as Huntington's disease, the genetic code has been identified, which makes it possible to predict the development of the disease. Much research is being done into modifying these genetic codes to prevent their expression, or the development of the disease.

Behavioral geneticists have discovered, through research in rats, that aspects of cognitive function are heritable. Much research into this heritability is done with fruit flies and mice, because these animals reproduce quickly and in large numbers. With a knock-out procedure, specific genes are switched off, after which the effect on behavior is examined. However, the interaction between genetic material and the environment must also be considered.

Which structural neuroimaging techniques exist?

CT scan

A computed tomography (CT) scan provides a three-dimensional representation of the brain, built up from two-dimensional images. A CT scan displays the amount of X-rays absorbed by the tissue: the higher the tissue density, the greater the absorption, and the whiter the image. Bone, for example, has a very high density and blood a low one. The technique is poor at distinguishing gray from white matter in the brain, because both have approximately the same density and therefore approximately the same color on the scan. The skull and the ventricles, on the other hand, are clearly visible.

MRI scan

The magnetic resonance imaging (MRI) scan provides high-resolution images of soft tissue. A strong magnetic field causes the protons in the tissue to align with it. Radio waves briefly perturb this alignment, and the signal the protons emit when the radio waves are turned off is measured. In this way, the distribution of hydrogen protons and other magnetic particles in the brain can be mapped. An MRI scan provides a clearer image than a CT scan, so gray and white matter can be distinguished much better. It can display structures smaller than 1 mm.


DTI scan

Diffusion tensor imaging (DTI) is performed with an MRI scanner, but unlike traditional MRI, DTI measures the density and movement of water in the axons. It provides information about the anatomical connections between regions, based on diffusion principles. Water diffusion in the brain is anisotropic, which means that the water does not move uniformly in all directions. Anisotropy is most evident in the axons, because the fatty myelin sheath restricts the flow of water more than in gray matter areas. This allows white-matter axon bundles to be visualized.

How can neural function be investigated?

Single-cell recording

With single-cell recording, a thin electrode is inserted into the brain of an animal so that electrical activity can be measured. Intracellular recording would work best, but it often damages the cell. The electrode is therefore usually placed extracellularly, which makes it less certain that the activity of a single cell is actually being measured.

The aim is to investigate what changes in neural activity accompany a certain stimulus pattern or behavior, by correlating the activity of neurons with that pattern or behavior. The purpose of this technique is to investigate whether a manipulation always causes the same change in electrical activity.

Preferably, recordings are made of cells in a particular area of interest. This technique is mainly used in research on the visual system of primates. An electrode is placed in the cortical area expected to respond to visual stimuli. This type of research has shown that cells respond to visual stimuli in a specific region of the visual field. These receptive fields overlap with adjacent receptive fields, and cells form a topographical representation: neighboring cells have receptive fields covering neighboring regions of external space. This is called a retinotopic map. The same principle applies to auditory stimulation.

Multi-unit recording

You can look at individual cells, but here too the whole is greater than the sum of the parts, so it may be better to look at groups of neurons than at the characteristics of a single neuron. The idea of single-cell recording inspired researchers to develop techniques for recording from many neurons simultaneously; this is called multi-unit recording.


Single-cell recording in humans

Few single-cell recordings have been done in humans; when they are, it is usually in the context of epilepsy surgery. Electrodes are then typically placed in the medial temporal lobe (MTL), because this is often the focus of the seizures. Neurons in the MTL can respond selectively to certain well-known images, for example to Jennifer Aniston but not to other famous people. Such a neuron corresponds to a conceptual representation that is not tied to a particular sensory modality.


EEG

Neural activity is an electrochemical process. When many neurons are active at the same time, they generate electrical potentials that can be measured by electrodes on the skull; this technique is called electroencephalography (EEG). The representation of these signals is an electroencephalogram (EEG). Different EEG patterns appear for different behavioral states, such as different stages of sleep.

An EEG can display abnormalities in brain functions. This makes an EEG very useful, for example, in researching and treating epilepsy.


ERP

An event-related potential (ERP) is a small signal embedded in the EEG that is triggered by a stimulus. ERPs are brain responses to events and are extracted by averaging, with the help of a computer, many small segments of the EEG that are time-locked to the event. The technique is widely used in the clinical field. However, ERPs remain an indirect method, because recordings are made from the scalp surface. They are therefore better suited to studying the time course of cognition than to localizing the brain structures that produce the electrical signals.
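The averaging logic behind ERPs can be sketched with made-up numbers (the response shape, noise level, and trial count below are illustrative assumptions, not values from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 trials of EEG: a small stimulus-locked response buried in noise.
n_trials, n_samples = 100, 200
t = np.arange(n_samples)
true_erp = 2.0 * np.exp(-((t - 60) ** 2) / (2 * 15 ** 2))  # hypothetical response peaking at sample 60
trials = true_erp + rng.normal(0, 3, size=(n_trials, n_samples))  # single trials are dominated by noise

# Averaging the time-locked segments cancels the noise (roughly by 1/sqrt(n))
# and reveals the event-related potential.
erp = trials.mean(axis=0)
peak_sample = int(np.argmax(erp))
```

A single trial is swamped by noise (standard deviation 3 against a response of amplitude 2), but the average of 100 time-locked segments recovers the response shape, which is why ERP studies present the same event many times.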

Nowadays, much attention is paid to event-related oscillatory activity in the EEG signal. An EEG signal contains rhythms, which reflect the synchronized, oscillatory activity of groups of neurons and are classified by their oscillation frequency. Time-frequency analysis examines how the amplitude of activity in different frequency bands varies over the course of processing, yielding a two-dimensional representation of the signal. The power of activity around an event is measured in the different EEG frequencies.
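Time-frequency analysis can be illustrated with a toy signal whose rhythm changes halfway through (the sampling rate, frequencies, window length, and noise level are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)

# Toy EEG: an 8 Hz (alpha-band) rhythm in the first second,
# a 40 Hz (gamma-band) rhythm in the second, plus noise.
sig = np.where(t < 1.0, np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 40 * t))
sig = sig + rng.normal(0, 0.3, t.size)

# Crude time-frequency analysis: FFT power in short, overlapping windows.
win, hop = 50, 25                          # 250 ms windows, 50% overlap
centers, powers = [], []
for start in range(0, sig.size - win, hop):
    seg = sig[start:start + win] * np.hanning(win)
    powers.append(np.abs(np.fft.rfft(seg)) ** 2)
    centers.append(t[start + win // 2])
powers = np.array(powers)                  # rows: time windows, columns: frequencies
centers = np.array(centers)
freqs = np.fft.rfftfreq(win, 1 / fs)

# Track power in the bins nearest 8 Hz and 40 Hz across the windows.
alpha_power = powers[:, np.argmin(np.abs(freqs - 8))]
gamma_power = powers[:, np.argmin(np.abs(freqs - 40))]
```

Plotting `powers` over `centers` and `freqs` would give the two-dimensional time-frequency map described above: alpha power dominates the first second and gamma power the second.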


MEG

A related technique is magnetoencephalography (MEG). Active neurons also produce small magnetic fields, which can be measured and displayed using MEG. The advantage of MEG is that magnetic fields are not distorted by, for example, the skull, which makes the technique suitable for localizing the source of electrical activity. One disadvantage is that only magnetic fields running parallel to the surface of the skull can be detected. In addition, the magnetic fields generated by the brain are very weak, so MEG must be done in a room shielded from external magnetic fields, such as the Earth's.


ECoG

An electrocorticogram (ECoG) is similar to an EEG, except that the electrodes are placed directly on the surface of the brain, on or beneath the dura mater. As a result, an ECoG is only possible in people undergoing neurosurgical treatment. It is used to locate abnormal brain activity and determine its frequency characteristics. The electrodes typically remain in place for about a week. Because they sit directly on the brain, there is less signal distortion than with an EEG, which makes both the spatial and the temporal resolution much better. The electrodes can also be used to stimulate the brain and localize cortical and subcortical neurological functions.

What are examples of structural and functional neuroimaging?

Two relatively new techniques in the field of neuroimaging are positron emission tomography (PET) and functional MRI (fMRI). These can be used to measure metabolic changes associated with neural activity.

PET scan

If the brain needs more oxygen because of activity, the blood supply must increase to meet that need. A PET (positron emission tomography) scan makes this blood flow visible by using radioactive elements as markers. These elements quickly emit positrons in the bloodstream. The positrons collide with electrons, creating gamma radiation, and the PET scanner can determine where these collisions occurred: where there is more blood flow, there is more radiation. During a task, PET can thus detect increased blood flow in certain areas of the brain, and we assume that those areas are more active. PET therefore measures relative brain activity. By comparing the regional cerebral blood flow (rCBF) during an experimental task with that during a control task, task-related activity can be isolated. PET can resolve voxels of around 5-10 mm3.
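The comparison between an experimental and a control scan can be sketched on fake data (the array size, flow values, and size of the activation are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical blood-flow images: 20x20x20 voxels, arbitrary flow units.
shape = (20, 20, 20)
control_scan = rng.normal(50, 1, shape)               # rCBF during the control task
task_scan = control_scan + rng.normal(0, 1, shape)    # same baseline plus measurement noise
task_scan[8:12, 8:12, 8:12] += 10                     # one region works harder during the task

# Subtracting control from task isolates the task-related change in blood flow.
difference = task_scan - control_scan
active_voxels = difference > 5                        # simple threshold on the difference image
```

The subtraction removes the shared baseline, leaving only the task-related change; real analyses treat the noise statistically rather than with a fixed threshold, but the logic is the same.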

PET scans can now also be used to detect beta-amyloid plaques, by measuring a tracer (PiB) that binds to beta-amyloid. Beta-amyloid plaques form in Alzheimer's disease, so PET may offer a new method of diagnosing it. It may even be usable as a screening tool to see whether people have an increased risk of developing Alzheimer's.

fMRI scan

fMRI also exploits the fact that activated parts of the brain receive an increased blood supply. Radio waves set the protons of hydrogen atoms oscillating. Hemoglobin carries the oxygen molecules in the blood. After the oxygen has been released, hemoglobin is paramagnetic (weakly magnetic in the presence of a magnetic field), whereas oxygenated hemoglobin is not. fMRI measures the blood oxygen level-dependent (BOLD) effect, which reflects the ratio of oxygenated to deoxygenated hemoglobin.

It might seem logical that increased brain activity would produce more deoxygenated hemoglobin, because active tissue consumes a lot of oxygen. In fact, the proportion of oxygenated hemoglobin increases, because an active brain area receives more blood than its neurons can extract oxygen from. The changes in blood flow also occur somewhat later than the neural activity: the BOLD response peaks six to ten seconds afterward, and is therefore an indirect measure of neuronal activity.

The popularity of fMRI has now overtaken that of PET, for several reasons. MRI scanners are cheaper and easier to use, and the spatial resolution of MRI is better. In addition, no radioactive elements are injected with MRI, making multiple consecutive measurements possible.

Block design experiments are used in PET because it takes more time to measure a signal. In these experiments, the recorded neural activity is integrated over a 'time block' in which a participant performs a task or receives a stimulus, and this activity is compared with that of other blocks. Because activity is integrated over such long periods, it is difficult to link activation to specific cognitive processes.

Functional MRI uses event-related designs. This means that the BOLD response is linked to certain events. This is done by taking an average of the signals over repeated measurements. This allows experimental and control trials to be presented randomly. A block design experiment can detect smaller effects, but an event-related design has more possibilities in terms of experimental setup. In addition, the data can be combined in an event-related design in many ways.
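The event-related logic, averaging the BOLD signal time-locked to irregularly spaced events, can be sketched as follows (the hemodynamic response shape, the one-second scan interval, the event timing, and the noise level are all assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

# A simplified hemodynamic response: a slow bump peaking about 5 s after the event.
t_hrf = np.arange(0, 20, 1.0)                  # one scan per second (assumed)
hrf = t_hrf ** 5 * np.exp(-t_hrf)
hrf = hrf / hrf.max()

# Events presented at irregular moments, as in an event-related design.
n_scans = 300
onsets = np.sort(rng.choice(np.arange(20, 260, 15), size=12, replace=False))
events = np.zeros(n_scans)
events[onsets] = 1.0

# Measured BOLD signal: the event train convolved with the slow response, plus noise.
bold = np.convolve(events, hrf)[:n_scans] + rng.normal(0, 0.2, n_scans)

# Event-related averaging: cut a window after each event and average the segments.
epochs = np.array([bold[o:o + 20] for o in onsets])
average_response = epochs.mean(axis=0)
peak_delay = int(np.argmax(average_response))   # seconds after the event
```

Averaging the repeated, randomly timed trials recovers the delayed BOLD response described above, which is what allows experimental and control trials to be intermixed in an event-related design.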

What is a brain graph?

A brain graph is a visual model of the connections within a certain part of the nervous system. The model is made up of nodes, the neural elements, and edges, the connections between the neural elements. The geometric relationships of the nodes and edges define the graph and provide a visualization of the organization of the brain. Because the human brain contains so many neurons, nodes in a brain graph are typically clusters of voxels, and edges connect nodes with correlated activity patterns. This makes it possible to distinguish hubs, nodes with many connections to neighboring nodes, from connectors, nodes with connections to clusters that are farther away. The strength of the connections can also be mapped. Brain graphs can be used to compare results from experiments using different methods.
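The idea of nodes, connections, hubs, and connectors can be illustrated with a toy correlation graph (the six nodes and their time series are invented, not real data):

```python
import numpy as np

rng = np.random.default_rng(4)
n_t = 200

def noise():
    return 0.3 * rng.normal(size=n_t)

# Two underlying signals define two clusters; node 5 mixes both,
# so it should act as a "connector" between the clusters.
a, b = rng.normal(size=n_t), rng.normal(size=n_t)
series = np.stack([
    a + noise(), a + noise(), a + noise(),   # cluster A: nodes 0-2
    b + noise(), b + noise(),                # cluster B: nodes 3-4
    0.7 * a + 0.7 * b + noise(),             # node 5 correlates with both clusters
])

# Connections link nodes whose activity patterns are strongly correlated.
corr = np.corrcoef(series)
adjacency = (corr > 0.5) & ~np.eye(6, dtype=bool)
degree = adjacency.sum(axis=1)               # number of connections per node
```

Node 5 ends up with the highest degree and is the only node linked into both clusters, which is exactly the hub/connector distinction described above.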

How are computer models used in cognitive psychology?

Computers are widely used by cognitive psychologists to simulate cognitive processes. A simulation is an imitation or reproduction of behavior. Such simulated cognitive processes form the basis of artificial intelligence. Computers receive input and must process it, through internal operations, into certain output, often the behavior of a robot. If the model's output does not match the human output, the simulation is incorrect.

With computer models, the internal process must be specified exactly. Braitenberg demonstrated this with two externally identical machines whose internal wiring differed: one machine moves toward a heat source, the other turns away from it. The conclusion is that the outcome of an internal process can change simply because of different connection patterns. A difference in behavior therefore need not be due to different information processing; it can also result from differences in patterns of connectivity.
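Braitenberg's point, that identical parts with different wiring yield opposite behavior, can be reduced to a few lines (the sensor values and the differential-drive convention here are invented for illustration):

```python
def wheel_speeds(s_left, s_right, crossed):
    """Identical machines; only the sensor-to-wheel wiring differs."""
    # Crossed wiring sends each sensor's signal to the opposite wheel.
    return (s_right, s_left) if crossed else (s_left, s_right)

def turn_direction(wheel_left, wheel_right):
    """Differential drive: a faster right wheel turns the vehicle left."""
    return wheel_right - wheel_left          # positive = turn left

# The heat source is to the left, so the left sensor reads more strongly.
s_left, s_right = 0.9, 0.4

toward = turn_direction(*wheel_speeds(s_left, s_right, crossed=True))
away = turn_direction(*wheel_speeds(s_left, s_right, crossed=False))
```

With crossed wires the vehicle turns toward the source; with straight wires it turns away, even though the two machines are indistinguishable from the outside.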

Representations with computer models

Neural network models are built up from components, each representing a particular process. By running many components in parallel, these models can generate complex behaviors. The models can be specified at different levels, for example the system, cellular, or molecular level, depending on what one wants to demonstrate with the model. They can also simulate brain damage, for example by making artificial lesions in the artificial network.
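Lesioning an artificial network can be sketched with a tiny hand-wired model (the weights below implement XOR, a standard toy problem; the values are illustrative, not from the book):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A minimal two-layer network that computes XOR with hand-set weights.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def net(x, lesion=None):
    h = relu(x @ W1 + b1)
    if lesion is not None:
        h = h.copy()
        h[lesion] = 0.0                      # "artificial lesion": silence one hidden unit
    return float(h @ W2)

inputs = [np.array(p, float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
targets = [0, 1, 1, 0]

intact_output = [round(net(x)) for x in inputs]
lesioned_output = [round(net(x, lesion=1)) for x in inputs]
```

The intact network solves the task; after the lesion the deficit is selective (only one input pattern fails), which parallels how lesioned network models are used to predict specific impairments after brain damage.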

Testable predictions by models

Computer models can be used to formulate hypotheses that can then be tested in real brains. Kali and Dayan, for example, sought answers to questions about episodic memory by building a computer model. In their model, episodic memory was severely impaired when the hippocampus and cortex were no longer connected. Theories can therefore be developed with computer simulations.

How are different methods integrated?

For example, single-cell recordings can be used to determine regions of interest for an fMRI study. Findings obtained with one method can also generate hypotheses that are then tested with another. Today, imaging, behavioral, and genetic methods are often used together.

What is Hemispheric Specialization? - Chapter 4

How did the investigation into hemispheric specialization start?

In 1961, the war veteran W.J. was experiencing two major seizures a week, and had been for ten years. After each attack he needed at least a day to recover. In other respects he experienced no problems: he had a good sense of humor and was pleasant to deal with. The attacks, however, were so unpleasant that he would do anything to get rid of them. Dr. Joseph Bogen, a neurologist, suggested that W.J. undergo surgery to sever his corpus callosum, the structure in the brain that connects the left and right hemispheres. Severing the corpus callosum would therefore also break the connection between the two hemispheres. The same surgery had been performed some 20 years earlier on a number of patients in New York. None of those patients suffered side effects, while all were cured of their seizures. Psychological studies also showed no differences in behavior or brain function after these surgeries, so splitting the human brain seemed to have no consequences. The same appeared to be true for W.J.: at first he seemed to experience no negative side effects, while his seizures were completely gone.

The strange thing about these findings is that in animals that had undergone the same type of surgery, large differences in behavior and brain function were found. Researchers therefore thought it odd that splitting the brain would have no consequences at all in humans. To investigate this further, W.J. was asked to participate in research to help solve this 'mystery'. A new research method was used, based on the anatomy of the visual pathways, which had not been applied 20 years earlier to the New York patients. Because of this anatomy, stimuli presented in the right visual field are processed in the left hemisphere, and stimuli presented in the left visual field are processed in the right hemisphere.

Before W.J.'s corpus callosum was severed, he could readily name objects presented to either visual field. Because the speech center is located in the left hemisphere, it was expected that W.J. would still be able to name objects he saw in his right visual field, since information from the right visual field travels to the left hemisphere. The research with the New York patients had also suggested that the corpus callosum is not essential for integrating information from the two hemispheres. If that were correct, information presented in the left visual field, which goes to the right hemisphere, should also be seen and named by W.J.

To test these hypotheses, a spoon was first shown in W.J.'s right visual field (and therefore processed in the left hemisphere). He indicated that he saw a spoon. Next, a picture was presented to W.J.'s left visual field (and therefore processed in the right hemisphere). He then said he saw nothing. These results could mean that W.J. had become blind to stimuli in his left visual field. To investigate this further, the investigator let W.J. respond with his left hand (which is controlled by the right hemisphere) instead of reporting verbally whether he saw something. When stimuli were then presented to his left visual field again, he pressed the button (controlled by the right hemisphere), yet he reported seeing nothing (speech being controlled by the left hemisphere).

The rest of the studies also led to striking results. The right hemisphere turned out to be able to do things that the left hemisphere could not. In a task in which W.J. had to arrange blocks in a certain pattern, he was unable to complete the task with his right hand but could solve it with his left hand: he could solve the task with his right hemisphere (left hand) and not with his left hemisphere (right hand). These observations led to the idea that 'Mind Left' and 'Mind Right' do two very distinct things. In other words, there seems to be lateral specialization of the two hemispheres. Strikingly, when W.J. was allowed to use both hands in the tasks, the hands seemed to compete with each other: 'Mind Right' placed the blocks in the correct order and 'Mind Left' disturbed that order again. These observations led researchers to ask questions such as: Are there two selves? Which of the two is dominant? Are the two hemispheres in competition with each other? What does each hemisphere decide? Is consciousness located in one of the hemispheres? And above all, why does split-brain surgery have no influence on how patients feel? The last question in particular is called the fundamental mystery, and it led to the emergence of split-brain research as a new field.

The media also picked up on the findings and claimed that there was a 'left brain' and a 'right brain' that processed the world in different ways: the left brain would be primarily logical and analytical, while the right brain would be especially important for things such as creativity, music, and intuition. Many books have been written about these ideas. However, this picture is incorrect: the media exaggerated the scientific findings. What is true is that there are many cognitive processes and that each hemisphere can carry out most of them. Yet there are fundamental differences between the hemispheres, which are discussed below.

What is the anatomy of the hemispheres?

Anatomical relationships

It was mainly unilateral brain damage (damage to one side of the brain) that revealed differences between the two hemispheres. The effect of damage to the left hemisphere on language functions seemed particularly large. Juhn A. Wada discovered the left hemisphere's dominance for language using the Wada test, in which a patient is injected with amobarbital on the left or right side, briefly anesthetizing one hemisphere. When the injection was given on the left side, so that the left hemisphere was briefly anesthetized, patients were found to have language problems. fMRI and PET scans have also shown that the processing of language-related information takes place primarily in the left hemisphere.

Macroscopic Anatomical Asymmetries

The four large lobes, the occipital, parietal, temporal, and frontal lobes, are almost symmetrical, and the cerebral cortex appears to be evenly distributed over the two sides of the brain. Nevertheless, the two hemispheres are not symmetrical. The right hemisphere protrudes at the front and the left hemisphere at the rear. The right hemisphere also has more volume in the frontal areas, while the left hemisphere has more volume in the occipital areas.

The Sylvian fissure also curves markedly upward in the right hemisphere but is relatively flat in the left hemisphere. Another finding is that the planum temporale, a region within Wernicke's area, is larger in the left hemisphere in about 65% of the brains studied. Wernicke's area is involved in understanding written and spoken language, so this asymmetry is an example of an anatomical finding that corresponds to a functional asymmetry. However, there has also been criticism of the research behind these findings, which is why the neural circuits within brain regions must be examined as well.

Microscopic Anatomical Asymmetries

Specific neural characteristics such as the number of synaptic connections could explain differences in functions of brain regions. It could also be that areas of the brain with a larger volume contain more neural connections.

To examine specializations in brain areas in more detail, homotopic areas were examined. These are corresponding areas in both hemispheres. Differences were found in neural networks between the two hemispheres in Broca's area and in Wernicke's area.

A number of asymmetries have also been found between the hemispheres, some within the neurons themselves. For example, there are more higher-order dendritic branches in the left hemisphere than in the homologous areas of the right hemisphere, whereas the right hemisphere has more lower-order dendritic branches. Columns of cells in Wernicke's area are farther apart in the left hemisphere than in the right, and cells in the columns of the auditory areas in the left hemisphere have fewer connections to other cells than in the right hemisphere.

Other asymmetries found in the anterior and posterior language areas are differences in size of cells between the hemispheres. There also appear to be differences in neural connections between the two hemispheres: in the left hemisphere, groups of cells appear to be further apart than in the right hemisphere.

One interpretation of these findings is that cells in the left hemisphere make fewer connections with other neurons than cells in the right hemisphere. Another interpretation is that, because the cells in the left hemisphere lie farther apart, more complex networks and connections can form there. This information may be relevant to developmental disorders such as autism and dyslexia: in autism, cell columns appear to be smaller but more numerous.

What is the anatomy of communication?

The corpus callosum

The corpus callosum provides a connection between the two hemispheres. Its anterior part is called the genu, the middle part the body, and the posterior part the splenium. Almost all visual information is transmitted through the posterior part of the corpus callosum, and motor information mainly through the central part. The corpus callosum has both homotopic and heterotopic connections: homotopic connections link corresponding areas in the two hemispheres, while heterotopic connections link areas that do not correspond.

The anterior commissure

The anterior commissure also provides a connection between the two hemispheres, but it is ten times smaller than the corpus callosum. It mainly connects areas in the temporal lobes, and the amygdala is also connected via the anterior commissure. The posterior commissure is smaller than the anterior commissure and is responsible for, among other things, the pupillary light reflex, by which the pupil dilates in dim light and constricts in bright light.

The function of the corpus callosum

Because the corpus callosum is responsible for communication between the two hemispheres, researchers want to know what exactly is communicated and how. There are several perspectives on this. One is that the corpus callosum combines different inputs. Another is that its main function is inhibitory: when someone wants to do something, such as take an object out of a cupboard, several possible movements are triggered, and the corpus callosum ensures that only one of them is performed.

It also appears that hemispheric specialization exists partly because of the corpus callosum: lateralization of brain areas may have arisen because there were too few connections in the corpus callosum itself.

Anatomical differences in the corpus callosum have been related to sex, left- and right-handedness, intellectual disability, autism, and schizophrenia. Interpreting data on the corpus callosum is difficult, however, because there is little agreement on which methodology should be used.

How does the brain split?

When the brain is split, no brain tissue is destroyed: only the connection between the two hemispheres is severed. To study the effects of this disconnection, we cannot simply look at how the behavior of the patients changes, because their everyday behavior is often the same after such surgery. Instead, researchers examine each hemisphere in isolation.

What are methodological considerations when studying split brain patients?

A number of issues play a role in studying patients with a split brain. First, it must be kept in mind that many patients were not neurologically 'normal' before undergoing split-brain surgery: they are people who suffered from severe epileptic seizures. When comparing these patients before and after surgery, it must therefore be taken into account that they did not function 'normally' beforehand. Second, it must be established whether the transcortical connections have been completely severed or whether some connections remain. Third, the standard way of presenting visual stimuli is not always the most appropriate; newer developments, called image stabilization tools, work better.

What are the functional consequences of a split brain operation?

For the New York patients 20 years earlier, no noticeable behavioral changes had been found after surgery. For W.J. this was not true: subsequent research showed behavioral consequences that even resembled those seen in primates with a split brain. Among the functional consequences, patients could name and describe objects placed in their right hand but could not do so for objects in their left hand. Patients could not describe stimuli presented to the right hemisphere, because that sensory information (what they see, hear, or feel) can no longer reach the left, speaking hemisphere. Together with findings such as the block task, these observations also suggest that the right hemisphere is primarily specialized in visuospatial tasks. Note that this does not mean patients do not know what they see or feel: they cannot state it verbally, but they can point to the object or demonstrate its function with their left hand.

What are the specific functions of the Corpus Callosum?

Specific areas of the corpus callosum appear to have specific tasks. The splenium, the posterior region of the corpus callosum, passes visual information between the two hemispheres. Tactile (touch) information is mainly transferred through an area slightly anterior to the splenium, but still in the posterior part of the corpus callosum. The anterior part of the corpus callosum transmits only the semantic information about a stimulus, not the stimulus as a whole.

What is hemispheric specialization?

Localization of brain function began with Franz Joseph Gall around 1800 (phrenology), who examined the specific functions of particular brain regions. Studies with split-brain patients show that there is some form of specialization, but not as much as Gall thought.

How do language and speech get processed?

Language is the clearest example of lateralization (the tie between a function and one hemisphere). Within language, a distinction can be made between grammatical and lexical functions. Grammar concerns the rules for ordering words and is especially important for effective communication between people. The lexicon is, in effect, the 'dictionary' that contains all possible words and their meanings. The role of memory marks the difference between grammar and lexicon. Certain sentences are not remembered word by word but as whole units, for example 'How are you?' or 'Everything okay?'. Because such expressions are retrieved from memory rather than constructed by rule, memory seems especially important for the lexicon. Grammar therefore seems to be controlled by specific brain areas, whereas the lexicon, tied as it is to memory and knowledge, seems to be scattered throughout the brain. This leads to the expectation that the grammatical system is discrete and localizable, while the lexicon is harder to pinpoint. It also suggests that when the brain is damaged, the lexicon suffers less, because it is more widely distributed than grammar.

The left hemisphere is especially important for language: about 96% of people show left-hemisphere dominance for language production. Yet there are also people who use their right hemisphere for language, although they can do so only to a limited extent and have difficulty with lexical aspects of language. There appear to be two lexicons, one in each hemisphere, but with distinct functions.

Generative syntax (the capacity of grammar to combine words into different sentences and meanings in many ways) is present in only one hemisphere. People who have a lexicon in their right hemisphere can therefore understand words, but do poorly at understanding verbs, plurals, or active-passive differences. For example, they cannot distinguish between 'the dog chases the cat' and 'the cat chases the dog'.

Although in 96% of all people language production takes place in the left hemisphere, there are people who can use both hemispheres. Studies with rare patients who are able to do this show that the right hemisphere on its own makes speech production possible.

This is an example of the brain's plasticity: sometimes, after ten years of saying nothing, such a patient's right hemisphere starts to talk. It is also important to note that emotional information is mainly processed by the right hemisphere. The sentence "Zarah, come here" can carry many different emotional meanings; this is called emotional prosody.

A patient with damage to the left hemisphere has difficulty understanding words but is still able to understand their emotional prosody, while patients with damage to the right hemisphere show the opposite pattern. This is a double dissociation and therefore an indication that the processing of emotional prosody occurs mainly in the right hemisphere.

How does visuospatial processing work?

The functional asymmetries that were found, for example in the block task that W.J. had to perform (arranging blocks in a certain order), appear inconsistent: not every patient experiences exactly the same consequences of a split brain. Other important points about visuospatial processing are that the right hemisphere is primarily specialized in recognizing faces viewed straight on. The left hemisphere is not good at distinguishing familiar faces, but it is good at distinguishing unfamiliar faces on the basis of features such as blond or brown hair, a round or straight nose, and so on. For recognizing familiar faces, the right hemisphere is therefore dominant. A double dissociation has also been found between recognizing one's own face and recognizing familiar others: the left hemisphere mainly recognizes one's own face, and the right hemisphere mainly recognizes familiar others. Both hemispheres can produce spontaneous facial expressions, but the left hemisphere is needed to produce voluntary facial expressions.

How does the interaction between attention and perception proceed?

Based on studies of attention and perception, questions arose about whether both hemispheres could perform different tasks simultaneously. This appears not to be the case. Thus, the attention system of split brain patients is unifocal: only one task can be performed at a time.

It also appears that the resources the brain uses for attention are limited. If you are very busy studying, you have less capacity left to think about what you will do during the weekend. This means that our 'central' resources are limited. Each hemisphere draws on these resources, but in a different way. For example, in a task where an answer is needed and a cue is offered, the cue will mainly be used by the left hemisphere. The right hemisphere employs other, more automatic processes. For example, it is known that the right hemisphere recognizes faces that are facing straight forward, so when an image with a face is offered, the right hemisphere immediately focuses on where the face is looking. When voluntarily focusing your attention (such as choosing where you want to look), the left hemisphere seems to be dominant.

What is the difference between global and local processing?

David Navon created a model for studying hierarchical structure. He constructed, for example, a large letter H made up of many small letter F's. The question is which level of the stimulus is perceived first: the large H or the small F's. It appears that the right hemisphere is especially good at viewing 'the whole picture' and will therefore see the letter H sooner, while the left hemisphere is more focused on details and will therefore see the letter F sooner. So the right hemisphere is more involved in global processing, while the left hemisphere is more involved in local processing.

What is the Theory of Mind?

Theory of Mind refers to people's ability to understand what others feel, think, and will do in certain situations. People with autism and very young children, for example, do not (yet) have a fully developed Theory of Mind. Research suggests that this ability is primarily located in the right hemisphere. If this were true, it would be expected that split-brain patients, because they need their left hemisphere for language, would experience many problems in social interactions. However, this appears not to be the case. Differences do appear, however, when determining whether or not a particular action is ethically justified. For example: Grace prepares coffee for her boyfriend and puts white powder in it, which she thinks is sugar. Her boyfriend drinks the coffee and dies, because the white powder was in fact poisonous. Has Grace done anything bad? Healthy people would say no, because she did not know that the white powder was poisonous. But split-brain patients focus mainly on the consequences of the behavior and thus say that she has done something very bad, because her boyfriend is now dead. Young children use the same kind of reasoning as these split-brain patients.

What is meant by 'the interpreter'?

It appears that the left hemisphere is especially important for making interpretations of behavior. This unique specialization is therefore called 'the interpreter'. The interpreter comes up with reasons for behavior that the right hemisphere has initiated, because it often does not understand what the right hemisphere did (to complete a task). The left hemisphere also tries to explain emotional states and the mood of the individual, which it likewise often does not understand. The right hemisphere is sometimes better at tasks because it uses an 'easiest-way' approach, while the left hemisphere always wants to make the best decision. This is not always useful, because sometimes there is no direct cause for something and you should simply act instead of analyzing it.

Research with 'normal brains'

Research is also done with healthy people who do not have a split brain. Many different techniques have been developed for this kind of research. One of these is the dichotic listening task, in which two different messages are presented to the two ears at the same time. The results show that words presented to the right ear are remembered better than words presented to the left ear. This fits with the idea that the left hemisphere is dominant in language.

Nonetheless, there are limitations when doing research on healthy people, such as:

  • Healthy people have an intact corpus callosum, which means that information is passed on very quickly from one hemisphere to another;

  • There is a publication bias, meaning that studies where there is no significant result are not published;

  • Interpretation of the results is often problematic because it is difficult to establish causal relationships.

Other issues in hemispheric specialization

As indicated at the start of this chapter, split-brain surgery can help against epileptic seizures.

Specialized networks are called modules.

Apart from speech production, both hemispheres play a role in cognitive processes. Hemispheric specialization is useful because humans have only one body; without it, they would experience great difficulty when having to choose between two different movements with the same goal. Hemispheric specialization therefore leads to unification.

There is also much uncertainty about left and right-handedness. Previc suggested that this is mainly determined by the position of the fetus during pregnancy.

Hemispheric specialization in animals

Hemispheric specialization is also found in animals such as birds, which have no corpus callosum at all. In animals, however, it is very difficult to interpret the results with regard to brain functions.

Examination in patients with unilateral cortical lesions

When examining patients with unilateral cortical lesions, patients in whom parts of the left hemisphere have been removed are compared with patients in whom parts of the right hemisphere have been removed. If lesions in the left hemisphere cause patients to speak unclearly, this is an indication that speech production occurs primarily in the left hemisphere.

The mystery about consciousness

An explanation for the fact that patients with a split brain do not seem to notice it at all is that consciousness consists of many modules. If the left hemisphere (the interpreter) receives no input from the modules of the right hemisphere, the left hemisphere cannot know that something is missing from the right hemisphere.

How do sensation and perception relate to each other? - Chapter 5

Our senses are our physiological capacities to obtain input from the environment and to pass this input on to our neurological system. We tend to place the greatest value on the sense of "seeing" from a survival perspective, but all the senses are important. Perception starts with a stimulus from the environment, for example a sound or light, which stimulates one of our sensory organs, for example the eye. The input is then converted into neural activity, which is processed by the brain. Sensation refers to the early processing. A perception is the mental representation of an original stimulus. However, many perceptions do not reach our consciousness.

What is sensation?

Different sensory modalities

Every system starts with a kind of anatomical structure for collecting, filtering, and enlarging information from the environment. Each system has specialized receptor cells that translate the environmental stimulus into neural signals. These neural signals go through the specific sensory neural pathways. For example, olfactory signals go to the olfactory nerve and visual signals to the optic nerve. The sensory nerves of the body travel further to the spinal cord and enter the brain through the medulla. From here, all sensory nerves end in the thalamus, with the exception of the olfactory nerve. From the thalamus, the neural connections go to the primary sensory cortex and then to the secondary sensory cortex. The olfactory nerve ends in the olfactory bulb, and goes directly to the primary and secondary olfactory cortexes, rather than via the brainstem and thalamus.

Each sensory modality responds to a limited range of stimuli. This range is not the same for every species: birds and insects, for example, can see certain kinds of light that we cannot perceive. In addition, a stimulus must have a certain intensity to produce a perception.


Adaptation refers to how sensory systems stay tuned: the sensitivity of a sensory system is adjusted to the current environment and to important changes in it. This can happen relatively quickly. Our sensory systems are also refined in distinguishing different sources of information in the environment. This is called acuity. Acuity depends on several factors, namely the design of the stimulus-collecting system and the number and distribution of receptors. If there are more receptors for certain information, this is accompanied by increased cortical representation of that information. This supports flexible behavior.

Physical stimuli are converted into neural signals and further processed in subcortical and cortical brain areas. This can lead to subjective sensory awareness. Every level of processing contributes to the end product, not just the stimulus. Most sensory stimulation ultimately does not reach consciousness.

Multisensory integration

Most people think that sensory processing happens in one direction: information enters through the sensory organs in the brain. However, neural activity is two-way traffic. At all levels of sensory pathways, neural connections go in both directions. The thalamus ensures multisensory integration, because it receives connections and sends projections. This concept will be further discussed later.

How does auditory perception work?

Sounds are important for survival, but also for communication. How does the brain process these sounds and what happens in the ear?

Neural pathways

Complex structures in our inner ear convert sounds into a neural signal. Sound waves enter the ear canal, where they are amplified, and make the eardrum vibrate. This vibration passes through the middle ear via the hammer, anvil and stirrup (the ossicles), causing the oval window to vibrate. The vibration then reaches the cochlea, which is filled with fluid. The fluid vibrates and this stimulates the hair cells of the basilar membrane.

Hair cells:

The location of a hair cell determines its frequency tuning, the frequency to which it responds. Near the oval window the membrane is thick and stiff and responds to high-frequency vibrations. At the other end of the cochlea (the apex) the membrane is thinner and less stiff and responds to low frequencies. This is a tonotopic map. Hair cells are mechanoreceptors: when stimulated, ion channels open, allowing positive potassium and calcium ions to enter the cell. With enough depolarization, transmitters are released, producing a neural signal.

The hair cells are especially sensitive to frequencies between 1000 and 4000 hertz. This range includes speech, but also the crying of a baby. The cochlear nerve projects to the cochlear nucleus in the medulla. The left and right olivary nuclei are stimulated via the pons. These nuclei project to the inferior colliculus in the midbrain, from which motor structures can be reached. Some axons go to the nucleus of the lateral lemniscus in the midbrain, which is important for timing. From here, the information is sent to the medial geniculate nucleus (MGN) of the thalamus. The thalamus transmits the signal to the primary auditory cortex (A1). A tonotopic organization is also found in the brain: the rostral part of A1 is responsible for low-frequency sounds and the caudal part of A1 for high-frequency sounds. These tonotopic maps are also found in the secondary auditory cortex.
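The tonotopic layout of the basilar membrane can also be sketched quantitatively. A common description in the literature is the Greenwood function; the sketch below uses its published human constants, but treat it as an illustration of the base-to-apex frequency gradient, not as part of the chapter itself.

```python
def greenwood_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, where x = 0 is the apex (thin, floppy end, low
    frequencies) and x = 1 is the base near the oval window (thick,
    stiff end, high frequencies). Human constants from Greenwood's
    published fit (assumed values, for illustration only)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map to the apex, high frequencies to the base:
print(round(greenwood_frequency(0.0)))   # apex: ~20 Hz
print(round(greenwood_frequency(1.0)))   # base: ~20677 Hz
```

The monotonic rise from roughly 20 Hz at the apex to about 20 kHz at the base is exactly the tonotopic map the text describes: each place along the membrane answers to its own frequency band.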

The fine frequency resolution of these cells supports the precise discrimination needed for perceiving sounds, and it is sharper in humans than in other species. The local organization, however, is less tidy, with adjacent cells often showing different tuning. Natural sounds contain information at different frequencies, and the local organization depends on experience with these sounds.

Sounds have unique resonance properties, which, in addition to frequency variation, are important when identifying words or music. Auditory perception is not only about identifying the acoustic stimulus, but also about locating it. Cognitive neuroscience has therefore focused primarily on the "where" in auditory research and less on the "what." Integration of the input from both ears determines the "where."

Cues for localization

Auditory perception research often uses owls, because these animals have a very well-developed auditory system. Owls can see well in the dark, but they cannot rely on vision alone; they therefore also rely on sound. Owls use two types of cues to locate sound, namely:

  • the difference between when the sound reaches the left ear and the right ear (interaural time);

  • the difference in the intensity of the sound between the two ears.

Both cues stem from the fact that the sound reaching the two ears is not identical: a sound source is rarely straight ahead, so the sound reaches one ear slightly earlier and with slightly greater intensity than the other. Research shows that the two cues are used independently of each other: based on the interaural time difference the owl moves its head horizontally, and based on the intensity difference it moves its head vertically. The source of a sound cannot be located if the sound reaches only one ear. Cortical processing is perhaps important for converting location information into action.
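The interaural time cue can be made concrete with a simple path-length model: a sound from angle θ must travel an extra distance d·sin(θ) to reach the far ear, where d is the ear separation. The sketch below assumes a rough barn-owl head width and the speed of sound in air; both numbers are illustrative assumptions, not values from the chapter.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (assumed)
EAR_SEPARATION = 0.05    # m, rough barn-owl head width (assumption)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate interaural time difference (seconds) for a distant
    source at the given horizontal angle, using the simple model
    ITD = d * sin(azimuth) / c."""
    return EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A source straight ahead reaches both ears at the same moment;
# a source off to one side produces a measurable delay.
print(interaural_time_difference(0.0))                 # 0.0
print(round(interaural_time_difference(90.0) * 1e6))   # ~146 microseconds
```

Even the maximal delay is a fraction of a millisecond, which is why the localization circuits described above need such precise timing.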

How does the sense of smell work?

Odor perception also plays an important role in survival, so animals (and we too) can smell whether food is safe or if, for example, gas is leaking. Smell is also important in social communication. Pheromones that are secreted lead to a social response in other people.

Neural pathways

Odor molecules (odorants) end up in the nasal cavity. This can happen in various ways, for example during normal breathing or when we actively sniff. They can also enter passively, because the air pressure in the nose is usually lower than outside (a pressure gradient). A final route into the nasal cavity is through the mouth. The molecules attach themselves to receptors on the membrane lining the nasal cavity, the olfactory epithelium. The receptors do not respond to all odorants. Odorants with the same vibration spectra probably lead to the same smells, while structurally similar molecules can lead to different smells. When an odorant binds to a bipolar neuron, a signal is sent to the neurons of the olfactory bulb (the glomeruli). Axons from the olfactory bulb form the tract that runs to the primary olfactory cortex. Most fibers connect with the ipsilateral cortex, but some have a contralateral connection. In addition, the olfactory pathway does not pass through the thalamus. The primary olfactory cortex projects to a secondary olfactory region in the orbitofrontal cortex, which has connections to the thalamus, hypothalamus, hippocampus and amygdala. Scents can therefore influence autonomic behavior.

Sobel’s research into sense of smell

Until very recently, the sense of smell of humans was not studied at all in cognitive neuroscience. This is because it is difficult to offer scent in a controlled manner, it is difficult to determine whether a scent is still present and many scents are rather subtle, which means that they must be examined by sniffing.

Sobel investigated our sense of smell. Participants sniffed neutral air, air with a vanilla scent, or air that smelled of pencils. Every so often, participants had to breathe in and say whether there was a smell or not, while their brain activity was measured. Unexpectedly, smelling did not lead to consistent activation of the primary olfactory cortex; instead the orbitofrontal cortex, a secondary olfactory region, was activated. Activation of the primary olfactory cortex was found to be related to the degree of sniffing, so this part of the brain seemed more related to the motor act than to the perception of odor. Further research, however, showed that cells in the primary olfactory cortex quickly habituate to a given odor. Sobel therefore adapted his design by creating strong increases or decreases in odor. Now activation of the primary olfactory cortex was found during both smelling and sniffing. The primary olfactory cortex may thus be important for detecting a change in external odor, and the secondary olfactory cortex for identifying the odor.

Adjustment of our nostrils

It appears that our nostrils alternate in size: during a certain period one nostril is larger than the other, and after a while this reverses. Why? You can compare this with our eyes: just as each eye transmits a slightly different image to the brain, each nostril transmits a slightly different odor sample. This allows more complex information to be processed efficiently.

How does taste perception work?

Taste and smell are strongly related and the stimulation by chemicals starts with both. They are therefore also referred to as chemical senses.

Neural pathways

There are papillae on the tongue. Some are involved in taste, others in the secretion of tongue lipase, which helps break down fats. The papillae on the anterior side of the tongue contain taste buds. The taste pores lead from the tongue surface to the taste buds. Each taste bud contains many taste cells. Most taste buds are located on the tongue, but they are also found in the cheek. The basic flavors are salty, sour, sweet and bitter. The basic taste umami refers to what you taste when you eat protein-rich food (meat).

It all starts with food molecules, called tastants. They stimulate receptors on taste cells, and this leads to depolarization at the synapses of bipolar neurons. Taste cells sit in our taste buds. Each basic taste leads to a different pattern of chemical signaling. The axons of the bipolar neurons run to the chorda tympani, a branch of the facial nerve. This nerve projects to the gustatory nucleus in the rostral region of the nucleus of the solitary tract in the brainstem. The caudal region of the solitary nucleus receives sensory neurons from the gastrointestinal tract. Integration at this level can result in a rapid response.

The information then passes via the ventral posterior medial nucleus (VPM) of the thalamus to the primary taste cortex. The end station for the integration of scents and flavors is the secondary areas in the orbitofrontal cortex. Irritants activate nociceptive receptors, a type of pain receptor, which send output to the trigeminal nerve. This nerve also carries information about position and temperature, and can cause reflex responses after eating something spicy, such as coughing or salivation.

There is a lot of individual variation in the number and types of taste buds. The basic tastes tell the brain what type of food is being eaten: umami indicates that protein is being eaten, sweetness indicates carbohydrates, salt signals the balance between minerals and water, and bitter and sour are warning signs. The orbitofrontal cortex seems important for higher-level cognitive processing, such as decision making and response selection, but also for processing the pleasure and reward value of food.

How does somatosensory perception proceed?

This is the perception of all mechanical stimuli that affect the body, such as position, temperature, pressure and pain.

Neural pathways

The somatosensory receptors are located under our skin and at musculoskeletal junctions. Touch and pressure are noticed by the corpuscles and pain by the nociceptors. There are three types of nociceptors:

  • thermal receptors: reaction to hot and cold;

  • mechanical receptors: response to heavy mechanical stimulation;

  • polymodal receptors: response to many painful stimuli.

A painful experience arises because the body releases chemicals after injury. The initial, sharp pain is carried by myelinated axons; the unmyelinated fibers become active after this initial pain.

There are also special cells for body position, or proprioception. These allow the sensory and motor systems to represent information about the state of muscles and limbs. The cell bodies of somatosensory receptors are located in the dorsal root ganglia, and their axons enter the spinal cord through the dorsal root. Reflexes arise through direct synapses on motor neurons. Other neurons ascend through the dorsal columns to the medulla, where they cross, and then travel via the ventral posterior nucleus of the thalamus to the cortex. Much somatosensory information is also sent to subcortical structures.

The primary somatosensory cortex (S1) receives the information and has a somatotopic representation of the body. The secondary somatosensory cortex forms more complex representations. The secondary somatosensory cortex receives information from both body sides for integration.


Cortical plasticity refers to the brain's ability to make new neuronal connections and functionally reorganize itself. A lot of research has been done into this. It has been discovered that in monkeys the shape and size of sensory and motor cortexes can be changed through experience. When a finger is amputated, the cortical area of the neighboring finger takes over its territory. One can still have the sensation of feeling in an amputated limb, which is called a phantom limb sensation. In addition, a cortical area can expand if a certain body part is used often. When this practice stops, the brain areas dedicated to that body part return to their original size. This is called the use-it-or-lose-it phenomenon.

Plasticity could arise because weak connections that already existed are no longer suppressed and therefore become stronger, because responses to weak excitatory input are strengthened, or because new synapses and axons grow.

How does visual perception work?

Neural paths of sight

One reason why vision is so important is that it allows us to take in information from a distance. This is also called remote sensing or exteroceptive perception: we do not need to be in direct contact with something to process it, in contrast to touch. Being able to process visual information remotely allows a better assessment of danger.

Visual information enters through light reflecting off an object; the light passes through the lens of the eye and is projected, inverted, onto the retina. The deepest layer of the retina consists of photoreceptors, which contain light-sensitive molecules (photopigments). When light falls on the photoreceptors, the photopigments change, altering the electric current around the photoreceptors and producing a neural signal. Photoreceptors thus convert an external light stimulus into an internal neural signal.

There are two types of photoreceptors. (1) Rods respond to low levels of stimulation and are therefore especially useful at twilight and in the dark. Rods are spread widely across the retina. (2) Cones need stronger light and are therefore active during the day. Cones are also used for seeing colors; they can be subdivided into red-sensitive cones for long wavelengths, green-sensitive cones for medium wavelengths and blue-sensitive cones for short wavelengths. Cones are mainly packed in the center of the retina, an area called the fovea. There are also cones in the rest of the retina, but far fewer.

Optical processing

The signals from the photoreceptors go through the bipolar cells to the ganglion cells. There are many more photoreceptors than ganglion cells, so the information must converge. The axons of the ganglion cells form the optic nerve, which transports the information to the central nervous system. Before the nerve enters the brain, it splits into two parts: a temporal branch, which continues to run ipsilaterally, and a nasal branch, which crosses to the contralateral side. The place of crossing is called the optic chiasm.

The left part of the visual field falls on the right half of the retina in both eyes: the nasal half of the left eye and the temporal half of the right eye. The reverse applies to the right part of the visual field. Through this partial crossing of the optic nerve, all information from the left visual field is sent to the right hemisphere and all information from the right visual field to the left hemisphere. Once in the brain, each pathway divides into routes with different final destinations.
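The routing rule described above can be captured in a few lines, which makes the key point explicit: the hemisphere that receives the input depends on the visual hemifield, not on which eye the light entered. The function name and string arguments below are illustrative choices, not terminology from the chapter.

```python
def projection_hemisphere(eye: str, visual_hemifield: str) -> str:
    """Which cortical hemisphere receives input from a given eye and
    visual hemifield? Light from the left hemifield lands on the right
    half of each retina: the nasal retina of the left eye (which
    crosses at the optic chiasm) and the temporal retina of the right
    eye (which stays ipsilateral) both end up in the right hemisphere.
    """
    # The deciding factor is the hemifield alone: temporal fibers stay
    # ipsilateral and nasal fibers cross, so both routes converge on
    # the hemisphere opposite the hemifield. The eye argument is kept
    # only to make that independence visible.
    return "right" if visual_hemifield == "left" else "left"

for eye in ("left", "right"):
    for field in ("left", "right"):
        print(eye, "eye,", field, "field ->", projection_hemisphere(eye, field))
```

Running the loop shows the same hemisphere for both eyes given one hemifield, which is exactly why a split-brain experimenter can address a single hemisphere just by lateralizing the stimulus.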

Ninety percent of the axons go to the lateral geniculate nucleus (LGN); this path is called the retinogeniculate pathway. The projection from there to the primary visual cortex (V1) is the geniculocortical pathway. Ten percent of the axons go to subcortical structures, such as the superior colliculus and the pulvinar nucleus, and play a role in visual attention.

Neurons in the visual system represent space, because each neuron responds only to stimuli in a certain region of space, called its receptive field. Neighboring points in external space are represented by neighboring cells in V1; this is called a retinotopic map, and it covers the entire contralateral visual hemifield. Receptive fields grow larger as visual information travels further along the cortical pathway: while LGN cells respond to stimuli in a very limited region, cells in the temporal lobe can respond to an entire visual hemifield.

Cortical visual areas

Outside the primary visual cortex there are many cortical visual areas; more than 30 such areas have already been discovered in monkeys. The primary visual cortex (V1) is also called the striate cortex. Other visual areas (V2, V3, etc.) are called extrastriate visual areas. Connections between areas are often reciprocal.

Various visual areas have formed in the course of evolution, and there are several hypotheses about why. According to one view, the areas form a hierarchy, with each area adding something to the processing. Simple cells of the primary visual cortex compute the edges of an object, complex cells in secondary visual areas use this information to compute corners, and higher cells determine the final shape of the object from all this information. This simple hierarchy does not seem to match the many reciprocal connections described above.

Another hypothesis holds that visual perception is an analytical process: different visual areas deal with different characteristics of the visual information. According to this hypothesis, the various areas not only process where an object is, but also specific information about the object's properties. Processing is therefore divided and specialized across visual areas. Research supports this hypothesis. For example, cells in the middle temporal area (MT, or area V5) appear to respond in the same way to different colors, but they are sensitive to movement and direction: certain cells are only active if a stimulus falls into their receptive field, and are more active if the stimulus moves in their preferred direction.

Different visual areas are therefore specialized in the representation of different visual properties. On the basis of the results of the research in monkeys, the human brain has been studied using neuro-imaging techniques.

Zeki’s investigation into color detection:

To determine which areas are involved in color detection, Zeki performed the following study. In the control condition, subjects were shown a collage of different shades of gray, so that areas sensitive to contrast would be activated. In the experimental condition, the gray patches were replaced by colors matched to the brightness of the gray shades. The expectation was that neurons sensitive to brightness would be equally active in both conditions, whereas the colored stimulus would produce more activity in neural regions sensitive to chromatic information.

Motion recognition has been studied in the same way. Subjects were shown a collage of black and white areas in the control condition. In the experimental condition, these planes moved for five seconds in one direction and five seconds in the other direction. A PET scan was used to make the activity visible.

The V4 and V5 area:

Both studies clearly show that the two tasks activate different parts of the brain. In the color study, activation for color detection was particularly visible in the most anterior and inferior areas of the occipital lobe, in both hemispheres. This area is called V4. In the motion study, activation for motion detection was particularly visible at the junction of the temporal, parietal and occipital cortexes in both hemispheres. This area is called V5.

When comparing animal and human research, it is important to remember that there are differences between the brains; the most obvious is that the human brain is much larger. Such differences mean that one must be careful when generalizing from animal research to humans and vice versa.

The areas involved in motion detection were further investigated with fMRI, comparing foveal and peripheral stimulation of the retina. The areas above the calcarine sulcus, except for the most anterior parts, are active when a rotating stimulus is in the lower quadrant; the reverse applies to a stimulus in the upper quadrant. The primary visual cortex lies most posterior, while the secondary visual areas lie more anterior. Visual acuity appears to be greater with stimulation of the fovea than with stimulation of the periphery.

From sensation to perception

Individual differences in perception

There are many individual differences in how people see illusions. Take the Ebbinghaus illusion, in which the central circles of a left and a right display are compared. Everyone estimates the right circle to be larger, but by how much differs per person: some say 10%, others 50%. Neuroimaging is also used in research into visual illusions. It shows that people with a smaller V1 area see a greater illusion. One hypothesis is that with a larger visual cortex, each region gives a better representation of its part of space, so that each area is less easily influenced by neighboring regions.

What forms of damaged visual perception exist?

Before imaging methods were available, researchers learned mainly from lesion studies. Verrey described a patient in 1888 who could no longer see color in the right half of the visual field. After autopsy, Verrey concluded that there is a center for the chromatic sense, located in the lingual and fusiform gyri.

Achromatopsia: disturbance in the perception of color

People who are colorblind often have a genetic defect that disrupts the functioning of the photoreceptors, making them insensitive to one or more wavelengths of light. Dichromats have only two photopigments; anomalous trichromats have all three photopigments, but one has abnormal sensitivity.

Achromatopsia is a disorder of the central nervous system that disturbs the perception of color. Patients with this disorder see shades of gray instead of color, while other aspects of visual perception, such as seeing depth, are intact. In achromatopsia, not only the perception of color itself is disturbed, but also the perception of color intensity: no difference is seen, for example, between dark green and light green.

Achromatopsia is associated with damage to area V4 and areas anterior to it, although the lesions often extend to neighboring regions of the visual cortex. People with achromatopsia can also have trouble with shape perception. Research has been done on a patient with a lesion at the temporal-occipital border in the right hemisphere. The patient had to indicate when he saw the same color twice. The stimulus was presented at the fovea, but also in the four quadrants of the visual field. The patient had particular difficulty judging color similarity when the stimulus was presented in the upper left quadrant.

On tasks focused on the perception of contrast, orientation, and movement, the patient performed equally well in all four quadrants and on a par with the control group. Only for the perception of form was a deficit found when the stimulus was presented in the upper left quadrant. In cases of brain damage, one must therefore be aware that several functional domains may be affected. V4 is thus not exclusively devoted to color perception, but also contributes to shape perception.

Akinetopsia: disturbance in the perception of movement

Akinetopsia is the loss of motion perception. One patient had a selective deficit: she saw only static images. She could deduce that something had moved, but she had not perceived the movement itself. Her perception of color and shape was intact. She turned out to have bilateral lesions in the temporo-parietal cortices, including the posterior and lateral parts of the middle temporal gyrus. These areas are responsible for the perception of motion. The lesions were lateral and superior to V4, and included V5.

Such a complete loss is probably only the result of bilateral lesions; with unilateral lesions, the limitations in motion perception are more subtle. Motion perception remains intact as long as V5 is undamaged in at least one hemisphere. The hypothesis is that the perception of movement unfolds over time, so that signals arriving at the damaged hemisphere have time to pass on to the undamaged hemisphere.

However, stimulation of V5 with TMS can lead to temporary disturbances in motion perception. The timing of TMS stimulation can be varied to determine the moment of maximum disruption, which helps localize the underlying processing. Disruption occurs 100 ms before the onset of the target stimulus and 150 ms after its onset. The second disturbance coincides with the period in which activity within V5 is important for integrating motion information to determine the direction of a moving stimulus. The first disruption could be due to a disturbance of attentional focus, or because TMS causes not only the neurons in V5 to fire, but also, after a short delay, the neurons in V1.

Perception without visual cortex

When a patient has damage to the primary visual cortex, you would expect that person to be blind, and that is indeed found in most cases. However, the blindness need not cover the entire visual field. When vision is lost in one half of the visual field, this is called hemianopsia. A smaller lesion can knock out a more restricted area (a scotoma); patients cannot see anything within the scotoma.

Animal studies have shown that the superior colliculus plays a role in eye movements: if it is damaged, the animal can no longer move its eyes, and the direction of evoked eye movements depends on the location within the superior colliculus that is stimulated. Schneider studied the role and functioning of the colliculus in hamsters. The hamsters were trained on two tasks. In the first task, the hamster had to turn its head toward a sunflower seed in the researcher's hand; since hamsters love sunflower seeds, this task was not difficult for them. In the second task, the hamster had to choose between two alleys, each with a horizontally or vertically striped door at the end. Behind the correct door lay another sunflower seed. After practice, the hamsters almost always chose the correct door.

After training, the hamsters were divided into two experimental groups. The first group received bilateral lesions of the visual cortex; in the second group the input to the superior colliculus was cut, so that it could no longer function. The first group could no longer perform the second task, because the animals could no longer distinguish between the two doors; the first task posed no problem. The second group could no longer perform the first task: the hamsters behaved as if they were blind. However, when the sunflower seed was brushed along their whiskers, there was an immediate response. From this research it was concluded that hamsters apparently have separate systems for spatial orientation and for object recognition.

What is multimodal perception?

The senses that we have as human beings do not operate separately, but form a whole. This is reflected in the fact that we are more accurate and efficient on sensory tasks if we can use information from multiple senses. Our expectations also contribute to this. The sounds we hear are influenced by visual signals, as demonstrated by the McGurk effect: the perception of speech is influenced by the lip movements that we see.

Multi-modal processing in the brain

The brain combines input from multiple sensory systems about an external event, determines the reliability of each sense, and makes a decision with this information. Multisensory integration probably takes place in different brain regions, both cortical and subcortical. This was investigated by placing electrodes in the brains of animals while stimuli were presented in various sensory modalities.

Cells in the superior colliculus combine information from different sensory channels and integrate it into a whole. The response is even stronger when input is received from multiple sensory modalities rather than a single one. Multisensory signals are judged by the brain to be more reliable than input from a single sensory channel. Integration effects require the different stimuli to coincide in space and time.
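This reliability-weighted combination is often modeled as maximum-likelihood cue integration, in which each sense is weighted by its inverse variance. A minimal sketch (the cue estimates and variances below are hypothetical values for illustration, not data from the text):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each sensory channel contributes an estimate weighted by its
    reliability (inverse variance); the combined estimate is more
    reliable (lower variance) than any single cue alone.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    combined_variance = 1.0 / reliabilities.sum()
    return combined, combined_variance

# Hypothetical example: locating a stimulus (degrees of azimuth)
# from a precise visual cue and a noisier auditory cue.
loc, var = combine_cues(estimates=[10.0, 14.0], variances=[1.0, 4.0])
```

The combined estimate (10.8 degrees) lies closer to the more reliable visual cue, and its variance (0.8) is smaller than either single-cue variance, mirroring the finding that multisensory responses are stronger and judged more reliable.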

The superior temporal sulcus has many connections to and from the sensory cortices. Different cells in this area respond to visual, auditory, and somatosensory stimuli. Certain areas in the parietal and frontal lobes and the hippocampus are also involved in sensory integration. In one task, disruption of the visual cortex by TMS led to improved auditory performance. This effect arises because auditory and visual stimuli can enhance perception in each other's sensory modality.

It is possible that the entire neocortex is partly multisensory and that the initial integration is subcortical; for example, there is multisensory input from the cortex to the thalamus. However, the claim that the entire cortex is multisensory is not plausible. Another possibility is that multisensory integration involves projections that start in modality-specific cortical areas. These projections can run between the sensory areas for rapid modulation, or to an area where all information converges.


Synesthesia is a phenomenon in which the different senses intertwine. Commonly, synesthetes experience words or letters as colors, but words can also evoke certain tastes. The phenomenon is idiosyncratic, i.e. personal: not every synesthete experiences the same thing; one person may experience the color green with the number 6, another may not. Synesthesia seems to have a genetic basis. It is tested with a modified Stroop task, which shows faster responses when the physical color of the word matches the synesthetic color. In this type of color experiment we see activation in V4, the superior temporal sulcus and the left medial lingual gyrus. Synesthetes also appear to have larger white matter tracts in the right inferior temporal cortex, the left parietal cortex and bilaterally in the frontal cortex, although there are many individual differences.

Synesthesia is a good example of the idea that different sensory areas are related to each other.

Is there a functional reorganization?

To discover whether functional reorganization is possible, a study made PET scans of people with normal vision and people who were blind from birth. The primary visual cortex of people with normal vision turned out to be less active during tactile discrimination tasks, possibly because the focus was on a different modality. In blind people, however, activity in the primary visual cortex was increased during tactile discrimination tasks. When blind people read Braille, activity is also seen in the primary and secondary visual cortex. This shows that tissue can develop differently when the environmental context changes, possibly through a reorganization of corticocortical connections. This reorganization can take place very quickly, within a few days.

What is important in object recognition? - Chapter 6

A number of things are important when talking about object recognition. First, it is important to make a clear distinction between the terms seeing, perception, and recognition. Second, bear in mind that perception requires more than just registering features or objects. Third, our perceptual capability is extremely flexible and robust. Finally, the product of perception is closely linked to memory.

What is agnosia?

Agnosia is a neurological syndrome in which recognition deficits cannot be attributed to damage to basic sensory processes. Agnosia can be limited to a single modality, such as seeing or hearing. The word agnosia comes from the Greek "a-" (without) and "gnosis" (knowledge). When agnosia is limited to vision, we speak of visual agnosia. Patients with visual agnosia often can no longer see details as a coherent whole. It may also be that perception is meaningful, but that the object is recognizable only from a certain angle, or that the object cannot be linked to stored knowledge about its function.

How does the cortical processing of vision proceed?

There are two cortical pathways for visual perception, both starting from the primary visual cortex. The ventral, or occipitotemporal, pathway runs along the inferior longitudinal fasciculus; object perception and object recognition take place via this path, making it known as the "what" pathway. The dorsal, or occipitoparietal, pathway runs along the superior longitudinal fasciculus and is involved in spatial perception: determining where an object is in space and the spatial configuration between objects. This makes it known as the "where" pathway.

In a classic study, a number of animals were given bilateral lesions of the parietal lobes, disrupting the dorsal stream. These animals had difficulty determining where an object was in relation to other objects, but could distinguish two objects well. Other animals were given bilateral lesions of the temporal lobes; these had no trouble determining where an object was in relation to other objects, but found it difficult to distinguish between different objects. This study reveals the existence of the "what" and "where" pathways. The separation of "what" and "where" appears not to be limited to the visual system: the anterior parts of the primary auditory cortex seem to specialize in auditory pattern processing (what), and the posterior regions in identifying spatial location (where).

Differences between dorsal and ventral paths

The neurons in the temporal and parietal lobes have large receptive fields. Neurons in the parietal lobe can respond similarly to very different stimuli and are primarily responsive to stimuli in the eccentric parts of the visual field: forty percent respond to stimuli at the fovea, sixty percent to stimuli outside it. These neurons detect the presence and location of a stimulus. A similar role is played by the superior colliculus, a subcortical structure that is also important for visual attention. The neurons in the temporal lobe, in the ventral stream, have receptive fields that always include the fovea, and most are activated by a stimulus in either the left or the right visual field. This is useful for object recognition. The posterior region of the temporal lobe is important in the early processing of simple properties, while more complex properties are processed further along the pathway.

To investigate this in humans, neuroimaging research using PET has been done to characterize the differences between the "what" and "where" pathways. A position task produces increased cerebral blood flow in the parietal lobe of the right hemisphere; an object task produces increased cerebral blood flow bilaterally at the junction of the occipital and temporal lobes.

Identification versus action

The parietal cortex is important for spatial attention. With lesions in this lobe there are disturbances in the ability to represent the space and to see the spatial relationships between objects.

Patient D.F. has brain damage due to carbon monoxide poisoning and a severe object recognition disorder, although she can recognize an object when she holds it in her hand. The agnosia is not a result of anomia (a problem with naming objects) or of reduced visual acuity. One study with D.F. yielded striking results. She was shown a slot at some orientation while holding a card in her hand. Asked to rotate the card so that it would fit into the slot, without actually inserting it (perception condition), she failed. Asked to turn the card so that it would fit and to actually insert it into the slot (action condition), she succeeded effortlessly in one go. When asked to repeat the action without the slot present, she also succeeded without problems (memory condition). This points to the distinction between the "what" and "where" systems: they carry similar information, but support different aspects of cognition. The "what" system serves to identify an object, and the "where" system not only to locate it, but also to interact with it.

What and how

This research shows that different systems use different visual information. According to the researchers Goodale and Milner, the distinction is between "what" and "how" rather than "what" and "where": the dorsal visual system is apparently tied to motor systems, specifying how to perform an action. D.F. had bilateral lesions in the ventral stream, which also affected the lateral occipital cortex (LOC). When grasping objects, D.F. showed activity in the anterior regions of the inferior parietal lobe.

Patient J.S. could easily pick up objects, but could not identify them, a dissociation between vision for recognition and vision for action. J.S. was found to have damage to the medial part of the ventral occipitotemporal cortex (OTC). Possibly both the lateral and medial parts of the ventral stream are needed for object recognition. In optic ataxia, caused by lesions in the parietal cortex, the reverse pattern occurs: object recognition is intact, but the visual information cannot be used to guide action. The eye movements also betray a loss of spatial knowledge, because they are not properly targeted.

However, the dorsal and ventral streams do not work in isolation; they communicate extensively. The parietal lobe is also important in selective attention, boosting processing at some locations over others.

How is object recognition achieved?

Object perception depends on the analysis of the shape of a stimulus. Cues such as color, texture, and movement contribute to normal perception, but object recognition arises by forming a percept based on the analysis of shape, independent of color, texture, and movement.

Variability with sensory information

Object constancy is the ability to recognize objects in different situations. Object constancy is essential for perception, because it would be very difficult if you only recognized something from a certain angle. Different forms of variability are possible:

  1. The sensory information that comes in depends on the viewpoint and the position of the object, both of which vary constantly. The perceptual system is adapted to this, so that objects can still be recognized.

  2. The sensory information that comes in is influenced by differences in exposure. The light can come from different sides. Moreover, there are differences in shade. Recognition of objects seems insensitive to these differences.

  3. Objects are often seen in different environments with many other objects, and not in isolation, but this also does not affect the recognition of the object.

Recognition: dependent or independent of the point of view

You can distinguish two "camps" in thinking about recognition. According to "view-dependent frame of reference" theories, recognition of an object depends on the viewpoint. For example, in a side view a bicycle can be recognized by the frame, the wheels and the saddle; in a top view, by the handlebars and the elongated shape. These different representations of the same object would be stored in memory, and recognition would take place by matching the visual information with a stored representation. Only in a later phase of processing would we recognize two views as the same object. However, this seems unlikely, because an extraordinarily large number of representations would have to be stored in memory. Moreover, we can also recognize a new object from a different viewpoint.

According to "view-invariant frame of reference" theories, structural information is extracted from the components of an object, as well as the relationships between these components. Recognition is possible because these key properties remain the same regardless of the viewpoint. In the case of a bicycle, its length is the principal axis and the handlebar a minor axis; a number of these axes always remain visible from different viewpoints.
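The contrast between the two camps can be illustrated with a toy computation. In this hypothetical sketch (the "object", its feature points and the stored views are invented for illustration), a view-dependent scheme stores explicit templates for a few viewpoints, while a view-invariant scheme stores rotation-invariant structure, here the pairwise distances between an object's components; only the latter recognizes a never-stored viewpoint:

```python
import numpy as np

def rotate(points, theta):
    """Rotate 2D feature points (row vectors) by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]])

# Toy "object": a set of 2D feature points (a hypothetical shape).
obj = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 0.5], [0.5, 1.0]])

# View-dependent scheme: store templates for specific views and
# match a new image directly against each stored view.
templates = [rotate(obj, th) for th in (0.0, np.pi / 2)]

def match_view_dependent(image, templates, tol=1e-6):
    return any(np.allclose(image, t, atol=tol) for t in templates)

# View-invariant scheme: store rotation-invariant structure
# (sorted pairwise distances between the object's components).
def structure(points):
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.sort(d[np.triu_indices(len(points), k=1)])

stored_structure = structure(obj)

def match_view_invariant(image, tol=1e-6):
    return np.allclose(structure(image), stored_structure, atol=tol)

novel_view = rotate(obj, 0.7)  # a viewpoint that was never stored
```

A real view-dependent system would generalize with some tolerance around each stored view; the point of the sketch is only that it needs a stored view near the input, whereas the invariant scheme recognizes any rotation of the stored structure.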

It is possible that the brain uses both frames of reference in object recognition. This was investigated in an fMRI study by Vuilleumier. Previous studies had found that when a stimulus was repeated, the BOLD response decreased on the second presentation compared to the first. This repetition suppression effect points to neural efficiency: the neural response to the stimulus is more efficient and faster because the pattern has been activated before. A repetition suppression effect was found in the left ventral occipital cortex regardless of whether the second presentation showed the same or a different view, indicating a view-invariant frame of reference. Activation of the right ventral occipital cortex decreased only if the viewpoint was the same on the second presentation, indicating a view-dependent frame of reference.

Shape coding

There is a kind of hierarchy in object recognition: it starts with simple features, which are combined later in the hierarchy. Perception requires a connection between sensation and memory; feature extraction takes place in the early phases of visual perception, while memory comes into play with new or known objects. We see increased blood flow to the bilateral lateral occipital cortex (LOC), which is important for shape and object recognition, but no differences between new and known stimuli in the posterior cortical regions. In addition, there is insensitivity to the particular visual cues that define a shape, which is called cue invariance: shape is the dominant feature.

Grandmother cells

The cells in the inferotemporal lobe appear to respond specifically to complex stimuli. Cells in the early areas of the visual cortex are important for elemental properties, and their output is combined into higher-order properties. Each subsequent stage leads to more complex combinations, until we arrive at gnostic units: cells that fire at the recognition of an object. From this comes the term grandmother cells: upon seeing your grandmother, the corresponding cell would be activated. However, this theory runs into three problems:

  1. If that particular cell were to die, it would mean that the associated object could not be recognized. This is unlikely.

  2. We come across new objects every day and can recognize them.

  3. The idea does not explain how recognition adapts to the changing "grandma" over the years.

An alternative theory is that of ensemble coding: different cells work together to recognize the different characteristics of an object, so that "grandma" is recognized when a number of these higher-order neurons are activated together. This explains why we see similarities between objects and can make recognition errors, for example mistaking a stranger for someone we know: both activate many of the same neurons. The loss of some units reduces our ability to recognize objects, but the remaining units may suffice for recognition. It also explains the recognition of new objects, which share features with known objects. Cells in the inferotemporal cortex thus have preferred stimuli to which they respond, but they also respond to stimuli that resemble these.
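The ensemble idea can be sketched with a toy population code. In this hypothetical example (the objects, unit counts and noise levels are invented), each object is a distributed activity pattern over many units, and recognition picks the stored pattern with the highest cosine similarity; the code still recognizes "grandma" from a noisy input and after losing a fifth of the units:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population code: each object is a pattern of activity
# across 100 higher-order units, not a single "grandmother cell".
n_units = 100
stored = {name: rng.normal(size=n_units)
          for name in ["grandma", "stranger", "dog"]}

def recognize(pattern, memory):
    """Return the stored object whose population pattern is most
    similar (cosine similarity) to the input pattern."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(memory, key=lambda name: cos(pattern, memory[name]))

# A noisy view of grandma still activates mostly the right units...
noisy = stored["grandma"] + 0.3 * rng.normal(size=n_units)

# ...and even after losing 20 of the 100 units (a "lesion"), the
# remaining units still carry enough of the distributed pattern.
lesioned = noisy.copy()
lesioned[:20] = 0.0
```

Because similar inputs activate overlapping ensembles, the same scheme also predicts confusions between look-alike stimuli, as the text describes.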

What are the forms of agnosia?

Patients with visual agnosia have problems with object recognition even though the visual information registers at the cortical level, in contrast to someone who is completely blind. Agnosia must be distinguished from recognition problems caused by disorders of attention, memory, or language. For example, a person with Alzheimer's disease cannot recognize an object even when holding it or when it is described verbally, whereas someone with visual agnosia can; recognition in agnosia fails in one sensory modality but not in others. It is not a general loss of knowledge or vision, but a loss of knowledge within the visual system, or a disruption of the connections between the visual system and modality-independent stores of knowledge. Visual agnosia can be divided into apperceptive agnosia and associative agnosia, which shows that object recognition problems can occur at any level of processing. Agnosia is not always easily detected during neuropsychological examination.

Apperceptive agnosia

In apperceptive agnosia, patients perform normally on shape discrimination tasks and have little difficulty recognizing objects, provided the most salient properties are visible. The recognition problems start when objects must be identified from limited stimulus information or from an unusual perspective, so that the most salient features are missing.

Patients with a right parietal lobe lesion performed worse on a simple perceptual matching task than patients with a left hemisphere lesion or control subjects; damage to the left hemisphere had little effect on performance. This could mean that the integration of spatial information is disrupted in patients with a right hemisphere lesion. To test this hypothesis, Warrington used the Unusual Views Object Test, in which objects are shown from both a common and an unusual viewpoint. Patients with right posterior lesions had no trouble naming an object from a usual viewpoint, but did from an unusual one. Our perceptual system is able to extract the important properties of an object in order to identify it, which allows the brain to discover similarities and differences. The ability to achieve object constancy is limited in people with apperceptive agnosia. This occurs more often with right hemisphere lesions, indicating that this hemisphere is important for object constancy.

Integrative agnosia

People with integrative agnosia are unable to integrate the features of an object into a whole. The problem was first formally described in patient H.J.A. This patient had no problem matching photos, but did have trouble identifying objects that overlap. To recognize a dog, for example, he first had to observe all the dog's individual features (the paws, the characteristic shape of the body and the head) before he could see the animal as a whole. When objects overlap, one must not only identify the parts, but also attribute each part to the right object.

Associative agnosia

Associative agnosia involves the loss of visual object recognition, but not as a result of integration problems or perceptual disturbances: objects are perceived, but not understood or given meaning. It is often caused by diffuse, non-localized lesions. Patient F.R.A. suffered an infarction of the left posterior cerebral artery and developed dyslexia, an impaired ability to read. F.R.A. has no problems with perceptual tests: when different objects are drawn over each other, he is able to color each object with a different color. Strikingly, however, he cannot name the objects. When the name of an object is spoken, F.R.A. can give a description of the object. Shown a drawing of a dog and a mouse, F.R.A. cannot say which animal is larger, but after hearing the animals' names this is no longer a problem. In associative agnosia, there is thus a lack of access to knowledge from the visual system.

Research was done using a test in which objects had to be matched by function. Shown three images, patients had to indicate which two objects have the same function. This is a problem for patients with posterior lesions in either hemisphere: patients with right hemisphere lesions cannot recognize the objects (apperceptive agnosia), while patients with left hemisphere lesions cannot make the functional connection between visual percepts, due to a lack of access to conceptual representations.

What is categorical specificity in agnosia?

Associative agnosia is thus the result of a loss of semantic knowledge about the visual structure or properties of objects: early perceptual analysis goes well, but the link with long-term memory is missing, and so the object is not recognized. Patient reports reveal a further phenomenon: some patients experience problems with specific objects or categories. Take patient J.B.R., who contracted herpes simplex encephalitis and as a result has amnesia (memory loss) and word-finding problems; he was diagnosed with associative agnosia. J.B.R. has particular difficulty naming living objects. A dissociation between living and non-living objects has also been found in other patients.

How is this possible? If you assume that associative agnosia reflects a loss of knowledge about visual properties, then you can assume that category-specific agnosia results from selective loss of this knowledge. Semantic knowledge is structured; we know that rabbits, dogs and cats belong to the category of animals because they share certain characteristics. These properties can be physical or functional.

In patients with brain injury that causes agnosia, some connections to semantic knowledge are often spared. Thus even patients with a very serious injury can still recognize some objects. It is therefore possible that injury damages particular areas that contain specific knowledge; the category-specific problems found in patients support this idea.

Recognizing the difference between living and non-living objects

People far more often have difficulty recognizing living things than non-living things. This could be because the areas that process information about living things are more vulnerable to damage, or because of differences in how we perceive living and non-living objects.

Another idea is that non-living objects generate representations that living objects do not. A pair of scissors not only has an appearance, but also a function accompanied by action; this is usually not the case for living objects. With a cat you still know what it feels like to pet it, but for a crocodile or a panther this is often not possible. According to this idea, non-living objects are easier to recognize because they involve multiple forms of representation. Evidence for this comes from neuroimaging studies: the left ventral premotor cortex, associated with action planning, was active when viewing manufactured objects. This area would respond specifically to action knowledge, knowledge about interacting with objects.

Semantic memory contains our conceptual knowledge of the world, the facts and propositions that arise from experience. It could be that semantic memory is organized by category membership, with different representational systems for living and non-living things. It is also possible that semantic memory is organized by object properties; non-living things, for example, have kinesthetic and motor representations.

Living things are distinguished mainly by visual appearance, non-living things also by functional properties. So it could be that we have a visual semantic memory and a functional semantic memory: the visual semantic memory would be necessary for correctly naming living objects, the functional semantic memory for non-living objects. This suggests that people have specific systems for different categories of objects.

How does the processing of faces work?


Recognizing faces is very important, because it allows us to identify people. This importance is evident from people's enormous capacity to recognize faces; attaching a name to a face, by contrast, is much harder. Prosopagnosia means that people have problems recognizing faces, a deficit limited to the visual modality. Prosopagnosia may be congenital, possibly due to a gene mutation with autosomal dominant inheritance. The gene could be expressed abnormally during a critical developmental period, disrupting the development of the white matter pathways in the ventral visual pathway.

One hypothesis about face recognition assumes a specialized face-processing module, formed during our evolutionary past. A face first of all provides information about age, health and gender; it is also informative about the affective state of the other person and about what is important in the environment. However, it may seem more parsimonious to assume a general system for recognizing visual input. The evolutionary hypothesis must be tested empirically, and the following questions can be asked:

  • Is facial perception a physically distinct process? Are there specific areas of the brain that respond to faces?

  • Are the systems functionally independent of each other?

  • Do two systems process information in different ways?

Neural mechanisms for face recognition

Most patients with prosopagnosia also have problems in other areas of object recognition. By itself this does not settle the debate about whether specialized systems for recognizing faces exist, because brain damage in humans is never specific and almost never respects the boundaries of particular brain areas.

Farah reviewed the literature on patients with prosopagnosia. Of the 71 patients, the majority had bilateral lesions; four patients had a unilateral lesion in the left hemisphere and 21 in the right hemisphere. Most lesions were in the occipital and temporal cortices.

Neurophysiologists have studied cells specialized for recognizing faces in primates. The animals were shown five face stimuli, four of primates and one the face of a researcher, as well as five stimuli containing the most important features of faces. A number of specialized cells responded only to the frontal view of another monkey; other specialized cells responded to all faces, regardless of whether they were monkey or human. For non-faces, these cells remained silent.

This research points to two specialized areas for recognizing faces: the superior temporal sulcus and the inferior temporal gyrus. It is not certain whether these areas are involved only in recognizing faces, because it is impossible to test all types of stimuli; nevertheless, the observed specificity is considerable.

Explanations: Several explanations have been suggested. One is that faces induce emotion in us and that the cells therefore react strongly, but stimuli that caused fear in monkeys did not activate the same cells that faces did.

McCarthy studied people using fMRI. The results showed a greater BOLD response on the ventral surface of the temporal lobe, in the fusiform gyrus, for faces than for random patterns or non-living objects. This response is found primarily in the right hemisphere. The area is known as the fusiform face area (FFA). However, more regions are important for face recognition; these would process the information further. The FFA could be important for processing invariant visual properties, while the superior temporal sulcus could be important for processing dynamic properties.

When we observe faces, an EEG shows a negative response 170 ms after the stimulus, which is why it is called the N170 response. The response also occurs for cars and birds, for example, but it is much larger for human faces.

An alternative explanation for the activation of the FFA by faces is that the region is active whenever people have to make fine perceptual distinctions between familiar stimuli. What imaging studies contrasting face and object recognition fail to take into account is the level of expertise, for example when comparing faces and flowers: everyone is an expert at recognizing faces, but not everyone at identifying flowers. Research shows that the fusiform cortex, which includes more than just the FFA, is more active for objects of expertise, and the fusiform response even increases as expertise develops. However, the activation involves a larger area of the ventral occipitotemporal cortex, extending beyond the FFA alone.

Distinction between face recognition and object recognition

Whether different systems exist for face recognition and object recognition is difficult to say, because the two are tested in different ways. There have, however, been several patients with impaired face perception but intact object perception, and vice versa. There are many more distinct stimuli for object recognition than for face recognition: object-recognition tests can be seen as between-category discrimination, whereas tests of face recognition involve within-category discrimination.

Problems with the recognition of faces might therefore simply reflect problems in recognizing differences and similarities within a single category of objects. However, the opposite has been demonstrated. A sheep farmer with prosopagnosia was shown faces of famous and unknown people, but also of different sheep. The patient could not recognize the faces of famous people, but he could distinguish the sheep of his own herd.

When faces of unknown people and sheep were shown and the farmer was later asked to identify the stimuli presented earlier in the test, he succeeded better with the sheep than with the people.

Are there different kinds of information processing?

In dyslexia there are reading problems, while spoken language is well understood and speech is normal. Dyslexia is a within-category deficit: the person cannot distinguish between items that are very similar. fMRI studies show that for letter strings, most of the activity was dorsal in the left hemisphere. The amount of activation increased when the letters formed familiar words.

Prosopagnosia and dyslexia are almost never found in isolation; often there is also a problem with other forms of object recognition. There does seem to be a dissociation between prosopagnosia and acquired dyslexia. Agnosia for objects, however, never seems to occur alone, but always together with a deficit in word or face recognition or both. So there seem to be two systems for object recognition. But is face processing different from the processing of other objects?

Analytical and holistic processing:

Facial perception is more holistic. We recognize someone by the combination of all face parts. With prosopagnosia it could therefore be that one cannot form the holistic representation.

Research on recognizing faces by parts or as wholes used the following set-up. The stimuli consisted of a whole face and a whole house. Subjects were then shown part of a face or part of a house and asked whether it was part of the face or house shown earlier. This posed no problems for the house, but it did for the face: face parts were only recognized well when shown together with the other face parts.

So there could be two separate systems: one for the analysis of parts and one for the holistic approach. Faces engage the holistic system, whereas letters must be broken down into parts; objects fall somewhere in between. The two processing systems come together to form a single percept. There is thus a distinction between analytical processing and holistic processing. The left hemisphere is probably better at analytical processing and the right hemisphere at holistic processing, which is why lesions in the right hemisphere lead to prosopagnosia and lesions in the left hemisphere to dyslexia.

Category specific systems

If a specific area is active in face recognition, does the same hold for recognizing other stimuli? It appears that the parahippocampal place area (PPA) is often active when viewing photos of scenes, such as landscapes, and especially active when making judgments about spatial properties or relationships.

The visual cortex also contains areas specific to recognizing bodies and body parts, such as the extrastriate body area (EBA) at the junction of the occipital and temporal cortices, and the fusiform body area (FBA). In a TMS study, stimulation over the right occipital face area made it difficult for participants to distinguish faces; applied over the right extrastriate body area, it became difficult to distinguish bodies; and applied over the right lateral occipital (rLO) region, people had difficulty distinguishing objects. The rLO could therefore support category-independent object recognition processes.

But are we born with these category-specific systems, or do they develop through experience? Non-living objects lead to more activation in the medial regions of the ventral stream, while living objects lead to more activation in the lateral regions. This difference in processing was also present in people who were blind from birth. Visual experience is therefore not necessary for category specificity to develop.

How can we read thoughts?

Decoding is the idea that we can tell from a person's brain activity what the person is seeing at that moment, essentially a form of mind reading. It is important to keep two issues in mind:

  • Our ability to decode is limited by the resolution of the measurement system. For example, EEG has good temporal resolution and fMRI has good spatial resolution.

  • Our ability to decode is limited by our models of how the brain stores information.

Within and between categories

Early efforts to read thoughts were inspired by the discovery of category-specific visual areas. As described in the previous section, the BOLD signals in the FFA and PPA vary as a function of whether the person is looking at faces or places. This provides a simple coding model. fMRI research showed that in 85% of cases it was possible to predict whether a person was imagining a face or a place.
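As a rough illustration of such a simple coding model, the sketch below simulates noisy FFA and PPA responses and decodes each trial by comparing the two regions. All numbers (signal levels, noise, the winner-takes-all rule) are invented for illustration, not taken from the actual studies.

```python
import random

random.seed(0)

def simulate_trial(category):
    # Hypothetical BOLD responses: the FFA responds more to faces,
    # the PPA more to places; both signals are noisy.
    noise = lambda: random.gauss(0, 0.5)
    if category == "face":
        return {"FFA": 1.0 + noise(), "PPA": 0.2 + noise()}
    else:
        return {"FFA": 0.2 + noise(), "PPA": 1.0 + noise()}

def decode(trial):
    # Simple coding model: whichever region responds more strongly wins.
    return "face" if trial["FFA"] > trial["PPA"] else "place"

trials = [("face", simulate_trial("face")) for _ in range(100)] + \
         [("place", simulate_trial("place")) for _ in range(100)]
accuracy = sum(decode(t) == cat for cat, t in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

With these assumed signal-to-noise values the decoder lands in the same rough range as the reported 85%, which is the point of the exercise: a two-region contrast already carries enough information for between-category decoding.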

A greater challenge is to distinguish within a category rather than between categories. This requires a more extensive coding model, one that captures more than just the way information is represented in large areas of cortex such as the FFA. Some of this research has been done by Jack Gallant and colleagues, who developed an extensive coding model. Because it remained a model of relatively simple visual properties, they also tried to integrate semantic properties into the model, and later actions as well.

Locked-in syndrome

Even if reading thoughts were entirely possible, it would remain an ethical issue. Reading thoughts could also be a way to interpret dreams, or a way to communicate for people with severe neurological conditions, such as locked-in syndrome. In locked-in syndrome, people are aware of the environment and understand what is being said, but they cannot make voluntary movements. Thought reading could help such patients communicate, by devising creative methods and gaining insight into a patient's level of understanding; communication with the outside world might become possible in this way.

What is the function of attention and how does it work? - Chapter 7

Attention influences how we process sensory input, how we store information in memory, how we process it semantically and how we respond to it. The attention control mechanisms determine where and on what our attention is focused. It depends on many brain networks.

What are examples of visual attention?

An example of a disorder of visual attention and awareness is Bálint's syndrome. Patients can only see one object at a time; thus, they cannot interpret an entire scene at once and cannot say where objects are in relation to each other. It is caused by bilateral damage to the posterior parietal and occipital cortex.

There is often information overload in our perceptual system, because there is more information than we can handle. Selective attention is the ability to focus on some things while ignoring others. It can be driven by our current behaviors and goals: this is goal-directed, or top-down, control. However, we also respond to sudden stimuli from the environment: this is stimulus-driven control, also called bottom-up or reflexive control. Arousal is a global physiological and psychological state, whereas selective attention describes what we focus our attention on and what we ignore at a specific moment.

Which brain networks are involved in attention?

Our attention system uses both subcortical and cortical networks that interact to let us selectively process information. Subcortical structures important for attentional control and selection are the superior colliculus in the midbrain and the pulvinar nuclei. Damage to these areas leads to disruptions in the overt and covert orienting of attention. Within the cortex, important areas involved in attention are parts of the frontal cortex, the posterior parietal cortex, the posterior superior temporal cortex and medial brain structures, including the anterior cingulate cortex, posterior cingulate cortex and insula.

What is the neurology and neuropsychology of attention?
Unilateral spatial neglect, or simply neglect, occurs when the brain's attention network is damaged in one hemisphere. This damage usually follows a stroke and unfortunately occurs relatively often. Either hemisphere can be damaged, but the consequences are most serious with right-hemisphere damage. Depending on the severity of the injury, its location and the time elapsed since it occurred, patients have reduced arousal and processing speed, as well as an attentional bias toward the side of the lesion (ipsilateral). Often patients are not aware of their limitation. In neglect, there is thus a disruption of attention and action in the direction opposite (contralateral) to the unilateral brain damage.

Various neuropsychological tests are used to diagnose neglect. An example is a line bisection task, in which patients must mark the middle of each line. With right-hemisphere lesions the mark is placed to the right of the true middle, and lines on the left side of the page may be missed entirely. Patients are also asked to copy objects, often omitting the left half of the drawing. Neglect can also affect mental imagery and memory.
The difference from blindness in the contralateral visual field is that neglect patients can normally detect stimuli that are salient and presented alone in the contralateral field, while blind patients cannot. Extinction means that patients fail to respond to a stimulus in the field contralateral to the lesion when it is presented simultaneously with a stimulus in the ipsilateral field. This bias against the contralateral field can be overcome if attention is directed to the neglected locations, which is why it is a bias, not a loss of the ability to direct attention contralaterally.

Neglect compared to Bálint’s syndrome

With Bálint’s syndrome there are three main features:

  1. Simultanagnosia: difficulty perceiving the visual field as a whole scene;

  2. Ocular apraxia: disturbance in making eye movements to scan the visual field, as a result of which the eye movements cannot be controlled voluntarily;

  3. Optic ataxia: difficulty making visually guided hand movements.

Both with neglect and with Bálint’s syndrome there are serious disturbances in perception, but the patterns of this are different because other areas of the brain are affected. Neglect results from unilateral damage to parietal, posterior temporal or frontal cortex, but can also be caused by damage to the basal ganglia, thalamus and the midbrain. Bálint's syndrome has bilateral occipitoparietal lesions.

Neglect can be based on spatial coordinates relative to the patient (egocentric reference frame) or relative to an object in the room (allocentric reference frame). Attention can therefore be directed within space and within objects, and these two forms of neglect are probably driven by different underlying processes.

There is competition between sensory inputs, in this case stimuli shown in the contralateral and ipsilateral fields. In neglect, the stimulus in the field ipsilateral to the lesion wins this competition. After brain damage there is a limited attentional capacity: the patient is aware of only one stimulus when two competing stimuli are present. None of this results from damage to the visual system itself, because the patients are not blind.

Observations of brain damage and resulting attention problems raise the following questions:

  • How does attention influence perception?

  • Where in the perceptual system does attention influence perception?

  • How is attention focused in space versus stimulus characteristics and objects?

  • Which neural mechanisms control attention?

What are theoretical models of attention?

Attention can be divided into two broad categories: voluntary and reflexive. Voluntary attention is the goal-directed ability to focus on something. Reflexive attention is bottom-up or stimulus-driven: a sensory event catches our attention. Attention can also be overt or covert. Overt attention is when you turn your head toward a stimulus; covert attention is paying attention to something without looking at it.

Hermann von Helmholtz

Helmholtz studied visual perception with himself as the test subject. On a wall of his office he attached a screen with letters projected on it. The room was dark and he let a beam of light fall on the screen for a short time, to investigate aspects of visual processing when stimuli are only briefly visible. The screen was too large to see in its entirety without moving his eyes, so he used covert attention: he kept his eyes fixed on the center of the screen but directed his attention elsewhere. Helmholtz could see the letters around the area where he focused his attention better than the letters that fell just outside this focus.

The cocktail party effect

In a busy environment we are able to focus our attention on one person and have a conversation with that person. We can then understand this person, but not other conversations around us, for example. This phenomenon is known as the cocktail party effect and is a form of selective auditory attention. However, if our name is mentioned in another conversation, our attention automatically shifts to the other conversation. Apparently, the information from other conversations is processed at an unconscious level. If the information is important to us then the information is further processed and admitted to consciousness.

Cherry investigated this with the dichotic listening task. A participant wore headphones, with a different text played to each ear, and was asked to attend to the text in one ear and ignore the text in the other. Afterwards the participant could report nothing about the text from the unattended ear, except whether the speaker was a man or a woman. This led to the assumption that input from the unattended ear is lost, possibly due to information-processing bottlenecks: limitations in stages of perceptual processing that have limited capacity, so that only high-priority information is processed. However, it remained unclear when the selection underlying the cocktail party effect occurs. Does it happen early, before perceptual analysis, or late in the process, after the analysis?

Early versus late selection theories

According to early selection theories, a stimulus can be selected before perceptual analysis is complete, which determines whether the stimulus is processed further. According to late selection theories, all input is processed by the perceptual system, so attention cannot play a role in perceptual analysis; selection instead determines whether stimuli gain access to consciousness or memory, or trigger a response. This may explain why one's own name is heard. It is also possible that input from the ignored channel is not completely blocked from higher analysis, but merely has a greatly reduced signal strength.

Quantify attention in perception

One way to investigate the effect of attention on information processing is by looking at how participants respond to target stimuli under different attention conditions. A well-known method is the cuing task. Here the attention is manipulated by giving certain cues.

Spatial attention is the focusing of attention on a particular location while other locations are ignored. Attention can be influenced by cues that indicate where attention is best directed, for example by showing an arrow before a certain image appears; the arrow announces the stimulus, as it were. When the cue predicts the target location on most trials, this is called endogenous cuing.

If the cue correctly predicts where the target will appear, the trial is valid; if not, the trial is invalid. If the cue provides no information about where to focus, it is a neutral trial. The difference in reaction times as a function of where attention is focused is described in a cost-benefit analysis, in which costs lengthen the reaction time and benefits shorten it. The costs and benefits depend on the efficiency of covert attention, the "spotlight" of attention.
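The cost-benefit logic can be made concrete with made-up reaction times; the 250/280/320 ms values below are hypothetical, not data from the chapter:

```python
# Illustrative (invented) mean reaction times in ms for a spatial cuing task.
rt = {"valid": 250, "neutral": 280, "invalid": 320}

# Benefit: how much faster responses are when attention was cued to the
# correct location, relative to the uninformative (neutral) baseline.
benefit = rt["neutral"] - rt["valid"]

# Cost: how much slower responses are when attention was cued to the
# wrong location, again relative to the neutral baseline.
cost = rt["invalid"] - rt["neutral"]

print(f"benefit: {benefit} ms, cost: {cost} ms")
```

The neutral condition serves as the baseline in both comparisons, which is exactly why cuing tasks include it.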

The spotlight of attention could influence reaction times due to the influence on sensory and perceptual processing. Changes in perceptual processing can occur if a participant focuses his attention on a particular stimulus location. However, reaction times remain indirect measurements of neural processing. That is why research is currently being done with cognitive neuroscientific methods in combination with the voluntary cuing paradigm.

What are the neural mechanisms in attention and selective perception?

Selective attention does not only take place in the visual system, but in all sensory modalities. With cognitive neuroscience studies it was discovered that attention did indeed influence the early processing of perceptual stimuli. It was then examined whether attention affected visual processing, and if so, when and where this processing took place. Visual spatial attention is the selection of a stimulus based on its spatial location. This can be voluntary and reflexive.

Voluntary spatial attention

Research is often done with cuing paradigms, in which participants must focus their attention on stimuli in one location and ignore the ones in another location. Event-related potentials (ERP) are measured. Different ERPs are measured:

  • P1: positive wave, which starts 60-70 ms after the stimulus and peaks at 100 ms over the contralateral occipital cortex

  • N1: negative wave with a peak at 180 ms

When a stimulus appears at a location where attention was focused, P1 is larger than when attention is not focused there. Spatial attention has the earliest effects on stimulus processing.

The P1 ERP is sensitive to changes in physical stimulus parameters, such as location and brightness. P1 could be a sensory wave generated by neural activity in the visual cortex. This provides support for the early selection models of attention. However, ERPs represent the summed electrical response of tens of thousands of neurons and not a few neurons.

Moran and Desimone trained monkeys to covertly attend to one stimulus while another was ignored; a correct response was rewarded. Spatial selective attention influenced the firing of neurons in V4: if a stimulus received attention, the response was stronger than when it was ignored. Further research shows that spatial attention increases the responses of simple cells that are important for orientation and contrast, but does not change the spatial or temporal organization of the receptive fields. Research in humans with PET scans and ERP recordings showed that visual spatial attention modulates blood flow in the visual cortex, mainly in the extrastriate visual areas and to a lesser extent in the striate cortex.

Biased competition model

Desimone and Duncan developed a biased competition model for selective attention. Their model answers two questions:

  • Why are the effects of attention greater when multiple competing stimuli fall within the receptive field of the neuron?

  • How does attention operate at different levels of the visual hierarchy as neuronal receptive fields change their characteristics?

In the biased competition model, bottom-up signals compete with each other to control the firing of a neuron when different stimuli fall within its receptive field. The model proposes that attention can resolve this competition in favor of the attended stimulus.
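A minimal numerical sketch of this idea: the response of a neuron to a pair of stimuli in its receptive field is modeled as a weighted average of its responses to each stimulus alone, with attention shifting the weights toward the attended stimulus. The weights and firing rates here are assumptions for illustration, not measured values.

```python
def v4_response(resp_a, resp_b, attend="none"):
    """Biased competition sketch for a neuron with two stimuli (a, b)
    in its receptive field.

    resp_a, resp_b: the neuron's firing rate (spikes/s) to each
    stimulus presented alone. With no attention, the pair evokes the
    plain average; attending to a stimulus biases the weighted average
    toward that stimulus's solo response. Weights are assumed.
    """
    weights = {"none": (0.5, 0.5), "a": (0.8, 0.2), "b": (0.2, 0.8)}
    wa, wb = weights[attend]
    return wa * resp_a + wb * resp_b

# A "preferred" stimulus driving 60 spikes/s paired with a poor one (20):
print(v4_response(60, 20, "none"))  # suppressed relative to 60 alone
print(v4_response(60, 20, "a"))     # attention pulls the response back up
print(v4_response(60, 20, "b"))     # attention to the poor stimulus pulls it down
```

This captures the signature finding: adding a second stimulus to the receptive field suppresses the response to the preferred stimulus, and attention to one stimulus drives the pair response toward the response that stimulus would evoke alone.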

Research shows that without focused spatial attention, stimuli that are close to each other interfere with each other. If attention is focused on one stimulus, this competitive element disappears. The effect is greater in V4 than in V1. Spatial attention therefore works differently in early (V1) than in late (V4) stages of the visual cortex, possibly because the receptive fields differ in size.

Subcortical attention effects

Projections from the visual cortex back to the thalamus pass through the thalamic reticular nucleus (TRN). This region is also known as the perigeniculate nucleus and is the part of the reticular nucleus that surrounds the lateral geniculate nucleus (LGN).

Attention acts by either facilitating or inhibiting signal transmission from the LGN to the visual cortex via the TRN. A descending signal from the cortex, or a signal from subcortical structures, reaches the TRN neurons. This input can activate the TRN, thereby inhibiting transmission from the LGN to the visual cortex, or it can suppress the TRN neurons, which increases transmission from the LGN to the visual cortex. Strongly focused spatial attention can thus modulate activity early in the visual system, in subcortical relay nuclei of the thalamus. This is in line with early selection models of attention.

Reflexive spatial attention

Often things in our environment attract attention without any effort on our part. This is known as reflexive attention and is triggered by stimuli that are salient in some way; the more salient the stimulus, the more easily it attracts attention.

The effects of reflexive attention can be demonstrated in so-called reflexive cuing or exogenous cuing tasks, which investigate how task-irrelevant flashes of light somewhere in the visual field influence the speed of responses to task-relevant stimuli. The flashes do not predict the location of the target, yet people respond faster when targets appear in the vicinity of the flash. This facilitation, however, lasts only briefly after the flash, about 50 to 200 ms.

An important term is inhibition of return (IOR). If a stimulus appears in the vicinity of the flash but only after 300 ms or more, the response time is delayed instead. This effect serves to prevent distraction: the chance that attention is drawn back to an already inspected location is reduced. It can be overridden when an event is very important and salient. Both reflexive (stimulus-driven) and voluntary (goal-directed) shifts of spatial attention lead to comparable physiological modulations of early visual processing, but the neural networks involved are probably different.
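The time course of reflexive cuing (early facilitation, later inhibition of return) can be summarized in a small helper. The SOA boundaries below simply restate the approximate values from the text (50-200 ms facilitation, 300 ms and beyond IOR) and should not be read as exact cutoffs:

```python
def cuing_effect(soa_ms):
    """Qualitative effect of a task-irrelevant flash at a cued location,
    as a function of cue-target SOA (stimulus onset asynchrony) in ms.

    Boundaries are approximate values taken from the text:
    ~50-200 ms  -> faster responses at the flashed location (facilitation)
    >= ~300 ms  -> slower responses there (inhibition of return)
    """
    if 50 <= soa_ms <= 200:
        return "facilitation"
    elif soa_ms >= 300:
        return "inhibition of return"
    return "no reliable effect"

for soa in (100, 250, 500):
    print(soa, "->", cuing_effect(soa))
```

The non-monotonic shape is the key point: the very same flash that speeds responses shortly after its onset slows them once attention has moved on.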

Visual search

If you have lost someone in a crowd of people and start looking, you often focus on a rare feature such as a bright red sweater. However, if it appears that several people in the crowd are wearing a bright red sweater, you will have to add other characteristics to the search task. The search then takes longer. Moreover, our eyes are always drawn to new places and we do not always look to the same place.

Treisman conducted research with distractor stimuli. In the simple condition, participants were asked to respond when a red O appeared among distracting green Os and Xs. The reaction time remained constant as the number of distractors grew: the red O "pops out" because it has a unique feature. This is called a pop-out mechanism. In the conjunction condition, the distractors consisted of green Os and red and green Xs. Now the reaction time increased as the number of distractors increased, because the red O no longer had a unique feature. This is called conjunction search, because the target is defined by a conjunction of two or more properties.
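This pattern of results (flat search functions for pop-out, linearly increasing ones for conjunction search) can be sketched with an idealized model. The baseline RT and the ~30 ms-per-item slope below are assumed, plausible-sounding values, not Treisman's actual data:

```python
def search_rt(set_size, mode):
    """Idealized reaction time (ms) for visual search.

    "feature" (pop-out) search: RT is flat regardless of set size,
    because the unique feature is detected preattentively, in parallel.
    "conjunction" search: RT grows roughly linearly with set size,
    because the attentional spotlight must visit items one by one.
    """
    base = 450  # assumed baseline RT in ms
    if mode == "feature":
        return base
    elif mode == "conjunction":
        return base + 30 * set_size  # assumed ~30 ms per additional item
    raise ValueError(f"unknown search mode: {mode}")

for n in (4, 8, 16):
    print(n, search_rt(n, "feature"), search_rt(n, "conjunction"))
```

Plotting RT against set size for the two modes gives the flat versus sloped search functions that motivated the feature integration theory described below.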

To explain why conjunction searches take longer, it was suggested that elementary properties, such as color, shape and spatial frequency, can be analyzed preattentively and mapped in parallel by specialized feature maps, but that spatial attention works differently: it must be focused on relevant stimuli to integrate their features, and this happens sequentially, item by item. This is called the feature integration theory of attention. The idea is that the spotlight of attention must move sequentially from one object to another in order to find the target.

Research shows that visual search is fastest when the focus of attention is guided by visual sensory information, rather than by slow, voluntary scanning of items; the automatic spotlight of attention is fast. Automatic processing involves low-level feature maps of the visual world that carry information about the saliency of objects. Spatial attention is drawn to the locations of salient objects, and the attentional spotlight can then be focused on a location of interest so that identification can take place to check whether it is the object of interest.

Feature-based selective attention

It appears that a cue pointing to a visual feature improves performance. Attention can therefore be directed not only to spatial locations but also to non-spatial features of target stimuli. Research shows that a different ERP component accompanies spatial attention than color attention. Both spatial and feature-based attention can thus lead to selective processing of visual stimuli, but through different mechanisms.

It is easier to detect changes in stimuli if we only have to pay attention to one feature (selective attention) than if we attend to multiple features (distributed attention). The extrastriate cortex is active during selective attention, especially during perceptual processing of color, shape or motion. Selective attention thus changes the perceptual processing of input before feature analysis is complete. If you focus attention on motion, there is activity in area MT/V5 (in the dorsal stream); if you focus attention on color, there is activity in area V4 (in the ventral stream). Activity in these areas occurred 100 ms or less after the stimulus change.

Feature-based selective attention thus takes place in relatively early phases of visual cortical processing, at short latencies after stimulus onset. Spatial attention is faster still and its effects occur earlier; the effects of feature attention tend to occur at longer latencies and in later stages of the visual hierarchy (namely in the extrastriate cortex rather than the striate cortex or the subcortical visual areas of the thalamus).

The feature selection ERPs are the ERPs for non-spatial, feature-based attention that follow the P1 in time. The N2pc, a negative wave at posterior electrodes contralateral to the stimulus, is an ERP index of the covert focusing of spatial attention. Research shows that feature-selective attention precedes visual spatial attention when the location of the target is not known in advance; feature selection may thus drive a shift of attention. When there is competition between features, spatial attention does not precede non-spatial attention.

Object-based attention

When we search for someone, we use object-based attention: we search for a collection of elementary stimulus features that together form an object. This attention system differs from spatial attention, because we can make two perceptual judgments simultaneously about the same object, but not about different objects; there are therefore costs involved in dividing attention between two objects. Objects also seem to influence the way spatial attention is deployed. Several studies show that when spatial attention is not involved, object representations can be the level of perceptual analysis on which goal-directed attentional control acts.

Which attention control networks exist?

Control of attention can be goal-directed or stimulus-driven. With goal-directed attention, signals are sent from executive attention control systems to sensory cortical areas; these increase the response if a stimulus has high priority and decrease it if a stimulus is irrelevant to the current goal. In stimulus-driven attention, the stimulus itself captures attention, probably via a circuit involving the sensory system and the attention-orienting system. Selective attention probably involves the posterior parietal cortex, the dorsolateral and superior prefrontal cortex, and the pulvinar nucleus of the thalamus.

Current attention control models suggest that there are two separate frontoparietal cortical systems that play a role in performing different attention operations during selective attention: a dorsal attention system, which is primarily concerned with spatial attention, and a ventral attention system, which primarily deals with non-spatial aspects of attention. It appears that these two systems work together to produce normal behavior. These interactions are disrupted in patients with neglect.

Dorsal attention network: frontoparietal attention system

The dorsal frontoparietal network reflects the sources of attention signals in the targeted control of attention. The most important cortical nodes in this network are the frontal eye fields (FEF) and the supplementary eye fields (SEF) in the frontal cortex, the intraparietal sulcus (IPS), the superior parietal lobule (SPL) and the precuneus (PC).

The FEF coordinate eye movements and shifts of gaze, which are important for orienting and attention. The dorsal system is involved in generating task-specific, targeted attention control signals. Several studies show that the FEF influence the visual cortex. This goal-oriented influence is task-specific: the functional connectivity between the FEF and specific visual areas increases as a function of the specific attentional state.

The posterior parietal lobe is also important in the frontoparietal attention network. The dorsal network contains the intraparietal sulcus (IPS) and the superior parietal lobule (SPL), while the ventral network contains the ventral areas at the temporoparietal junction. The parietal lobe has many connections with subcortical areas and the frontal cortex and contains multiple representations of the spatial environment. The IPS appears to be important in saccadic eye movements and visual spatial attention. The IPS contains a type of importance or priority map, which indicates how striking a stimulus is in relation to the surrounding stimuli. This map is used by the oculomotor system as a saccade target when a saccade is appropriate. The IPS is therefore involved in the location and importance of objects.

Ventral right attention network

The ventral network involved in attention is stimulus driven and strongly lateralized to the right hemisphere. It consists of the temporo-parietal junction (TPJ) and the ventral frontal cortex (VFC). The TPJ is strongly involved in target detection. The TPJ is also important for changes in stimuli, independent of the modality of the change, and for the detection of new stimuli at an expected location. The TPJ gives a warning signal that interrupts the current focus of attention. Lesions in the TPJ lead to deficits in spatial attention. The dorsal network provides the TPJ with information about the behavioral relevance of stimuli. The ventral system is involved in stimulus-driven attention, the detection of important targets and the refocusing of attention. It is not necessarily involved in spatial attention. The dorsal and ventral networks work together: attention is directed to relevant locations and potential targets, and this attentional state is interrupted if a new target appears elsewhere.

Subcortical components of the attention control network

The superior colliculi, structures of the midbrain, are involved in making eye movements. They have many connections with the retina, other sensory systems, the basal ganglia, the cerebral cortex, the thalamus and the motor system. Input from the frontal eye fields helps with the conscious generation of saccades, and input from the parietal eye fields with the generation of reflexive saccades. The superior colliculi play a role in attention: because they are sensitive to the salience of an object, the eyes can be directed toward it. Damage to the superior colliculus leads to difficulty shifting attention and to slow responses to cued targets. In addition, it appears to play a role in visual search and in the inhibition of return.

Another brain structure that has been investigated is the pulvinar nucleus of the thalamus. It contains visually responsive neurons for color, movement and orientation, and retinotopic maps of the visual world. It has connections with the frontal, parietal, occipital and temporal areas. It is involved in both voluntary and reflexive attention. The pulvinar nucleus is active in covert attention and in attentional filtering. Damage to the pulvinar nucleus produces deficits in attentional orienting and leads to difficulty in focusing attention on a cued location.

What is the importance of action and the motor system? - Chapter 8

In July 1982, patients arrived at emergency departments in San Jose, California, who could hardly move, if at all. They could not speak, had a frozen facial expression, and could hardly move their arms. Yet these patients were conscious. All of these patients looked like heroin users, but the symptoms did not resemble the effects of heroin use: heroin normally leads to weakness rather than stiff muscles. A new substance was probably involved.

The patients had no structural abnormalities in the brain. The symptoms were very similar to those of advanced Parkinson's, only Parkinson's disease starts gradually and the symptoms in these patients had occurred rather suddenly. It was assumed that the new drug used had caused a sudden start of Parkinson's. Parkinson's is caused by cell death in the substantia nigra (part of the basal ganglia). This is the main source of the neurotransmitter dopamine. Hypometabolism of dopamine was found in patients. Treatment with L-dopa, a substance necessary for the production of dopamine, had an immediate positive effect in these patients. The substance responsible for the symptoms resembled the chemical structure of meperidine, a synthetic opioid, and this was called MPTP. This substance is toxic to dopamine cells located in the substantia nigra. This knowledge allowed Parkinson's to be induced in laboratory animals so that new treatment options could be developed.

Which parts does our motor system consist of?

The motor system is hierarchically structured with multiple levels: spinal cord, subcortex and cerebral cortex. The lowest level consists of the motor neurons and interneurons in the spinal cord. The spinal cord is important for producing reflexes. At the top of the hierarchy are the premotor and association areas of the cortex. These areas are important for planning actions based on goals, perceptual input and previous experiences. The primary motor cortex and brain stem lie in the middle of the hierarchy. In addition, the cerebellum and basal ganglia are involved in translating an action goal into a movement. Lesions at different levels lead to different effects on movement.

Muscles, motor neurons and the spinal cord

Every part of the body that can move is called an effector. The elastic fibers of muscles can change in terms of tension and length. Movement is the result of a change in the muscles that control a (group) effector.

The primary interaction between muscles and the nervous system is via alpha motor neurons. Alpha motor neurons innervate muscle fibers and produce fiber contractions. Gamma motor neurons are part of the proprioceptive system; they are important for sensing and regulating the length of muscle fibers. Motor neurons start in the spinal cord and run through the ventral root to the muscle fibers. The alpha motor neurons use the neurotransmitter acetylcholine, which causes muscle fibers to contract. By changing the length and tension of the muscles, alpha motor neurons produce movement. An alpha motor neuron thus translates a nerve signal into a mechanical action.

The alpha motor neurons receive their input from muscle spindles. These are sensory receptors in the muscles that indicate how much stretch there is in the muscle. Axons enter the dorsal root of the spinal cord via an afferent nerve and synapse onto alpha motor neurons. These connections are important in reflexes, to maintain and protect postural stability. Motor neurons also receive input from spinal interneurons, which receive input from afferent sensory nerves and descending motor fibers. Signals to muscles are an integration of sensory feedback and motor commands; this integration leads to voluntary movement. The signals can be excitatory or inhibitory.

Subcortical motor structures

Most neural structures of the motor system are in the brainstem. The brainstem includes the cranial nerves, which are responsible for important movements, such as breathing and eye movement. Through extrapyramidal tracts, the brain stem is directly connected to the spinal cord. This provides primary control over spinal cord activity, as they receive input from (sub)cortical structures. The two subcortical structures that play an important role in motor control are the basal ganglia and the cerebellum.

The cerebellum consists of three areas:

  1. vestibulocerebellum: cooperates with the vestibular nuclei in the brainstem and ensures balance and coordination of eye movements with body movements;

  2. spinocerebellum: receives sensory information from the visual and auditory systems and proprioceptive information from the spinocerebellar tract. The output of the spinocerebellum goes to the spinal cord and nuclei of the extrapyramidal system. It is important for a stable running pattern and balance;

  3. neocerebellum: receives fibers from the parietal and frontal lobes and projects to the primary motor, lateral premotor and prefrontal cortices via the thalamus. Lesions lead to ataxia, problems in sensory coordination of distal limb movements, or disruption of fine coordination. An intention tremor may be present, tested by touching the nose with the finger.

The cerebellum is probably also involved in attention, language processing, planning and more.

The cerebellum sends information via the deep cerebellar nuclei to the brainstem and cortex via the thalamus. The cerebellum receives information via the cerebellar cortex. Because the input and output both cross over to the contralateral side, the cerebellum has an ipsilateral organization; the right side controls movements on the right side of the body.

The basal ganglia consist of five cores, namely the caudate nucleus and putamen (together the striatum), globus pallidus, subthalamic nucleus and substantia nigra. The output is controlled from the internal segment of the globus pallidus and a part of the substantia nigra. The input structure consists of the striatum. The remaining structures modulate the basal ganglia activity. Axons in the globus pallidus project to the motor and frontal cortex via the thalamus. The basal ganglia are important in motor control, especially selection and initiation of actions.

Cortical areas involved in motor control

The cerebral cortex can regulate the activity of neurons in the spinal cord in direct and indirect ways.

The direct connection goes through the corticospinal tracts (pyramidal tracts) to the spinal interneurons or alpha motor neurons. Most start in the primary motor cortex, but some in the premotor cortex or supplementary motor cortex, or even the somatosensory cortex. Each hemisphere controls movements on the other side of the body. At the junction of the medulla and the spinal cord, about 80% of the corticospinal fibers cross over; the remaining fibers cross at the spinal level where they exit.

Organization of motor areas

The primary motor cortex (M1) lies posteriorly in the frontal lobe. It receives input about motor control from most cortical areas: the parietal, premotor, supplementary motor and frontal cortices, and from subcortical structures such as the basal ganglia and the cerebellum. The output of the primary motor cortex goes through the corticospinal tract. The neurons of the rostral part synapse onto spinal interneurons, while the neurons in the caudal part synapse onto alpha motor neurons. This more direct control of the effectors is important for voluntary movement.

The body is represented in topographic fashion in motor areas. The size of the representation of a particular body part is dependent on the importance and the degree of control required of the effector (for example, the fingers have a relatively large representation for accurate movement).

Hemiplegia: Lesions of the primary motor cortex (M1) often lead to hemiplegia: the loss of voluntary movements on the contralateral side of the body. It is often caused by a stroke in the territory of the middle cerebral artery; hemiplegia is perhaps the most characteristic symptom of such a stroke, leaving the patient unable to move the affected limbs, often the fingers or hands. Immediately after the stroke, reflexes are absent. After this, however, they become hyperactive and even spastic, because the cortex no longer exerts inhibitory control over the reflexes and primitive reflexes take over. Recovery from hemiplegia is minimal. A problem is that people automatically stop using the affected limb, which is called learned disuse. With constraint-induced movement therapy, the patient must use the affected limb because the other is restrained. This leads to improvements in the strength and function of paretic upper limbs.

Secondary motor areas: The secondary motor areas (the premotor cortex - lateral - and the supplementary motor area - medial) are involved in the planning and control of movement. The premotor cortex has connections to the parietal lobe for external sensory-driven actions. The supplementary motor cortex has connections to the medial frontal cortex for internally controlled movements in accordance with personal preferences and goals.

Lesions of the secondary motor areas do not lead to hemiplegia. Patients with disorders in these motor areas often have difficulty performing coordinated movements. This disorder, called apraxia, is a condition that disrupts the planning of movement. There is no motor or sensory disturbance, strength and tone are normal, and there are no movement disorders, such as a tremor. The symptoms of apraxia depend on the site of the lesion. With ideomotor apraxia, the patient has difficulty performing a certain movement, although he still roughly understands what the action should be. With ideational apraxia, the patient's knowledge of the movement itself is disturbed. This is a more serious form of apraxia.

The parietal cortex is an association area. It is important for the representation of the body and how it is situated in space. This is important for effective movements. Lesions in the parietal lobe can lead to ideational apraxia. They can disrupt the ability to produce movement and lead to difficulty in recognizing the actions of others. Furthermore, Broca's area and the insular cortex are involved in the production of speech movements, the frontal eye fields are important for eye movements, and the anterior cingulate cortex is important for the selection and control of actions, because it evaluates the effort and costs of a movement.

What are computational issues in motor control?

Peripheral control of movement and the role of feedback

The hierarchical organization implies that higher levels can influence the activity of lower levels. Lower levels can produce movement even without processing at a higher level.

Sherrington cut the spinal cord in animals to investigate whether they could still move without the influence of higher structures. The results showed that the reflexes were stronger because the inhibition from the brain had disappeared. The animals could walk with their hind legs as long as they received the required sensory information. Brown proved that animals can also move if they can no longer receive information from outside. A central pattern generator is a set of neurons that can produce a series of actions without external input. This is a powerful mechanism for the hierarchical control of movement: the highest levels in the hierarchy give commands for an action, while the lowest levels execute these commands. Central pattern generators are important for survival actions.

Central representation of a movement plan

There are two ways in which a movement can be planned. One is to plan the desired movement by comparing the current position with the position you want to reach. The other is to focus only on the location of the target and determine the motor commands needed to get there.

Research by Bizzi supported the idea that the representation can be based on location. Deafferented monkeys, monkeys that no longer receive somatosensory signals from their limbs, were trained in a simple pointing experiment. A light appeared at the target location, after which the monkey had to move his arm there. In one manipulation, a counterforce was applied against the movement, so that the arm remained in the starting position. The monkeys could not feel this themselves, and because the room was dark they could not see it either. After this force was removed, the monkeys were still able to reach the target location. This supports the location hypothesis.

Endpoint control is based on location. In addition to location planning, we use distance and trajectory planning for extra flexibility.

Hierarchical representation

Most movements are not simple, but contain a whole sequence of simple movements. Is there a connection here between independent movements or is there a hierarchical representation for the entire sequence? There appears to be a hierarchical structure that integrates the elements. MacKay proposes a model about the hierarchical representation of movement. The representation of the goal is at the highest level, this is a conceptual level. This is followed by the level of the reaction system, in which various ways of responding are represented (for example verbally, or by a certain movement). The lowest level is formed by the implementation of the movement. Each level corresponds to the representation of an action.

The cortex can provide extra regulation for the actions of the lower levels in the motor hierarchy. The cortex also provides actions that are minimally dependent on external signals. There are two levels in the motor systems:

  1. Pattern generators produce fixed action patterns without cortical input. The movements are flexible and not mechanical. These patterns are merged into complex, targeted movements through the input of the motor cortex and brain stem;

  2. At higher levels are the central representations for spatial goals and planning; these are the more abstract components of movement.

How do the neural motor paths work?

Activity in the cells of the primary motor cortex correlates more with the direction of movement than with the target location. The cells have a preferred direction. Georgopoulos introduced the population vector: a representation of the summed activity of the cells in a part of the motor cortex. Each cell contributes its own vector, which points in the cell's preferred direction and whose length depends on how strongly the cell fires.

The population vector is dynamic and can be calculated continuously over time. The population vector shifts in the direction of the movement before the movement is produced, so these cells are also involved in the planning of a movement and not just in its execution. However, not all cells have a fixed preferred direction: a cell may respond to a certain direction before the movement, but this preference can shift during the actual movement. We should therefore see neurons as dynamic, with a movement represented as a shift in the state of a group of neurons.
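The population vector idea can be sketched numerically. In this minimal illustration (the cosine tuning, baseline and gain values are invented for the example, not taken from the text), each cell's preferred-direction unit vector is weighted by its firing rate and the weighted vectors are summed; the direction of the sum recovers the movement direction.

```python
import math

# A cell's cosine tuning: the rate is highest when the movement matches its
# preferred direction (baseline and gain are purely illustrative values).
def rate(preferred_deg, movement_deg, baseline=10.0, gain=8.0):
    return baseline + gain * math.cos(math.radians(movement_deg - preferred_deg))

# Population vector: sum each cell's preferred-direction unit vector,
# weighted by its firing rate, and read off the direction of the sum.
def population_vector(cells):
    px = sum(r * math.cos(theta) for theta, r in cells)
    py = sum(r * math.sin(theta) for theta, r in cells)
    return math.degrees(math.atan2(py, px)) % 360

# Eight cells with preferred directions every 45 degrees, movement at 90 degrees
movement = 90.0
cells = [(math.radians(p), rate(p, movement)) for p in range(0, 360, 45)]
print(round(population_vector(cells)))  # the summed vector points at 90 degrees
```

No single cell fires exclusively for 90 degrees here; the direction emerges only from the weighted sum across the population, which is the point of Georgopoulos's measure.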

In order to respond well to our environment, we must be able to combine perceptual information with information about the current state of the motor system. This is why many cells in the motor areas contain both types of information. So there is an interactive network of motor areas that represent multiple properties.

How is movement planned and executed?

Neural signals in motor areas are more involved in the goals of an action than in the specific muscle patterns needed to reach that goal. The motor cortex, however, has more than one option to achieve a goal. We must choose a goal, an option to achieve it, and how to perform each intermediate step.

How do we set goals and plan actions?

Cisek drew up the affordance competition hypothesis, according to which the functional architecture of the brain evolved to mediate real-time interactions with the world. Affordances are the options for action that the environment offers. These used to be important for survival and reproduction. Often, however, there is no time to first evaluate the goals, think about the options and then plan the movements (serial processing). A better idea is to develop multiple plans in parallel. According to Cisek, the processes of action selection and specification take place simultaneously. The specification is driven by sensory information from the environment (the affordances), while the selection is shaped by our internal states, goals, rewards and costs (the competition). Many parts of the motor pathways are involved in selection, with the fronto-parietal circuits being especially important.

Representation in the motor cortex

Brodmann area 6 consists of the premotor cortex and the supplementary motor cortex. The premotor cortex is connected to the parietal cortex for sensory-oriented action and the supplementary motor cortex (SMC) to the medial frontal cortex for action selection and planning based on goals and experience. The SMC is also important in complex actions. In the event of damage to the SMC, it is no longer possible to perform tasks that require the integrated use of two hands. Damage to the SMC can also lead to the alien hand syndrome, where a limb completes a meaningful action, but where the person denies being responsible for the action.

The parietal lobe is also important in planning-related activity. Regions in the intraparietal sulcus are active during eye and arm movements. There is a difference between the posterior parietal cortex and the premotor areas, namely the frame of reference for movement: representations in the parietal cortex use an eye-centered frame of reference, while representations in the premotor cortex use a hand-centered frame of reference. Further research has shown that the posterior parietal cortex is important for motor intention and movement goals, and the premotor cortex for the execution of movements. We only become aware of a movement if we intend to execute it. Furthermore, the parietal cortex is responsible for target-based processing and the frontal cortex for movement-based processing.

What is the brain-machine interface?

A brain-machine interface (BMI) is a device that interprets neural signals in order to perform desired operations with a mechanical device outside the body. BMIs are a promising tool for the rehabilitation of people with severe movement disorders. Earlier BMI systems needed two phases. In the first phase, neural activity was recorded while the animal produced movement, and certain properties (for example, the preferred direction of each cell) were extracted. In the second phase, the output of these neurons was used to compute a population vector, which could then control an interface device. Current studies investigate the stability of BMI systems over longer periods of time.
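The two phases can be sketched as follows. This is a hypothetical toy example, not a real BMI pipeline: the two cells, their rates, and the crude "pick the direction with the highest rate" calibration are all invented to show the structure of calibration followed by decoding.

```python
import math

# Phase 1 (calibration): record rates while movements in known directions are
# made, and take each cell's preferred direction to be the movement direction
# that evoked its highest rate. Cells and rates here are hypothetical.
recordings = {
    "cell_a": {0: 20, 90: 8, 180: 2, 270: 8},
    "cell_b": {0: 8, 90: 20, 180: 8, 270: 2},
}
preferred = {cell: max(rates, key=rates.get) for cell, rates in recordings.items()}

# Phase 2 (decoding): a population vector built from fresh rates gives the
# direction used to drive the external device.
def decode(rates):
    px = sum(r * math.cos(math.radians(preferred[c])) for c, r in rates.items())
    py = sum(r * math.sin(math.radians(preferred[c])) for c, r in rates.items())
    return math.degrees(math.atan2(py, px)) % 360

print(round(decode({"cell_a": 5.0, "cell_b": 5.0}), 1))  # 45.0, between the two cells
```

The stability question the text mentions corresponds to whether the `preferred` map from phase 1 remains valid days or weeks later, or has to be recalibrated.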

What is the function of the basal ganglia?

The basal ganglia are important in movement initiation. Almost all afferent fibers end in the striatum. The output arises in the internal segment of the globus pallidus and the pars reticulata of the substantia nigra and projects, for example, to the superior colliculus for eye movements or to the thalamus.

Delong proposes two routes:

  • An indirect route: the striatum inhibits the external segment of the globus pallidus, which normally inhibits the subthalamic nucleus; the disinhibited subthalamic nucleus then excites the output nuclei. This route is slow and ultimately leads to inhibition of the cortex;

  • A direct route: from the striatum directly to the output nuclei. This route is fast and leads to excitation of the cortex.

Output from the internal segment of the globus pallidus (GPi) and the substantia nigra pars reticulata (SNr) to the thalamus is also inhibitory, while the thalamus in turn excites the cortex. There is also a dopamine pathway from the substantia nigra pars compacta to the striatum. This dopamine input excites the direct path and inhibits the indirect path. The basal ganglia are important in initiating actions, because they resolve the competition between actions. The direct path contains D1 receptors, which are excitatory and thus lead to excitatory postsynaptic potentials (EPSPs). The indirect path contains D2 receptors, which are inhibitory and therefore produce inhibitory postsynaptic potentials (IPSPs). As a result, dopamine release promotes the actions represented in the direct path and discourages the actions in the indirect path.
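The net effect of each route on the cortex follows from multiplying the signs of its successive connections. A minimal sketch of that bookkeeping (the sign lists simply encode the inhibitory and excitatory links named in the summary):

```python
# Net sign of each route on the cortex: multiply the signs of its links
# (-1 = inhibitory connection, +1 = excitatory connection).
# direct:   striatum -| GPi -| thalamus -> cortex
# indirect: striatum -| GPe -| STN -> GPi -| thalamus -> cortex
DIRECT = [-1, -1, +1]
INDIRECT = [-1, -1, +1, -1, +1]

def net_effect(path):
    sign = 1
    for link in path:
        sign *= link
    return "excites cortex" if sign > 0 else "inhibits cortex"

print("direct:  ", net_effect(DIRECT))    # excites cortex (fast route)
print("indirect:", net_effect(INDIRECT))  # inhibits cortex (slow route)
```

Two inhibitory links in a row make a disinhibition, which is why the direct route, built entirely of inhibitory synapses plus the thalamocortical projection, ends up exciting the cortex.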

Disorders of the basal ganglia

Lesions in the basal ganglia lead to problems with coordinated movements; the form of the problem depends on the location of the lesion. In Huntington's disease, clumsiness, balance problems and general restlessness develop. Chorea (involuntary movements) occurs: postures become contorted, and arms, legs, torso and head move constantly. The excessive movements are the result of reduced inhibitory output of the basal ganglia, so that the thalamus is less inhibited and the motor cortex is excited more. Huntington's is a progressive disease in which the patient becomes irritable and absent-minded and loses interest in normal activities. Patients often die within about 12 years of the onset of the disease. The pathology lies in both cortical and subcortical structures.

Parkinson's disease is caused by the loss of dopamine neurons in the substantia nigra pars compacta (SNc). It leads to disturbances in posture and movement, fewer voluntary movements (hypokinesia) and slower initiation and execution of movements (bradykinesia). In Parkinson's, the inhibitory activity of the direct path is reduced: the SNc no longer excites the striatum, which increases the inhibitory output from the GPi to the thalamus. The indirect path, no longer suppressed by SNc input, becomes more active. Ultimately, this leads to more inhibition of the thalamus, so that the cortex is less excited. The cortex can still plan movements, but because the basal ganglia are damaged, these movements are no longer initiated quickly.

Treatment for Parkinson's disease usually consists of administering L-dopa (a synthetic precursor of dopamine), but this is no longer successful after prolonged administration. It can then lead to movement disorders or hyperkinesia, with excessive, involuntary movements. Patients with Parkinson's disease have also recently been treated by means of deep brain stimulation (DBS). This involves implanting an electrode in a target region. In Parkinson's this is often the subthalamic nucleus (STN). A current is sent through the electrode at high frequency, changing the activity in the STN and the circuit.

What are mirror neurons?

Knowing where perception ends and action begins seems an impossible task. This is emphasized by the fact that neurons in our motor system become active when we see someone else perform a certain action, as if we had performed this action ourselves. The neurons responsible for this are called mirror neurons. Activation of a mirror neuron is correlated with a goal-oriented action, regardless of whether you perform it yourself and regardless of how the information is received (auditory, visual). Together they form the mirror system, a neural system spread throughout the brain that is involved in both action production and action understanding. This is sometimes referred to as embodied cognition: our conceptual knowledge is grounded in our body.

Mirror neurons are not only present in the premotor cortex, but also in the parietal and temporal lobes. In the premotor cortex, the degree and intensity of mirror neuron activation is dependent on your own motor experiences. Mirror neurons are not only important for understanding motor action, but also for imitation and learning new skills and simulating actions of others, so that we understand their intentions. Mirror neurons are in this way a basis for empathy.

How are new skills learned and implemented?

We tend to think that motor learning takes place at the lower levels of the hierarchy and that our muscles learn how to respond. However, some aspects of motor learning are independent of the muscle system that is normally used for the action. This is why we can all write our name with our non-dominant hand or even our toes. The result is less neat, but the difference arises only because certain muscle groups have more experience in performing the action. When we learn a new action, the first effects of learning occur at an abstract level. Motor learning can take place by linking gestures to each other in a new way, or by repeating a learned pattern until it becomes exactly as you want it. The latter is called motor adaptation, which makes a movement automatic. To become an expert, we must implement the movement in the most efficient way. This requires cognitive skills such as perseverance, attention and self-control.

Sensory-motor adaptation is adapting to a new environment, such as getting used to standing on a wobbly boat. When a disturbance is introduced, there is an increase in activity in many cortical areas, including the frontal, premotor and motor cortex in the frontal lobes and the parietal, temporal and visual cortex, but also subcortically in the cerebellum and basal ganglia. After practice, activity in these areas is reduced. With damage to the cerebellum or the prefrontal or parietal cortex, disturbances are seen in learning to move in a new environment. The cerebellum appears essential for learning the new mapping, while M1 is important for consolidating the new mapping. Acquiring a new motor skill requires dopamine projections from the ventral tegmental area to M1.

Forward model

Your brain operates in a predictive way. The motor system issues commands for movements and generates expectations about the anticipated sensory effects of those movements. Errors occur if the actual feedback does not match the prediction. The brain then uses these errors to adjust the current movement and to learn. The cerebellum is very important for learning from such errors: it generates a forward model, a model that predicts the sensory consequences of our actions.

Forward models are also important for the online control of movements: selecting the right muscles and activating them in the right order. Anticipating the sensory consequences of a movement helps to compensate for delays. The cerebellum is also important in sensory-motor learning because it generates predictions that are temporally precise. We also need to know when something will happen in the future. The cerebellum ensures the correct timing of the activation of effectors, for example when a foot hits the ground. Lesions in the cerebellum are most disruptive in extensively practiced movements.
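Error-driven updating of this kind can be sketched with a toy trial-by-trial adaptation loop. The perturbation size, learning rate and trial count below are invented for illustration; the point is only that correcting the internal estimate by a fraction of each prediction error makes the error shrink across trials, as in sensory-motor adaptation experiments.

```python
# Trial-by-trial error correction: the internal estimate of an external
# perturbation is nudged by a fraction of each trial's prediction error.
def adapt(perturbation, trials=50, learning_rate=0.3):
    estimate = 0.0
    errors = []
    for _ in range(trials):
        error = perturbation - estimate  # predicted vs. actual sensory feedback
        errors.append(error)
        estimate += learning_rate * error  # update the internal model
    return estimate, errors

estimate, errors = adapt(30.0)
print(round(estimate, 3))                # the estimate converges on the perturbation
print(abs(errors[-1]) < abs(errors[0]))  # prediction errors shrink across trials
```

Removing the perturbation after learning would make the first error equal and opposite to the learned estimate, which is the aftereffect that adaptation studies use as evidence that an internal model was updated.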

To fully master a skill, practice is necessary. By doing something often, your brain also adjusts to the skill. Expertise can also be related to domain-independent factors. Here, for example, aptitude and motivation by genetic and environmental factors play an important role.

How does memory work? - Chapter 9

Patient H.M. had an untreatable form of epilepsy, which often originates in the medial temporal lobe. A new surgical treatment was therefore used: bilateral resection of the medial temporal lobe, including the amygdala, entorhinal cortex and hippocampi. H.M. seemed to recover well from the operation, but from then on he had problems forming new long-term memories. He still knew who he was and could remember his personal history, the facts learned at school, language, how to do things, and social events and people up to a few years before the operation. But he could not remember his new experiences. H.M. was a special patient because he had no other cognitive deficits: normal intelligence, normal perception and no mental illness. His short-term memory and procedural memory were intact, and he was still able to learn new procedures and motor skills. What was disrupted was the transfer of information from short-term storage to long-term memory.

What types of memory are there?

Learning is the process by which new information is acquired; memory is the outcome of that learning process. A memory is thus created by learning information, whether through a single exposure or through repeated learning. There are different types of memory:

  • Sensory memory: brief memories lasting milliseconds to seconds

  • Short-term memory or working memory: memories for seconds to minutes

  • Long-term memory: memories that will last for decades

Learning and memory consist of three major processing stages:

  1. Encoding: the processing of new information for storage. Encoding is divided into acquisition and consolidation. Acquisition is the taking in of part of the input from the sensory buffer; consolidation is the formation of a stronger representation over time, for long-term memory.

  2. Storage: the retention of information as a result of acquisition and consolidation.

  3. Retrieval: the use of stored information for the purpose of conscious representation or execution of acquired behavior.

Several brain structures are involved in memory: the basal ganglia (reinforcement learning), the cerebellum (trial-and-error learning), the amygdala (fear learning), and the hippocampus and its surrounding structures.

What is amnesia?

Amnesia is the collective name for memory disorders; it can result from physiological or psychological trauma as well as from operations or illnesses. Amnesia affects memory across all sensory modalities. The loss of memory for events that occur after the brain injury is known as anterograde amnesia; it results in an inability to learn new things. The loss of memory for events that occurred before the brain injury is known as retrograde amnesia.

Sometimes retrograde amnesia is time-limited: for example, only the events of a few minutes or hours are forgotten. In more severe cases, almost all memories formed before the lesion are lost. Retrograde amnesia is typically greatest for the most recent events; this effect is known as the temporal gradient, or Ribot's law.

The memory limitations depend on the amount of damage to the medial temporal lobe. Only bilateral removal of the hippocampus leads to severe amnesia, not unilateral removal.

What are the different memory mechanisms?

Sensory memory

If someone talks to you while you are not paying attention, you can often still remember the last part of what was said. This "echo" is known as the sensory memory trace. The sensory memory can be subdivided into echoic memory (memory for sound) and iconic memory (memory for sight).

Sensory memory research has looked at the duration and location of storage in the brain. The persistence of an auditory sensory memory trace is accompanied by a characteristic ERP known as the mismatch negativity (MMN), with its magnetic counterpart the mismatch field (MMF). After a series of identical tones, an MMN/MMF is elicited as soon as a deviant tone is heard. The echoic memory trace for sound, measured by means of such a mismatch, can sometimes last up to ten seconds, while the iconic memory trace for vision remains active for only 300 to 500 ms. Both the echoic and the iconic memory have a large capacity.

Short-term memory

In contrast to the sensory memory, information can be stored in the short-term memory for a longer period (seconds to minutes), but the capacity of this type of memory is still limited.

Atkinson and Shiffrin have designed a modal model for memory. The model is strictly serial, which means that fixed stages are followed: information enters the sensory memory, can be transferred to the short-term memory and then to the long-term memory. The transfer of information from the sensory memory to the short-term memory is influenced by attention; the transfer from the short-term memory to the long-term memory is influenced by rehearsal. Information can be lost at any stage due to decay, interference or a combination of these. There is discussion about the strict order of the model: does information really always pass through the short-term memory before it is stored in the long-term memory? And which brain systems are involved in these processes?

There is evidence for the existence of short-term memory (STM) from cognitive neuroscience. Shallice and Warrington examined a patient with brain damage in the left perisylvian cortex and found that the patient's digit span was reduced from the normal five to nine items to two items. The patient could, however, still form new memories in long-term memory (LTM). This shows a dissociation between STM and LTM, but also that the STM is not necessarily needed to store information in LTM, which does not correspond to the modal model described above.

A more recent example comes from Markowitsch. He described the patient E.E., who had a tumor in the left angular gyrus. This tumor affected the inferior parietal cortex and the posterior superior temporal cortex. After removal of the tumor, the capacity of the short-term memory decreased, but the long-term memory remained normal. E.E. had normal speech ability and normal understanding of speech and reading. Problems with the short-term memory were visible for abstract verbal information.

It has been concluded from these and other studies that the short-term memory cannot be the gateway to the long-term memory, as is the case in the modal model. Patient H. M. was no longer able to form long-term memories, but had an intact short-term memory. There is therefore a double dissociation.

Working memory

Working memory is a limited store for holding information over a short period of time (maintenance), so that mental operations can be performed on it (manipulation). The information in the working memory can come from the sensory memory, but also from the LTM.

Baddeley has proposed a three-part working memory model:

  1. The central executive directs and controls the interaction between short-term and long-term memory.

  2. The phonological loop handles the acoustic coding of information in the working memory. Research shows that in rehearsal, letters are remembered acoustically (by sound) rather than by form: a list of words that sound alike is harder to remember. The phonological loop consists of a short-lived acoustic store and an articulatory component that supports acoustic rehearsal. The phonological loop can therefore only perform operations on auditory information.

  3. The visuospatial sketchpad operates in parallel with the phonological loop and is responsible for the storage of purely visual and visuospatial information. Research into the acoustic and visuospatial codes reveals no interference between the two, indicating that they are two independent subsystems.

Cognitive neuroscience provides evidence for this model of working memory: each component can be affected separately by damage. Lesions in the left supramarginal gyrus lead to problems in phonological working memory, reducing the auditory verbal memory span. The rehearsal process of the phonological loop is related to the left premotor area. A left-hemisphere network comprising the lateral frontal and inferior parietal lobes is activated for phonological working memory.

Brain damage in the parieto-occipital areas of either hemisphere affects the visuospatial sketchpad, but damage in the right hemisphere leads to greater problems in visuospatial short-term memory. Lesions in the right parieto-occipital area cause problems with non-verbal visuospatial working memory tasks, whereas such lesions in the left hemisphere cause problems in short-term memory for visually presented linguistic material.

Imaging research also supports this idea. Using PET, Smith and his colleagues found a dissociation between brain areas when a spatial versus a verbal task was performed. Healthy participants were given either a spatial or a verbal memory task and had to judge whether a target matched a previously shown image. During the spatial task there was mainly activation in right-hemisphere regions (inferior frontal, posterior parietal and extrastriate cortex); during the verbal task there was left-hemisphere activation in the inferolateral frontal cortex. The left hemisphere was also active during the spatial working memory task, probably reflecting verbal storage of the non-verbal stimuli.

Long term memory

The long-term memory can be divided into two parts: the declarative memory and the non-declarative memory.

Declarative memory is the memory for events and facts, both personal and general. Deliberate access to declarative memory is possible, which is why it is also called explicit memory. Declarative memory is divided into episodic memory and semantic memory. Episodic memory is personal, autobiographical memory and includes conscious awareness of past events. Semantic memory contains knowledge about the world, which can also be retrieved in situations different from the one in which the knowledge was acquired.

The non-declarative memory comprises knowledge to which conscious access is not possible, which means that it is also referred to as implicit memory. The non-declarative memory is divided into procedural memory, priming, conditioning, habituation and sensitization. The procedural memory is involved in learning cognitive and motor skills through a lot of repetition. There is no explicit knowledge of the learned skill. The procedural memory depends on other brain systems than those required for episodic memory. The basal ganglia can be involved in learning motor skills.

The perceptual representation system contributes to the imprinting of the structure and shape of objects through previous experiences. In classical conditioning, a conditioned stimulus is paired with an unconditioned stimulus, so that it comes to elicit the same response. Non-associative learning includes habituation and sensitization to repeated events.

Another form of non-declarative memory involves perceptual priming. Priming refers to a change in a response to a stimulus, or the ability to identify a stimulus, after previous exposure to the same stimulus. Perceptual priming works within the perceptual representation system (PRS). In the PRS, the structure and shape of objects can be primed by previous experiences and this effect sometimes lasts for months. The PRS is not dependent on the medial temporal lobe. Conceptual priming is also possible and depends on the lateral temporal and prefrontal regions. Another form of priming is semantic priming where the prime and target words are different, but are semantically related to each other.

Classical conditioning is a kind of associative learning. It is also sometimes referred to as Pavlovian conditioning. A conditioned stimulus is paired with an unconditioned stimulus and is associated with it. The conditioned stimulus will lead to a conditioned response, similar to that normally triggered by the unconditioned stimulus. There are two forms of classical conditioning:

  1. Delay conditioning: the unconditioned stimulus starts while the conditioned stimulus is still present;

  2. Trace conditioning: there is a time gap between the conditioned and unconditioned stimulus, so a memory trace is needed to associate the two. Trace conditioning is disturbed by damage to the hippocampus, while delay conditioning remains intact.

Non-associative learning concerns simple forms of learning, such as habituation, in which the response to an unchanging stimulus decreases over time, and sensitization, in which the response increases with repeated presentations of the stimulus. Non-associative learning involves the sensory and sensorimotor pathways.

Which parts of the brain are involved in memory?

Human memory, brain damage and amnesia

The medial temporal area consists of the amygdala, the hippocampus and the surrounding entorhinal, parahippocampal and perirhinal cortical areas. In 1953 it was thought that patient H.M.'s hippocampus had been removed bilaterally. An MRI scan could be done in 1997 because the surgical clips in H.M.'s brain were not ferromagnetic. It turned out that roughly half of the posterior hippocampus was still intact, while the rest was damaged by atrophy.

Patient R.B. suffered brain damage after an ischemic episode, which reduced the blood supply to part of the brain. As a result, R.B. developed anterograde amnesia: he could not form new memories. He also had retrograde amnesia for a period of one to two years before the brain damage. R.B. turned out to have lesions only in the CA1 pyramidal cells of the hippocampus.

From these cases, it emerges that the hippocampus is crucial for forming new long-term memories.

Information can also be obtained from patients with transient global amnesia (TGA). TGA is often triggered by physical exertion in men and by emotional stress in women, which disrupts normal blood flow in the brain. This often involves the vertebrobasilar arterial system, which supplies blood to the medial temporal lobe and the diencephalon. The result is a temporary ischemia, which can later resolve. The temporary amnesia can arise from lesions in the CA1 field of the hippocampus.

Patients with Korsakoff syndrome, caused by alcohol abuse, initially have no damage to the medial temporal lobe, but there is probably a disruption of the connections between the anterior and dorsal-medial diencephalon and medial temporal lobe, which disrupts the circuit involving the hippocampus.

In people with Alzheimer's disease there is neuronal decline in the parietal lobe structures of the retrosplenial cortex, posterior cingulate, precuneus and angular gyrus. Research shows that the hippocampus deteriorates faster in these patients than in healthy aging: plaques and tangles build up in the medial temporal area and the hippocampus shrinks. Moreover, atrophy in the medial temporal lobe is strongly related to the decline in episodic memory experienced by Alzheimer's patients. The difficulty in forming new episodic memories is probably due to defective connectivity between the hippocampus and the prefrontal cortex, caused by a decrease in acetylcholine-producing cells.

Animal models for memory

A lot of animal research has been done on the role of the medial temporal lobe in memory. Mishkin studied monkeys with lesions in the hippocampus, the amygdala, or both, using the delayed non-matching-to-sample task. The monkey sat in a cage facing a table with two food wells, behind a closed screen. Out of the monkey's sight, an object was placed over the baited well. When the screen was raised, the monkey lifted the object to get the food. The screen was then lowered, the same object was placed over an empty well, and a new object was placed over the baited well. The monkey thus had to learn to choose the new, non-matching object, and measures of learning and memory were taken.

The investigation showed that the memory was only damaged if there were lesions in both the hippocampus and amygdala. The amygdala therefore seems to play an important role in memory. However, this does not match reports from patients with brain damage, in which damage to the hippocampus alone led to anterograde amnesia.

Zola and Squire investigated more selective lesions. Combined hippocampal and amygdala lesions produced the strongest memory deficits only when the surrounding cortex was lesioned as well; without a lesion of the surrounding cortex, lesioning the amygdala had no additional effect on memory. It therefore seems that the surrounding cortex, rather than the amygdala itself, is part of the system that supports the formation of long-term memories.

Zola further investigated lesions of the surrounding cortex in the perirhinal, entorhinal and parahippocampal areas. These lesions led to poorer performance on memory tasks such as the delayed non-matching-to-sample task. Lesions restricted to the parahippocampal and perirhinal cortices also result in significant memory deficits in humans.

The parahippocampal and perirhinal areas receive information from the visual, auditory and somatosensory association cortices and transmit it to the hippocampus; the functioning of the hippocampus depends on these areas. Lesions that include these areas therefore produce stronger amnesia, because the areas carry out a great deal of processing. The amygdala is important for emotional memory. The medial temporal lobe is important in organizing and consolidating long-term memories, which are permanently stored in the neocortex.

The hippocampus is not involved in stimulus-response learning, but it is involved in spatial navigation learning. The hippocampus may also be involved in binding contextual information into a complex contextual memory and in retrieving long-term memories. The hippocampus would no longer be necessary for retrieving a contextual memory once the details are lost and the memory has become more general.

What do we know about imaging research into memory?

Subsequent memory paradigm

fMRI research has shown that the hippocampus is active during the encoding of new information. The subsequent memory paradigm is often used to show this: participants receive items that they must remember while their brain activity is measured, and afterwards a memory test is done. Various studies using this method show that the hippocampus and the posterior parahippocampal cortex are active during the encoding of information that is later correctly remembered. Damage to the hippocampus could therefore lead to problems in encoding new information.

Perirhinal cortex

The hippocampus also appears to be involved in the retrieval of information from long-term memory: when information is correctly remembered, the hippocampus is activated. The hippocampus is therefore important for the recollection of episodic memories, but not for memories based on familiarity. Information about the properties of items arrives from unimodal sensory areas of the neocortex via the anterior part of the parahippocampal region, the perirhinal cortex (PRC); information from the polymodal neocortex about "where" something is located arrives via the posterior parahippocampal cortex. Both project to the entorhinal cortex, but the two streams come together only within the hippocampus.

The binding of items and contexts (BIC) model suggests that the PRC represents information about specific items, the parahippocampal cortex represents information about the context of the items, and the hippocampus connects the representations. This combination is called relational memory. To recognize something as familiar, you only need the PRC, but to retrieve everything related to it, you need the hippocampus and posterior parahippocampal cortex.

Retrieval of different types of information

Wheeler et al. investigated which parts of the brain are involved in retrieving different types of information. Participants learned a set of sounds or pictures, each with a label. After two days, a memory test was done while the participants lay in an fMRI scanner. The researchers found that during the retrieval of pictures, regions of the neocortex that had been active during perception became active again; the same applied to the sounds. Retrieval in the neocortex is therefore sensory-specific. The visual processing areas in the inferotemporal cortex were involved in preparing visual information retrieval, while dorsal parietal and superior occipital activity was related to searching memory for the correct item.

Retrosplenial lesions, in the parietal lobe, can lead to retrograde and anterograde amnesia. Research showed that the posterior parietal cortex (PPC) was more active when a previously seen object was correctly recognized than when a new object was correctly identified as unseen. The activity extended over the inferior and superior parietal lobes and the medial structures from the precuneus to the posterior cingulate cortex (PCC) and retrosplenial cortex (RSC). Successful retrieval of a memory is associated with activity in the lateral PPC and RSC. These areas are not active during encoding, unless the items are self-referential or emotional. The RSC is strongly connected with the parahippocampal cortex (PHC).

Distinguishing the left and right hemispheres

Cabeza and Nyberg conclude on the basis of research that encoding is related to left frontal areas and retrieval to right frontal areas. The left frontal cortex, however, is involved in both semantic encoding and retrieval. The left hemisphere appears more involved in processing linguistic representations, the right frontal cortex in object and spatial memory formation.

Two systems

Ranganath and Ritchey propose a model with two systems:

  1. Anterior temporal (AT) system: contains the perirhinal cortex and its connections with the anterior hippocampus, amygdala, ventral temporopolar cortex and the lateral orbitofrontal cortex. This system would be involved in memory for items, familiarity-based recognition, the associative properties of objects, and making fine perceptual and semantic distinctions. It assesses the significance of entities.

  2. Posterior medial (PM) system: contains the PHC and RSC, mammillary bodies, anterior thalamic nuclei, subiculum and default mode network (active during mind wandering). This system is involved in recollection-based memories and the construction of mental representations of relationships between entities, actions and outcomes.

The parietal cortex is involved in maintaining information in the working memory. Activity in the parietal cortex may indicate integration of multiple types of information.

What is consolidation?

Consolidation is the process that stabilizes the memory for a certain time after it is first obtained. In most models, consolidation consists of an initial fast consolidation process followed by a slower permanent consolidation process.

Theories about consolidation

The medial temporal lobes, especially the hippocampi, are essential for the rapid consolidation and initial storage of information for episodic and semantic memories. However, the mechanisms of slow consolidation remain controversial.

There are two important theories about this. One theory is the standard consolidation theory, which states that the neocortex is crucial for storing fully consolidated long-term memories, in which the hippocampus plays only a temporary role. The memory becomes independent of the hippocampus through many repetitions, making connections with the cortex and being stored there. This would apply to both episodic and semantic memories.

The other theory is the multiple trace theory, which states that only the long-term storage of semantic information depends on the neocortex, while some aspects of episodic memory continue to depend on the hippocampus. Every time a memory is retrieved, a new trace is created; the more traces there are, the more resistant the memory is to damage to the hippocampus. According to this theory, episodic memories fade over time and become semantic memories.

The temporal neocortex outside the medial temporal lobe is important for the permanent consolidation of semantic information. Lesions of the lateral cortex of the anterior temporal lobe may lead to severe retrograde amnesia. With isolated retrograde amnesia, one can still form new long-term episodic memories. The lateral and anterior regions of the temporal lobe could be a storage site for long-term memories, or be important for retrieving them.

What is the cellular basis of learning and memory?

Hebb's law states that if a synapse is active at the same time as the postsynaptic neuron, the synapse becomes stronger. This is also called "Hebbian learning".
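As a minimal illustration of this rule (the function name, learning rate and activity values are assumptions made for this sketch, not from the book), Hebb's law can be written as a weight update proportional to the product of pre- and postsynaptic activity:

```python
# Toy sketch of Hebbian learning: "cells that fire together wire together".
# Names and numbers are illustrative, not from the textbook.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen the synaptic weight w when presynaptic and
    postsynaptic activity coincide; leave it unchanged otherwise."""
    return w + lr * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)  # coincident activity: w grows to ~0.6
w = hebbian_update(w, pre=1.0, post=0.0)  # no postsynaptic activity: w unchanged
```

Note that the weight only changes when both activities are nonzero, which is exactly the coincidence requirement in Hebb's law.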

Long-term potentiation and the hippocampus

Due to the role of the hippocampus in memory, it was always thought that the neurons in this area should be plastic, or capable of altering synaptic interactions.

The three major input pathways of the hippocampus are:

  1. The perforant path: forms excitatory connections between the entorhinal cortex and the granule cells of the dentate gyrus.

  2. The mossy fibers: connect the granule cells of the dentate gyrus with the CA3 pyramidal cells.

  3. The Schaffer collaterals: connect the CA3 pyramidal cells with the CA1 pyramidal cells.

Long-term potentiation

Research by Bliss and Lømo in rabbits showed that stimulating the axons of the perforant path leads to larger and longer-lasting excitatory postsynaptic potentials (EPSPs): the synaptic response is greater when the path is stimulated again later. This effect is known as long-term potentiation (LTP). LTP has also been found in the other two pathways and can last for hours to days.

When a weak input and a strong input reach a cell at the same time, the weak synapse becomes stronger. When a second weak input reaches the cell at the same time but is not active with the strong input, no LTP takes place for the second weak input.

There are three rules for the associative LTP:

  1. Cooperativity: more than one input must be active at the same time;

  2. Associativity: the weak input is strengthened only if it is active together with a stronger input;

  3. Specificity: only the synapses that are stimulated show the effect.

The NMDA receptors, which respond to glutamate, play a role in the induction of LTP, but not in its maintenance. The maintenance of LTP probably depends on the other class of glutamate receptors, the AMPA receptors.

Blocking LTP in the hippocampus in mice results in problems with spatial memory and learning. The exact role of LTP in learning and memory is not clear due to conflicting results and conclusions from various studies. It is in any case present at the cellular level and NMDA receptors play an important role in it.

How does emotion work? - Chapter 10

Patient S.M. was 20 years old when she found out she had bilateral damage to her amygdalae. The damage was caused by Urbach-Wiethe disease, a rare genetic disorder that produces atrophy of the amygdala.

S.M. underwent standard measurements of neuropsychological functioning, and everything seemed normal: her intelligence score showed no abnormalities and she had no perceptual or motor problems. When S.M. was asked to judge faces on emotion, she scored like healthy people on every emotion except one: fear. She could properly describe the emotion of fear and the situations in which people experience it, so it was not a conceptual problem. However, she was unable to draw a frightened face on paper, while she succeeded with the other emotions. She also had a strongly diminished experience of fear. As a result she did not shy away from frightening situations, and she did not realize that she had a deficit.

You can draw three lessons from this case. The first is that the amygdala must play an important role in the emotion of fear. The second is that it is possible that the understanding of other emotions remains intact when the amygdala is damaged. And the third is that due to a lack of fear, she could no longer avoid dangerous situations. Emotions modulate and bias our behavior and our actions.

What is an emotion?

The definition of the word "emotion" is still debated. The definition used by the authors of the book starts with: "an emotion is a feeling that you get when ...". However, many researchers state that a feeling is the subjective experience of an emotion, not the emotion itself. Emotions are neurological processes that guide our behavior in ways that increase survival and reproduction.

Current models state that emotions are responses to external stimuli and / or internal mental representations that:

  • involve changes across multiple response systems (experiential, behavioral, physiological);

  • are separate from moods, in the sense that they often contain identifiable objects or triggers;

  • can be both an unlearned response to stimuli with intrinsic affective traits and a learned response to stimuli with acquired emotional value;

  • can involve multiple types of appraisal processes that assess the significance of stimuli for current goals;

  • depend on different neural systems.

Most psychologists agree that an emotion consists of the following three components:

  1. a physiological response to a stimulus;

  2. a behavioral response;

  3. a feeling.

Which neural systems are involved in emotion processing?

The sensory systems are involved when emotions are triggered by an external stimulus. Sometimes emotions are triggered by episodic memories, in which the memory systems are involved. The physiological response of an emotion depends on the autonomic nervous system, part of the peripheral nervous system. Arousal is also part of theories about emotions. This is regulated by the reticular activating system.

The limbic system

Papez identified a neural circuit thought to be involved in emotion, comprising the hypothalamus, anterior thalamus, cingulate gyrus and hippocampus. MacLean expanded this idea: he proposed a model of the limbic system that turned out to be more descriptive than functional. In addition to the structures Papez had named, he included the amygdala, orbitofrontal cortex and parts of the basal ganglia, describing the whole as the "emotional brain". We now know from various studies that not all structures of the limbic system are involved in emotion, and that some structures that are involved are missing from it. There appears to be a complex interaction between different neural networks in the analysis of emotional stimuli, involving the thalamus, somatosensory cortex, higher-order sensory cortices, amygdala, insula and medial prefrontal cortex. Which network is involved in an emotion depends on the situation or the task.

How can we categorize emotions?

William James was of the opinion that emotions were not basic, nor located in dedicated neural structures, but that they were the result of psychological processes shaped by evolution.

Although there is largely agreement on the various components that make up an emotion (see above), no consensus has yet been reached on the underlying systems. The theories differ on the timing of the three components and whether cognition plays a role. An emotional stimulus is a stimulus that is highly relevant to the well-being and survival of the observer.

In an attempt to gain some order in the definition of emotions, researchers have focused on three primary categories of emotions:

  • basic emotions: a small set of emotions, each with unique characteristics, obtained through evolution, that are reflected by facial expressions

  • complex emotions: combinations of basic emotions, some of which are socially or culturally learned, that can be identified as long-term feelings

  • dimensions of emotions: emotions that are fundamentally the same, but differ on one or more dimensions

Basic emotions

According to Ekman, facial expressions of emotions differ little between cultures. There would be a small set of basic emotions: happiness, sadness, fear, disgust, anger and surprise, each with a characteristic facial expression. Most scientists agree on three characteristics of basic emotions: they are innate, universal and short-lived. According to Ekman, a basic emotion must meet further criteria, namely:

  • distinctive universal signals;

  • presence in other primates;

  • distinctive physiology;

  • distinctive universals in antecedent events;

  • rapid onset;

  • brief duration;

  • automatic appraisal;

  • unsolicited appearance.

Complex emotions

Ekman distinguished complex emotions from basic emotions in the following way: "Parental love, romantic love or jealousy last for longer periods of time - months, years, or even throughout life". Complex emotions, such as love and jealousy, are long-term cognitive versions of basic emotions that are culturally specific or individual.

Dimensions of emotions

Another way to approach the categorization of emotions is to describe each emotion along dimensions. Two factors are important here: the intensity of arousal and the valence (positive versus negative). Davidson proposed a further dimensional distinction, between approach and withdrawal in response to emotion. None of the above approaches fully captures emotional experience.

What are different theories of generating emotions?

James Lange theory

William James stated that emotions are the perceptual results of somatovisceral feedback from bodily responses to an emotion-provoking stimulus. He stated that you do not run from a bear because you are scared, but that you are scared because you become aware of the physical changes as you run. A similar argument was put forward by Carl Lange (hence the name James-Lange theory). Their hypothesis: the bear (stimulus perception) leads to a physiological response (adrenaline is released, resulting in an increased heart rate and breathing, sweating and a fight-or-flight response). This leads to an automatic, unconscious interpretation of the physiological response ("my heart is pounding, I am running, so I must be afraid" = subjective emotional feeling). So before the emotion is felt, there must have been a physical reaction.

Cannon-Bard theory

Philip Bard argued that physiological responses are not distinct enough to differentiate, for example, fear, anger, and sexual attraction. Cannon and Bard also believed that neuronal and hormonal feedback processes are too slow to precede an emotion. Cannon held that the sympathetic nervous system coordinates the bodily reaction at the same time as the cortex generates the emotional feeling. Bard and Cannon thought that an emotional stimulus is processed by the thalamus and simultaneously sent to the neocortex and the hypothalamus, which produces a peripheral response. The neocortex thus generates the emotional feeling and the periphery the slower bodily response. Their hypothesis (in terms of the bear story): the bear (perception of the stimulus) activates the thalamus and leads to an interpretation by the cortex, namely that the situation is dangerous and fear is appropriate. At the same time, the hypothalamus becomes active, triggering a slower emotional response through the sympathetic nervous system, such as fight or flight.

Appraisal theory

The appraisal theories are a group of theories in which emotional processing depends on the interaction between characteristics of the stimulus and their interpretation. Richard Lazarus proposed a version of appraisal theory in which emotions are a response to the ratio of harm versus benefit when a person encounters something. He stated that the cognitive appraisal comes before the emotional response or feeling. According to him, this cognitive appraisal is automatic and unconscious. His hypothesis (in terms of the bear story): someone sees the bear, which leads to a cognitive assessment, such as a quick risk-benefit judgment: a dangerous animal is coming toward me and showing its teeth; I see high risk and no benefit, so I am in danger. This leads to feeling the emotion (fear) and then to a response (fight or flight).

Singer-Schachter theory of cognitive interpretation of arousal

This theory, drawn up by Singer and Schachter, is a mix of the James-Lange theory and the appraisal theories. Singer and Schachter stated that first physiological arousal and then reasoning to appraise the stimulus are needed before the emotion is identified. Their hypothesis (in terms of the bear story): someone sees the bear, which leads to a physiological reaction (arousal). This leads to a cognitive assessment of the situation, after which the emotion of fear is felt.

Constructivist theories

Constructivist theories state that emotion arises from cognition and is modulated by culture and language. One of the most recent theories in this category is that of Lisa Barrett. She suggests that emotions are concepts the brain uses to make sense of sensory input from the body and the world. We form a mental representation of physical changes, called core affect. This representation is classified according to language-based emotional categories. These categories vary with a person's experiences and culture, so there are no objective empirical criteria for assessing an emotion. The constructivist hypothesis (in terms of the bear story): sensory input (she sees the bear) leads to a physiological reaction (arousal). The brain draws on all previous encounters with a bear and categorizes the current response on the basis of those experiences as framed by its culture and language. This leads to an emotion, which she may call fear.

Evolutionary psychological approach

Evolutionary psychologists Cosmides and Tooby stated that emotions are the conductors of an orchestra of cognitive programs that must be coordinated to produce successful behavior. They suggest that an emotion is a superordinate program that controls cognitive subprograms and their interactions. Their hypothesis (in terms of the bear story): someone sees the bear; the situation is detected and a program is activated that controls the subprograms. The reaction: perception and attention shift automatically; goals and motivations change from a picnic in the forest to staying alive; information-gathering programs are activated (we look for a tree as protection); communication changes; interpretation systems are activated; learning systems continue; physiology changes; and behavioral rules are activated. Eventually the person runs to the tree.

LeDoux’s high road and low road

Joseph LeDoux stated that people have two emotion systems that work in parallel. One is a neural system for our emotional responses that is separate from a system that generates a conscious feeling of emotion. The emotion-response system has been shaped by evolution to produce rapid responses that increase the chance of survival and reproduction. Conscious feelings are irrelevant to these responses and are not acquired through evolution but learned through experience. His hypothesis (in terms of the bear story): someone sees the bear, which leads to a quick fight-or-flight response and, at the same time, a slower cognitive assessment, which leads to an emotion (fear).

How are the amygdala involved in emotion?

The amygdala are located near the hippocampus in the medial temporal lobe. The amygdala consist of three groups of nuclei:

  1. Basolateral nuclear complex: lateral, basal and accessory basal nuclei. The basal nucleus is a gatekeeper of amygdala input, which receives input from all sensory systems. It is important for mediating instrumental behavior

  2. Centromedial complex: central and medial nuclei. The medial nucleus is important for innate emotional responses

  3. Cortical nucleus: olfactory part of the amygdala, because it receives input from the olfactory bulb and olfactory cortex

The first researchers to propose that the amygdala are involved in emotion were Klüver and Bucy. They found that monkeys whose amygdala had been damaged no longer exhibit a fear reaction to objects that normally cause fear. This is called the Klüver-Bucy syndrome. Today, the amygdala are believed to play an important role in general emotional processing and in learning, memory, and attention to emotionally significant stimuli.

What are the interactions between emotion and other cognitive processes?

Implicit emotional learning

If a stimulus that was previously neutral is linked to an unpleasant event, you will feel uncomfortable the next time you encounter that stimulus. This is called fear conditioning. This paradigm is used extensively in research on emotional learning, because it works the same way in many species (humans, rats, etc.).

To explain it in terms of the principles of conditioning: a conditioned stimulus (for example a light) is repeatedly followed by an unconditioned stimulus (for example a loud sound). Over time, the animal will show the same response to the light alone (conditioned response) as to the loud sound (unconditioned response). The conditioned response can also be unlearned (extinction) by presenting the conditioned stimulus a few times without the unconditioned stimulus.
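
The acquisition-and-extinction dynamic described above can be sketched with a simple associative-learning rule (the Rescorla-Wagner model, a standard formalism not named in the text; the learning rate and trial counts are arbitrary illustration values):

```python
# Illustrative sketch of fear conditioning and extinction using a
# Rescorla-Wagner-style update rule. All numbers are invented.

def update(strength, us_present, rate=0.3):
    """Move associative strength toward 1 when the US follows the CS,
    and back toward 0 when the CS appears alone (extinction)."""
    target = 1.0 if us_present else 0.0
    return strength + rate * (target - strength)

v = 0.0                      # associative strength CS -> fear response
for _ in range(10):          # acquisition: CS always paired with US
    v = update(v, us_present=True)
assert v > 0.9               # strong conditioned response

for _ in range(10):          # extinction: CS presented without US
    v = update(v, us_present=False)
assert v < 0.1               # conditioned response is unlearned
```

Pairing drives the association toward its maximum, while unreinforced presentations of the conditioned stimulus drive it back down, mirroring extinction.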

Damage to the amygdala means that the patient or rat is no longer capable of a conditioned fear response. Amygdala lesions block the ability to form and express a conditioned response to a neutral conditioned stimulus that is paired with an aversive unconditioned stimulus. The lateral nucleus of the amygdala forms the associations that underlie such aversive conditioning by combining information from different brain regions. Cells in the superior dorsal lateral amygdala change their responses rapidly when the conditioned stimulus is linked with the unconditioned stimulus; cells that do not reset afterwards maintain the association. The lateral nucleus is connected to the central nucleus of the amygdala, and this connection initiates an emotional response when needed in the event of a threat.

Two routes of information processing to the amygdala:

LeDoux proposes two different routes by which information can reach the amygdala. In the fast, inaccurate route, sensory information runs directly from the thalamus to the amygdala, without the filtering of conscious processing. This is also called the low road. In the slow, accurate route, the sensory information runs from the thalamus to the sensory cortex for further analysis and then on to the amygdala. This is also called the high road. The low road allows us to prepare a rapid response, which is executed when the high road confirms that the stimulus is indeed the conditioned stimulus. The amygdala is particularly sensitive to biologically important stimuli, such as predators.

The role of the amygdala in learning to respond to stimuli linked to aversive events is called implicit, because we express this indirectly through a behavioral or physiological response.

Patients with amygdala damage do not exhibit anticipatory anxiety even though they are aware of the upcoming aversive stimuli. In other words, explicit memory is intact while implicit memory is disrupted. If the hippocampus is damaged, this pattern is reversed. This shows that the amygdala play an important role in implicit learning and its expression. However, the amygdala do not play a role in all forms of learning and memory. The hippocampus is important for explicit knowledge of the emotional properties of a stimulus.

Explicit emotional learning

With damage to the amygdala, fear associations can still be learned and reported explicitly, but the startle response to the fear-associated stimuli is no longer shown, so there is no emotional response. The amygdala are important for implicit emotional responses to stimuli whose meaning has been learned explicitly. The activity of the amygdala is influenced by hippocampus-dependent declarative representations of the emotional properties of stimuli. Amygdala activity thus modulates indirect emotional responses.

Arousal appears to strengthen the storage of explicit memories. With damage to the amygdala, this arousal-driven reinforcement of memories is blocked, but not the acquisition of the memory itself. The role of the amygdala is modulatory: it mainly plays a role in preserving the memory, not in changing the initial encoding, and it reinforces hippocampal consolidation of memories. The basolateral nucleus of the amygdala plays an important role in this. The amygdala also work together with the hippocampus during the encoding phase.

The right amygdala appears to be the most important for retrieving autobiographical emotional memories with negative valence and high arousal. In addition, the more active the amygdala, the stronger the memory. During the retrieval of emotional information, there is an increased effective connectivity between the amygdala and hippocampus. Arousal influences how quickly we forget something.

Acute stress can facilitate memory. Chronic stress or extreme arousal have a negative influence on the performance of the hippocampal memory system. This is due to the influence of too many stress hormones.

Emotion, perception and attention

We are extra aware of, and pay more attention to, emotionally salient stimuli. The attentional blink phenomenon means that we often miss a second target stimulus that is presented immediately after a first stimulus, in the early lag period. However, if the second word is emotionally significant, that is, distinctive, arousing, and with a positive or negative valence, the word is noticed. With damage to the left amygdala, the second word is not recognized even if it is emotionally significant. When attentional resources are limited, emotional stimuli preferentially reach consciousness. The amygdala are thus important in strengthening our attention to emotional stimuli.

There are two theories about this:

  1. emotional learning involves a long-term change in sensory cortical tuning;

  2. emotional learning produces a more temporary change in attention thresholds.

Attention and awareness have little impact on the amygdala's response to fearful stimuli: even when you are thinking about something else, you are still startled by a loud sound, for example. The amygdala receive information about emotional significance and influence attention and perception through the sensory cortex. The amygdala are also more active when seeing novel stimuli.

Emotion and decisions

Research on a patient with damage to the orbitofrontal cortex showed that he was unable to prioritize solutions by how well they would solve the problem, and he made poor professional and social decisions. Anticipating the consequences of actions and learning from mistakes had also deteriorated. Because the orbitofrontal cortex is important for emotional functions, it was concluded that emotion is needed to make good decisions. Emotion is registered in our body as somatic markers, changes in physiological arousal. The orbitofrontal cortex, together with its connections, would support learning associations between situations and somatic changes. The orbitofrontal cortex and other brain areas then evaluate potential behavioral responses and the chance of reward on the basis of previous situations that led to similar somatic responses.

Three types of emotions influence the making of decisions:

  1. the current emotional state;

  2. anticipatory emotions that arise before the decision;

  3. personal experience, the emotion you expect to feel after making the decision.

Extinction of a conditioned response depends on the interaction between the amygdala and the prefrontal cortex. Damage to the orbitofrontal cortex (OFC) leads to an inability to respond to changing patterns of reward and punishment: there is no reversal learning and no learning from experience. The lateral OFC is activated by punishment, while the medial OFC is activated by reward, and the degree of activity depends on the magnitude of the reward or punishment. With damage to the OFC, there are normal emotional reactions to gain and loss, but there is no regret. Individuals with this lesion also do not learn from decisions that induce regret and do not anticipate negative outcomes of their choices.

Parts of the brain involved in regret are the medial OFC, anterior cingulate cortex and anterior hippocampus. After several trials, people became risk averse, and the medial OFC and amygdala were active. The same circuits therefore mediate both the experience and the anticipation of regret. The feeling of regret has a strong influence on decision making.

Emotions have four roles in making decisions:

  1. they provide information;

  2. they reflect a common currency between different inputs and options;

  3. they focus attention on new information that can guide the decision;

  4. they motivate approach or avoidance decisions.

Emotion and social stimuli

With damage to the amygdala, there is mainly a limitation in recognizing facial expressions; patients with amygdala damage have particular difficulty recognizing fearful faces. The OFC seems mainly involved in recognizing angry faces. The amygdala is also active when subjects are not consciously aware of a fearful face. To recognize facial expressions, control subjects look mainly at the eyes. People with amygdala lesions, however, do not look at the eyes, even though these are an important characteristic of the fearful expression. The amygdala are also involved in observing and interpreting emotion and social meaning in many stimuli, even inanimate objects; they play a role in our tendency to anthropomorphize.

The amygdala are also activated when people are categorized. Although this can offer benefits, it can also lead to racial stereotyping. Racial bias is often measured indirectly with the implicit association test (IAT). Participants are shown photos of White and Black people, which they must classify while responding to positive and negative words. Research has shown that the amygdala are more active for Black faces than for White faces when the photo is presented briefly. If the photo is displayed longer, controlled processing takes place and amygdala activation no longer differs between the races; instead, more activity is seen in the right ventrolateral prefrontal cortex for Black than for White faces. The amygdala are probably activated when a face differs from the average face, and the average face depends on your own background.

How does cognitive control of emotions work?

Emotion regulation

Emotion regulation refers to the processes that affect the types of emotions we have, when we have them, and how we express and experience them. Emotion regulation can intervene at various points in the emotion generation process, such as in attention, evaluation or response processes. Sometimes it is conscious and controlled, sometimes unconscious and automatic.

Emotion regulation is very variable within and between individuals and depends on the interaction of frontal cortical structures and subcortical brain regions. This can be measured with an EEG. People with more left frontal activation at rest than right frontal activation appear to be better at voluntarily suppressing negative emotion. The frontal lobe appears to be important in automatically regulating emotions.

The role of the PFC

Gross did research on reappraisal, in which an emotionally charged stimulus is reinterpreted in non-emotional terms, and suppression, in which we suppress emotionally expressive behavior in an emotionally arousing situation. It turned out that reappraisal and suppression both reduced emotionally expressive behavior, but only reappraisal reduced the subjective experience of the emotion. Suppression increased sympathetic activation, making participants more aroused, and this continued even after the film they watched had ended. Emotions are therefore partly under conscious cognitive control. The prefrontal cortex (PFC) appears to modulate emotional activity in subcortical structures, such as the amygdala.

The left lateral PFC and dorsal anterior cingulate cortex (dACC) were active in evaluating and "deciding on" a cognitive strategy, such as imagining a situation getting worse or better. The dorsal medial PFC was active in both down-regulation and up-regulation. The rostral medial PFC and the PCC were active in up-regulation, but the right lateral and orbital PFC in down-regulation. Different reappraisal goals therefore activate overlapping and sometimes different PFC areas.

The role of the amygdala

Amygdala activity increased when participants had to intensify a negative emotion and decreased when they had to reduce it. The amygdala thus have a flexible role in processing the relevance of stimuli, depending on current goals and motivation. This is called affective flexibility. However, amygdala processing has a negativity bias.

Other areas involved in emotions

Different areas of the brain are associated with the processing of different emotions. The insula appears to play a major role in the integration of affective and cognitive processes, and also appears to be important for the emotion of disgust. Other researchers, however, argue that the anterior insula plays a more general role in representing core affective feelings in consciousness. There also appears to be a significant correlation between the activity of the insula and the perception of internal body states, which is called interoception.

The emotion "happiness" in particular activates the dorsolateral PFC, the cingulate gyrus, the inferior temporal gyrus and the cerebellum. These results reinforce the role of the limbic system and its connections in the processing and expression of positive emotions. However, happiness is very difficult to investigate.

"Love" is mainly associated with activation of subcortical and cortical areas involved in reward, motivation, emotion, and cognition. These are dopamine-rich brain regions, namely the insula, the caudate nucleus, the putamen, the ventral tegmental area, the anterior cingulate cortex, the bilateral posterior hippocampus, the left inferior frontal gyrus, the left middle temporal gyrus and the parietal lobe. Each type of love activates a specific brain network: the subcortical periaqueductal gray matter, for example, is active with maternal love but not with passionate love.

However, it is important to keep in mind that the perception and experience of emotions is regulated by interactions of various neural structures. The anterior cingulate cortex is probably important for the general processing of emotions.

What is language? - Chapter 11

How can language be defined?

Hank, a war veteran, suffers from anomia after damage to his left hemisphere: he remembers and recognizes objects but is not able to name them. This shows that object recognition is not the same as naming objects. Hank's example also shows that producing speech is not the same as understanding speech.

Language is highly characteristic of humans and is a key thing that distinguishes humans from other species. Language can be seen as "natural" because it originates from, and is produced by, the brain. It can be written, spoken or signed, and it can be used to communicate information about the past, present or future between people, which is why it is regarded as social.

Research into language took off around 1960. Patients with language problems were examined with the goal of developing a better picture of the brain. Psycholinguistics focused mainly on the cognitive aspects of language. Later, a combination of these lines of research developed into the cognitive neuroscience of language.

What is the anatomy of language?

A large part of language processing happens in the left hemisphere, mainly around the Sylvian fissure. Wernicke's area and Broca's area are also located in the left hemisphere. All the language areas and the connections between them together are called the left perisylvian language network. Some areas in the right hemisphere also play a role in language processing, such as processing the prosody of language and metaphors. Finally, motor skills and timing are also important for efficient communication, so there are also areas that control the motor aspects of speech.

What is the effect of brain damage on language?

Aphasia often develops after lesions and has a big impact on understanding and producing language. Speech problems can also arise from reduced muscle control, which is called dysarthria. Apraxia of speech is an acquired oral motor speech disorder that likewise leads to problems in speech, especially in articulation, due to impaired motor control.

Broca's aphasia

Broca’s aphasia is one of the most studied forms of aphasia. Other names for Broca’s aphasia are anterior aphasia, non-fluent aphasia, expressive aphasia and agrammatic aphasia.

This disorder can leave patients able to speak only single words, although there is a lot of variability between patients in their complaints. Broca’s aphasia often leads to telegraphic speech, which children also use; examples are “Give cake” and “Want ice cream”. Producing language takes a lot of effort, and there may also be problems in understanding syntax and grammar (which is called agrammatic aphasia).

Wernicke's aphasia

Wernicke's aphasia is also called posterior aphasia or receptive aphasia. Patients with Wernicke's aphasia mainly experience problems in understanding language. They often speak fluently and use prosody and grammar, but produce meaningless sentences. For example, they could say: "Did the dog throw his plate on the house today?" Wernicke's area is located near the area of the brain that processes auditory information. Lesions in Wernicke's area alone do not lead to Wernicke's aphasia. Research shows that Wernicke's aphasia mainly occurs when there is damage not only to Wernicke's area but also to the areas around it, or when there is damage to the underlying white matter that connects the language areas in the temporal lobe with other areas of the brain.

Conduction Aphasia

According to Lichtheim, there is an area that stores conceptual information about words. For example, he suggested that when someone thinks of a word, all the words that are associated with it would be activated. He described a model, the classical localization model, based on this idea. In it, A = Wernicke's area (contains linguistic information; words are stored in it), M = Broca's area (where speech planning takes place) and B = the area containing conceptual information about words. He stated that A, B and M are located in different areas of the brain that are connected to each other through white matter. According to the classical localization model, damage to the connections would lead to speech problems.

The arcuate fasciculus is the white-matter tract from Wernicke's area to Broca's area. Damage to this white matter can lead to conduction aphasia. Conduction aphasia means that an individual understands language but makes mistakes in producing it. People with conduction aphasia also notice that they are making mistakes, but are unable to correct them. Patients with conduction aphasia also have difficulty producing and repeating speech, and sometimes use words incorrectly. Transcortical aphasia means that there is damage to the connection between B (conceptual information) and M (Broca's area), but that the connection between A (Wernicke's area) and M is intact. The symptoms are similar to those of Broca's aphasia, but the difference is that these patients are able to repeat sentences that they hear. There can, however, also be compulsive repetition of sentences, which is called echolalia. Finally, there is global aphasia, which leads to major limitations in both the production and the understanding of language; it often occurs as a result of extensive damage to the left hemisphere. Lichtheim's localization model is no longer followed; contemporary researchers focus primarily on the idea of a network of interconnected brain regions.

How is language represented in the brain?

The brain has several ways to store words and concepts. It contains a mental lexicon in which semantic information (the meaning of words), syntactic information (how words combine into sentences) and word-form details (spelling, sound) are stored.

After the perceptual analysis of words (simply speaking: seeing words) there are three consecutive responses:

  1. Word access: the perceptual input activates matching entries in the mental lexicon;

  2. Word selection: the most suitable entry is selected from the mental lexicon;

  3. Word integration: the selected words are combined into sentences.
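
The three stages above can be sketched as a toy pipeline (the lexicon entries and the "perceptual evidence" scores are invented for illustration):

```python
# Hedged sketch of word access, selection and integration with a toy
# mental lexicon. Entries and evidence values are made up.

lexicon = {
    "cat": {"meaning": "feline", "syntax": "noun"},
    "cap": {"meaning": "headwear", "syntax": "noun"},
}

def recognize(evidence):
    # 1. Word access: perceptual input activates candidate entries
    candidates = {w: s for w, s in evidence.items() if w in lexicon}
    # 2. Word selection: the best-supported entry wins
    word = max(candidates, key=candidates.get)
    # 3. Word integration: the entry's syntax and meaning feed sentence building
    return word, lexicon[word]

word, entry = recognize({"cat": 0.8, "cap": 0.3})
assert word == "cat" and entry["syntax"] == "noun"
```

Similar-sounding entries compete during selection; the perceptual evidence decides which one is integrated into the sentence.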

The mental lexicon is not organized alphabetically. This would also be very impractical, because otherwise it would take a very long time to produce words with the letter Z, for example.

What are other important components of language?

Morphemes: the smallest combinations of letters (or sounds) that carry meaning. For example, in the word household, "house" is a morpheme and "hold" is a morpheme.

Phonemes are the smallest distinguishable speech sounds of words. For example, the word beer consists of the phonemes /b/, /ee/ and /r/.

The semantic relationship of words is the relationship between words in meaning. Words in the mental lexicon seem to be connected to each other through these relationships. Priming tasks can be used to test this. In this kind of task a word is displayed, for example "car"; this is called the prime. After the prime, the participant is shown a target word that is either related or unrelated to the prime, and has to decide whether this target word is a real word or not. If the target word is related to the prime, it should be easier to determine whether it is a real word. This is found in many experiments. This result suggests that the organization of words in the mental lexicon is based on the associations between words.

What is the mental lexicon?

There are several models that try to explain the semantic priming effect. For example, Collins and Loftus proposed a model in which word meanings are represented in a semantic network. The word meanings are conceptual nodes that are connected to each other. The strength of, and distance between, these nodes are determined by the semantic or associative relationship between words. For example, the node for the word "car" will have a strong connection with "truck" and a weak connection with the word "shower". In this model it is also assumed that nodes that are closer to each other activate each other more strongly than nodes that are far apart. This model has been very influential, but it is not possible to determine with full certainty how words are organized in the mental lexicon. Other models state, for example, that words that occur together, such as "cottage" and "cheese", activate each other, or that the semantic features of words activate each other (a dog is an animal, has four legs, etc.). For these models the question is: how many features must be present before a person thinks of a dog, for example? There are, of course, also many different features: a table can be made of wood, glass, concrete, etc.
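
The Collins-and-Loftus-style network described above can be sketched as a small weighted graph with one step of spreading activation (the words, link weights and decay factor are invented for illustration):

```python
# Minimal sketch of a semantic network: nodes are word concepts,
# weighted links encode associative strength. All weights are made up.

links = {
    "car":    {"truck": 0.9, "road": 0.7, "shower": 0.1},
    "truck":  {"car": 0.9, "road": 0.6},
    "shower": {"car": 0.1, "soap": 0.8},
}

def spread(source, decay=0.5):
    """Activate a node and let activation spread one step to its
    neighbours, scaled by link strength; stronger links prime more."""
    activation = {source: 1.0}
    for neighbour, weight in links.get(source, {}).items():
        activation[neighbour] = decay * weight
    return activation

act = spread("car")
# "truck" is primed far more strongly than the unrelated "shower"
assert act["truck"] > act["shower"]
```

In this picture, the priming effect falls out of the graph structure: a related target word starts with residual activation from the prime, so it is recognized faster.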

The neural substrates of the mental lexicon

Different neurological problems cause different problems in language. For example, patients with Wernicke's aphasia often produce semantic paraphasias, in which a related word is substituted: they may call a cow a horse. Deep dyslexia can lead people to read the word "cow" but say "horse" out loud, so they mainly make errors in reading. Progressive semantic dementia involves problems with naming concepts. For example, patients with progressive semantic dementia find it difficult to name something precisely (dog) and instead name the category to which it belongs (animal). This supports the view that words in the mental lexicon are interconnected. In this form of dementia there is a lot of variation in symptoms, depending on the location of the damage to the brain. Patients with damage or lesions in their inferior and medial temporal cortex usually have difficulty naming living things, while damage to the left frontal and parietal lobes leads to problems in naming things that are made by humans.

These findings support Warrington's hypothesis that the problems experienced by patients primarily reflect what type of information is stored in the semantic network. For example, when naming biological categories such as fruit, food or animals, we mainly rely on physical properties (what color is a banana?), whereas for man-made things, functional properties are mainly analysed (what do we use a hammer for?).

How do people understand language?

When trying to understand spoken language, the listener must first decode the acoustic input into a phonological code. This is how auditory words are stored in the mental lexicon. Then there is lexical access and lexical selection. The meaning of the word leads to the activation of conceptual information.

When reading words, the first step is the perceptual analysis of words (orthographic analysis). These words are then directly stored in the mental lexicon or get translated into phonological units which in turn activate phonological words in the mental lexicon.

Understanding spoken language

There are many differences between understanding spoken language and reading. With spoken language, the listener must deal with many different sounds in the environment and must be able to focus on what is specifically being said while filtering out the other sounds. It appears that children up to one year of age are able to distinguish very well between all types of phonemes. Only after their first year do they specialize in one or more languages (which they learn from their caretakers). So Japanese babies up to 1 year of age are able to distinguish well between the sounds R and L, but as they grow older they lose this capacity. The way babies cry is also influenced by the phonemes with which they are most familiar. Other important features of understanding language are that a listener must take into account:

  • The variability of the signal (for example, is it a man or a woman who speaks?);

  • The fact that phonemes blend into each other in spoken language (coarticulation) and that there is little pause between words, sometimes making it difficult to make a clear distinction between the beginning and the end of a word;

  • To understand speech well, prosodic information can be used, which concerns the speech rhythm and the tone of someone's words. For example, consider raising the tone when you ask a question. English speakers often emphasise syllables to clarify the meaning of words (think of: "lettuce" versus "let us").

The neural substrates of spoken language processing

It appears that the superior temporal cortex is important in the perception of language. "Pure word deafness" means that a person is deaf only to words: he or she can hear other sounds, but cannot understand speech. This is the result of damage to the superior temporal cortex.

How are speech and non-speech distinguished in the brain? Speech is first processed by the general sound-processing pathways, in which Heschl's gyri are important. The superior temporal sulcus (STS) belongs to the area around Heschl's gyri and is therefore part of the auditory association cortex. It appears that Heschl's gyri are activated both by sounds in general and by speech sounds, but that the STS is mainly activated by speech sounds.

fMRI has also been important in understanding how speech processing works. With the help of fMRI, Binder and colleagues have proposed a hierarchical model of word recognition. In this model, auditory information from Heschl's gyri is passed on to the superior temporal gyrus (STG), but the distinction between speech and non-speech is not yet made here. The auditory information then travels to the middle temporal gyrus and the inferior temporal gyrus, where a distinction is made between words and non-words. Eventually the information ends up in the angular gyrus. The processing of phonemes is done in the middle part of the left STG, the integration of phonemes in the left anterior STG, and the processing of short sentences mainly in more anterior areas of the STS. The difference between speech and non-speech thus arises primarily in the middle part of the STS.

How do people read words?

To read words, one must be able to recognize a visual pattern. The human brain is very good at recognizing patterns, yet reading is a fairly recent invention. Reading relies on learned symbols that differ across languages. There are three ways in which words can be symbolized in language: alphabetic (the West), syllabic (Japan) and logographic (Chinese). In an article about artificial intelligence, Selfridge proposed the 'pandemonium model'. He stated that demons (small components) detect patterns that a machine could recognize. In this model, sensory input (for example the letter R) is stored as an iconic memory by the 'image demon'. Then there are 28 'feature demons' that pay attention to, for example, the shape of the symbol and decode it. There are also 'cognitive demons' which recognize certain combinations of features. The 'decision demons' ultimately choose the best representation for reading. A criticism of this model is that it only uses bottom-up processing and no top-down processing, and therefore cannot account for phenomena such as the word superiority effect (which means that people recognize letters faster in real words than in nonsense words).
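The demon hierarchy can be sketched as a purely bottom-up scoring pipeline. The feature sets below are made-up stand-ins for Selfridge's 28 feature demons:

```python
# Toy pandemonium sketch: purely bottom-up, like Selfridge's model.
# The feature sets per letter are illustrative assumptions.

LETTER_FEATURES = {
    "R": {"vertical_line", "closed_loop", "diagonal"},
    "P": {"vertical_line", "closed_loop"},
    "L": {"vertical_line", "horizontal_line"},
}

def recognize(input_features):
    # Cognitive demons "shout" in proportion to the features they match;
    # the decision demon picks the loudest one.
    scores = {letter: len(features & input_features)
              for letter, features in LETTER_FEATURES.items()}
    return max(scores, key=scores.get)

assert recognize({"vertical_line", "closed_loop", "diagonal"}) == "R"
```

Note that nothing flows downward here: the word context can never boost a letter demon, which is exactly the criticism mentioned above.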

The computational model assumes three levels of representation when recognizing letters:

  1. A level for the characteristics of letters or words;

  2. A level for letters;

  3. And a level for the representation of words.

In this model, top-down information (information at higher levels of representation, such as the word) is taken into account, which can influence lower levels of representation (such as the letters or the features of letters). This is different from Selfridge's model. Another difference is that in the computational model processes can take place in parallel (multiple letters can be processed simultaneously), while in Selfridge's model letters can only be processed one by one. The computational model also states that there are both excitatory (activating) and inhibitory (suppressing) connections between all levels of information.
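A one-step sketch shows how top-down feedback produces a miniature word superiority effect. The two-word lexicon and the 0.2 feedback weight are illustrative assumptions, not the model's actual parameters:

```python
# One-step sketch of the interactive idea: letters excite the words that
# contain them, and word-level activation feeds back down to boost
# letter-level activation. Lexicon and weights are illustrative assumptions.

WORDS = {"WORK": ["W", "O", "R", "K"], "WORD": ["W", "O", "R", "D"]}

def letter_activation(stimulus, target):
    # Bottom-up: each presented letter starts at activation 1.0.
    letters = {l: 1.0 for l in stimulus}
    # Letter-to-word excitation: mean activation of a word's letters.
    words = {w: sum(letters.get(l, 0.0) for l in ls) / len(ls)
             for w, ls in WORDS.items()}
    # Word-to-letter feedback: words containing the target support it.
    feedback = sum(0.2 * act for w, act in words.items() if target in WORDS[w])
    return letters.get(target, 0.0) + feedback

# The letter "K" is more active in the word WORK than in the nonword XQRK.
assert letter_activation(list("WORK"), "K") > letter_activation(list("XQRK"), "K")
```

Because the whole stimulus is scored at once, all letters are processed in parallel, in contrast with the one-by-one processing of the pandemonium model.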

The neural substrates of processing written words

Pure alexia means that a patient cannot read words, while other aspects of understanding language are still intact. Damage to the occipitotemporal regions of the left hemisphere can lead to pure alexia. These areas are specialized in recognizing orthographic units (which are the visual representations of letters).

What is the role of the context in recognizing words?

To understand words in their context, syntactic and semantic properties of the words must be used that lead to an integrated representation of the meaning. A distinction can be made here between lower-order representations that are constructed from sensory information such as, for example, the word itself and higher-order representations, which are constructed from the context in which the word occurs. The context is very important in determining how a word should be interpreted.

Therefore, the sensory information and the contextual information must be integrated. 

There are different models that describe how this happens:

The modular or autonomous model states that a normal understanding of language is created by modules that are separate and independent. This means that higher-order representations cannot influence lower-order representations. That is why there is only bottom-up processing;

The interactive model states that all forms of information help to recognize words. The context can already impact the meaning of words without sensory information being available by activating the representations in the mental lexicon;

In the hybrid model, lexical access is autonomous and not influenced by higher-order information, while lexical selection is influenced by both sensory and higher-order contextual information. Sensory information about word forms reduces the number of activated representations in the mental lexicon.

The integration of words into sentences

There are no representations of entire sentences in the brain. Instead, a syntactic structure is given to words that occur in sentences. This is called syntactic parsing.

Semantic processing and the N400 wave

The N400 effect is a wave that can be seen on an electroencephalogram (EEG) that occurs about 400 milliseconds after reading something that seems strange or incorrect. So if it says: "Piet eats a pizza", then there will probably be no N400 effect. However, if it says: "Piet eats a shoe", then there will probably be a wave 400 milliseconds after reading this strange sentence.

Syntactic processing and the P600 wave

The P600 wave is also called the syntactic positive shift (SPS). It occurs about 600 milliseconds after reading sentences that are grammatically strange or incorrect, which is often the case with headlines of newspaper articles.

Neural models for language comprehension

In addition to the discovery of Wernicke and Broca, other brain regions or circuits have also been found to be important in language processing. A recent model is that of Peter Hagoort who divides language into three functional components:

  1. Memory: storing and retrieving information from the mental lexicon or storing words in the long term. The temporal lobe is important here;

  2. Unification: the integration of phonological, semantic and syntactic information into one representation. The STG and the inferior temporal gyrus are important here;

  3. Control: this links language to action (for example when taking turns in a conversation).

The networks of the language system in the left hemisphere

Angela Friederici has developed a model for understanding spoken sentences. In this model, four routes can be distinguished: two ventral routes that connect the posterior temporal lobe with the anterior temporal lobe and the frontal operculum. This is important in understanding the meaning of words. The other two routes are dorsal and connect the posterior temporal lobe with the frontal lobe. One of these dorsal routes connects the premotor cortex with speech planning and the other route connects Broca's area with the superior temporal gyrus and superior temporal sulcus. This last route is important in the processing of syntactic aspects.

The neural models of speech production

Levelt has developed one of the most influential models of speech production. He distinguishes between language comprehension and language production. In language comprehension there is a word for which we must create a concept (understanding), while in language production there is a concept for which suitable words must be found.

The first step in Levelt's model is the preparation of a message. A distinction is made here between macro planning and micro planning. The speaker must have a clear view of what he or she wants to make clear: the intention or purpose of the message can be represented by goals and sub-goals, which is called macro planning. Micro planning concerns how the information is expressed (do you choose to say "the park is next to the house" or "the house is next to the park"?). So micro planning is about word choice and grammatical choices. Next, Levelt says that there is a 'formulator' that casts the message in a grammatically and phonologically correct form. Levelt's model is a modular model.

The tip-of-the-tongue effect can be explained as a gap between syntax and phonology. It is a problem in speech production. Patients with anomia (a naming deficit following brain damage) often experience this phenomenon.

An example of an interactive model is that of Gary Dell. He states that phonological activation occurs after the semantic and syntactic information of the word is activated. This leads to feedback between the phonological activity and the semantic and syntactic activity of a word. Levelt's model does not include such feedback.

The evolution of language

The purpose of language is to influence the behavior of others. This does not always have to happen consciously, as in the case of animals. Animals often use language to sound an alarm. If sounding an alarm is the goal of language, it is important that it is specific and informative so that it becomes clear what needs to be done. In primates, the function of communication seems to be mainly determined by whether it is transmitted vocally or by means of gestures. Vocal communication is often involuntary and expresses emotion, while gestures are mainly used in non-emotional situations, such as play. Tomasello states that the use of gestures in primates is more closely related to human language than primate vocalizations are. As a result, various researchers state that language started with gestures. Brain research also shows that the brain areas for motor control of the mouth and of the hands are strongly related, so the execution of gestures and speaking could be interconnected.

How do we achieve goals and meet needs? - Chapter 12

Patient W.R. was very passionate during his training as a lawyer, but as soon as he graduated his passion disappeared and he became a tennis teacher. His family was very concerned about this sudden change. As time went by, the problems grew bigger and W.R. was no longer able to support his family financially. It was clear that W.R. was intelligent and that he was able to see that he had a problem. However, this did not interest him. He himself stated that he had lost his ego.

Patient W.R. went to a neurologist and told him that in the last year of his studies he had had a severe seizure. A CT scan soon revealed that W.R. had a large tumor in his brain. The tumor was located in the left lateral prefrontal cortex and the right frontal lobe, and it crossed over the corpus callosum. The prognosis was poor because the tumor could not be removed by surgery and little was expected from chemotherapy and radiation. Even though W.R. understood this bad news, he did not show any emotion. W.R. showed no goal-directed behavior; the only purpose he had was to satisfy his immediate needs.

What is cognitive control?

Cognitive control, also called executive function, allows us to use our perceptions, knowledge and goals to make a certain selection of actions and thoughts. Cognitive control processes ensure that we can overcome our automatic thoughts and behavior and renounce habitual responses (habitual behavior). They give us cognitive flexibility and ensure that we can think in a new and creative way. This allows us to make plans about the future and predict problems. For this reason, cognitive control is of great importance for goal-oriented behavior and decision making.

To achieve goal-oriented behavior, we must prepare an action plan. These actions must be flexible and adaptive to changes and events. We need to monitor our actions to stay with our goal and to achieve that goal.

The prefrontal cortex is important in cognitive control. There are two prefrontal control systems, namely a system focused on targeted behavior (consisting of the lateral prefrontal cortex and the frontal pole) and a control system that is important for guiding and monitoring behavior (consisting of the medial frontal cortex).

Which parts of the brain are involved in cognitive control?

Cognitive control requires the integral function of various parts of the brain. In this chapter, the emphasis is placed on the prefrontal cortex, but other parts of the brain also play a role, such as the parietal lobe and the basal ganglia.

Subdivisions of the frontal lobes

The frontal lobes make up about one-third of the human cerebral cortex. The central sulcus separates the frontal lobe from the parietal lobe, and the lateral fissure separates the frontal lobe from the temporal lobe. The frontal lobe is much larger in humans than in other species, especially in terms of the amount of white matter. The cognitive capacities of humans may therefore be due to the higher number of connections in the brain.

The primary motor area is in front of the central sulcus. The secondary motor areas are more anterior and ventral and contain the lateral premotor cortex and the supplementary motor region. The rest of the frontal lobe is the prefrontal cortex (PFC). This consists of four parts: the lateral prefrontal cortex (LPFC), the frontopolar region (FP), the orbitofrontal cortex (OFC) and the medial frontal cortex (MFC). The prefrontal cortex develops late, which means that cognitive control also develops late in development.

Networks that are important for cognitive control

The prefrontal cortex coordinates the processing of different parts of the central nervous system (CNS). This contains a massively connected network that connects motor, perceptual and limbic regions. There are several projections that connect the prefrontal cortex with almost all regions of the parietal and temporal cortex and even with certain regions of the occipital cortex. The largest input comes from the thalamus that connects the prefrontal cortex with subcortical structures, such as the basal ganglia, the cerebellum and the brainstem. The prefrontal cortex also has many connections with the motor areas and the homologous areas in the contralateral hemisphere.

Cognitive control defects

Patients with a frontal lobe lesion, such as the previously discussed patient W.R., present a paradox. On the one hand, they show no evident neurological damage: they have no clear perceptual problems, are able to perform movements, and their speech is fluent and coherent. Their intelligence and memory are also intact.

However, when specific and sensitive tests are used, it becomes clear that frontal lesions can indeed disrupt various aspects of normal cognition and memory. For example, patients may stick to an answer even after being told that this answer is incorrect; this behavior is called perseveration. These patients can also be impulsive or apathetic. They may be unable to make decisions, plan, organize, or understand the consequences of their behavior. Moreover, they can show utilization behavior (an extreme dependence on prototypical responses to direct behavior). These deficits often depend on the location of the lesion. Often the patients know that they have problems.

Goal-directed behavior

Researchers distinguish between two fundamental types of actions. The first consists of goal-directed actions. These are based on the assessment of an expected reward and the knowledge that there is a causal relationship between the action and the reward (action-outcome). Most of the actions that we perform are of this type.

The other type consists of habits. A habit is an action that is no longer under the control of a reward, but is stimulus-driven. We therefore consider such an action automatic. The action is triggered by the context: the presence of certain stimuli triggers the retrieval of well-learned associations, which can lead to habit-driven actions. With goal-oriented behavior we can maintain our goal, focus on information that is important for achieving the goal, ignore irrelevant information, monitor progress towards the goal, and carry out a flexible and coordinated change of sub-goals.

How is the working memory involved in cognitive control?

The working memory is a temporary representation of task-relevant information. This can be information from long-term memory or something from the environment to which the person may have to respond. Working memory brings together perception, long-term memory and action to arrive at goal-directed behavior and decisions.

The prefrontal cortex is necessary for the working memory

The lateral prefrontal cortex supports an important part of working memory: the interaction between stored knowledge and perceptual information. Lesions in this area cause poor performance on tasks that require a delayed response. Performance on associative learning tasks is not impaired, because this involves a different kind of memory: recognition memory, or the long-term association between a cue and a reward. Piaget's object permanence test is often used in humans to examine working memory. Children younger than 1 year cannot perform this test properly, because their frontal lobe is not yet mature. As the child grows older (and the frontal lobe matures), performance on this task improves.

Physiological correlates of working memory

A working memory system requires two conditions:

  1. it must have a mechanism to reach stored information;

  2. there must be a way to keep information active.

The prefrontal cortex can perform both operations. Evidence for this was found through single cell recordings during a delayed response task. Cells from the prefrontal cortex became active during the delay period. The cells, in a way, remember what needs to be done. Miller's research shows that the cells in the prefrontal cortex show task-specific and task-dependent selectivity. Furthermore, research shows that the lateral prefrontal cortex is important for the working memory, because it maintains a representation of the target and cooperates with the inferotemporal cortex to retain relevant information for the target over a delay period.

Processing differences within the prefrontal cortex

Michael Petrides drew up a model of the working memory, in which information stored in the posterior cortex is activated, remembered and retained by the ventrolateral PFC. This is then manipulated with relevant information in more dorsal regions of the PFC, so that a certain goal can be successfully achieved.

The n-back test taxes working memory. A constant stream of stimuli (for example letters) is shown to participants, who are instructed to press a button when the current stimulus matches the one n trials back. In the simplest version (n = 1), participants compare each stimulus with the previous one (1 back). However, there are harder versions: at n = 2, participants must compare each stimulus with the stimulus from two trials back. This requires not only remembering information, but also manipulating it.
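The scoring logic of the task can be sketched in a few lines; the letter stream below is made up for illustration, not actual test material:

```python
# Sketch of n-back scoring: a trial counts as a hit when the current
# stimulus matches the one shown n trials earlier.

def n_back_hits(stimuli, n):
    """Return the indices of trials that match the stimulus n trials back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# 2-back: only index 3 ("B") matches the stimulus two trials earlier.
assert n_back_hits(list("ABBBCA"), 2) == [3]
```

Raising n makes the task harder because more items must be held and continuously reordered in working memory, not because the matching rule itself changes.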

Hierarchical organization of the prefrontal cortex

The frontal pole is probably important for integrating the specific content of mental activity into a general framework. For simple working memory tasks, only the posterior prefrontal areas appear to be involved, or perhaps even only the secondary motor areas. If the task is more complex, the frontal pole also becomes active, and with even more challenging tasks, increasingly anterior regions are recruited.

The function of the prefrontal cortex (PFC) can be divided into three axes:

  1. A ventral-dorsal gradient organized in terms of holding and manipulating. It also reflects the general organizational principles that are observed in the more posterior cortex, such as the ventral and dorsal pathways for the "what" and "how" routes.

  2. An anterior-posterior gradient, where the anterior regions are important for the abstraction representations and the posterior regions for the less abstract representations. In the most posterior part, abstract intentions are translated into movement.

  3. A lateral-medial gradient, which is related to the extent to which the working memory is influenced by information from the environment (more laterally) or information related to personal history and emotional states (more medially). Lateral areas integrate external information that is relevant to current goals, and medial areas ensure that information related to motivation and reward influences targeted behavior.

How does the decision-making process work?

Theories about the decision-making process are normative or descriptive. Normative decision theories define how people should make decisions; however, this is often not how people actually choose. Descriptive decision theories, on the other hand, try to describe what people actually do, not what they should do. According to evolutionary psychologists, we sometimes make strange decisions because they used to increase the chances of reproduction and survival. We are often unaware that many of our decisions result from simple, efficient rules (heuristics) that we have acquired through evolution. The outcomes of these decisions can seem non-rational, yet they are rational when viewed from an evolutionary perspective.

It is indicated that we make our decisions in different ways. For example, decisions can be purposeful or habitual. A similar way of classifying decisions is to subdivide them into action outcome decisions and stimulus-response decisions. An action-outcome decision involves a form of evaluation of the expected outcomes. If we repeat an action and the outcome is always the same, a process becomes a habit, or a stimulus-response decision. Another way of differentiating is the difference between model-based and model-free. Model-based means that we use an internal representation of the world to evaluate various actions. Model-free means that you have input-output mapping, just like with stimulus-response decisions.

Value and decision making

From an economic point of view, decisions are made with the aim of yielding the maximum value. We must then know the value of each option and the probability of this reward. It is important to know how the brain calculates such values and how it processes rewards. There are two types of rewards: primary rewards (which have a direct benefit for survival) and secondary rewards (without intrinsic value, but rewarding through association with other forms of reinforcement, such as money and status).

A reward value is not a simple calculation. Value consists of various components that are both internal and external. There are various factors that contribute to the subjective component of value and that can change.

Brain areas involved in value

The ventromedial PFC (VMPFC) may be important for the value of a short-term stimulus, while the dorsolateral PFC (DLPFC) modulates the primitive value signals, so that long-term considerations are also taken into account. The difference between failed and successful self-control could therefore be the extent to which the DLPFC can modulate the signal from the VMPFC. In general, neurophysiological and neuroimaging studies show that the orbitofrontal cortex (OFC) plays an important role in the representation of value. More lateral regions of the prefrontal cortex are important for modulation control and the actions associated with the value.

In a lab setting, we know the different options and the possible rewards and costs. In real life, however, there is often an option with a known value and multiple options with unknown value. On the basis of this information it must be decided whether someone stays with the first option or continues to explore. Exploration occurs when the so-called decision variable reaches a certain threshold. This decision depends on the current expected benefit, the expected benefit and costs of the new option, and the uncertainty about the benefit of the new option. The hypothesis is that the anterior cingulate cortex (ACC) plays a critical role in these decisions, which is supported by fMRI research. ACC signals can thus be said to exert a type of control by promoting certain behavior.
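A hedged sketch of such a decision variable is given below. The text names the ingredients (current benefit, the new option's benefit and cost, uncertainty) but no formula; the linear combination and the 0.5 uncertainty weight are assumptions for illustration:

```python
# Sketch of an explore/exploit decision variable. The combination rule
# and weights are illustrative assumptions, not from the text.

def should_explore(current_value, new_expected_value, new_cost,
                   uncertainty, threshold=0.0):
    """Explore the new option when the decision variable crosses the threshold."""
    decision_variable = (new_expected_value - new_cost
                         - current_value + 0.5 * uncertainty)
    return decision_variable > threshold

# A clearly better, cheap new option triggers exploration...
assert should_explore(1.0, 2.0, 0.3, 0.2)
# ...while a worse one does not.
assert not should_explore(2.0, 1.0, 0.5, 0.0)
```

Note the uncertainty bonus: an unknown option can be worth trying even when its expected value alone would not justify leaving the current one.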

Dopamine activity and reward

Rewards are of great importance for the behavior of animals. Many subcortical areas represent reward information, for example the dorsal and ventral striatum, hypothalamus, amygdala and lateral habenula. Much research into reward is focused on dopamine. However, reward is the result of the interaction of multiple neurotransmitters. Serotonin, for example, also seems to play an important role.

Two of the primary loci of dopaminergic neurons are the substantia nigra pars compacta (SNc) and the ventral tegmental area (VTA). Dopaminergic neurons from the substantia nigra project to the dorsal striatum, the main input nucleus of the basal ganglia. Dopaminergic neurons that originate in the VTA project through two pathways. The mesolimbic pathway travels through structures that are important for emotional processing, such as the nucleus accumbens, the amygdala, the hippocampus and the anterior cingulate cortex. The mesocortical pathway projects to the neocortex, especially the medial frontal lobe.

Researchers showed that the activation of dopaminergic neurons did not depend on the size of the reward, but rather on the expectation of the reward. Dopaminergic neurons mainly became active when the reward was not expected.

We have to update our values based on the changing environment. The prediction error (PE) is the difference between the expected reward and the actual reward. The firing rate of dopamine neurons is correlated with the prediction error. Positive prediction errors (more reward than expected) lead to an increase in dopamine release; negative prediction errors (less reward than expected) lead to a reduction in dopamine release. Both dopamine responses decrease in strength over repeated trials, which is called extinction. The prediction error is used to obtain valuable information and to learn.

Punishment is not the same as withholding a reward: it is experiencing something negative. Both rewards and punishments are motivational. Dopamine neurons appear to be active in response to reward-predicting cues and not to cues that do not predict reward. The reward-related activity of dopamine neurons may be regulated by the lateral habenula. Both the habenula and dopamine neurons are context dependent.

Berridge states that dopamine release is the result of learning, not its cause. Dopamine neurons would learn to code the informational consequences of a prediction and learn to act on this information. Dopamine activity would indicate the salience of a stimulus or event. A reward would consist of three parts: wanting, learning, and liking. Dopamine would only mediate the "wanting" part. Dopamine activity indicates that we must pay attention to something and how desirable that object is.

Types of dopamine neurons

Neurophysiologists have discovered that there are two types of dopamine neurons:

  1. dopamine neurons that respond to valence: firing is higher for rewarding stimuli and lower for aversive stimuli. These are the neurons that code prediction errors. They are located in the ventromedial substantia nigra and the VTA;

  2. dopamine neurons that respond to an increased chance of any reinforcement, whether reward or punishment. These neurons encode salience, signalling objects that require attention. They are located in the dorsolateral substantia nigra.

How does planning work?

Planning and selecting an action

To achieve a goal you make an action plan to organize and prioritize actions. Three components are essential to implement an action plan:

  1. identifying (sub) goals

  2. mapping the consequences associated with the goals

  3. determining the requirements to complete the sub-goals

There are many ways in which the pursuit of a goal can fail: you have to figure out which approach is efficient, and it is sometimes impossible to ignore irrelevant information. Different areas of the PFC appear to be involved in response selection and rule specification. Because these processes form a hierarchy, people who cannot perform at the lower levels will also fail at the higher levels, whereas people who cannot perform tasks at the higher levels can still succeed at lower levels.

Cognitive control of goal-directed behavior

It is essential that you evaluate and monitor sub-goals. Important in cognitive control is the ability to switch focus between different sub-goals. In addition, we must maintain the current goal, focus on relevant information and ignore irrelevant information.

Retrieval and selection of task relevant information

The prefrontal cortex can be seen as a dynamic filtering mechanism. The connections with the posterior cortex ensure that we can retain task-relevant information that is dependent on long-term knowledge from the posterior cortex. When changing goals, the filtering processes provide salient links to representations associated with the relevant information. The PFC therefore helps in retrieving and selecting information relevant to the task in question.

Task switching

Prefrontal damage makes it difficult to stay focused on the current goal. This can be investigated with a task-switching experiment such as the Wisconsin Card Sorting Test (WCST). Research with this test indeed shows that the PFC is important for coordinating goal-oriented behavior, especially when the goal has to be retrieved from memory.

What is goal-based cognitive control?

Dynamic filtering is a form of goal-based cognitive control (top-down control). There has been a lot of research into how top-down control comes about. This is challenging because the brain areas involved are far apart and yet communicate with each other.

Influence of goal-based control

Goal-based control affects the content of information processing in at least two ways. One way is to accentuate the information to which attention is directed; for example, when we attend to a certain location, our sensitivity for detecting a stimulus at that location increases. In addition, we can selectively sustain our attention through executive functions. When multiple sources of information come from the same location, we tend to enhance task-relevant information and inhibit irrelevant information. The latter is called inhibitory control.

Patients with damage to the prefrontal cortex have poor inhibitory control; for example, they are unable to inhibit task-irrelevant information. Research with TMS suggests that the inferior frontal cortex may be important for inhibiting task-irrelevant information and the dorsal frontal cortex for enhancing task-relevant information. It could also be that TMS disruption of one prefrontal area leads to changes in other prefrontal areas; in that case, TMS over the dorsal frontal cortex would indirectly benefit goal-based selection by increasing reliance on the inferior frontal cortex. These, however, remain hypotheses.


We can consciously block information in long-term memory from entering working memory. By deliberately trying not to think of something, activity in the prefrontal cortex increases while hippocampal activation decreases, so that we are less likely to end up thinking of the undesired content.

Inhibition of action is also a form of cognitive control; you can see it as the opposite of action selection. The neural mechanisms are probably not simply the reverse of those for selection, because that would not be fast enough to stop an unwanted action. This has been investigated with the stop-signal task. In this task, one must press one of two buttons as quickly as possible; occasionally, however, a signal indicates that the response must be withheld. Patients with a frontal lesion are slower at stopping a planned response, but this appears to occur only with damage to the inferior frontal gyrus of the right hemisphere. The subthalamic nucleus (STN) is also important for inhibiting cortical output, due to the strong excitatory signal it sends to the globus pallidus. The STN appears to be connected to the right prefrontal cortex. In Parkinson's disease, deep brain stimulation (DBS) often targets the STN; although this reduces motor symptoms, it can lead to an increase in impulsivity.
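Stop-signal data are commonly analyzed with an independent "horse race" model: the go process and the stop process race, and the response is inhibited only if the stop process finishes first. A minimal sketch of that idea follows; all timing parameters (mean go reaction time, stop-signal reaction time) are illustrative assumptions, not values from the book:

```python
import random

# Minimal sketch of the race model for the stop-signal task.
# go_mean/go_sd: distribution of the go process finishing time (ms).
# ssrt: stop-signal reaction time, the latency of the stop process (ms).
# All numbers are illustrative assumptions.

random.seed(1)

def trial(ssd, go_mean=450, go_sd=60, ssrt=200):
    go_finish = random.gauss(go_mean, go_sd)   # when the go process finishes
    stop_finish = ssd + ssrt                   # stop-signal delay + stop latency
    return go_finish < stop_finish             # True = failed to stop (responded)

# Longer stop-signal delays (SSDs) give the stop process less of a head start,
# so the probability of responding despite the signal rises with SSD.
for ssd in (100, 250, 400):
    p_respond = sum(trial(ssd) for _ in range(10_000)) / 10_000
    print(ssd, round(p_respond, 2))
```

In this framework, a slower stop process (a larger `ssrt`), as seen in patients with right inferior frontal damage, shifts the whole curve toward more failed stops.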

How do we succeed in performing goal-directed behavior?

Norman and Shallice proposed a psychological model of goal-directed behavior. This model views the selection of an action as a competitive process. At its core are schema control units, or representations of responses. These schemas are specific movements or activations of long-term representations that lead to goal-directed behavior. They are activated by perceptual stimuli or by other schemas, and they receive a great deal of input. External input can activate the schema units, but often control processes determine which of the activated units is appropriate. There are two types of selection processes. The first is contention scheduling, which selects schemas for automatic and well-learned actions. This process is fast but passive, and it resolves competition between schemas. The second selection process is the supervisory attentional system (SAS). The SAS is essential for flexible behavior, because it enables us to override automatic behavior. Selection can benefit from the SAS in situations where:

  • planning or decision making is necessary;

  • responses are new or not well learned;

  • the required response competes with a strong, habitual response;

  • error correction is needed;

  • the situation is difficult or dangerous.

The medial frontal cortex as a monitoring system

When different sub-goals have been set, you must monitor whether they are being met (and whether they lead to the goal). The medial frontal cortex plays a role here, in particular the anterior cingulate cortex (ACC). This mainly involves monitoring multiple things at the same time, during a new situation, or with complex tasks. A hierarchical model has been proposed in which the ACC is the executive attention system, but this model is only descriptive: it does not explain how the medial frontal cortex is activated, or by which types of representations. That is why other models have been proposed.

One model starts from the observation that an incorrect response evokes a large electrical response over the prefrontal cortex, called the error-related negativity (ERN). To be precise, the ERN is generated in the anterior cingulate. This signal may be used as a cue to increase cognitive control. The ERN is absent when someone is not aware of the error. The ERN can ensure that the goal is reactivated in working memory. The medial frontal cortex and the ACC are also active when little or no error occurs; the medial frontal response is thought to evaluate conflict. However, the exact functions of the medial frontal cortex and the ACC remain a matter of speculation.

What does social cognitive neuroscience study? - Chapter 13

Patient M.R. has damage to his orbitofrontal cortex due to a coup-contrecoup injury: the brain collides first with the back of the skull and then with the front, which often damages the orbitofrontal cortex against the sharp ridges of the skull. Despite this damage, he performs normally on neuropsychological tests of memory, motor skills and language. However, as soon as you start a conversation, you notice that M.R. has trouble with social behavior. He may tell a complete stranger something very personal, embrace them, or bore his conversation partner with excessive detail. Other problems commonly seen with orbitofrontal damage are reduced inhibition, lower tolerance for frustration, increased aggression, immaturity, apathy and emotional coldness.

Another well-known example of such damage is Phineas Gage. While he was working, an iron rod pierced Gage's head, destroying much of the medial region of his prefrontal cortex. Amazingly, the wound healed quickly and Gage was released from the hospital. It soon became apparent, however, that Gage's personality had changed completely: little was left of the hard-working, social and responsible person. He was still able to function reasonably well in daily life, albeit in a more indecent manner. He eventually died of an epileptic seizure, a consequence of the brain damage caused by his accident.

Social cognitive neuroscience looks at how cognitive processes from the brain make social behavior possible.

Which anatomical substrates are involved in social cognition?

It is difficult to identify the brain areas involved in self-referential processing. This is because there is no clear definition of the "self". Moreover, it cannot be attributed to a single brain area; it comprises different processes drawing on many sources.

Regions of the prefrontal cortex (PFC) are the focus of this chapter. The lateral part of the PFC can be divided into the dorsolateral prefrontal cortex (DLPFC) and the ventrolateral prefrontal cortex (VLPFC). The medial regions are the orbitofrontal cortex (OFC) and the ventromedial prefrontal cortex (VMPFC). According to research, the regions associated with self-referential processes are the DLPFC and VMPFC, the posterior cingulate cortex (PCC), and the medial and lateral parietal cortex. Subjective feelings also contribute to our sense of self and are mediated by all the regions discussed in Chapter 10.

When we try to understand others, different brain regions are activated depending on the task: the amygdala, the superior temporal sulcus (STS), the medial prefrontal cortex and the OFC, the ACC, the fusiform face area (FFA), areas associated with mirror-neuron systems, the temporal poles, the temporoparietal junction (TPJ) and the medial parietal cortex.

Which disorders are associated with limited social cognition?

Autism spectrum disorders (ASD)

Autism spectrum disorders are pervasive developmental disorders associated with impaired social interaction. They include autism, Asperger's syndrome, childhood disintegrative disorder, Rett syndrome and PDD-NOS. People with autism have little interest in others and are mainly internally focused. They feel most comfortable with routine activities and become stressed when the routine is disrupted. They often show repetitive behavior and are sometimes hypersensitive to sensory stimuli. Baron-Cohen states that people with autism turn their attention away from others because they have trouble understanding others' mental states; he called this mind blindness. In addition, they find it difficult to use non-verbal cues to make inferences about another's internal state.

People with autism find it difficult to infer a person's mental state from facial expressions. When children are given photos of facial expressions, typically developing children sort them by emotion, but children with autism often sort them by physical characteristics, such as a big nose or the color of the person's clothes.

What are self-perception and self-knowledge?

Our self-knowledge is developed through self-perception processes to gather information about ourselves. Our sense of self is partly dependent on the difference between our self-knowledge and the knowledge about the qualities, wishes and thoughts of others.

Self-referential processing

How come we remember certain information better than other information? According to Craik and Lockhart, information that is processed in a more meaningful way is remembered better than information processed superficially. They call this the levels-of-processing model of memory. At the end of the 1970s it was discovered that information that can be related to the self is processed most deeply and is remembered best. This is known as the self-reference effect. But why is this so?

There are two possible hypotheses:

  1. we have more knowledge about ourselves and therefore we can process information related to ourselves more deeply;

  2. the self is a special cognitive structure with a unique organization, making processing different from all other cognitive structures.

Research shows that the medial prefrontal cortex (MPFC) is more active in self-referential processing than in processing information about other people we know.

Self-describing personality traits

Another process unique to self-processing has to do with self-describing personality traits. When you decide whether a trait describes you (Are you physically strong?), you use a different source of information than when you decide whether another person possesses the same trait (Is Antonio strong?). People have a uniquely strong memory for traits they have judged in relation to themselves, and they also have a unique way of deciding whether a trait is self-describing.

When we decide whether a certain adjective is self-descriptive, we use self-perceptions that are summaries of our personality traits. However, when we make judgments about other individuals, we often focus on specific situations in which the person would have shown this behavior.

It is possible to have a sense of self in the absence of autobiographical memories, because a separate neural system supports the summary of personality traits used in making self-describing judgments.

Self-reference as the baseline of brain function

When you are at rest and not thinking about anything in particular, blood continues to circulate to the brain, which keeps consuming oxygen. A network of brain regions, including the MPFC, shows metabolic rates that are higher at rest. Why does the brain consume so much of the body's energy when it is not engaged in a specific cognitive task? According to Gusnard and Raichle, the brain continues with a number of psychological processes that they describe as the brain's default network. This network consists of the MPFC, precuneus, TPJ, medial temporal lobe, lateral parietal cortex and posterior cingulate cortex. The researchers suggested that the higher metabolic rate in the medial prefrontal cortex reflects self-referential processing. The default network would thus ensure that we always have an idea of what is going on around us; this is the sentinel hypothesis.

The default network is especially active when our attention is diverted from external stimuli and we are internally focused, as with self-reflective thoughts and assessments of social and emotional content. The default network is not activated when active tasks are being performed; however, during active tasks that involve self-referential judgments, the MPFC deactivates less than during other types of tasks.

Similar regions are active during autobiographical memory tasks, tasks about ourselves in the future, navigating to another location and tasks with personal moral dilemmas. Similar regions are also active in thinking about the ideas and intentions of other people (their mental states).

When we think about ourselves, we usually leave the here and now and take a different perspective, and we tend to immerse ourselves in these simulations rather than in the external environment. Sometimes, however, we must step out of these virtual scenarios to focus on the real world. We are normally always ready to engage in social interactions, but this readiness must be switched off when we have to focus on external objects; this happens through increased activity of the MPFC, TPJ and medial parietal cortex. If we cannot get out of those social thoughts, we project human thoughts and intentions onto inanimate objects or animals.

Self-perception as a motivational process

Judgments about ourselves are often inaccurate. We are often unrealistically positive about ourselves. This extends to our expectations in life; we expect to have more chance than others of positive events, and less chance of negative events.

Which part of the brain is responsible for this? Two studies show that the ACC is responsible for this. The ACC ensures that our attention is focused on positive information about ourselves and that this is distinguished from negative self-relevant information.

Although our judgments are sometimes inaccurate, this does not mean that they deviate completely from reality. Reasonably precise self-perception is necessary for social behavior. Research shows that the orbitofrontal cortex plays an important role here; patients such as M.R. have unrealistically positive self-images.

Research shows that patients with damage to the orbitofrontal cortex exhibit much socially inappropriate behavior. Although they understand social rules, they fail to recognize "in the moment" when they have broken these rules. The orbitofrontal cortex is therefore important for spontaneous, accurate self-perception.

Predicting a future mental state

The ventral MPFC is active when making predictions about mental states in new situations, and preferences for one new situation over another remain stable. This is not the case when the VMPFC is damaged. Research shows that when someone predicts future events, the VMPFC is more activated when fewer short-sighted decisions are made. The researcher Mitchell argues that short-sighted decisions can result from a failure to fully represent the subjective experience of the future self.

What is the theory of mind?

In addition to self-knowledge, it is also important to understand the behavior of others. Our perceptions of other people are formed on the basis of the verbal and non-verbal signals they emit. The accuracy of our estimates is high enough to deal with others effectively, but not so high that we judge the interests of others as accurately as our own. Two important concepts here are empathic accuracy and theory of mind. Empathic accuracy refers to the observer's accuracy in inferring someone else's thoughts and feelings.

Understanding the mental state of another is called theory of mind, or mentalizing. It is important for successful participation in social activities and collaboration. We are interested in others from birth and respond particularly quickly to faces and to social aspects of the environment. It used to be thought that theory of mind begins to develop around the fourth year, but current research suggests that it is probably innate and automatic: even babies of seven months already show some understanding of the mental states of others.


Social-cognitive neuroscientists are interested in how the brain makes inferences about what other people think, how we read their non-verbal communication, and how we understand the relationship between the two. The observer must translate what is observable into a conclusion about what is not observable. Various theories have been proposed. One is simulation theory, also known as the shared-systems theory of experience. It states that when we observe someone's behavior, we internally imitate it and feel a physiological response, which we then attribute to the other person. This process can happen unconsciously and is based on mirror neurons.

Another theory, the "theory theory" or mental-state attribution theory, states that we build a theory about the mental states of others based on what we know about them. This knowledge consists of memories of them, the situation in which they find themselves, their family, their culture, et cetera. Evidence shows that both behavior reading and mind reading are active in theory of mind.

Simulation theory:

The simulation theory suggests that we make inferences about another person's mind based on our ability to simulate with our own mind what may be happening in the mind of the other. It implies an intrinsic relationship between perception of the self and perception of someone else. The MPFC plays a role in both: self-perception is sometimes used to form perceptions of others. An fMRI study found that a ventral sub-region of the MPFC was more active during self-perceptions and perceptions of a person similar to us, while a dorsal sub-region of the MPFC was more active during perceptions of a person not similar to us. The MPFC is important for thinking about ourselves and other people when a similar psychological process underlies both, such as judgments of familiarity, warmth or competence.


Empathy is our capacity to understand and respond to the unique experiences of another person, and it illustrates the strong relationship between self-perception and the perception of others. The first step is taking the other's perspective: we must recreate the internal state of the other within ourselves to try to understand him or her.

The perception-action model of empathy assumes that observing another person's state automatically activates the same mental state in the observer, triggering both somatic and autonomic responses. Through mirror neurons we can take over this mental state from the other person. The mirror-neuron system is connected to the limbic system through the insula. The same brain areas are active during one's own emotional states as when these states are observed in another. In Singer's research, participants' brain activity was measured while they received painful stimulation or saw their romantic partner receive painful stimulation; in both cases the insula and the ACC were active.

Research shows that the dorsal ACC responds to self-related stimuli and is involved in self-reflection and action monitoring. The dACC and vACC specialize in self-specific processing by selecting representations and mental attitudes that fit one's own personality; from these, representations that match our self-concept are formed. The right VMPFC is active during processing of the self; the left VMPFC during processing of a person to whom one is close; and the dorsal MPFC, in the left superior frontal gyrus, during processing of a publicly known other.

We must also monitor our own responses. Decety proposed a model combining stimulus-driven processing of affective sharing with goal-directed processing. The observer's motivation, intentions and self-regulation influence the degree of empathic experience and the likelihood of behavior that benefits others. Activation of the mirror-neuron system appears to be modulated by goal-directed processes that enable flexible responses.

Neural correlates of the mental state attribution

Various studies show that the MPFC plays a role in making inferences about other people's mental states, but not in thinking about other kinds of information about people. Another brain area that probably plays a role in theory of mind is the temporoparietal junction (TPJ) in the right hemisphere, as shown by research using the localizer task (a variation on the Sally-Anne task).

Yet it is not plausible that one specific area is responsible for something as complex as theory of mind. The MPFC and the rTPJ are areas that may play a role, but how should we view that role? Two hypotheses have been proposed.

The first is that the rTPJ is specialized in reasoning about the mental state of another, while the MPFC supports more general reasoning about others (not limited to mental states). A second hypothesis is that the MPFC guides reasoning about social tasks and that the rTPJ is important for controlling attention in both social and non-social tasks.

Evidence for the first hypothesis comes from the research of Saxe and Powell. Participants underwent a brain scan while processing information about a person's physical appearance, internal physiology or mental state. The MPFC was active with the first two types of information; the rTPJ only with information about mental states.

Evidence for the second hypothesis comes from the attentional cuing procedure, in which participants receive valid or invalid cues about where to direct their attention. Many studies find that the rTPJ is active in false-belief tasks, both social and non-social. The discussion about the roles of the rTPJ and MPFC, however, continues.

In real life, non-verbal cues often tell us more about a mental state, because we cannot know directly what another person thinks and the other does not always report it honestly. We know this in part because patients with language disorders are better at detecting lies than control subjects. Sources of non-verbal information include facial expression and posture.

Children develop joint attention at a young age: they follow someone else's gaze to determine what that person is attending to. Eye contact can also reveal someone's mental state; someone who frequently looks away while speaking may be lying. Cells in the superior temporal sulcus (STS) can identify head position and gaze direction, and the STS plays an important role in interpreting someone's gaze in relation to their mental state.


The study of autism provides fascinating information about the important role of theory of mind in navigating the social world. If impaired theory of mind is a central characteristic of autism, then we should see differences between people with autism and controls in the various neural regions involved in social perception. Is this the case?

This indeed seems to be the case. Evidence includes:

  • people with autism have a smaller head circumference, with smaller frontal lobes, STS, amygdala and hippocampus, and show changes in connectivity compared to controls. Regions such as the MPFC, amygdala, FFA, STS, anterior insula and TPJ function abnormally;

  • the rTPJ functions abnormally in autism: it is less selectively responsive to mental states and shows no specialization. The less selective the rTPJ response, the greater the disruption in representing the mental states of others;

  • people with autism pay little attention to the eyes of others, because they fail to notice that gaze direction is important for understanding the social world. Other research points in the same direction: the STS is less active during theory-of-mind tasks. It could also be partly due to the smaller amygdala;

  • the default network shows no change in activation between rest and an active task in people with autism. This could be because people with autism have other kinds of internal thoughts that are not strongly self-related;

  • people with autism are often very good at visual-spatial or other non-social tasks;

  • multiple brain systems seem to function differently in autism; for example, the mirror-neuron system is reduced;

  • people with autism have a reduced ability to imitate. Imitation is important for the development of social-cognitive skills and empathy; its impairment also leads to a failure to integrate movements into action chains, which normally helps us understand the movement intentions of others.

What is social knowledge?

Representations of social knowledge

A difficult aspect of social behavior is that there are few consistent rules. Every social situation involves different standards. Behavior that is appropriate in situation A is inappropriate in situation B. The frontal lobes seem to play an important role in judging when something is appropriate or not.

People with damage to the OFC do not take context into account. They cannot make good use of social knowledge to reason about social interactions. With OFC damage, people are unable to generate the social emotions, such as shame, that normally prevent social mistakes from recurring. Nor can they learn new social knowledge. It thus appears that the OFC is important for learning social knowledge and applying it in social interactions.

The use of social knowledge when making decisions

People with damage to the ventromedial PFC are poor at making social decisions; the damage disrupts value-based decisions even when there is no uncertainty. With OFC damage, one can learn that a stimulus is rewarding, but when it becomes punishing, one continues to choose it: reversal learning does not take place, and one fails to learn from negative experience. With damage to the ventromedial frontal lobe (VMF), one can no longer learn from negative feedback, but can still learn from positive feedback. The VMF is therefore important for reversal learning, extinction, fear conditioning, regret and envy.

A possible role of the VMF is to carry representations of expected rewards, not to guide choices but to compare outcomes against them. When outcomes are unexpectedly negative and fail to meet expectations, the VMF supports avoidance learning. This does not happen directly, but indirectly through the amygdala, which forms new associations enabling flexible behavioral change. The VMPFC therefore plays an important role in evaluating the negative consequences of social decisions, and the orbitofrontal cortex is important for choosing correct behavior by supporting reversal learning.


The field of neuroeconomics integrates psychology, neuroscience and economics to understand how people make decisions. Economic models of decision making suggest that people should decide rationally, maximizing their rewards and minimizing their losses. In reality, however, people often do not decide rationally: they are regularly guided by their emotions. The conclusion is that decision making, whether about social behavior, money or morality, is driven by a combination of emotional and cognitive systems.

Moral decisions

The trolley problem is often used to study this. We make different choices depending on our personal involvement in sacrificing a life, because personal involvement makes the decision emotional. With impersonal decisions we see greater activation of the right lateral PFC and bilateral parietal lobes, which are important for working memory. With personal decisions, regions such as the medial frontal cortex, posterior cingulate gyrus and amygdala are active; these areas are involved in emotional and social-cognitive processes. Differences in moral decisions are therefore related to the extent to which we allow our emotions to influence what we judge to be morally acceptable.

What is the anatomy of consciousness? - Chapter 14

The conscious mind depends primarily on three brain structures:

  • the brainstem, including the hypothalamus;

  • the thalamus;

  • the cerebral cortex.

We are only aware when we are awake (and in dream sleep), but we can be awake without being aware, as in a vegetative state.

How does consciousness work?

Types of consciousness

Neurologist Antonio Damasio distinguishes two types of consciousness: core consciousness and extended consciousness. Core consciousness (or "awareness") is what happens when the consciousness switch is "turned on": the organism is awake, alert and aware of the present moment, without concern for the past or the future. A more complex form is extended consciousness, which enables an organism to have an elaborate sense of self. It places the individual at a point in historical time, where thoughts about the past and the future also play a role, and it includes the autobiographical self of memories and anticipated future experiences.

The brainstem

The brain regions that modulate vigilance and the ability to switch consciousness on lie in the evolutionarily oldest part of the brain: the brainstem. The brainstem nuclei maintain homeostasis in the body and the brain. These nuclei lie mainly in the medulla oblongata, with additional input from the pons. The pons contains the reticular formation and the locus coeruleus (LC). The reticular formation consists of nuclei involved in motor and cardiovascular control, pain modulation and the filtering of irrelevant sensory stimuli. These nuclei influence the cortex through cortical connections and through neurons that form the reticular activating system (RAS). The RAS has extensive connections with the cortex via two paths: a dorsal pathway through the intralaminar nuclei of the thalamus and a ventral pathway through the hypothalamus and basal forebrain. The RAS is involved in arousal, sleep-wake regulation and attention. Arousal is also influenced by the LC, which can prevent sleep by activating the cortex. The brainstem receives information about the state of the organism from the spinal cord.

The thalamus

The neurons that connect the brainstem to the intralaminar nuclei (ILN) of the thalamus play an important role in core consciousness. Bilateral damage to the ILN, or interruption of their connection with the brainstem, leads to loss of core consciousness. The thalamus plays several roles related to consciousness: all sensory input passes through it, and it is important for arousal, for processing information from the RAS, and for sleep. Connection loops from the thalamus to the cortex and back help coordinate activity.

The cerebral cortex

The cerebral cortex maintains vigilance and contributes to selective attention. Extended consciousness begins with contributions from the cerebral cortex that are added to the core awareness of the self. These contributions are records of memories of past activities, emotions and experiences. Damage to the cortex does not lead to loss of consciousness, but to the loss of a specific skill.

What is consciousness?

The basic question regarding consciousness is how a purely physical system can create conscious intelligence. Philosophers take two perspectives on this. Dualism states that mind and brain are two separate phenomena and that conscious experience is non-physical; however, it does not take biological findings into account. Materialism states that both mind and body are physical, and that understanding the physical workings of the body will yield understanding of the mind; it, in turn, does not account for the reality of subjective experience.

Definition of consciousness

Stuart Sutherland defined "consciousness" as follows:

"The having of perceptions, thoughts and feelings; awareness. The term is impossible to define except in terms that are themselves unintelligible (...). Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it evolved."

Steven Pinker stated that the problem of consciousness can be divided into three issues: self-knowledge, access to information and sentience. Access to information is the ability to report on the content of a mental experience without being able to report what that content is made of. Conscious processing provides access for verbal reports, rational thought and conscious decisions; unconscious processing does not. According to Pinker, sentience is the most interesting part of consciousness: it is subjective experience, phenomenal awareness, the first-person viewpoint, what it is like to be or do something. Sentient experiences are called qualia. Explaining sentience is known as the hard problem of consciousness. Cognitive neuroscience can provide knowledge about self-knowledge and access to information, but not about sentience.

Conscious versus unconscious processing and access to information

Most mental processes are unconscious: we are aware only of the content of our mental life, not of what generated it. A hallmark of conscious processing is conjunction: the awareness of a stimulus is bound together with its identity, location, orientation and other characteristics. A hallmark of unconscious processing is disjunction: the characteristics of a stimulus are registered separately from awareness of the stimulus itself, so that even when people are unaware of the stimulus, they can still respond to these characteristics at above-chance levels.


Blindsight refers to the phenomenon in which patients with a lesion in the visual cortex can nevertheless respond to visual stimuli presented in their blind field. These responses happen outside of consciousness: such patients have access to certain information, but do not experience it consciously. This may be due to spared parts of the visual pathways.

Patients with damage to the parietal lobe but a spared visual cortex (neglect) can make perceptual judgments without being aware of doing so. Their failure to gain conscious access to the information needed to compare stimuli cannot be attributed to processing within the secondary visual system, because the geniculostriate pathway is still intact.

The study of blindsight and neglect leads to important insights:

  1. it emphasizes a general characteristic of human cognition: many perceptual and cognitive activities can take place outside of consciousness;

  2. we can process information that we are not aware of;

  3. this characteristic does not necessarily depend on subcortical or secondary systems; unconscious processes probably take place in the cortex.

In subliminal perception, a stimulus is not consciously perceived, yet it subsequently influences choices. The stimulus is processed at a subconscious level, which provides evidence for unconscious processing. Whether evidence for subliminal perception is obtained depends on whether subjective or objective criteria are used: with subjective criteria, priming effects are visible; with objective criteria, no priming effects are seen.

Access awareness

There are three dimensions of cost to access awareness: the space needed for storage and processing, the time needed for processing and retrieval, and the resources needed for processing. According to Pinker, access consciousness has four hallmarks:

  1. it is full of sensations;

  2. we are able to move information in and out of our consciousness and in and out of our short-term memory by focusing on it;

  3. this information always comes with salience, a kind of emotional coloring;

  4. there is an "I" that determines what will be done with the information when it enters the field of consciousness.

We are able to switch from conscious, controlled processing to unconscious, automatic processing. According to Raichle and Petersen's "scaffolding to storage" framework, we first use conscious processing during practice while developing a complex skill: the scaffolding process. The memory is consolidated or the skill learned. Once the task has been learned, brain activity and involvement change: the scaffolding is removed, the support structures are withdrawn and more permanent structures take over. We use different brain networks for conscious and unconscious processing, and after the switch to unconscious processing it is difficult to reinitiate conscious processing. Once we have learned a task well, we can carry it out unconsciously in the future and devote conscious processing to a new task.

The evolutionary purpose of consciousness seems to be to improve the efficiency of unconscious processing. The ability to relegate learned tasks and memories to the subconscious enables us to dedicate our limited conscious capacity to recognizing and adapting to changes and new situations in the environment, which increases our chance of survival.

How are neurons involved in a conscious experience?

Neuroscientists are very interested in how the nervous system gives rise to perceptual experiences. One of the leading researchers in this field is William Newsome. He investigated how neural events in area MT of the monkey cortex, an area actively involved in motion detection, correlate with the actual perceptual event. One of his first findings was that the animal's behavioral ability to discriminate motion could be predicted from the response pattern of a single neuron. In other words, a single neuron in area MT was just as sensitive to changes in the visual field as the monkey itself.

By adjusting the firing rate of the neurons through microstimulation, an animal can be steered toward a particular decision in a perceptual task. The greatest effects are visible during the interval in which the animal is deliberating about the task. Area MT is not thought to be the place where the decision is made, but rather part of a loop for a particular perceptual discrimination. Stimulation at other places in the pathway leads to different subjective perceptual experiences.

Benjamin Libet and colleagues formulated the backward referral hypothesis: awareness of a neural event is delayed by approximately 500 ms after the onset of the stimulating event, and this awareness is then referred back in time to that onset. You therefore think you were aware of the stimulus from its onset, and you do not notice the time difference. This delay is not so large, however, that we simply act without being able to think: we can still correct inappropriate behavior. According to Libet, this ability to detect and correct errors is the basis of free will. The lateral prefrontal cortex is important for such corrective behavior; when it is damaged, behavior can no longer be monitored and integrated over time.

What is the brain interpreter system in people?

We experience ourselves as a unified whole because an interpreter merges all information into a coherent whole. This interpreter is located in the cortex and works largely unconsciously. The system seeks explanations for internal and external events in order to produce appropriate (behavioral) responses.

It is important to realize that the interpreter can only use the information it receives. Interpreting the causes of internal and external events leads to the formation of beliefs: mental constructs that free us from merely producing simple, reflexive responses in daily life. The left hemisphere appears to be the most important for this conscious experience.

Left and right consciousness

Left-hemisphere consciousness reflects what we normally mean by conscious experience, whereas right-hemisphere consciousness varies as a function of the specialized circuits that the right hemisphere possesses. The right hemisphere cannot make inferences and therefore has a limited consciousness. The left hemisphere is constantly labeling experiences, inferring causes and carrying out many other cognitive activities. The left hemisphere is thus busy differentiating the world, while the right hemisphere simply monitors it.

Differences between monkeys and humans with regard to consciousness

Is consciousness found only in humans, or also in monkeys? The prefrontal cortex is much larger in humans than in monkeys. Research by Gallup showed that some chimpanzees are able to recognize themselves in a mirror, which Gallup took as a sign of self-concept and self-awareness. This interpretation is now doubted: self-recognition may only require awareness of the body, not a concept of self. In addition, people with prosopagnosia cannot recognize themselves in a mirror, yet they do have a self-concept. Another approach is imitation: if we can imitate, we can distinguish between our own actions and those of another, which is seen as evidence of self-recognition. It appears that monkeys cannot imitate.

Josep Call and Michael Tomasello investigated the existence of theory of mind in monkeys. They concluded that chimpanzees have only a limited theory of mind: they can understand the perceptions and goals of others, but not their desires or beliefs. Eighteen-month-old children, by contrast, do tend to share beliefs and desires with each other. Researchers argue that this skill lies at the basis of human cooperation.

What is free will?

There is much discussion about whether people truly have free will. It is important to note that free will and responsibility are not the same: even if a mechanistic account of how the mind works were to eliminate free will, responsibility would remain intact, because responsibility is a characteristic of social interactions, not a process in the brain. Mental states arise from stimulus-driven neural activity, which is constrained by goal-directed neural activity. Sometimes the stimulus-driven activity loses out to the goal-directed activity in initiating an action. If this view is right, we live in real time and can affect what our brain generates, thereby changing our later actions. Freedom then consists of having multiple options from which our brain can choose.

Deterministic view

One view of free will is the deterministic viewpoint. Determinism is the philosophical belief that all current and future events and actions, including human cognition, decisions and behavior, are caused by past events in combination with the laws of nature. On this view, every event and action can in principle be predicted in advance when all parameters are known, and the past can likewise be reconstructed by looking at the present. Determinists believe that the entire universe is predictable. Most people reject this idea, because it would mean that people are not personally responsible for their actions.

Chaos theory

Chaos theory, however, demonstrated that a deterministic process need not be predictable. Chaotic systems are extremely sensitive to initial conditions: although future behavior is fully determined by the initial conditions, measurements of those initial conditions cannot be used to predict a future state, because there is always a slight measurement error that grows over time. This extreme sensitivity to initial conditions is called dynamic instability, or chaos.
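
This sensitivity to initial conditions can be illustrated with a short simulation. The logistic map below is a standard textbook example of a chaotic system (it is not taken from this chapter): two trajectories that start one part in a billion apart become completely different within a few dozen iterations, even though every step is fully deterministic.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n); chaotic for r = 4.0
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

# Early on the trajectories are indistinguishable...
print(abs(a[1] - b[1]))    # difference is still tiny
# ...but after a few dozen iterations they have fully diverged
print(abs(a[50] - b[50]))  # difference is of the order of the whole range
```

The process is deterministic (rerunning with the same `x0` always gives the same trajectory), yet any rounding of the initial condition ruins long-term prediction, which is exactly the point made above.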

Quantum theory

Quantum theory was developed to explain phenomena, such as why an electron remains in its orbit, that could not be explained by Newton's laws. It is the study of the smallest parts that make up atoms, aimed at understanding the fundamental properties of matter. The discovery of quantum mechanical effects further fueled the debate over determinism.


A complex system consists of many interrelated parts which, when they organize themselves into a single system, produce one or more properties that are not present in the individual parts. Examples of complex systems are the brain, the climate and human social structures. "Emergence" is the term for the appearance of a new structure (one that did not exist before) with a new level of organization and new characteristics, arising during the self-organization of a complex system. A key to understanding emergence is to see that there are distinct layers of organization.

There are two positions within the emergence debate. The hard, deterministic view holds that there is only "weak emergence": new properties arise from interactions at an elementary level, and an emergent property can be reduced to its individual parts, so one level can be predicted from the level below it. "Strong emergence", on the other hand, holds that new properties cannot be so reduced. On this view, a new characteristic is more than the sum of its parts, and the laws of one level cannot be predicted from a fundamental theory of, or an understanding of the laws of, another organizational level.

Multiple realizability

The phenomenon of multiple realizability demonstrates that there are many ways to implement a system to produce a behavior. The brain is a complex system, as a result of which knowledge about one organizational level does not enable one to predict actual functioning at another level.

Free will and the brain as a layered system

An important finding about free will is that people who believe in free will behave better, whereas people who do not believe in it behave more aggressively and less helpfully towards others. Belief in free will may be necessary for the motivation to control automatic, selfish impulses.

The mind-brain system appears to be a layered system. Actions seem to be our own choice, but the mental state we have is the result of selection from various possible mental states in a complex environment. This selection process is called symmetry breaking: small fluctuations acting on the system determine which outcome arises. The mental state is automatic, deterministic, and driven by many thousands of physical systems.

People strive to make better decisions in order to deal with and adapt to the world in which they live, and this is what our brains do: every brain makes decisions based on, for example, experience and innate biases. Free will is found in the development of more options from which the brain can choose within the context in which we find ourselves. We cannot regard behavior as a single, isolated thing; we must also consider the context. The behavior of one person can, moreover, influence the behavior of another. This is the social layer, with concepts such as following rules and personal responsibility. Just as a mental state can constrain the brain, a social group can constrain the individuals within it.

The law

Note: this section is about the American legal system!

People are a group of interacting brains that form a society with rules by which they live. The laws enacted in most societies serve to enforce human rights and responsibilities, function as a social mediator between people, and rely on institutions that act when these laws are broken. Responsibility lies with the individual, unless there is serious coercion or a serious lack of rationality. Concepts that are important in legislation are responsibility and the motivation for punishment.


The law views the brain in the following way:

  • a practical reasoner works freely in a normal brain and produces action and behavior;

  • personal responsibility is a product of a normally functioning brain;

  • especially in criminal cases, the defendant must have had "mens rea", a truly malicious intent;

  • things like a lesion, bleeding or injury or a neurotransmitter disorder can affect normal function;

  • this may result in reduced brain capacity;

  • so: a person with brain injury has a reduced responsibility.

Nowadays, brain scans are used to explain why someone behaves in a certain way. However, neuroscientists are not convinced that such scans can actually explain our actions. They have the following reasons for this:

  • a brain scan only records something from a certain area. It is untenable to claim that a certain spot on a brain scan leads with 100% certainty to a certain thought or behavior;

  • there are variations in how brains are connected and how they process information;

  • the mind, emotions, and the way we think are constantly changing. What is measured in the brain at a certain moment does not reflect what happened at the time of the crime;

  • brains are sensitive to many factors that can change the scan, such as caffeine, tobacco, alcohol, drugs, fatigue and the menstrual cycle;

  • performance is not constant. People do tasks better or worse per day;

  • images of the brain are susceptible to biased interpretation.

When looking at an abnormal brain, the law makes various erroneous assumptions without relying on science. For example, it assumes that an abnormal brain scan is an indication of abnormal behavior. However, a person with an abnormal brain scan does not automatically show abnormal behavior. Another erroneous assumption is that a person with an abnormal brain who exhibits abnormal behavior is automatically unable to take responsibility. Responsibility is not localized in the brain. An abnormal brain does not mean that a person cannot follow rules. Moreover, an abnormal brain does not mean that a person is more violent.

Found guilty: now what?

The decision of how, and to what end, to punish someone who is found guilty of a certain act lies in the hands of the judge. This is influenced by his or her own beliefs about justice, which comes in three forms: retributive justice, utilitarian justice and restorative justice. Retributive justice aims to punish the individual criminal in proportion to the crime committed: to give the criminal his or her just deserts. It is not concerned with the benefits of punishment to society, only with what the individual deserves for the crime; deterring others, reforming the criminal and compensating the victim play no role.

Utilitarian justice, also called consequentialism, is more forward-looking: it is concerned with the better future for society that may result from punishing the individual offender. There are three types of utilitarian justice. The first deters the offender, for example through prison time or fines. The second incapacitates the offender, for example geographically, so that the perpetrator is no longer on the street. The third is rehabilitation through treatment and education, which attempts to return the offender to normal functioning in society and to minimize the risk of relapse. Utilitarian justice is not always fair and can conflict with the rights of an individual.

Restorative justice, finally, views a crime as an act committed against a person rather than against the state. In this type of justice, the offender remains accountable to the victim. It leaves room for the offender to make things "right", for the victim to be heard and protected, and for the offender to reintegrate into society.

The human tendency toward fairness

People often tend to be fair and to give people what they deserve, rather than to act from a utilitarian perspective of deterring others. Where does this retributive sense of fairness come from?

We appear to be born with a sense of reciprocity and punishment: experiments show that toddlers already have these feelings. Young children already understand intentionality and judge only intentional violations of rules as "naughty". These innate tendencies help explain why, although people say they support utilitarian considerations, in practice they act in a retributive manner.

Different regions of the brain are active when punishment judgments are made by the affected party and when they are made by a third party. Judgments by the affected party activate emotional regions of the brain, whereas third-party judgments activate the right dorsolateral prefrontal cortex (DLPFC).

What should a judge do?

If a judge believes that people are personally responsible for their behavior, either retributive or restorative justice makes sense. If a judge believes that deterrence is effective, that punishment can turn bad behavior into good behavior, or that some people are beyond reform, utilitarian justice makes sense. In sentencing, the focus can lie on:

  • the individual rights of the offender: if the offender had no control over his behavior, he or she should not be punished but, where possible, treated;

  • the rights of the victim;

  • the greater good of society.

Only 5% of the population breaks the law. People are inclined to cooperate, which is necessary for living in large social groups. According to the emotional reactivity hypothesis, those who were successful in controlling their emotional reactivity got to reproduce, while others were banished or killed. However, people also cooperate with others who are not family members, which is harder to explain from an evolutionary point of view.

In the absence of punishment, cooperation cannot sustain itself in the presence of so-called free riders, and it collapses. Free riders are individuals who do not cooperate or contribute, but merely exploit the efforts of others: they incur none of the costs yet reap the benefits. Whether, and under what circumstances, an individual is willing to cooperate can be investigated with the "ultimatum game" from economics. When punishment is an option and the game is played repeatedly, the likelihood of cooperation increases.
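
The basic logic of the ultimatum game can be sketched in a few lines of code. The pot size, the fairness threshold and the function name below are illustrative assumptions, not figures from the chapter: a proposer offers part of a pot, and a fairness-minded responder rejects offers below some threshold, in which case neither player gets anything.

```python
def ultimatum_round(offer, pot=10, fairness_threshold=3):
    """One round: the proposer offers `offer` units out of `pot`.
    A fairness-minded responder rejects offers below the threshold,
    in which case neither player receives anything (the 'punishment').
    Returns (proposer_payoff, responder_payoff)."""
    if offer < fairness_threshold:
        return 0, 0                # rejection: both walk away empty-handed
    return pot - offer, offer      # acceptance: the pot is split as proposed

# A purely self-interested responder should accept any positive offer,
# but rejecting an unfair offer punishes the proposer at a cost to oneself.
print(ultimatum_round(5))  # fair offer accepted -> (5, 5)
print(ultimatum_round(1))  # unfair offer rejected -> (0, 0)
```

The costly rejection is what makes the game informative: the responder gives up a sure gain in order to penalize unfairness, which is exactly the retributive tendency described above.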

If the function of the right DLPFC is disrupted, people accept lower offers while still judging them to be unfair, and self-centered responses to unfair offers increase. The area therefore normally inhibits self-interest and the impact of self-centered tendencies on decision-making. The right DLPFC is thus important in implementing fair behavior.
