Summary of the 5th edition of Sensation and Perception by Wolfe et al.

What is sensation and what is perception? - Chapter 1 (5th edition)

What are sensation and perception?

Sensation refers to the ability to detect a stimulus and, perhaps, to turn that detection into a private experience. Perception is the act of giving meaning to a detected sensation. Example: a sensation is feeling that a hand touches you; perception is understanding that sensation: is it an expression of affection, or a customs officer searching for suspicious objects? Sensation and perception are central: everything we feel, think and do depends on them. There are five scientific methods to study sensation and perception:

  1. Thresholds. For example: what is the softest sound that you can hear (or the loudest you can tolerate without damage)?
  2. Scaling: measuring private experiences. Qualia (singular: quale) is the philosophical term for a personal, conscious experience of a sensation or perception: do you see the same red as I do, or do you taste the same taste?
  3. The signal detection theory: measuring difficult decisions. For example: Is the abnormality on the mammogram really breast cancer or something benign?
  4. Sensory neuroscience. How can our perception of the world depend as much on the activity of our sensory nerves as on the world itself? For example: after biting into a hot pepper, a burning sensation arises in the mouth, yet the temperature in the mouth is no different from before the pepper was eaten.
  5. Neuroimaging: an image of the brain. For example: one eye can be shown a picture of a house and the other eye a picture of a face. The result is binocular rivalry, an interesting effect in which the two images compete for perception: sometimes you see the house, sometimes the face, but never both.

What is psychophysics and how and by whom was it developed?

Gustav Fechner (1801-1887) developed 'psychophysics' and is called by some the father of experimental psychology, a title normally attributed to Wilhelm Wundt (1832-1920). Fechner is best known for relating changes in the physical world to changes in our psychological experiences. His obsession with the relationship between mind and matter placed him in a classic debate between dualism and materialism. Dualism is the view that the mind has an existence separate from the material world of the body. Materialism is the view that matter is the only thing that exists and that everything - including the mind and consciousness - results from interactions between material things. A modern materialist position (held by the majority of scientific psychologists) states that the mind is what the brain does. Fechner himself proposed 'panpsychism': the view that the mind exists as a property of all matter, that is, that all matter has consciousness (applying to humans and animals but also to non-living things). Fechner's goal was to describe the relationship between sensation (mind) and the energy (matter) that gave rise to that sensation. He called his theory and methods 'psychophysics': defining quantitative relationships between physical stimuli and psychological (subjective) experiences.

Fechner was inspired by Ernst Weber (1795-1878). Weber was interested in touch and studied its accuracy by measuring the smallest distance at which two points on the skin are perceived as separate points rather than one. This measure is known as the two-point touch threshold.

Weber discovered that people who lifted a light reference weight in one hand could detect a smaller difference in the weight held in the other hand than when the reference weight was heavy. He called the smallest detectable difference the 'just noticeable difference' (JND) or 'difference threshold': the minimal change in a stimulus that is correctly judged to be different from a reference stimulus. JNDs change in a systematic manner and adhere to constant ratios for a given kind of stimulus; these constants of proportionality are what Fechner called Weber fractions. Fechner also gave Weber's observations a mathematical form, Weber's law: the principle that the JND is a constant fraction of the comparison stimulus. In formula terms, the magnitude of a detectable difference (ΔI) is a constant proportion (K) of the stimulus level (I): ΔI = K × I. In Weber's observations, Fechner found what he was looking for: a way to describe the relationship between mind and matter. He reasoned that the smallest detectable change in a stimulus (ΔI) can be treated as a unit of the mind, because it is the smallest perceptible change, and derived Fechner's law: S = k log R, where S is the psychological sensation, k a constant and R the level of the physical stimulus. The conclusion of this formula is that our psychological experience of the intensity of light, sound, smell, touch or taste increases more slowly than the actual physical stimulus. For its fame and compactness, the formula is sometimes compared with Einstein's E = mc².
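
To make Weber's and Fechner's laws concrete, here is a minimal Python sketch. The Weber fraction K = 0.02 is an illustrative value; the true constant depends on the stimulus and is not taken from this summary.

```python
import math

def weber_jnd(intensity, k):
    """Weber's law: the just noticeable difference (delta I) is a
    constant fraction K of the reference intensity I."""
    return k * intensity

def fechner_sensation(intensity, k=1.0):
    """Fechner's law: sensation S grows with the logarithm of the
    physical stimulus level R (S = k log R)."""
    return k * math.log(intensity)

K = 0.02  # illustrative Weber fraction
for reference in (100, 200, 400):  # e.g., grams of a lifted weight
    print(f"reference {reference} g -> JND {weber_jnd(reference, K):.0f} g")

# Equal stimulus *ratios* produce equal sensation *steps*:
for r in (1, 10, 100, 1000):
    print(f"stimulus {r:>4} -> sensation {fechner_sensation(r):.2f}")
```

Doubling the reference doubles the JND (Weber), while multiplying the stimulus by 10 adds a constant amount of sensation (Fechner): exactly the sense in which experience grows more slowly than the stimulus.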

An absolute threshold is the minimum intensity of a stimulus that can be detected, conventionally defined as the amount of stimulation needed for a person to detect the stimulus 50% of the time.

Psychophysical methods

What is the absolute threshold of hearing, the softest sound you can hear, and how do we measure it? There are three methods. The method of constant stimuli requires creating many stimuli of different intensities to find the lowest detectable intensity. Stimuli - ranging from rarely to almost always perceptible (or from rarely to almost always perceptibly different from a reference stimulus) - are presented one at a time. Participants respond to each presentation with "yes / no" or "same / different". The method is easy to use but can be inefficient, because much of the time is spent on stimuli that are clearly above threshold.

A second, perhaps more efficient method is the method of limits, in which a particular dimension of a stimulus, or the difference between two stimuli, is varied step by step until the participant responds differently. A set of stimuli, for example tones, is presented, starting with the softest sound and becoming gradually louder; the participant indicates when he or she hears something. The series can also run in the reverse order (from loud to soft), and the participant then indicates when he or she no longer hears the sound. The average of such crossover points is taken as the threshold.

A third method is the method of adjustment, which is like the method of limits except that the participant increases or decreases the intensity of the stimulus step by step. This method is the easiest to understand because it corresponds to everyday activities, such as adjusting the volume on a stereo, but it is the least used, because it is difficult to get people to adjust intensity reliably to the same value across individuals and over time. To return to the absolute threshold: there are no truly 'absolute' thresholds. Due to variability in our nervous system, we will sometimes detect stimuli close to the threshold and sometimes not, so we have to make do with a somewhat arbitrary definition of a threshold.
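
The logic of the method of limits is easy to simulate. Below is a minimal sketch (the observer's threshold, noise level and step size are invented for illustration): a noisy observer is probed with descending and ascending series, and the crossover points are averaged.

```python
import random

def observer_hears(intensity, true_threshold=10.0, noise_sd=1.0):
    """A simulated listener: detection succeeds when the stimulus beats a
    noisy internal threshold (the variability of the nervous system)."""
    return intensity > random.gauss(true_threshold, noise_sd)

def method_of_limits(start, step):
    """Change the intensity step by step until the response flips,
    and return that crossover intensity."""
    intensity = start
    first = observer_hears(intensity)
    response = first
    while response == first:
        intensity += step          # step < 0: descending series (loud to soft)
        response = observer_hears(intensity)
    return intensity

random.seed(1)
crossovers = [method_of_limits(20, -1) for _ in range(5)]   # descending
crossovers += [method_of_limits(0, +1) for _ in range(5)]   # ascending
print("estimated threshold:", sum(crossovers) / len(crossovers))
```

The average lands near the simulated observer's true threshold of 10, and the trial-to-trial scatter of the crossovers illustrates why no single 'absolute' threshold exists.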

Scaling methods

Scaling methods can be used to find out more about the magnitude of our experiences. One such method is magnitude estimation: a psychophysical method in which participants assign values to stimuli according to their perceived magnitude. Participants, for example, receive a number of sugar solutions, one sweeter than the other, and must assign numbers to each solution in a way that seems logical to them: sweeter solutions get higher numbers, and if A seems twice as sweet as B, the number for A should be twice as large as the number for B. Stevens developed magnitude estimation. The relationship between stimulus intensity and sensation is described by Stevens' power law: S = aI^b, in which the sensation (S) is related to stimulus intensity (I) by the exponent b. If the exponent is less than 1, the sensation grows less rapidly than the stimulus, which is also what Fechner's and Weber's laws predict. Imagine a room lit by candles and you light ten more: if you started with one candle, the change in brightness is large, but if you started with 10,000 candles you will hardly notice the difference. The exponent for light intensity is about 0.3, and about 0.8 for sweetness. Some exponents are close to or greater than 1: after all, a 12 cm candle looks twice as long as a 6 cm candle. The relationship only holds over a middle range of sizes: adding an inch to the size of a spider changes our sensory perception much more than adding an inch to the size of a giraffe. In the case of electric shocks, pain grows with I^3.5, so a four-fold increase in the electric current is experienced as a 128-fold increase in pain (see the sketch after this list). For comparison:

  • Weber's law concerns an objective measurement: we know how much the stimulus has changed, and participants either can or cannot correctly report that it has changed.
  • Fechner's law starts from the same kind of objective measurement as Weber's, but Fechner's law is actually a calculation based on assumptions about how sensation works. It assumes that all JNDs are perceptually equal to each other. This assumption is incorrect, and there are therefore situations in which the 'law' fails.
  • Stevens' power law describes magnitude-estimation data quite well, but these data are qualitatively different from the data underlying Weber's law. We can record participants' ratings and check whether they are reasonable and consistent, but we cannot determine whether they are objectively correct or incorrect.
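
The exponents quoted above can be plugged directly into Stevens' power law. The sketch below (with a = 1 for simplicity) asks how a four-fold stimulus increase feels for each modality mentioned in the text:

```python
def stevens(intensity, b, a=1.0):
    """Stevens' power law: S = a * I**b."""
    return a * intensity ** b

# Exponents mentioned in the text.
exponents = {"brightness": 0.3, "sweetness": 0.8, "electric shock": 3.5}

for name, b in exponents.items():
    ratio = stevens(4, b) / stevens(1, b)  # quadruple the stimulus
    print(f"{name:14s} (b = {b}): x4 stimulus feels like x{ratio:.1f}")
```

Brightness (b = 0.3) grows to only about 1.5 times as intense, sweetness (b = 0.8) to about 3 times, and shock-induced pain (b = 3.5) to 4^3.5 = 128 times: exactly the compressive and expansive behavior described above.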

A variant of the scaling method shows that different individuals live in different sensory worlds even when they are exposed to the same stimuli. This method is cross-modality matching: matching the intensities of sensations that come from different sensory modalities. This skill provides insight into sensory differences. For example, a listener can adjust the brightness of a light until it matches the loudness of a tone. With taste, however, this breaks down. A molecule abbreviated PROP is experienced by some people as intensely bitter, by others as completely tasteless, and by still others as somewhere in between. When people are asked to match the taste of PROP to sensations unrelated to taste, the agreement found between sound and light does not appear. People who taste little or nothing of PROP match it to weak sensations, such as the sound of a whisper. Supertasters, in contrast, are individuals whose taste sensations are the most intense; they match the bitterness to the brightness of the sun or to the worst pain they have ever experienced.

Signal detection theory (SDT)

Signal detection theory (SDT) states that the stimulus you are trying to detect (the signal) must always be detected in the presence of noise. A distinction is made between internal noise (for example, the gray pattern with occasional lighter spots that you still see when you close your eyes in a dark room) and external noise (for example, noise in a mammogram can resemble cancer). Sometimes it is easy to distinguish signal from noise; the theory helps us understand what happens when we make decisions in uncertain situations. The measurements obtained from a series of presentations are the participant's sensitivity (d') and criterion. The criterion is an internal threshold set by the person: if the internal response is above this criterion, the person reports that a signal is present; if it is below, the person reports that it is absent. There are two normal distributions: one for the noise alone and one for the signal plus noise, with a vertical dotted line drawn at the criterion. There are four possible outcomes:

  1. Correct rejection: there is no signal and you say you have not noticed a signal.
  2. Hit: there is a signal and you say you have noticed a signal.
  3. False alarm: there is no signal, but you say you have noticed a signal.
  4. Miss: there is a signal, but you say you have not noticed a signal.

If the two normal distributions overlap completely, it is very difficult or impossible to distinguish the signal from the noise. The further apart the peaks of the distributions are, the greater the sensitivity. The distance between the two peaks is called the sensitivity or d-prime (d'): a value that represents how easily a person can distinguish the presence from the absence of a stimulus, or distinguish stimulus 1 from stimulus 2. If sensitivity is fixed, you can only change the pattern of errors by shifting your criterion. Criterion all the way to the left: if you do not want to miss a single signal, you will make false alarms more often. All the way to the right: you will make no false alarms, but more misses. The receiver operating characteristic (ROC) curve is the graphical plot of the hit rate as a function of the false-alarm rate. If these are always equal, all points fall on the diagonal, indicating that the person cannot tell the difference between the presence and absence of the signal. As the person's sensitivity increases, the curve bows away from the diagonal: it rises steeply from the bottom-left corner toward the top-left corner and then bends over to the top-right corner, indicating that signal and noise can be distinguished well.
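
Given hit and false-alarm rates, d' and the criterion can be computed with the standard SDT formulas d' = z(H) - z(FA) and c = -(z(H) + z(FA)) / 2, where z is the inverse of the standard normal cumulative distribution. A minimal sketch using only the Python standard library (the example rates are invented):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def d_prime(hit_rate, fa_rate):
    """Sensitivity: distance between the noise and signal+noise peaks."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Criterion c: 0 is neutral, negative is liberal ('yes'-prone),
    positive is conservative."""
    return -(z(hit_rate) + z(fa_rate)) / 2

# An observer who hits 84% of signals with 16% false alarms:
print(d_prime(0.84, 0.16), criterion(0.84, 0.16))   # ~2.0, ~0.0 (unbiased)

# Same sensitivity, criterion shifted far to the left (miss almost nothing,
# at the price of many false alarms):
print(d_prime(0.98, 0.50), criterion(0.98, 0.50))   # ~2.0, ~-1.0
```

Sweeping the criterion from left to right while holding d' fixed traces out exactly one bowed ROC curve; the larger d', the closer the bow approaches the top-left corner.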

Fourier analysis

Joseph Fourier (1768-1830) developed analyses whereby complex sounds such as music and speech, complex head movements, and complex images such as objects and scenes can be broken down into a set of simpler signals that are easier to describe. One of the simplest kinds of sound is a sine wave, in hearing also called a pure tone: a wave whose variation as a function of time is a sine function; the air pressure changes continuously at a single frequency. In vision, a sine wave is a pattern in which a property such as color or brightness varies as a sine function over space. The time (or space) required to complete one cycle of a repeating wave is called the period. The distance required for one full cycle of oscillation of a sine wave is the wavelength. The height of the wave is its amplitude. The phase of a wave is its position relative to a fixed marker and is measured in degrees. Pure sine waves are rare, because few vibrations in the world are that pure. However, Fourier analysis is a mathematical procedure by which any complex signal can be broken down into sine waves of different frequencies; the complex signal is the sum of these sine waves. Sounds are described as changes in air pressure over time; images can be described as changes in light and dark over space. Images are broken into components that tell how often changes from light to dark occur within a specific region of space; this is called spatial frequency and is often specified in cycles per degree: the number of pairs of light and dark vertical stripes ('bars') per degree of visual angle. The greater the amplitude of a component, the greater the contrast between its light and dark stripes.
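
A short numerical illustration of Fourier analysis (Python with NumPy; the signal is invented): build a 'complex' waveform by summing two pure tones, then recover their frequencies and amplitudes from the spectrum.

```python
import numpy as np

rate = 1000                      # samples per second
t = np.arange(rate) / rate       # one second of time
# A complex signal: a strong 5 Hz sine wave plus a weaker 40 Hz sine wave.
signal = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(signal)                  # Fourier analysis
freqs = np.fft.rfftfreq(len(signal), d=1/rate)
amplitudes = 2 * np.abs(spectrum) / len(signal)

for i in np.argsort(amplitudes)[-2:]:           # two strongest components
    print(f"{freqs[i]:.0f} Hz, amplitude {amplitudes[i]:.2f}")
```

The analysis reports exactly the two sine waves that were summed (40 Hz at amplitude 0.5, 5 Hz at amplitude 1.0): the complex signal is nothing but the sum of its sinusoidal components.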

What is meant by sensory neuroscience and the biology of perception?

An important assumption regarding nerves and brains is that studying the nerves of animals tells us something about human nerves: there is a kind of continuity. The strongest argument for continuity between animals and humans came from Darwin's theory of evolution. Around the same time, the physiologist Johannes Müller (1801-1858) wrote his Handbook of Physiology, in which he formulated the doctrine of specific nerve energies. This doctrine states two things: (1) the nature of a sensation depends not on how sensory fibers are stimulated but on which fibers are stimulated, and (2) we are aware only of the activity in our nerves and cannot be directly aware of the world itself. There are twelve pairs of nerves (one nerve from each half of the body) that originate in the brainstem and reach sensory organs and motor systems (muscles) through openings in the skull. These pairs are called cranial nerves. Three of the cranial nerves - olfactory, optic and auditory - are dedicated exclusively to sensory information. The olfactory nerves are the first pair of cranial nerves. Their axons form bundles after passing through the cribriform plate, together making up the olfactory nerve, which conducts impulses from the olfactory epithelium in the nose to the olfactory bulb. The optic nerves are the second pair of cranial nerves and arise from the retina; they carry visual information to the thalamus and other parts of the brain. The auditory nerves are the eighth pair and connect the inner ear with the brain, carrying information involved in hearing and spatial orientation. The auditory nerve is also called the vestibulocochlear nerve, because it also serves vestibular (balance) purposes: it is made up of the auditory (cochlear) nerve and the vestibular nerve. Three other nerve pairs - oculomotor, trochlear and abducens - are dedicated to the muscles that move the eyes. The oculomotor nerves are the third pair of cranial nerves and supply all the extrinsic muscles of the eye except the lateral rectus and the superior oblique muscles. They also supply the levator muscle of the upper eyelid, the ciliary muscle of the ciliary body, and the sphincter muscle of the pupil. The fourth pair, the trochlear nerves, supplies the superior oblique muscles of the eyeball, and the sixth pair, the abducens nerves, does this for the lateral rectus muscle. The other six cranial nerves are either exclusively motor or carry both sensory and motor signals; these are discussed later. Müller's doctrine does not apply only to cranial nerves, but also to the warm and cold fibers that sense the temperature of your skin. These fibers can be activated both by the actual outside temperature and by chemical substances. Menthol, for example, makes cold fibers fire, leaving your skin feeling colder without any change in temperature. The substance in hot peppers (capsaicin) does the same with warmth fibers. Paradoxically, capsaicin and menthol are used in pain ointments, while too much of these substances actually makes pain receptors in the skin fire.

Four different types of sensory information reach the cortex in different places. Visual perception uses both cortex extending dorsally into the parietal lobe and cortex extending ventrally into the temporal lobe. Hermann Ludwig Ferdinand von Helmholtz (1821-1894) was one of the most important scientists ever; he made many important discoveries and was strongly influenced by Müller. What Helmholtz did not like about Müller, however, was his vitalism: the idea that there is a force in living things that is distinct from physical entities. Helmholtz maintained that everything could be explained by physical forces. He demonstrated that the activity of neurons obeys the rules of physics and chemistry, and he was the first to effectively measure the speed at which neurons transmit their signals.

Neural connections

Santiago Ramón y Cajal (1852-1934) conducted important research into the organization of neurons. He established that neurons do not touch each other: there is a gap between the cells. Sir Charles Sherrington (1857-1952) called that gap a synapse: the junction between neurons that permits the transfer of information. Sherrington also found that the speed of neural transmission decreased as the signal passed through more and more synapses. For a long time it was thought that the transfer of information from one neuron to the next (arriving at the dendrites) occurred via an electrical wave, but Otto Loewi (1873-1961) was convinced this could not be the case: some neurons increase the activity of the next neuron (excitatory) and some decrease it (inhibitory). Loewi introduced the idea that transmission must be chemical. These chemical molecules were called neurotransmitters. There are many different types, and neurons are selective about which neurotransmitters excite or inhibit them.

Neural firing: The action potential

Sir Alan Hodgkin (1914-1998) and Sir Andrew Huxley (1917-2012) conducted experiments in which they isolated neurons to test how an impulse spreads along the axon. They discovered that the firing of a neuron is electrochemical. Changes in the neuron's membrane allow sodium ions (Na+) to flow in, which raises the voltage across the membrane; this sudden rise in voltage is the action potential, which is why the action potential can also be viewed as the 'firing' of a neuron. Potassium ions (K+) then flow out of the cell, which lowers the voltage again. This process repeats along the entire length of the axon until the signal reaches the end.

Neuroimaging

Methods for creating images of the structure and/or function of the brain are collectively called neuroimaging.

Electroencephalography (EEG) is a technique in which dozens of electrodes are placed on the head. The electrodes measure the electrical activity of populations of neurons. EEG reveals with great temporal accuracy when groups of neurons are active, but it is much less precise about where they are. The response measured at one electrode to a single stimulus is small and variable, even if you repeatedly present the same stimulus (such as a flash of light). However, if you average the signals over many presentations, a pattern of electrically positive (P) and negative (N) waves emerges. The resulting average wave is an event-related potential (ERP): a measurement of the electrical activity of a subpopulation of neurons in response to the presentation of specific stimuli, obtained by averaging many EEG measurements.
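
The averaging step is the whole trick behind the ERP. A minimal simulation (NumPy; the waveform shape, noise level and trial count are invented): a fixed response buried in noise becomes visible once enough trials are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300

# A fixed event-related response: one positive wave peaking at sample 150.
erp_true = np.exp(-((np.arange(n_samples) - 150) ** 2) / (2 * 20 ** 2))
# Each trial is the same response plus large-amplitude EEG 'noise'.
trials = erp_true + rng.normal(0.0, 5.0, size=(n_trials, n_samples))

average = trials.mean(axis=0)
print("true peak:", erp_true.max())                    # 1.0
print("single-trial value at peak:", trials[0, 150])   # dominated by noise
print("average at peak:", average[150])                # close to 1.0
```

Averaging n trials shrinks the noise by a factor of sqrt(n) (here about 14), which is why the P and N waves only emerge after many presentations.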

An EEG-related method is magnetoencephalography (MEG), which measures changes in magnetic activity across populations of neurons. It has an advantage and a disadvantage compared with EEG. The advantage is that MEG gives a better idea of which groups of neurons are most active, because it uses extremely sensitive sensors to measure the smallest changes in the magnetic field. The disadvantage is that this equipment is very expensive and rarer than EEG equipment.

Computed tomography (CT) is an imaging technique that uses X-rays to produce images of slices through, in this case, the brain. The images are horizontal slices of the brain: dark areas are the fluid-filled ventricles, white areas are bone, and gray areas are brain tissue.

Magnetic resonance imaging (MRI) is an imaging technique that uses the response of atoms to strong magnetic fields to form images of brain structures; the atoms respond to changes in the magnetic field. The result is, for example, a vertical slice through the brain. In functional magnetic resonance imaging (fMRI), the key idea is that active brain tissue is hungry brain tissue: it uses oxygen and other supplies delivered by the blood. The result is a blood oxygen level-dependent (BOLD) signal that can be measured. BOLD reflects the ratio of oxygen-rich to oxygen-poor hemoglobin and makes it possible to locate the neurons most involved in a task. The image is a horizontal slice of the brain showing the BOLD signal: areas in warm colors (red/orange/yellow) are areas where the BOLD signal is increased by the presence of a visual stimulus; blue areas show a decrease in BOLD activity.

Positron emission tomography (PET) is an imaging technique that shows where in the brain neurons are most active. PET measures brain metabolism by injecting a safe amount of a radioactive isotope (the 'tracer') into the bloodstream. The idea is the same as with fMRI: detect activity in neurons by looking for increased metabolic activity. A commonly used tracer is an unstable form of oxygen, 15O, which remains radioactive for about two minutes. An advantage is that PET is a silent technique, which is useful in studies of brain activity related to hearing. A disadvantage is that the use of radioactive tracers makes it an invasive and demanding technique.

What three questions are important to keep in mind regarding development over the life span?

Recall that no sensory system has fixed properties over the course of the life span. Development, in other words, is not a method (like PET or scaling); rather, it is a way of thinking about sensation and perception. Development will come up repeatedly in the following chapters. When reading about it, it is worth keeping the following three big questions in mind:

  1. What comes with the system?
  2. What has to be learned?
  3. What changes with age?

How can our eyes perceive light? - Chapter 2 (5th edition)

How can light be described from the perspective of physics?

Light is a form of electromagnetic radiation: energy produced by vibrations of electrically charged material. You can conceptualize light in two ways. First, light can be viewed as a wave: an oscillation that travels through a medium by transferring energy from one particle to the next without causing any permanent displacement of the medium. Second, light can be viewed as a stream of photons: quanta of light (or of another form of electromagnetic radiation) that exhibit properties of both particles and waves. A photon is a tiny particle of light that carries a fixed amount of energy. Visible wavelengths lie between about 400 and 700 nanometers. What happens to the light of a star? Electromagnetic radiation travels in a straight line at the speed of light until it reaches the atmosphere, where some photons are absorbed by dust and water, among other things. To absorb is to take up light, sound or energy without transmitting it. Other photons are scattered by these particles: dispersed in an irregular fashion. Most photons, however, pass through the atmosphere and reach a surface or object. When light hits a light-colored surface it is reflected: redirected by the surface (this applies especially to light, sound or heat), usually back toward its point of origin. If it hits a dark surface, it is absorbed. If light is neither reflected nor absorbed, it is transmitted by the surface: conveyed from one place to the next. When we stand behind a window and look at a star, some rays of light will refract: (1) the direction of an energy wave is altered as it passes into another medium, in this case as light passes through glass; (2) 'to refract' also means to measure the degree of refraction in a lens or eye.
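
As an aside on 'a photon carries a fixed amount of energy': that energy follows Planck's relation E = hc/λ (standard physics, not spelled out in this summary). A short sketch for the two ends of the visible spectrum:

```python
H = 6.626e-34   # Planck's constant, joule-seconds
C = 2.998e8     # speed of light, meters per second

for nm in (400, 700):                  # limits of the visible range
    energy_joules = H * C / (nm * 1e-9)
    print(f"{nm} nm photon: {energy_joules:.2e} J")
```

A 400 nm (violet) photon carries about 5.0e-19 J, roughly 1.75 times the energy of a 700 nm (red) photon: the shorter the wavelength, the more energy per photon.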

How do eyes capture light?

The first eye tissue that the light from the star encounters is the cornea: the transparent 'window' of the eyeball. The cornea lets light through without interruption: the light is not absorbed or reflected, but transmitted. Here the light is bent (refracted) for the first time and also most strongly; the cornea contributes about two thirds of the focusing power of the eye. It is transparent because it contains highly ordered fibers and no blood vessels that could absorb light. It does contain many sensory nerve endings, which make the eyes close or produce tears if the cornea is damaged; such damage often heals within 24 hours. The space immediately behind the cornea, the anterior chamber, is filled with aqueous humor, a fluid derived from blood. It supplies oxygen and nutrients to the cornea and lens and removes waste from both. Here the light is bent slightly again. The lens, which is also transparent and is controlled by the ciliary muscle, allows you to change focus; light is refracted in the lens as well. To reach the lens, light must pass through the pupil: the dark round opening in the center of the iris through which light enters the eye. The iris is the colored part of the eye and regulates the amount of light admitted by enlarging or shrinking the pupil. After passing through the pupil and the lens, the light is refracted for the fourth and last time by the vitreous humor, a transparent substance that fills the posterior chamber, about 80% of the eyeball. The light then lands on the retina: a light-sensitive membrane at the back of the eye that contains rods and cones, which receive the image formed by the lens and send it to the brain via the optic nerve. Not all the light from the star reaches the retina: as mentioned earlier, not all of it passes through the atmosphere, and much is lost in the eyeball; about 50% survives.

Focusing light onto the retina

The power with which the aqueous humor and vitreous humor bend light rays is much smaller than that of the cornea. Because their power is so low, they cannot bring very close objects into focus. The lens can: it changes its bending power by changing its shape, a process called accommodation: the change of focus. The lens thickens, for example, when the gaze is directed at a nearby object. Accommodation is achieved by contractions of the ciliary muscle; the lens and ciliary muscle are connected by fibers called 'zonules'. At rest the lens is fairly flat, the zonules are stretched and under tension, and you see distant objects. When focusing on a nearby object, the zonules relax so that the lens can bulge and thicken. Presbyopia literally means 'old vision' and inevitably develops in people between 40 and 50 years old: because accommodation becomes insufficient, people become less able to focus on nearby objects. The cause is that the lens hardens and the structure that lets the lens change shape loses elasticity. The lens is normally transparent; anything that makes it opaque is called an opacity. Opacities of the lens are called cataracts and are caused by irregularities in the crystalline structure of the lens. Cataracts can start at any age and come in different types; they become especially noticeable after age 50, and by age 70 everyone has lost some transparency. For the light of a distant star to reach the retina in focus, the bending power of the four optical components of the eye (cornea, aqueous humor, lens, vitreous humor) must match the length of the eyeball perfectly. This perfect match is called emmetropia. If the eyeball is too long, the image of the star is focused just in front of the retina; this is called myopia. It can be corrected with negative lenses that spread the light rays slightly before they enter the eye, so that they converge precisely on the retina. The image of the star can also be focused just behind the retina; this is called hyperopia. If it is mild, it can be compensated by accommodation; otherwise it is corrected with positive lenses that bring the light rays closer together before they enter the eye. The average length of an adult eyeball is 24 millimeters. An eyeball that is slightly longer or shorter can still be emmetropic, because the eye's optical power generally grows to match its length.

When the cornea is not spherical, astigmatism results: a visual defect caused by the uneven curvature of the cornea. Vertical lines may be focused just in front of the retina and horizontal lines just behind it, or vice versa, so that some lines appear blurred and others sharp. Lenses with two focal points can correct this.

The retina

The process of seeing starts in the retina, where light energy is converted into neural energy that the brain can interpret. Converting one type of energy into another is called 'transduction': light energy from a star, for example, is transduced into neural energy.

What the doctor saw

The fundus is the rear surface of the retina, which ophthalmologists can inspect with an ophthalmoscope. It shows a white circle, the optic disc, where the blood vessels that feed the retina enter and the axons of ganglion cells leave the eye through the optic nerve. The fundus is the only place in the body where blood vessels can be seen directly, so it is an important window for doctors onto the state of your vascular system. Even with an ophthalmoscope, however, you do not get a detailed view of the retina; for that, photomicrography is required. It shows that the retina is layered: several layers of transparent neurons atop a layer of dark cells (the 'pigment epithelium'). Together these neurons begin the process of interpreting the information in visual images. The conversion of light energy into neural energy starts at the very back of the retina, in the layer of cells called photoreceptors. Before light reaches the photoreceptors, it must pass through the ganglion, bipolar, horizontal and amacrine cells. Most of these are transparent, but the cells of the pigment epithelium, which provide important nutrients to the photoreceptors, are opaque.

Retinal geography and function

There are approximately 100 million photoreceptors in the retina. They receive light and produce chemical signals. There are two types: cones and rods. Cones specialize in daylight vision, fine visual acuity and color. Rods specialize in night vision. Because the retina contains both types, it is a duplex retina, with two systems that operate under different conditions. The cones are concentrated in the center of the retina (the fovea), and their concentration declines with retinal eccentricity, the distance from the center of the retina. Fine detail is therefore seen with the fovea.

How can the visual system adapt itself to changes in lighting?

There are four ways in which the visual system adapts to changes in lighting: pupil size, photopigment regeneration, the duplex retina and the neural circuitry of the retina.

Pupil size

In bright light your pupil shrinks to approximately 2 mm; when you move from bright sunlight into a dark room, it dilates to about 8 mm. Because the amount of light entering the eye is proportional to the area of the pupil, a four-fold increase in diameter equals a sixteen-fold improvement in sensitivity: 16 times more light can enter the eye when your pupil is fully dilated. In this way your visual system adapts to dark and light.
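
The sixteen-fold figure is just the ratio of pupil areas, since the light admitted scales with area rather than diameter:

```python
import math

def pupil_area(diameter_mm):
    """Area of the (roughly circular) pupil: pi * r**2."""
    return math.pi * (diameter_mm / 2) ** 2

bright, dark = 2.0, 8.0   # pupil diameters (mm) from the text
print(pupil_area(dark) / pupil_area(bright))   # (8/2)**2 = 16.0
```

Sixteen-fold is useful but modest: everyday lighting levels span many orders of magnitude, which is why the other three mechanisms are needed as well.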

Photopigment regeneration

The second mechanism has to do with the way photopigments are used and replaced in the receptor cells. The photopigment in rods responds best in dim light. Once a photopigment molecule has been bleached (used to detect a photon), it must be regenerated before it can be used again. When the scene suddenly becomes very bright, photopigments cannot regenerate fast enough to detect every photon striking the photoreceptors. This slow regeneration is good for managing our sensitivity: if photons are rare, we use them all to see; if there are too many, we throw some away and use what is left.

The duplex retina

The compensation mechanism just discussed is assisted by the duplex retina. Cones are much less sensitive than dark-adapted rods, but their operating range is much larger, up to as many as thousands of photons per second. We use rods when there is little light and cones when there is plenty. Cones also regenerate their photopigment faster.

Neural circuitry

The main reason that we are not disturbed by all the variation in light levels has to do with the neural circuitry of the retina. Ganglion cells fire maximally when the center of their receptive field is strongly lit and the surround is dark (or vice versa). They still fire more than chance would predict when light falls on the entire receptive field, as long as the light is brighter in the ON part of the field than in the OFF part. Ganglion cells thus respond to the contrast between adjacent retinal regions and do their best to ignore variation in the overall light level.

Age-related macular degeneration (AMD) is a disease associated with aging that affects the macula, the central part of the retina with a high concentration of cones. Reading and recognizing faces become difficult because a gray round spot appears in the middle of the visual field. There are wet and dry forms of AMD.

Retinitis pigmentosa (RP) is a progressive degeneration of the retina that affects night vision and peripheral vision. It is a hereditary disease, caused by defects in a number of recently identified genes, and is characterized by the progressive death of photoreceptors and degeneration of the pigment epithelium. The result is blindness.

How does the retina process information?

The retina consists of five major classes of neurons: (1) photoreceptors; (2) horizontal cells; (3) bipolar cells; (4) amacrine cells; and (5) ganglion cells. Here we take a closer look at the functions of each of these cell types.

Light transduction by rod and cone photoreceptors

Both types of photoreceptor consist of an outer segment (adjacent to the pigment epithelium and containing the photopigment molecules), an inner segment (located between the outer segment and the cell nucleus), and a synaptic terminal, where information is converted into the release of a chemical messenger. The visual pigments are made in the inner segment and stored in the outer segment. Visual pigment molecules consist of (1) a protein, which determines which wavelengths of light they absorb, and (2) a chromophore, which captures light photons. Each photoreceptor has one of four types of visual pigment. The pigment rhodopsin is found only in rods. Each cone has one of the other three pigments, which respond to long, medium and short wavelengths. Recent research suggests that there may be a third type of photoreceptor that synchronizes our biological rhythm with day and night. These cells are sensitive to ambient light, contain the photopigment melanopsin, and send signals to the suprachiasmatic nucleus (SCN): the center of the brain's biological clock. When light from a star reaches the outer segment of a rod and is absorbed by a rhodopsin molecule, the energy is transferred to the chromophore part of the visual pigment molecule. This process is called photoactivation: activation by light. It initiates a cascade of biochemical events that closes the channels in the cell membrane that normally allow ions to flow into the outer segment of the rod. The inside of the cell becomes more and more negatively charged: hyperpolarization. The resulting decrease in calcium concentration reduces the release of glutamate molecules, and this change signals to the bipolar cell that the rod has captured a photon. Cones work the same way. The information is passed to bipolar cells via 'graded potentials': electrical potentials that can vary continuously in amplitude.

As mentioned earlier, rods serve mainly night vision and therefore function relatively well in dim (scotopic) light; cones need more light (photopic conditions). Because the fovea contains almost only cones, under scotopic conditions you have a central blind spot of approximately 1 degree. There is another important functional difference: all rods contain the same photopigment, so they cannot signal differences in color. Cones each contain one of three different photopigments that differ in the wavelengths they absorb most efficiently, so cones can transmit information about wavelength. This makes them the basis of color vision.

The three cone photopigments are not distributed evenly. Cones sensitive to short wavelengths (S-cones, S for short) make up approximately 5-10% of the total cone population, and on average there are twice as many L-cones (long wavelengths) as M-cones (medium wavelengths). The ratio of L-cones to M-cones can vary greatly between individuals.

Lateral inhibition through horizontal and amacrine cells

Horizontal cells run perpendicular to the photoreceptors, making contact with neighboring photoreceptors and bipolar cells. These connections play an important role in lateral inhibition: the antagonistic neural interaction between adjacent regions of the retina. Amacrine cells are also part of this lateral pathway. Like horizontal cells, they run perpendicular to the photoreceptors, but in the inner layer of the retina, where they receive input from bipolar cells and other amacrine cells. They send signals to bipolar, amacrine and retinal ganglion cells. Amacrine cells are thought to be involved in contrast and temporal sensitivity (noticing changes in light patterns over time), but their precise function remains unclear.

Convergence and divergence of information via bipolar cells

Photoreceptors, bipolar cells and ganglion cells form the vertical pathway, with bipolar cells as the intermediaries: retinal cells that synapse with either a cone or a rod (not both) and with horizontal cells. Bipolar cells receive input from many photoreceptors, pool it, and pass it on to the ganglion cells. Convergence of the input from about 50 photoreceptors onto a single bipolar cell (called a diffuse bipolar cell) is characteristic of the rod pathway; a diffuse bipolar cell thus receives input from several photoreceptors. Pooling the input increases visual sensitivity: (1) the ability to perceive via the sense organs, (2) extreme responsiveness to radiation, especially to light of a specific wavelength, and (3) the ability to respond to transmitted signals. Midget bipolar cells, in contrast, receive input from just one or a few cones and are found only in the central retina, which helps explain why vision is sharpest when images fall on this part of the retina. Each cone feeds two bipolar cells (a divergence of information): one that responds to an increase in the light captured by the cone (an ON bipolar cell) and one that responds to a decrease (an OFF bipolar cell).

Communicating with the brain via ganglion cells

By the time signals arrive at the last layer of the retina - the ganglion cells - much processing has already taken place: some information has been pooled through convergence, and some has been propagated along the lateral pathways. A ganglion cell is a retinal cell that receives visual information from photoreceptors via two intermediate neuron types (bipolar and amacrine cells) and transmits that information to the brain and midbrain. A P ganglion cell is a small ganglion cell that receives excitatory input from midget bipolar cells in the central retina and feeds the parvocellular ('small cell') layers of the LGN (lateral geniculate nucleus). An M ganglion cell is a ganglion cell whose large dendritic tree is shaped like an umbrella; it receives excitatory input from diffuse bipolar cells and feeds the magnocellular ('large cell') layers of the LGN. Both types of ganglion cell have larger dendritic trees with increasing retinal eccentricity, but M ganglion cells always have larger trees than P ganglion cells. Of all ganglion cells, approximately 70% are P ganglion cells and 10% M ganglion cells. The remainder include the koniocellular cells: neurons that project to the layers between the magnocellular and parvocellular layers of the LGN.

Ganglion cells fire action potentials spontaneously, about once per second, even in the absence of visual stimulation. Each ganglion cell has a receptive field: the region on the retina in which visual stimuli influence the neuron's firing rate. This influence can be excitatory or inhibitory. Kuffler concluded from his recordings that this field is concentric: a small circular area in the middle responds to an increase in light, and a surrounding ring responds to a decrease in light. Such a ganglion cell fires fastest when light falls precisely on the excitatory center, and more slowly as the spot of light grows to cover part of the surround as well. Such a cell is an ON-center cell; a ganglion cell that does exactly the opposite is an OFF-center cell. This center-surround organization has two functional consequences. First, each ganglion cell responds best to a spot of light of a particular size and less well to spots that are larger or smaller, so the cells act as filters, tuning the information they send to the brain. Second, most ganglion cells are sensitive to differences in light intensity between center and surround and relatively insensitive to the average light intensity, which varies enormously (depending on, for example, day or night, inside or outside). Contrast is the difference in illumination between an object and the background, or between lighter and darker parts of the same object.
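
A center-surround receptive field is often modeled as a difference of Gaussians. The sketch below (NumPy; the widths and the balanced weighting are illustrative choices, not values from the book) shows the two consequences just described: a preferred spot size, and near-indifference to uniform light.

```python
import numpy as np

x = np.arange(-20, 21)                       # 1-D retinal positions
center = np.exp(-x**2 / (2 * 2.0**2))        # narrow excitatory center
surround = np.exp(-x**2 / (2 * 6.0**2))      # broad inhibitory surround
# Balance the two so that uniform light produces (almost) no response.
rf = center / center.sum() - surround / surround.sum()

uniform = np.ones_like(x, dtype=float)           # even illumination
spot = np.where(np.abs(x) <= 3, 1.0, 0.0)        # spot filling the center
edge = np.where(x < 0, 0.0, 1.0)                 # light/dark border

for name, stim in [("uniform", uniform), ("spot", spot), ("edge", edge)]:
    print(f"{name:8s} response: {float(rf @ stim):+.3f}")
```

The uniform field yields essentially zero, the center-sized spot yields the largest response, and the edge yields an intermediate one: the cell signals local contrast, not the overall light level.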

P cells have smaller receptive fields than M cells, partly because M cells have larger dendritic trees; M cells are also more sensitive under low-light conditions. In addition, P and M cells differ in their temporal responses: P cells fire in a sustained way as long as light falls on them, whereas M cells fire transiently, with a brief burst of impulses. P cells mainly carry information about contrast, and M cells about changes over time.

How can people perceive spatial figures? - Chapter 3 (5th edition)

In the previous chapter the eye was described as a biological camera: the iris regulates the amount of light admitted, and the cornea, the lens, and the aqueous and vitreous humors focus the rays into a sharp image on the retina. The photoreceptors, cones and rods, capture this image. The cortical visual pathway runs through the optic nerve, the optic chiasm, the optic tract, the LGN, the optic radiation and the striate cortex. Form and color travel via P cells and the parvocellular layers (the 'what' pathway), and motion and spatial analysis travel via M cells and the magnocellular layers (the 'where' pathway).

What is visual acuity?

Contrast is the difference in luminance (brightness) between an object and the background, or between lighter and darker parts of the same object. Acuity (sharpness) is the smallest spatial detail that can be resolved, or the finest grating whose cycles we can still distinguish. A cycle is a single pair consisting of one dark and one light (white) stripe. The visual angle is the angle subtended by an object at the eye, often expressed in 'arc minutes'; it is found by drawing lines from the top and bottom of a cycle through the center of the lens to the retina. Resolution acuity is one of the fundamental limits of spatial vision and is set by the spacing of photoreceptors in the retina. A sine wave grating is a grid with a sine wave illumination profile. Wide, high-contrast black-white-black-white stripes are wider than individual photoreceptors and the pattern is easy to see (when the stripes are separated by about 1 arc minute of visual angle). Very narrow black-white stripes, however, are smaller than the photoreceptors: a whole cycle (one white and one black stripe) falls on a single photoreceptor, and all you see is a gray field.
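
Visual angle can be computed directly from object size and viewing distance. A small sketch (the stripe size and distance are invented, chosen to land near the 1 arc minute figure mentioned above):

```python
import math

def visual_angle_arcmin(size_m, distance_m):
    """Visual angle subtended by an object, in minutes of arc."""
    degrees = math.degrees(2 * math.atan(size_m / (2 * distance_m)))
    return degrees * 60

# A 1.75 mm cycle (one dark plus one light stripe) viewed from 6 meters:
print(round(visual_angle_arcmin(0.00175, 6.0), 2), "arcmin")  # ~1.0
```

Halving the viewing distance doubles the visual angle, which is why a grating that looks uniformly gray from across the room resolves into stripes up close.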

A visit to the eye doctor

Ophthalmologists (eye doctors) do not describe acuity in terms of visual angles and cycles: if your vision is good you have 20/20 vision, if you need glasses perhaps 20/30, and if you can read even the smallest letters 20/10. For them, visual acuity is the distance at which a person can just identify the letters, divided by the distance at which a person with 'normal' vision can just identify them. The standard test distance is 20 feet. Many people actually have 20/15 vision.

More types of visual acuity

So far, we have discussed two types of visual acuity: the finest stripes that can be resolved (minimum resolvable acuity) and the smallest letter that can be recognized (minimum recognizable acuity). There are more types. Minimum visible acuity refers to the smallest object a person can detect. Minimum resolvable acuity is the smallest angular separation between two adjacent objects that a person can resolve. Minimum recognizable acuity is the angular size of the smallest feature that a person can recognize or identify. Minimum discriminable acuity is the angular size of the smallest change in a feature that a person can discriminate.

Acuity for low-contrast stripes

So far, we have discussed the smallest high-contrast details that people can resolve. But what happens if the contrast of the stripes is reduced, that is, if the light stripes are made darker and the dark stripes lighter? Spatial frequency is the number of cycles of a pattern in a given amount of space: the wider the stripes, the lower the spatial frequency. Spatial frequency is specified in cycles per degree: the number of pairs of light and dark stripes per degree of visual angle. You might think that the wider the stripes, the easier it is to distinguish light from dark. However, the contrast sensitivity function (CSF), which describes how sensitivity to contrast depends on spatial frequency, has a kind of inverted-U shape (with spatial frequency on the x-axis and contrast sensitivity on the y-axis): sensitivity rises, reaches a rounded peak at intermediate frequencies, and then falls off rapidly, so beyond some point the spatial frequency of the stripes no longer favors contrast sensitivity. The contrast threshold is the smallest amount of contrast needed to detect a pattern.
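
The inverted U can be captured with a simple parametric caricature of the CSF (the functional form S(f) = a·f·exp(-b·f) and its parameters are an illustrative modeling choice, not taken from the book):

```python
import math

def csf(f, a=100.0, b=0.25):
    """Toy contrast sensitivity function: rises with spatial frequency,
    peaks (here at f = 1/b = 4 cycles/degree), then falls."""
    return a * f * math.exp(-b * f)

for f in (0.5, 1, 2, 4, 8, 16, 32):   # spatial frequency, cycles per degree
    s = csf(f)
    print(f"{f:>4} c/deg: sensitivity {s:7.1f}, contrast threshold {1/s:.4f}")
```

Sensitivity peaks at a middling spatial frequency and collapses at high ones, so the contrast threshold (the reciprocal of sensitivity) is lowest in the middle of the range: exactly the inverted-U pattern described above.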

Why sine wave gratings?

Pure sine wave gratings are rare in the world, but patterns of stripes with fuzzy boundaries are quite common. The edge of an object produces a single stripe, often blurred by a shadow, in the retinal image. Moreover, the visual system breaks images down into a set of components (as in Fourier analysis), and these components are essentially sine wave gratings.

How do retinal ganglion cells respond to stripes?

Each ganglion cell responds to certain kinds of stripes or gratings. If the spatial frequency is too low, the ganglion cell gives a weak response, because part of the inhibitory surround is also illuminated. If the spatial frequency is too high, the response is also weak, because both light and dark stripes fall within the receptive field's center. When the frequency is just right - the light stripe fills the center and the dark stripes fill the surround - the ganglion cell responds strongly. The response of the cell also depends on the phase of the grating: its relative position within the receptive field. If, for example, the grating falls so that the border between a dark and a light stripe lies exactly over the center, there is no response.

What is the lateral geniculate nucleus (LGN)?

The lateral geniculate nucleus (LGN) is a structure in the thalamus that receives input from the retinal ganglion cells and has input and output connections with the visual cortex. You have two lateral geniculate nuclei, one in each half of the brain. Each LGN consists of six layers. The bottom two are the magnocellular layers, whose neurons are larger than those in the upper four layers; the upper four are the parvocellular layers. Magnocellular layers receive input from M ganglion cells; parvocellular layers receive input from P ganglion cells. The magnocellular pathway responds to large, fast-moving objects; the parvocellular pathway processes the fine details of stationary targets. The koniocellular cells lie between the magnocellular and parvocellular layers. The left LGN receives information about the right visual field from both retinas, and vice versa. Each layer of the LGN receives information from one eye or the other: layers 1, 4 and 6 of the right LGN receive input from the left (contralateral) eye, while layers 2, 3 and 5 receive input from the right (ipsilateral) eye. Contralateral means on the opposite side of the body or brain; ipsilateral means on the same side. The mapping is topographic: an orderly map of the visual world is maintained in the lateral geniculate nucleus and the visual cortex. The LGN is part of the thalamus, and when you sleep, the entire thalamus is inhibited by a circuit in the brain that keeps you asleep. If your eyes were open while you slept, you would still see nothing: signals from the retinas would reach the LGN, but they would stop before reaching the cortex, so they are never registered. The inhibition is not complete, however, which explains why loud noises (your alarm in the morning) or bright lights are picked up and wake you.

What is the striate cortex?

The primary visual cortex (V1), area 17, and the striate cortex are names for the same region of the cerebral cortex. This region receives direct input from the lateral geniculate nucleus as well as feedback from other brain regions, and it is responsible for the first cortical processing of visual information. Layering is an important property of neural structures in the visual pathway: the striate cortex has six major layers, some with sublayers, and fibers from the LGN project mainly to layer 4. The striate cortex contains more than 100 times as many cells as the LGN. Objects imaged on or near the fovea are processed by a disproportionately large number of neurons in the striate cortex, whereas objects imaged in the far right or left periphery are allotted relatively few. This distortion of the visual-field map on the cortex is called cortical magnification, because the cortical representation of the fovea is enormously magnified compared with the cortical representation of peripheral vision. Cortical magnification is the amount of cortical area (often specified in millimeters) devoted to a specific region (for example, 1 degree) of the visual field.

The topography of the human cortex

Much of what we know about the topography of the human cortex comes from anatomical and physiological studies in animals. MRI shows brain structures; fMRI measures and localizes brain activity via the BOLD signal, which reflects the blood's oxygen metabolism.

Some perceptual consequences of cortical magnification

Why is the representation of the fovea in the cortex so greatly magnified? Because the visual system must make a trade-off: seeing at high resolution requires three expensive resources - a region with a high density of photoreceptors, one-to-one connections between photoreceptors and retinal ganglion cells, and a large chunk of striate cortex. The brain and eyes are not big enough to see everything at high resolution, so we see sharply in the center and less sharply in the periphery. An important consequence of cortical magnification is that visual acuity declines as eccentricity increases. Yet this is not the biggest obstacle to reading or object recognition in the visual periphery. The big problem is visual crowding: the damaging effect of clutter on peripheral object recognition. Objects in the periphery that appear in isolation are easy to identify, but when multiple objects are present, nothing remains identifiable and everything becomes a jumbled mishmash.

What form do the receptive fields in the striate cortex have?

The receptive fields of neurons in the striate cortex were first mapped with spots of light by Hubel and Wiesel in 1958; they were students of Kuffler. Their most fundamental discovery was that the receptive fields of striate-cortex neurons are not circular, as they are in the retina and LGN, but elongated. As a result, these neurons respond much more vigorously to stripes, lines, edges and gratings than to round spots of light.

Orientation selectivity

Hubel and Wiesel also discovered that an individual neuron does not respond equally to every line in its receptive field: it responds best when the line has exactly the right orientation and worse as the line deviates from this optimal orientation. This selective response is called orientation tuning: the tendency of neurons in the striate cortex to respond optimally to certain orientations and less to others. For example, one neuron fires most when a line is exactly vertical (90 degrees) and less and less as the line tilts away; another neuron responds best to a horizontal line, with 0 degrees as its preferred orientation, and it too fires less as the line deviates more and more from that orientation. More cells respond to horizontal and vertical orientations than to oblique ones, which fits with the fact that people have lower visual acuity and contrast sensitivity for oblique targets than for horizontal and vertical ones. How are the circular receptive fields of the LGN converted into the elongated receptive fields of the striate cortex? Hubel and Wiesel proposed that concentric LGN cells whose receptive-field centers are arranged in a row provide the input that produces the elongated layout of striate-cortex receptive fields.
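
Orientation tuning is commonly summarized as a bell-shaped tuning curve. A minimal sketch (the Gaussian shape and the 20-degree bandwidth are illustrative assumptions) for a neuron preferring vertical:

```python
import math

def tuning(orientation_deg, preferred_deg, bandwidth_deg=20.0):
    """Bell-shaped orientation tuning: maximal firing at the preferred
    orientation, falling off as the stimulus tilts away. Orientation is
    circular with a period of 180 degrees."""
    d = (orientation_deg - preferred_deg + 90) % 180 - 90
    return math.exp(-d**2 / (2 * bandwidth_deg**2))

for theta in (90, 75, 60, 45, 0):   # stimulus orientations in degrees
    print(f"{theta:>3} deg -> relative firing {tuning(theta, 90):.2f}")
```

The response falls smoothly from 1.0 at 90 degrees to essentially zero for a horizontal line; a population of such neurons with different preferred orientations covers all orientations between 0 and 180 degrees.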

Other properties of the receptive field

Cortical cells, like retinal ganglion cells, also respond to gratings, and they respond best when a grating of exactly the right spatial frequency falls on the center of their receptive field. Cortical cells are more narrowly tuned than retinal ganglion cells: they respond to a smaller range of spatial frequencies. This means that every neuron in the striate cortex functions as a filter for the part of the image that stimulates the cell. The filter blocks some parameters (for example, certain orientations) and lets others pass. A third discovery by Hubel and Wiesel was that many cortical cells respond especially well to moving lines, stripes, edges, and gratings. More specifically, many neurons respond strongly when a line moves in a particular direction, for example from left to right, and not at all when it moves from right to left. Information from the two eyes is kept separate in the LGN: each LGN cell responds to information from one eye or the other, but never to both. This changes in the striate cortex, where cells can be influenced by both eyes. There is, however, ocular dominance: the property of receptive fields of striate cortex neurons to have a preference, responding somewhat more strongly when a stimulus is presented to one eye than when it is presented to the other.
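The filter idea can be illustrated with a Gabor patch, a standard model of a V1 receptive field (a grating windowed by a Gaussian); the sizes and wavelengths below are arbitrary illustrative choices.

```python
import numpy as np

def gabor(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """A Gabor patch: a sinusoidal grating of a given wavelength and
    orientation (theta, radians), windowed by a Gaussian. It responds
    strongly to stripes matching its orientation and spatial frequency,
    and weakly otherwise, like the V1 filters described in the text."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_r / wavelength)
    return envelope * carrier

# Response = dot product of the filter with an image patch: here a grating
# at the filter's preferred orientation versus one rotated 90 degrees.
filt = gabor(theta=0.0)
print("matched orientation:   ", np.sum(filt * gabor(theta=0.0)))
print("orthogonal orientation:", np.sum(filt * gabor(theta=np.pi / 2)))
```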

Simple and complex cells

Just as there are different types of precortical cells, there are different types of cortical cells. Hubel and Wiesel gave the name simple cells to cells with clearly defined excitatory and inhibitory regions. For example, an edge detector responds only to light on one side of its receptive field. A stripe detector responds when a light line has a certain width and it is dark on either side of the stripe. Like simple cells, complex cells are tuned to a specific orientation and spatial frequency and also have an ocular preference. However, complex cells have receptive-field characteristics that cannot easily be predicted by mapping with spots of light. For example, where a simple cell responds to a line only if it is presented at a particular position in the receptive field, a complex cell responds to the line regardless of where it is presented, as long as it falls within the receptive field. Simple cells are phase sensitive and complex cells phase insensitive. It has been proposed that complex cells form a separate parallel pathway and that both simple and complex cells receive direct input from LGN neurons.

Further complications

Hubel and Wiesel described a property of some cells in the striate cortex called end stopping: the process in which a cortical cell first increases its firing rate as the length of a bar of light increases to fill its receptive field, and then decreases its firing rate as the bar grows longer than the receptive field. There are both simple and complex end-stopped cells.

What are columns and hypercolumns?

To summarize the properties of receptive fields: each of the roughly 200 million neurons in the striate cortex responds to a different set of stimulus properties (stripes, lines, or gratings oriented at a certain angle, with a specific width and spatial frequency, possibly moving in a specific direction). There are simple and complex cells, some end-stopped and some not. Most neurons respond preferentially to stimuli presented to one of the two eyes. Hubel and Wiesel concluded that neurons with the same orientation preference are organized in columns, vertical arrangements of neurons. Orientation is not the only property around which columns are formed in the visual cortex. Neurons with the same eye preference (ocular dominance) are also organized in columns. A hypercolumn is a 1-by-1-millimeter block of striate cortex containing two sets of columns, each covering every possible orientation (from 0 to 180 degrees), with one set favoring input from the left eye and the other favoring input from the right eye. Because of cortical magnification, not all hypercolumns view the world at the same level of detail. Orientation and ocular dominance are not the only dimensions with a systematic columnar organization. A staining technique (in which certain cells are colored darker or lighter) that uses the enzyme cytochrome oxidase (CO) also reveals an arrangement of 'CO blobs' (blob = stain). Cytochrome oxidase is the enzyme used to reveal these CO blobs, which lie approximately 0.5 millimeters apart in the primary visual cortex. What exactly this organization of blobs does is unclear; it probably has something to do with color and with the processing of motion and spatial structure. The point, however, is that there is yet another arrangement besides orientation and ocular dominance.

What does selective adaptation imply?

All the physiological information mentioned so far has been obtained from animal studies. Is it generalizable to people? A major problem is that electrodes cannot normally be placed in the human brain, so this question cannot be answered with 100% certainty; all the information about neurons and receptive fields has come from cats and monkeys. The psychologist's electrode is an indirect method called adaptation: a reduction in response caused by previous or continuing stimulation. Selective adaptation can provide insight into the properties of cortical neurons. Selective adaptation causes the neurons most sensitive to the adapting stimulus (for example, an orientation of 0 degrees) to become fatigued. Because neurons tuned to other orientations (for example, -10 degrees) are not fatigued, a subsequent test stimulus is perceived differently: it appears tilted away from the adapted orientation. The tilt aftereffect is the perceptual illusion of tilt produced by adapting to a pattern of a certain orientation. Similarly, adaptation reduces contrast sensitivity for spatial frequencies close to the adapted spatial frequency. Selective adaptation thus provides evidence that the human visual system contains neurons selective for spatial frequency and orientation.
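A minimal sketch of how selective adaptation could produce a tilt aftereffect, assuming a bank of Gaussian-tuned orientation channels whose gains are reduced by adaptation; all channel parameters are hypothetical.

```python
import math

PREFERRED = list(range(0, 180, 15))  # preferred orientations of 12 channels

def response(stim, pref, bw=20.0):
    d = (stim - pref + 90.0) % 180.0 - 90.0
    return math.exp(-(d * d) / (2 * bw * bw))

def perceived(stim, gain):
    """Decode orientation as the response-weighted average of channel
    preferences (angles doubled so 0 and 180 deg count as the same)."""
    sx = sum(gain[p] * response(stim, p) * math.cos(math.radians(2 * p)) for p in PREFERRED)
    sy = sum(gain[p] * response(stim, p) * math.sin(math.radians(2 * p)) for p in PREFERRED)
    return math.degrees(math.atan2(sy, sx)) / 2.0 % 180.0

gain = {p: 1.0 for p in PREFERRED}
print("before adaptation, vertical test:", round(perceived(90, gain), 1))

# Adapting to 75 deg fatigues channels tuned near 75 deg ...
for p in PREFERRED:
    gain[p] = 1.0 - 0.5 * response(75, p)

# ... so a vertical (90 deg) test now appears repelled away from 75 deg.
print("after adaptation, vertical test: ", round(perceived(90, gain), 1))
```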

The site of selective adaptation effects

In cats and monkeys, the neurons tuned for orientation and spatial frequency are located in the striate cortex, not in the retina or LGN. Where can they be found in humans? Experiments show that the tilt aftereffect and the reduction in contrast sensitivity transfer from one eye to the other. Information from the two eyes is kept completely separate in the retinas and in the two LGNs: no neuron receives input from both eyes until the striate cortex. So in humans, too, such neurons must be located in the cortex.

Spatial-frequency-tuned pattern analyzers in human vision

Campbell and Robson (1968) proposed that the human contrast sensitivity function actually reflects the sensitivity of multiple individual pattern analyzers. Such an analyzer is also referred to as a spatial-frequency channel: a pattern analyzer, implemented by a group of cortical neurons, in which each set of neurons is tuned to a limited range of spatial frequencies. The most likely reason that the contrast sensitivity function falls off at very low spatial frequencies is that there are simply fewer neurons tuned to low spatial frequencies.

Our visual system does not perform a literal Fourier analysis that splits the world into very narrow bands of spatial frequencies. Rather, the visual system filters the image through spatially localized receptive fields tuned to limited ranges of spatial frequencies. Why does the visual system use spatial-frequency filters to analyze images? An important reason may be that different spatial frequencies emphasize different types of information. For example, low frequencies emphasize the outlines of faces, while high frequencies carry information about fine details. If we want to know how many people are in a room, it is most efficient to consult our low-frequency channels. If we want to know whether people are smiling or frowning, we depend on our high-frequency channels.
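A small sketch of this low-frequency/high-frequency division of labor, assuming scipy is available; the toy image and the sigma value are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A toy "image": coarse structure (a bright half) plus fine detail (noise).
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[:, 32:] = 1.0                               # coarse, low-frequency structure
image += 0.2 * rng.standard_normal(image.shape)  # fine, high-frequency detail

low_pass = gaussian_filter(image, sigma=4)   # keeps only the coarse layout
high_pass = image - low_pass                 # keeps only the fine detail

# The low-pass image still shows where the bright region is (good for counting
# heads in a room); the high-pass image carries the fine detail (good for
# telling a smile from a frown).
print("low-pass left/right means:", low_pass[:, :32].mean().round(2), low_pass[:, 32:].mean().round(2))
print("high-pass detail (std):   ", high_pass.std().round(2))
```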

How does vision develop?

The most commonly used method for studying infants' vision is based on an observation made by Fantz in the early 1960s: when infants are shown two displays, they look longer at the more complex one. Preferential looking is the most widely used method for such research. It is done with a stimulus card: the baby looks at a card with two square patches side by side, with a space between them. The image on the left is the reference and the image on the right is the test. There is a hole in the middle through which the researcher can observe the baby. This technique depends on the infants' cooperation, and it is a forced-choice test. An alternative method is to measure visually evoked potentials (VEPs): electrical signals from the brain that are triggered by visual stimuli and detected by electrodes on the scalp. These electrodes measure the changes in electrical activity triggered by a changing visual stimulus.

Development of the contrast sensitivity function

Sensitivity to low spatial frequencies develops much faster than sensitivity to high spatial frequencies. After about 9 weeks, contrast sensitivity for low spatial frequencies has almost reached adult levels. Contrast sensitivity for high spatial frequencies develops much more slowly. What limits the development of acuity and contrast sensitivity? Postnatal changes in the retina show a differentiation of the macular region. After birth, both the density of receptors in the retina and the length of the outer segments of the cones increase. After four years the retina is adult-like. The peripheral retina develops faster than the fovea.

The development of normal vision requires normal visual experience. Abnormal early visual experience can have serious and often permanent consequences for pattern vision. In people there is a critical period of visual development, roughly between 3 and 8 years of age. A critical period is a developmental period in which the organism is particularly susceptible to change. The cortical neurons are still being wired up during this period; it is a period of neural plasticity. If abnormal visual experience occurs, for example when a child is born with cataracts, the connections needed for normal vision cannot form because they are not used. If cataracts remain untreated during the critical period, the misrouted connections can no longer be repaired. The result is amblyopia, reduced spatial vision in an otherwise healthy eye, also called a lazy eye, even when the refractive error is corrected. Congenital cataract is not the only cause of amblyopia. Two other disorders in the early years of life can also cause it: strabismus, in which one eye is turned so that it views the world from an abnormal angle, and anisometropia, in which the two eyes have very different refractive errors, for example one eye farsighted and the other not.

How does object recognition work? - Chapter 4 (5th edition)

Chapter 3 ended with the primary visual cortex (V1, striate cortex), where cells emerged that were optimally stimulated by stripes and gratings of different orientations. V1 contains a map of the visual field with a large representation of the fovea and a smaller representation of the more peripheral retina.

How to move from simple lines and edges to properties of objects?

Cells in V1 are interested in the basic characteristics of a visual image (edges, lines, movement, size, etc.). These cells have relatively small and precise receptive fields. Beyond V1 lies the extrastriate cortex: a set of regions involved in processing visual information, so called because they lie just outside the primary (striate) visual cortex (V2, V3, and so on). A cell in V1 tuned to a dark border on the left side of its receptive field responds regardless of whether the background is black and the square light gray or vice versa. A cell in V2 may not: such a cell cares about the 'owner' of the border. For a black square on a light gray background, the border belongs to the square, and the cell responds to that. If the background is black and the square is light gray, the border instead belongs to the background, as if a square had been cut out of it.

What is meant by the "what" and "where" pathways?

From the extrastriate regions of the occipital lobe, visual information moves out along two main pathways: one toward the parietal lobe (the where pathway, or dorsal stream) and one toward the temporal lobe (the what pathway, or ventral stream). The where pathway is concerned with information about the locations of objects in space and the actions required to interact with those objects. This pathway also plays an important role in the deployment of attention. The what pathway appears to be where explicit acts such as object recognition take place. This pathway is the main focus of this chapter.

As we go deeper into the temporal lobe, the receptive fields become much larger. The evidence for a relationship between the temporal lobe and object recognition came from lesions (damage to a brain region; deliberately damaging a certain section of the brain) of the temporal lobe in monkeys. Monkeys with such a lesion could see, but did not seem to know what they were seeing. This is called agnosia, which in humans is often caused by brain damage. The inferotemporal (IT) cortex was found to play a major role in this condition in monkeys. There were two surprising findings with respect to cells in the inferotemporal cortex.

Firstly, the receptive fields of IT cells could cover half or more of the monkey's visual field, in contrast to the receptive fields of cells in the striate cortex, which are activated by simple stimuli and respond only when the stimulus is presented in a very limited area of the visual field.

Secondly, IT cells were not activated by simple stimuli such as lines. Some responded well to the silhouette of a monkey hand, and some responded well to a toilet brush. These cells are therefore interested in very specific stimuli. For example, a cell may respond strongly to a monkey face, somewhat less to related stimuli, still less to stimuli that are less related, and so on. Recent research has cast doubt on the idea that IT cells respond to a specific object regardless of its position in the visual field. The IT cortex has strong connections with parts of the brain involved in the formation of memory, of which the hippocampus is the most important. This matters because IT cells have to learn the properties of their receptive fields: recognizing your grandmother, for example, is not hardwired into your brain but must be learned and stored in memory. Humans have areas that correspond to the IT cortex in monkeys. They are called homologous regions: because the anatomy of the two brains is not identical, they are regions that serve the same function in different species.

Quiroga et al. did a study in 2005 in which they placed an electrode in the brain to make single-cell recordings from cells in the temporal lobe. Normally this invasive method is not possible, but the participants were patients being prepared for brain surgery for epilepsy, and the placement of electrodes was part of their treatment anyway.

Most information does not come from such studies, however, but from lesion studies of patients with, for example, agnosia. There are types of agnosia that are very specific, such as prosopagnosia, the inability to recognize faces. Recognition is very fast, proceeding via a feed-forward process: a process that performs a computation, such as object recognition, in which one neural step quickly follows another, without feedback being sent from a later stage back to an earlier stage. In summary, the what pathway works through a number of stages, building an overall representation of, for example, your grandmother or the Eiffel Tower out of the very specific, very localized dots, lines, and stripes that cells in the retina, LGN, and primary visual cortex were interested in.

What are the problems of observing and recognizing objects?

Consider the following question: how do we recognize a house? Some cells in the early visual system respond to stimuli such as simple lines. A cell pools the dots registered by retinal ganglion cells and detects a line. Another cell pools the lines and detects an angle. This is how a house is 'built'. But sometimes there is, for example, a car parked in front of the house. How do we ensure that the lines of the car are not mixed up with the lines of the house? There must be processes that successfully assemble features into objects. Mid-level (or middle) vision is a loosely defined stage of visual processing. It comes after basic features have been extracted from an image (low-level or early vision) and before object recognition and scene understanding (high-level vision) take place. The rest of the chapter is about mid-level vision. Recognizing an object involves matching something we perceive with a memory of something we have perceived in the past. How do we do that?

What is the purpose of mid-level (or middle) vision?

The purpose of mid-level (or middle) vision is to organize elements of a visual scene into groups so that we can recognize them as objects.

Finding edges

We cannot simply say that all lines that touch each other form an object, because objects overlap one another. Before we worry about overlapping contours, we must first look at the quality of the edges and the information they carry. An example of how our brain fills in missing information by itself is the so-called Kanizsa figure. Our visual system treats the missing piece of line as an accident of the combination of light and dark within the contours and creates that piece of line itself. An illusory contour is a contour that is perceived even though there is no physical change between one side of the contour and the other. It is the visual system's best guess about what is happening in the world at that location.

Structuralism is a view adopted by the first group of perceptual psychologists (such as Wundt and Titchener), stating that complex objects or percepts can be understood by analyzing their components. Perception is built from local sensations the way a crystal is built from atoms. This school of thought could not explain illusory contours. Because of this and other shortcomings, a second group of psychologists emerged, led by Wertheimer, Köhler, and Koffka, who formed Gestalt theory. They emphasized that the perceptual whole can be greater than the sum of its parts. They came up with the Gestalt grouping rules, a set of rules describing which elements in an image will group together. One of these rules is the principle of good continuation, which states that two elements tend to group together if they appear to lie on the same contour. In addition, there is the principle of closure: a closed contour is preferred over an open contour. The visual system assumes that something blocks a line when it 'just' stops; together with the idea that the gaps represent contours that can be extended, this leads to an illusory contour.

Texture segmentation and grouping

A line detector cannot find a boundary when a picture is divided into regions by texture rather than by lines. For example, one half of a picture is stone and the other grain, and within these textures there are also small squares with a different texture, such as water. Texture segmentation is the division of a picture into regions of common texture properties. It is strongly related to the Gestalt grouping principles. The two most closely related principles are similarity and proximity. Similarity states that features that resemble each other tend to group together. This similarity can be based on color, size, shape, and so on, but combinations of these characteristics do not work. Proximity states that items close to each other are more likely to group than items that are far apart. Two less closely related Gestalt principles involved in texture segmentation are parallelism and symmetry: parallel contours are more likely to group, and so are symmetrical patterns.

The same Gestalt principles that help us find objects in the world can also hide them, as happens with camouflage.

Low-level vision collects information about a line here and a certain color there. This collection of information must be interpreted before we know what we see. Mid-level vision behaves like a collection of specialists, each with a specific area of expertise. The idea of perception by committee has a long history. An example is the Pandemonium model for letter recognition by Selfridge (1959). In this model, feature demons detect lines, angles, and so on. Cognitive demons, one for each letter, know the features of their letter and watch the features identified by the feature demons. Each cognitive demon then shouts, more loudly as the features reported by the feature demons better match its own letter. A decision demon decides which letter was read based on which cognitive demon shouts the loudest. Of course we no longer believe in committees of shouting demons, but we can see this as a set of connected neurons receiving input, which ultimately leads to a certain output.
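A toy version of the Pandemonium idea; the feature lists are hypothetical and serve only to show the shouting-and-deciding mechanism.

```python
# Feature demons report which primitive features are present in the input;
# cognitive demons (one per letter) "shout" in proportion to how many of
# their letter's features they see; a decision demon picks the loudest.
LETTER_FEATURES = {          # hypothetical feature lists, for illustration
    "A": {"oblique_left", "oblique_right", "horizontal_bar"},
    "H": {"vertical_left", "vertical_right", "horizontal_bar"},
    "L": {"vertical_left", "horizontal_base"},
}

def recognize(features_seen):
    shouts = {
        letter: len(features_seen & wanted) / len(wanted)
        for letter, wanted in LETTER_FEATURES.items()
    }
    winner = max(shouts, key=shouts.get)   # the decision demon
    return winner, shouts

print(recognize({"vertical_left", "vertical_right", "horizontal_bar"}))
# -> ('H', ...): the H demon shouts loudest.
```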

The decisions made by perceptual committees need not be final. An ambiguous figure is a visual stimulus that can be interpreted in two or more ways. An example is the well-known Necker cube, a wireframe cube that can be interpreted in two ways. The classic duck-rabbit figure is another example. In theory every image is ambiguous, but the perceptual committees almost always settle on a single interpretation. An accidental viewpoint is a viewing position that produces some regularity in the visual image that is not present in the real world. Perceptual committees assume that they are not looking from an accidental viewpoint unless there is very good reason to think otherwise. A second set of assumptions made by the visual system involves an implicit understanding of some physical aspects of the world.

Figure and ground

The ability to separate figures in the foreground from their ground (the surfaces or objects behind the figures) is an important step on the way to object recognition. Just like finding edges and regions, this is a process governed by a collection of cooperating principles. Like the grouping principles, the subject of figure-ground assignment became important in visual perception through the work of the Gestalt psychologists. Figure-ground assignment is the process of determining that some regions of an image belong to the object in the foreground (figure) and others to the background (ground). You probably know Rubin's classic vase/face figure. Which principles are involved in assigning a region to figure or ground? Six in total. The first is surroundedness: if one region is completely surrounded by another, the surrounded region is likely to be the figure. The other principles are size (the smaller region is the figure), symmetry (a symmetrical region is the figure), parallelism (regions with parallel contours are the figure), extremal edges, and relative motion.

Dealing with occlusion

It has already been discussed how edge-finding processes can lead to illusory contours based on the physics of occlusion (blocking). We still need a committee to connect the visible parts of occluded objects with each other by inferring the hidden parts of the object. Relatability is the degree to which two line segments appear to be part of the same contour. Two segments are relatable if they can be connected with a smooth curve, but not if the connection requires an S-shaped curve. The visual system finds such an S-curve implausible and therefore concludes that the line segments do not belong to the same contour but are separate lines. Just as with the figure-ground rules, this heuristic is not flawless. A nonaccidental feature is a feature of an object that does not depend on the exact (or accidental) position from which the observer views the object.

Parts and wholes

Navon argued that perceptual processes work from the global to the local. For example, take three large letters, H, C, and L, in which the H is made up of small S's, the C of N's, and the L of H's. You identify the large H, C, or L faster than the small S, N, or H. The global superiority effect states that the properties of the whole object take precedence over the properties of its parts. This effect is consistent with an implicit assumption made during our discussion of middle vision: the first goal is to split the retinal image into large-scale objects. However, many objects are made up of parts, and we can also recognize these parts individually. Many recognition models state that segmenting an object into its parts is a crucial step in determining what it is.

Summarizing mid-level vision

We have moved from an image with its simple features to objects and their parts. How did we get there? The goals of middle vision can be summarized in the following five principles:

  1. Bring together what should be brought together, among other things via the Gestalt grouping principles and by completing contours where something is occluded or left out.
  2. Split apart what should be split apart, among other things via edge-finding processes that separate regions from each other, figure-ground mechanisms, and texture segmentation.
  3. Use what you know, among other things by parsing objects based on implicit knowledge of the physics of image formation.
  4. Avoid accidents: avoid interpretations that depend on very specific, accidental combinations of features or accidental viewpoints.
  5. Seek consensus and avoid ambiguity: using the four principles above, the 'committees' of middle vision must eliminate all but one possibility, thereby resolving the ambiguity and delivering a single solution.

From metaphor to formal model

We have talked about perceptual committees, but this is just a metaphor. There are more formal approaches in which knowledge about regularities in the world constrains the interpretation of ambiguous sensory input. One such approach is the Bayesian approach, a way to formalize the idea that our perception is a combination of the current stimulus and our knowledge about the state of the world, that is, about what is likely and unlikely to occur. The Bayesian approach is written mathematically as Bayes' theorem:

P(A|O) = P(A) x P(O|A) / P(O).

This approach makes it possible to calculate the probability (P) that the world is in a certain state (A) given a specific observation (O). P(A) is the prior probability that hypothesis A is true, P(O|A) is the probability of the observation given that hypothesis, and P(O) is the overall probability of the observation.
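A minimal worked example of Bayes' theorem with two hypotheses; the priors and likelihoods are made-up numbers for illustration.

```python
# Two hypotheses about the world: a dark patch is a shadow or a stain.
# Priors P(A) and likelihoods P(O|A) are invented numbers for illustration.
p_A = {"shadow": 0.7, "stain": 0.3}
p_O_given_A = {"shadow": 0.4, "stain": 0.9}   # P(dark patch | hypothesis)

p_O = sum(p_A[a] * p_O_given_A[a] for a in p_A)   # P(O), the normalizer

for a in p_A:
    posterior = p_A[a] * p_O_given_A[a] / p_O     # Bayes' theorem
    print(f"P({a} | dark patch) = {posterior:.2f}")
```

Note how the prior and the likelihood trade off: "stain" fits the observation better, but "shadow" is more common, so the two posteriors end up close together.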

How does object recognition work?

We return to Quiroga's finding that cells respond to very specific objects. In V1, cells respond best to lines and edges in specific areas of the visual field. In V2 we take a step toward local properties of objects, because V2 cells are sensitive to border ownership. V2 cells are also sensitive to illusory contours. In V4, on the way toward the temporal lobe, cells are interested in much more complex attributes. No one has ever determined the perfect set of stimuli for V4 cells, because that would require systematically varying the dimensions of shape. It gets more complex the further we go into the temporal lobe: we do not know exactly what optimally activates an individual cell as we continue along the what pathway. Functional imaging studies show that different regions of the cortex are more strongly activated by certain categories of stimuli than by others. One way to show this is the subtraction method, in which brain activity is measured in two conditions: one with and one without the involvement of the process in question. The two conditions are subtracted from each other, after which the activation becomes visible in the area involved in that specific process. The parahippocampal place area (PPA) is an area in the human extrastriate visual cortex that is specifically and more strongly activated by images of places than by other stimuli. The fusiform face area (FFA) is an area in the human extrastriate visual cortex that is specifically activated by human faces. The extrastriate body area (EBA) is an area in the human extrastriate visual cortex that is specifically activated by images of the body, and not so much by faces. The middle temporal area (MT) is an area of the brain thought to be important in the perception of motion. What happens when a cell responds to Jennifer Aniston? Could it be that in studying the 'Jennifer Aniston neuron' the researchers are actually looking at the process of object recognition? When the image of Jennifer Aniston falls on your retina, the visual system takes this raw image and generates a description that is linked to Jennifer Aniston's representation in your memory.

The idea that we recognize objects by matching every pixel, or even every low-level property, of the input with a representation in memory is called the naive template theory. A template is an internal representation of a stimulus that is used to recognize the stimulus in the world. Unlike its use in, for instance, making a key, a mental template is not expected to actually look like the stimulus it matches. This idea of object recognition via memory does not even work for simple objects. The lock-and-key template idea implies that a template of the letter A is made of squares, and if the letter A fits those squares, it is seen as an A. However, there are many different kinds of A's in different orientations and positions, and it is impossible to create a template for every A. A solution is to notice that every capital letter A has the same basic structure. Instead of matching each point to a point in the template, we can make a more conceptual match. A structural description is a description of an object in terms of its components and the relationships between those parts. Many versions of the structural-description hypothesis have been proposed. Biederman (1987) proposed 'geons', geometric ions. Geons are collections of nonaccidental features, so the visual system recognizes a geon equally quickly and accurately regardless of how it is oriented in space. The recognition-by-components model is Biederman's object recognition model, which states that objects are recognized by the identities and relationships of their components. Just as letters can be combined in an alphabet to form an infinite number of words, the limited set of geons can be used to form many object representations. Viewpoint invariance is (1) a property of an object that does not change when the observer changes his or her viewpoint, and (2) a class of object recognition theories stating that representations of objects do not change as viewpoints change.
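A small sketch of why the naive, lock-and-key template fails: a pixel-for-pixel match breaks down as soon as the same letter shifts position.

```python
import numpy as np

# A 5x5 "template" for the letter L, as the naive template theory imagines it.
L_template = np.array([
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
])

def match(image, template):
    """Fraction of pixels that agree: the 'lock-and-key' test."""
    return np.mean(image == template)

exact = L_template.copy()
shifted = np.roll(L_template, 1, axis=1)   # the same L, moved one pixel right

print("same position: ", match(exact, L_template))    # 1.0, perfect fit
print("shifted by 1px:", match(shifted, L_template))  # much worse fit
```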

As already hinted at the start of this section, the real breakthrough in object recognition occurred in computer science. Faster computers and increased memory capacity have allowed researchers to build deep neural network (DNN) models with many more layers than earlier researchers could. The first layer of a DNN does something quite like what simple cells in the visual cortex are doing. Next, that information is pooled in a manner resembling what complex cells are doing. These operations can be viewed as creating a new image that the next layer of the DNN takes as its input for feature extraction and pooling. These processes of feature extraction and pooling are repeated across a number of layers. Then, at the top of the stack of layers, there is a set of grandmother cells, one for each category of object one is trying to identify. In sum, a DNN is a type of machine learning in artificial intelligence in which a computer is programmed to learn something (in this context: object recognition). First, the network is "trained" using input for which the answer is known (for instance: that is a cow). Afterward, the network can provide answers for input it has never seen before.
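A minimal numpy sketch of one feature-extraction-plus-pooling stage, loosely analogous to the simple-cell and complex-cell steps described above; the kernel is a toy edge detector, and a real DNN would learn its kernels and stack many such stages.

```python
import numpy as np

def convolve(image, kernel):
    """Valid 2-D convolution: the 'feature extraction' step, analogous to
    what simple cells do with their oriented receptive fields."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)   # simple nonlinearity (ReLU)

def max_pool(feature_map, size=2):
    """Pooling step, analogous to complex cells: keep the strongest
    response in each neighborhood, discarding exact position."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One extract-and-pool stage; a real DNN stacks many of these, with learned
# kernels, and ends in one output unit per object category.
rng = np.random.default_rng(1)
image = rng.random((16, 16))
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # a tiny vertical-edge detector
stage1 = max_pool(convolve(image, edge_kernel))
print("input 16x16 -> feature map", stage1.shape)
```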

Multiple recognition committees?

Every step so far (grouping of parts, segregation of figure and ground, etc.) has been based on consensus, a decision by a committee. It seems likely that recognition works the same way. Recognition is probably not a single act; we can recognize objects in multiple ways, perhaps even simultaneously. The entry-level category is the label that comes to mind fastest to identify an object. At the subordinate level the object can be named more specifically; at the superordinate level it can be named more generally. Each of these recognitions may depend on different stored representations and different analyses of the visual stimulus. A geon description of 'bird' is still possible, but for 'sparrow' it becomes difficult. Recognition at the subordinate level therefore seems better suited to a system in which the precise details of an object are coded. A number of studies have shown that it takes significantly longer to identify objects at the subordinate and superordinate levels than at the entry level. Other parts of the brain also appear more active when people are engaged in subordinate-level recognition than when they are recognizing objects at the entry level. When people see an atypical member of a category (such as an ostrich), they name the object at the subordinate level (ostrich) rather than at the entry level (bird).

Faces: an illustrative special case

A classic demonstration presents two upside-down pictures of the same face, one of which has been subtly altered: the difference is hard to see, but if you turn the pictures right side up you can clearly see that something is wrong with one of them. The processes that recognize a face as a face are not disturbed when the image is upside down. The processes that recognize the face as belonging to a specific individual are disrupted; these processes deal with the precise configuration of eyes, nose, and mouth. Neuropsychological evidence that brain modules are truly separate should come from a double dissociation: the phenomenon in which one of two functions, such as hearing and vision, can be damaged without affecting the other, and vice versa. Damage to certain areas in the temporal lobe can cause prosopagnosia: the inability to recognize faces. Congenital prosopagnosia is a form of face blindness present from birth, in contrast to acquired prosopagnosia, which results from damage to the nervous system.

How can color be observed? - Chapter 5 (5th edition)

Color is not a physical characteristic of things in the world itself; color is a creation of our brains.

What are the basic principles of color perception?

As already discussed in Chapter 2, people see only a limited part of the electromagnetic spectrum, namely the wavelengths between approximately 400 and 700 nanometers (nm). The color we see for an object is correlated with the wavelengths of the light rays that reach the eye. Most of the light we see is reflected light. Some wavelengths are absorbed by the surfaces they strike; the more light is absorbed, the darker the surface appears. Other wavelengths are reflected, and some of that reflected light reaches the eye. The color of a surface depends on the mix of wavelengths that reach the eye from that surface.

Three steps to color perception

  1. Detection. Wavelengths must be detected.
  2. Discrimination. We must be able to tell the difference between one wavelength (or mixture of wavelengths) and another.
  3. Appearance. We want to assign perceived colors to surfaces and objects and we do not want these colors to change drastically if the conditions change (blood must remain red in sunlight and shade).

What does the first step of color perception, that is color detection, entail?

Recall that there are three types of cone photoreceptors with different photopigments. As a result, they differ in their sensitivity to light of different wavelengths. Cones with peak sensitivity at about 420 nm are S-cones: cones with a preference for short wavelengths, often called blue cones. Cones with a peak around 535 nm are M-cones, which prefer medium wavelengths and are known as green cones. L-cones have a peak around 565 nm, prefer long wavelengths, and are called red cones. The spectral sensitivities of the cones overlap, and the combination of sensitivities gives us the ability to detect wavelengths. Cones function in daylight (photopic conditions); rods, the only other kind of photoreceptor, function in dim light (scotopic conditions).

What does the second step of color perception, that is color discrimination, entail?

The problem of univariance

How do we distinguish between, for example, wavelengths of 450 nm and 625 nm? A single type of photoreceptor can produce exactly the same response to two different wavelengths. This is the problem of univariance: an individual photoreceptor cannot discriminate colors on the basis of wavelength, because its output is ambiguous. The visual system solves this problem, but not under all conditions. Univariance explains why we see little color under poorly lit conditions. There is only one kind of rod photoreceptor, all containing the same photopigment (rhodopsin). As a consequence, the problem of univariance makes it impossible to distinguish colors at night. The fact that we cannot see colors well at night shows that color is something psychophysical, not something purely physical. In dim light only the rods are active, so no color can be perceived.

The trichromatic (tri-color) solution

We can discriminate colors because we have more than one type of cone photoreceptor: two wavelengths that produce the same response from one cone type (say, M) produce different patterns of responses across the three cone types (S, M, and L). The trichromatic theory of color vision, or trichromacy, is the theory that each color is defined in our visual system by the relationship between three numbers, namely the outputs of the three cone types. This theory is also called the Young-Helmholtz theory.
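A toy demonstration of univariance and its trichromatic solution, using Gaussian curves as stand-ins for the real cone sensitivity functions; the peak wavelengths follow the text, but the bandwidth is invented.

```python
import math

def sensitivity(wavelength, peak, width=60.0):
    """Toy Gaussian stand-in for a cone's spectral sensitivity curve."""
    return math.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

PEAKS = {"S": 420, "M": 535, "L": 565}   # peak sensitivities from the text

def cone_responses(wavelength):
    return {name: round(sensitivity(wavelength, peak), 3)
            for name, peak in PEAKS.items()}

# Two wavelengths chosen to sit symmetrically around the M peak: the M cone
# alone cannot tell them apart (univariance) ...
print("M response at 505 nm:", cone_responses(505)["M"])
print("M response at 565 nm:", cone_responses(565)["M"])

# ... but the pattern across all three cone types differs (trichromacy).
print("505 nm ->", cone_responses(505))
print("565 nm ->", cone_responses(565))
```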

Metamers

Every light and every surface we see reflects or emits many different wavelengths. How do the cones respond to combinations of wavelengths? What matters is the pattern of activity: a mix of wavelengths that stimulates the L and M cones in the same ratio as a single 'yellow' wavelength will look yellow. Both a mix of wavelengths and a single wavelength can therefore produce the same color. Metamers are different mixes of wavelengths that look identical despite being physically different. The rest of the nervous system knows only what the cones tell it. Two caveats:

  1. Mixing colors is a mental event, not a change in the physics of the light: the two wavelengths remain physically separate.
  2. When mixing lights to create a different color, exactly the right amounts must be used; otherwise the mixture still looks like one of the original colors.

The history of trichromatic theory

The very first basis of this theory came from Newton's remarkable discovery that a prism breaks sunlight up into a spectrum of hues, and that a second prism can reassemble the spectrum into white light. The three-dimensional nature of color experience was further elaborated in the nineteenth century by Young (1773-1829) and later by Helmholtz (1821-1894). However, it was Maxwell (1831-1879) who developed the color-matching technique that was central to Helmholtz's work. In a modern version of Maxwell's color-matching experiment, a reference color is presented on the left, and on the right the participant adjusts three lights to reproduce that color. The most important conclusion of this experiment was that all colors can be matched with three lights: red, green, and blue.

A brief digression into lights, filters, and finger paints

The formula 'red plus green makes yellow' is an example of an additive color mixture, because two sets of wavelengths are added together. A pigment looks a certain color because it absorbs some wavelengths, subtracting them from the white light falling on the surface. When red and green pigments are mixed, almost all wavelengths are absorbed by one pigment or the other, and a dark color such as brown is seen. This is an example of a subtractive color mixture. When pigments A and B are mixed, part of the light falling on the surface is subtracted by A and part by B; what remains is the color that is perceived.
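A minimal sketch of the additive/subtractive distinction using three coarse wavelength bands; the pigment reflectances are illustrative numbers.

```python
import numpy as np

# Three coarse wavelength bands: [short, medium, long] ~ [blue, green, red].
red_light   = np.array([0.0, 0.0, 1.0])
green_light = np.array([0.0, 1.0, 0.0])

# Additive mixing: lights sum, so red + green stimulates both the M and L
# cones, which we see as yellow.
print("additive red + green:", red_light + green_light)

# Subtractive mixing: each pigment's reflectance multiplies what remains of
# the white light, so overlapping pigments remove most wavelengths.
red_pigment   = np.array([0.1, 0.2, 0.9])   # reflectance per band (illustrative)
green_pigment = np.array([0.1, 0.9, 0.2])
white_light   = np.array([1.0, 1.0, 1.0])
print("subtractive red x green:", white_light * red_pigment * green_pigment)
```

The additive result keeps the medium and long bands strong (yellow); the subtractive result leaves little light in any band, hence a dark color such as brown.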

From retina to the brain: repackaging information

The nervous system determines the difference between lights by looking at the activity of the three cone types. Because the sensitivities of the cone types overlap considerably, computing the differences between the cone responses is a good way to sharpen this information. The visual system therefore converts the cone responses into new signals.

Cone-opponent cells in the retina and LGN

Many ganglion cells in the retina and in the lateral geniculate nucleus (LGN) are maximally stimulated by spots of light. These cells have a center-surround organization: some cells are excited when a light is turned on in the center of their receptive field and inhibited when a light is turned on in the surround. A similar antagonistic relationship characterizes color: some retinal and LGN ganglion cells are excited by activation of L-cones in their center and inhibited by activation of M-cones in their surround. These (L - M) cells are one type of cone-opponent cell, so called because different sources of information are pitted against each other. There is also a difference in the routes the signals of the different cones take: signals from the S-cones pass through the koniocellular layers of the LGN, while signals from the M- and L-cones pass through the parvocellular layers.

A different ganglion cell helps to keep track of day and night

The internal circadian clock is the biological cycle that repeats about every 24 hours, even in the absence of time cues. There is a type of ganglion cell that receives input from the rods and cones but also contains its own photopigment, melanopsin, so that it can detect light even when signals from the normal photoreceptors are absent.

What does the third step of color perception, that is color appearance, entail?

The retina and LGN contain cells that repackage cone information into cone-opponent difference signals. These signals constrain our ability to see differences: we see a single color rather than the component wavelengths separately.

Three numbers, many colors

Because we have three types of cone photoreceptors, any light that reaches the retina is converted into three responses. Beyond that point, the nervous system has no access to the physical wavelengths of light, only to these three numbers. With this system of three photoreceptor types, more than 2 million colors can be distinguished. Color space is the three-dimensional space, established because color perception is based on the outputs of the three cone types, that describes the set of all colors. It is analogous to the height, length, and width of a cube, although color can hardly be described in those terms; there are many alternatives. One option is to describe color space in terms of hue, saturation, and brightness. Hue is the chromatic aspect of color. Saturation is the chromatic strength of a hue: white has no saturation, pink has a little more, and red is fully saturated. Brightness is the perceptual consequence of the physical intensity of a light.
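Python's standard library happens to include a conversion between RGB and HSV, a close relative of the hue/saturation/brightness description, which makes the saturation ordering from the text easy to verify:

```python
import colorsys

# rgb_to_hsv returns (hue, saturation, value/brightness), each in 0..1.
for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("pink", (1.0, 0.7, 0.7)),
                  ("white", (1.0, 1.0, 1.0))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name:>5}: hue={h:.2f}  saturation={s:.2f}  brightness={v:.2f}")
# White has zero saturation, pink a little, and red is fully saturated.
```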

The limits of the rainbow

There are hues that you can see but that do not occur in the spectrum. These 'non-spectral hues' can only be created by mixing wavelengths. An example of such a color is brown.

Opponent colors

Where Young and Helmholtz described a trichromatic theory with the three basic colors red, green, and blue, Hering's opponent-color theory had four basic colors in two opposing pairs: red versus green and blue versus yellow, with black versus white forming a third component. Opponent-color theory states that the perception of color is based on the output of three mechanisms, each resulting from an opponency between two colors: red-green, blue-yellow, and black-white. Each color is composed from these pairs: a color can be reddish yellow or yellowish green, for example, but never reddish green. Experiments called hue cancellation are used to test this theory. These experiments start with a color and determine how much of the opponent color must be added before the original hue disappears. A unique hue is one of the four colors that can be described with a single color term: red, yellow, green, or blue. Other colors, such as purple, can be described as compounds: reddish blue.
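A minimal sketch of the opponent recombination, using one common textbook-style convention for the channel formulas; the cone values and the exact weights are illustrative assumptions.

```python
# Toy cone outputs for a stimulus, normalized 0..1 (illustrative numbers).
L, M, S = 0.8, 0.4, 0.1

# One conventional opponent recombination (signs and weights are a choice):
red_green   = L - M            # positive -> reddish, negative -> greenish
blue_yellow = S - (L + M) / 2  # positive -> bluish,  negative -> yellowish
black_white = L + M            # overall luminance signal

print("red-green:", red_green, " blue-yellow:", blue_yellow, " luminance:", black_white)
# This stimulus is reddish AND yellowish (orange). A stimulus can never be
# reddish green, because one channel cannot signal both poles at once.
```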

Color in the visual cortex

It is known that the transformations that produce perceived color take place in the visual cortex. It is, however, not clear how the physiology gives rise to perception. Several kinds of cells in the cortex are interested in color, but they do not seem to linearly add and subtract inputs from the different cone types. Instead, there is evidence that they combine inputs nonlinearly. For example, a cell may respond to something more like (L² - M² + S²), although exactly how this would produce the colors we see remains a topic for future work.

Achromatopsia is the inability to perceive colors, caused by damage to the central nervous system. Patients with this condition can find the boundaries between regions of different colors but cannot say which colors they are. Vision itself is largely intact; it is the experience of color that has deteriorated.

What individual differences in color perception are there?

Language and color

Color words that are used frequently and applied by almost all speakers to the same colors are called basic color terms. The number of basic color terms differs greatly between cultures; this is called cultural relativism. New basic color terms arise when a color lies on the border between two existing color terms and neither existing term fits it well. Color perception does not depend on the number of basic color terms someone knows; it is not primarily shaped by language and culture.

Qualia is the name for the subjective aspects of sensory and perceptual experiences, such as sweet, purple, or itchy. The problem of inverted qualia is illustrated by imagining that a color is exchanged with its opposite color while the names remain the same: red would then be called green, while you still see the same color as before.

Genetic differences in color vision

There is some variation between individuals in color perception. Unique green, for example, is located by different observers at wavelengths anywhere between 495 and 530 nm. Far more men than women are color-blind, because the genes for the M- and L-cone photopigments lie on the X chromosome, of which men have only one. There are different types of color blindness. One determining factor is which cone type is affected; a second is the type of defect: the cone type is altered, or the cone type is missing entirely. People who are missing one cone type are called color-blind, but it is not the case that they cannot see any color at all: two cone types remain. Defects of the M- and L-cone types are the most common, so most color-blind people have difficulty distinguishing medium from long wavelengths. A deuteranope is a person whose color blindness is due to the absence of M-cones. A protanope is a person whose color blindness is due to missing L-cones. A tritanope is someone whose color blindness is due to having no S-cones. Anomalous trichromats have three cone types, but two of them are nearly identical, so these people perceive colors much as people with only two cone types do. A cone monochromat is a person with only one cone type and is truly color-blind. A rod monochromat is a person with no cones of any type; these people are also color-blind and in addition have severely impaired vision in bright light. Agnosia is the inability to recognize objects despite being able to see them. Anomia is the inability to name objects despite being able to see and recognize them; in this context, anomia is the inability to name colors.

How to move from the color of light to a world of color?

In reality, regions of one color border on regions of other colors. Color contrast is a perceptual effect in which the color of one region induces the opponent color in a neighboring region. Color assimilation is a perceptual effect in which two colors bleed into each other, each taking on some of the chromatic quality of the other. An unrelated color is a color that can be experienced in isolation. A related color, such as brown or gray, is seen only in relation to other colors: a grayish patch in complete darkness appears white. Context therefore plays an important role in the perception of color.

Adaptation and afterimages

Adaptation can be color specific, as the phenomenon of negative afterimages shows. An afterimage is a visual image that is seen after the stimulus has been removed. The first colored stimulus is the adapting stimulus: a stimulus whose removal causes a change in visual perception or sensitivity. The illusory color that is then seen is the negative afterimage: an afterimage whose polarity is opposite to that of the original stimulus. Light stimuli produce dark negative afterimages, and colors are complementary: red produces green and yellow produces blue. The neutral point is the point at which an opponent color mechanism no longer generates a signal.

Color constancy

Color constancy is the tendency of a surface to appear the same color under different illuminants. This is a difficult problem for the visual system, since the illuminant, the light that illuminates a surface, is not constant. The spectral reflectance function of a surface is the percentage of each wavelength that the surface reflects. The spectral power distribution is the relative amount of light at each wavelength; sunlight, for example, is yellowish and daylight bluish. The light that reaches our eyes is the product of the surface reflectance and the illumination. So there are two conditions, and the three cone types give three outputs under each condition. Despite the different cone outputs under the different illuminants, a banana looks yellow under both forms of lighting.

The problem with the illuminant

There is the illuminant (call it I) and a surface (call it S). What you observe is the product I × S, but what we want to know is S. The visual system is very good at determining what S is.

Physical constraints make constancy possible

Certain assumptions constrain the possible answers about the color of a surface. One example is the assumption that the brightest region in a scene is white. Assumptions about the illuminant include, for example, that unnatural light sources make the world look unnatural, as in night clubs. Assumptions can also be made about surfaces. Reflectance is the percentage of light striking a surface that is reflected rather than absorbed. In addition, assumptions can be made about the structure of the world: sharp boundaries in an image usually result from boundaries between objects rather than from boundaries between light sources. The visual system uses such assumptions to achieve color constancy.
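A minimal sketch of the brightest-region ('white patch') assumption at work, with made-up reflectances: because the observed signal is I × S, identifying a white surface reveals the illuminant, which can then be divided out.

```python
import numpy as np

rng = np.random.default_rng(2)
surfaces = rng.uniform(0.1, 1.0, size=(5, 3))   # true reflectances S, rows = patches
surfaces[0] = [1.0, 1.0, 1.0]                    # one genuinely white patch

illuminant = np.array([1.0, 0.8, 0.5])           # a yellowish light I
observed = surfaces * illuminant                 # the eye only receives I x S

# White-patch assumption: the brightest value in each band comes from a white
# surface, so it reveals the illuminant, which can then be divided out.
estimated_I = observed.max(axis=0)
recovered_S = observed / estimated_I

print("estimated illuminant:", estimated_I)
print("worst recovery error:", np.abs(recovered_S - surfaces).max())
```

The recovery is essentially perfect here only because the scene really does contain a white patch; when the assumption is violated, the estimate of S is biased, which is one reason color constancy is good but not flawless.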

What is color vision good for?

Seeing color is not an absolute requirement, but there are two areas where it is very useful: food and sex. In general, color vision is helpful in visual search tasks. Food is easier to find, and it is also easier to decide whether the food is ripe. Bees, for example, can see the reflection of short wavelengths from flowers, which helps them find suitable food sources. Animals also use color to send sexual signals, as a peacock does with its feathers. Color vision is achieved in different ways in different species.

How does the perception of space and binocular vision work? - Chapter 6 (5th edition)

The ability to perceive and respond to the structure of space is one of the fundamental goals of the visual system. As a starting point, we assume that there is an external world, a philosophical position known as realism. Positivists, as an alternative, hold that the world as we perceive it exists only because our senses create it for us, which would make the world nothing more than a very elaborate hallucination. The term Euclidean refers to the geometry of the real external world: in Euclidean geometry, parallel lines remain parallel and objects keep the same size and shape as they move through space. The geometry of the retinal images of this world is distinctly non-Euclidean: it becomes non-Euclidean when the three-dimensional world is projected onto the curved, two-dimensional surface of the retina. So we have to reconstruct the Euclidean world from non-Euclidean input, from two different retinal images, one from each eye. Having two eyes offers advantages: losing one eye does not eliminate sight, and two eyes cover a larger field of view. The chance that the eyes miss something is smaller because two detectors are present; this is called probability summation. When both eyes detect the same target, sensitivity improves; this is called binocular summation. The difference between the two retinal images of the same scene is called binocular disparity. Disparity is the basis for stereopsis: the ability to use binocular disparity as a cue to depth, giving a vivid perception of the three-dimensionality of the world that is not available with monocular vision. Monocular means with one eye. Monocular depth cues are depth cues that are available even when only one eye is used. Binocular depth cues are depth cues that depend on information from both eyes. Stereopsis is the primary example in humans, but convergence and the ability to see more of an object with two eyes than with one are also binocular depth cues.

How are monocular cues for three-dimensional space used to form a representation?

Every view of the world offers multiple depth cues. Normally these cues reinforce each other, and we combine them to form a convincing and reliable representation of the three-dimensional world. But sometimes the cues contradict each other. How are cues used to form a representation?

Pictorial depth cues

Artists have long used clever tricks to depict depth in their paintings. These tricks are referred to as pictorial depth cues. They are the natural consequence of projecting the three-dimensional world onto the two-dimensional surface of the retina. When an image is viewed from the correct position, the retinal image (at least in one eye) formed by the two-dimensional picture is the same as the retinal image that would have been formed by the three-dimensional world, which is why we can see depth in the picture.

Occlusion

Occlusion is a cue to relative depth in which, for example, one object blocks our view of part of another object. Occlusion is a non-metric depth cue: it gives only the relative ordering of occluding and occluded objects. A metric depth cue, by contrast, is a cue that does provide information about distance in the third dimension.

Size and position cues

The image an object forms on the retina becomes smaller as the object moves farther away from you. This is a consequence of projective geometry: the geometry that describes the transformations that occur when the three-dimensional world is projected onto a two-dimensional surface. Smaller objects appear to be farther away than larger ones; this is called relative size, a depth cue that compares the sizes of items without knowing the absolute size of any of them. A texture gradient arises when larger elements appear in one area and smaller elements in another: a set of items that gradually and systematically shrink from large to small appears to lie on a receding surface. The depth cue of relative height is the observation that objects on the ground at different distances from the observer project images at different heights in the retinal image: objects farther away appear higher in the image. Texture fields that give an impression of three-dimensionality are actually combinations of relative size and relative height cues. Familiar size is a depth cue based on our knowledge of the typical size of objects. Occlusion is a non-metric cue that provides only depth order, whereas relative size and relative height do offer some metric information. There are two types of metric depth cues. A relative metric depth cue can specify, for example, that object A is twice as far away as object B, without giving the absolute distance to either. An absolute metric depth cue provides quantifiable information about distance in the third dimension; familiar size is an example.
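A small worked example of projective geometry: the visual angle an object subtends shrinks roughly in inverse proportion to its distance, which is what makes relative size informative.

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended by an object: the retinal image shrinks in
    inverse proportion to distance (projective geometry)."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The same 1.8 m person at increasing distances: the visual angle roughly
# halves each time the distance doubles, which is why relative size works
# as a depth cue.
for d in [2, 4, 8, 16]:
    print(f"{d:>2} m away -> {visual_angle_deg(1.8, d):5.2f} deg")
```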

Aerial perspective

This cue rests on an implicit assumption that light is scattered by the atmosphere: the farther away an object is, the more atmosphere we have to look through, and the more its light is scattered. The cue is called aerial perspective or haze. Short wavelengths are scattered more than medium and long wavelengths, which is why the sky appears blue and why distant objects look not only blurred but also bluish.

Linear perspective

Linear perspective is a cue based on the fact that lines that are parallel in the three-dimensional world appear to converge in a two-dimensional image. The point at which they appear to converge is called the vanishing point.

Seeing depth in pictures

A pictorial depth cue is a depth or distance cue that artists use to indicate three-dimensional depth in two-dimensional images. In general, pictures can be viewed from many different vantage points, despite the distortions that arise. The technique of anamorphosis, or anamorphic projection, illustrates that this ability to cope with distortion is limited: rules of linear perspective are used to distort a two-dimensional image so severely that it looks correct only when viewed from one specific vantage point.

Which triangulation cues to three-dimensional space are there?

In addition to pictorial depth cues, a number of further sources of information are available to our visual system when we look at real-world scenes. Such cues cannot be reproduced in a static two-dimensional picture. They are known as triangulation cues because, like a surveyor, the visual system compares views of the scene from different vantage points. These cues can be monocular (motion parallax and focus) or binocular (convergence and stereopsis).

Motion cues

The first of these nonpictorial cues is motion parallax, an important cue based on head movement. The geometric information obtained from one eye in two different positions at two different moments is equivalent to the information from two eyes at different positions in the head at the same time. How does motion provide a cue for depth? The term parallax refers to the fact that objects that are closer pass by faster than objects that are farther away. Just as objects grow and shrink on the retina as they approach and recede, an object shown on a screen may seem to come toward you as it gets bigger. This optic flow can even give you the feeling that you are moving toward the screen rather than the object on the screen moving toward you.
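
As a rough illustration of why nearer objects sweep by faster (a simplification that assumes sideways observer motion and a point directly abeam; not a formula from the book), the angular speed of a point falls off with its distance:

    def parallax_angular_speed(observer_speed_m_s, distance_m):
        # For an observer translating sideways, a point at distance d
        # sweeps across the retina at roughly v/d radians per second.
        return observer_speed_m_s / distance_m

    print(parallax_angular_speed(1.5, 2))   # nearby tree: 0.75 rad/s
    print(parallax_angular_speed(1.5, 50))  # distant hill: 0.03 rad/s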

Accommodation and convergence

Accommodation is the process by which the eye changes its focus: the lens becomes thicker when focusing on closer objects. Convergence is the ability of the two eyes to turn inward. It is used to place the two images of the object being looked at on corresponding locations in the two retinal images, reducing the disparity of that object to (almost) zero. Convergence occurs when you focus on a closer object. Divergence works the same way, but the eyes turn outward; it occurs when you focus on an object farther away.

What are binocular vision and stereopsis?

The brain translates disparity into stereopsis. The image of the object you are fixating falls on the fovea in each eye. The geometric concept of corresponding retinal points states that the two monocular images of an object fall on corresponding points if those points are the same distance from the fovea in both eyes. Every object that lies on the imaginary circle running through the two eyeballs and the object of fixation projects to corresponding retinal points. This circle is called the Vieth-Müller circle. If life were simple, this circle would be the horopter: the location of objects whose images fall on corresponding points, the surface of zero disparity. The key point is that there is a surface of zero disparity whose position depends on the current state of convergence of the eyes. Objects on the horopter are seen as single objects when viewed with both eyes. Objects clearly closer or farther away than the horopter form images at non-corresponding points in the two eyes, and we see them double; this double vision is called diplopia. Objects that are close to, but not exactly on, the horopter can still be seen as single. The region in front of and behind the horopter in which objects are perceived as single is called Panum's fusional area. Crossed disparity is produced by objects in front of the fixation point (closer than the horopter); the term crossed is used because such objects appear displaced to the left in the right eye's image and to the right in the left eye's image. Uncrossed disparity is produced by objects behind the horopter; these appear displaced to the right in the right eye and to the left in the left eye.

Stereoscopes and stereograms

A stereoscope is a device, developed by Wheatstone, that presents one image to one eye and another image to the other eye simultaneously. The stereoscope proved that the visual system interprets binocular disparity as a depth cue. A stereoscope is useful but not strictly necessary for experiencing stereopsis: you can also teach yourself a technique called free fusion, in which the eyes are converged or diverged so as to view a stereogram without a stereoscope, creating 3D percepts from 2D stereograms. The idea is to aim each eye at a different copy of the image, so that the intersection of the two lines of sight no longer falls on the page but behind (or in front of) it; you look 'through' the page and get the impression of an object that is closer or farther away. About 3-5% of the population is stereoblind: unable to use binocular disparity as a depth cue, even though they have sight in both eyes.

Random dots of stereograms

Julesz theorized that stereopsis might be used to discover objects and surfaces, in contrast to the conventional wisdom that stereopsis occurred relatively late in visual processing. Julesz thought that stereopsis might help reveal camouflaged objects. To prove his point, he used random dot stereograms (RDSs): stereograms made up of many randomly placed dots that contain no monocular depth cues. The stimuli visible in such stereograms are often 'Cyclopean' in nature: defined by binocular disparity alone.
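
The logic of an RDS translates directly into a short sketch. Everything below (the sizes, the 4-pixel shift, the function name) is invented for illustration: two identical fields of random dots, except that a central square region is shifted horizontally in the right eye's image, producing crossed disparity with no monocular cue.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_dot_stereogram(size=100, square=40, shift=4):
        # Left/right image pair: identical random dots, except a central
        # square that is displaced horizontally in the right eye's image.
        left = rng.integers(0, 2, (size, size))
        right = left.copy()
        lo = (size - square) // 2
        hi = lo + square
        # Move the square's dots leftward by `shift` pixels in the right image.
        right[lo:hi, lo - shift:hi - shift] = left[lo:hi, lo:hi]
        # Refill the uncovered strip with fresh random dots (no monocular cue).
        right[lo:hi, hi - shift:hi] = rng.integers(0, 2, (square, shift))
        return left, right

    left, right = random_dot_stereogram()

Viewed on its own, each image is featureless noise; when the pair is fused, the shifted square should appear to float in front of the background.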

3D films use stereoscopic photography as well. In the past they were viewed through glasses with one red lens and one green or blue lens. These days, special glasses ensure that alternating frames of the film are presented to the two different eyes.

Using binocular stereopsis

The use of stereopsis goes beyond entertainment. Stereopsis can disambiguate an otherwise ambiguous image; stereoscopic images are used in radiology, for example, where they increase diagnostic reliability.

Stereoscopic correspondence

The correspondence problem in binocular vision is the problem of working out which part of the image in the left eye should be matched with which part of the image in the right eye. The problem is especially difficult when the images consist of thousands of identical features, such as the dots in random dot stereograms. Marr and Poggio proposed two constraints (heuristics) for solving the correspondence problem: the uniqueness constraint and the continuity constraint. The uniqueness constraint is the observation that a feature in the world is represented exactly once in each retinal image. The continuity constraint is the observation that, except at the edges of objects, neighboring points in the world lie at similar distances from the observer. Both constraints help in solving the correspondence problem.
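
Block matching makes the correspondence problem concrete: for each small patch of the left image, try every candidate horizontal shift in the right image and keep the best match. This is only an illustrative sketch, not Marr and Poggio's algorithm; it enforces neither the uniqueness nor the continuity constraint.

    import numpy as np

    def block_match_disparity(left, right, patch=5, max_disp=8):
        # For each left-image pixel, pick the horizontal shift (disparity)
        # whose patch in the right image differs least from the left patch.
        h, w = left.shape
        r = patch // 2
        disparity = np.zeros((h, w), dtype=int)
        for y in range(r, h - r):
            for x in range(r + max_disp, w - r):
                ref = left[y - r:y + r + 1, x - r:x + r + 1]
                errors = [np.abs(ref - right[y - r:y + r + 1,
                                             x - d - r:x - d + r + 1]).sum()
                          for d in range(max_disp + 1)]
                disparity[y, x] = int(np.argmin(errors))
        return disparity

Applied to the stereogram pair sketched above, the recovered disparities should sit near 4 inside the central square and near 0 elsewhere, with errors wherever many equally good matches exist, which is exactly the ambiguity the constraints are meant to resolve.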

The physiological basis of stereopsis and depth perception

Input from the two eyes does not converge until the striate cortex is reached, where most neurons are binocular. A binocular neuron has two receptive fields, one in each eye. For neurons in the striate cortex, these two receptive fields are quite similar in the two eyes, with approximately the same orientation and spatial-frequency tuning and the same preference for speed and direction of motion. These cells are therefore well suited for matching the images in the two eyes. Many binocular neurons respond best when the retinal images fall on corresponding points in the two retinas, providing a neural basis for the horopter. Many other binocular neurons, however, respond best when the same image occupies slightly different positions on the two retinas: they are tuned to a particular binocular disparity. Stereopsis can operate via both metric and non-metric cues. Non-metric information merely tells you whether a feature lies in front of or behind the plane of fixation. Disparity-tuned neurons of this kind have been found in V2 and in some higher cortical areas: some respond best to disparities close to zero (images falling on corresponding retinal points), while others are broadly tuned to a range of crossed (near) or uncrossed (far) disparities. Stereopsis can also be used in a precise, metric way. Both forms are useful, and fMRI data suggest that the dorsal where-pathway is more concerned with metric stereopsis, while the ventral what-pathway is more concerned with categorical, near-versus-far information.

How to combine depth cues?

Bayes's basic insight was that prior knowledge can influence estimates of the probability of a current observation. Using Bayes' theorem we can calculate P(A|O), the probability that the world is in state A given observation O: P(A|O) = P(A) × P(O|A) / P(O).

The Bayesian approach revisited

We do not know in advance how great the influence of prior knowledge is. Use can be made of the principle that certain hypotheses about the world are more likely than others: for each hypothesis that is drawn up, the formula is applied and its probability determined.
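
One standard formalization of this idea (a sketch under assumptions the book does not spell out) treats each depth cue as a Gaussian estimate and, with a flat prior, weights each cue by its reliability, the inverse of its variance; the numbers below are invented.

    def combine_cues(mu1, var1, mu2, var2):
        # Reliability-weighted combination of two Gaussian depth estimates.
        w1 = (1 / var1) / (1 / var1 + 1 / var2)
        mu = w1 * mu1 + (1 - w1) * mu2
        var = 1 / (1 / var1 + 1 / var2)
        return mu, var

    # Stereopsis says 2.0 m (reliable), relative size says 2.6 m (noisy):
    # the combined percept lands near the more reliable cue.
    print(combine_cues(2.0, 0.01, 2.6, 0.09))  # -> (~2.06 m, var ~0.009)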

Illusions and the construction of space

So if visual perception of the world is a best guess about the causes of the input, it is interesting to ask what happens when that guess turns out to be wrong. Depth cues are powerful, and misapplied depth cues can commit you to a wrong interpretation, producing illusions.

Binocular rivalry and oppression

What happens when two completely different stimuli are presented to the two eyes? The images 'fight' to be seen; this is called binocular rivalry. As a result, the two images alternate, often in a piecemeal, patchwork manner, as the brain tries to construct a coherent percept.

What does the development of binocular vision and stereopsis look like?

Are babies born with stereopsis? If not, how does binocular function develop? According to McKee, there are three laws of development:

  1. As children get older, they get better at things.
  2. Whatever it is, girls do it before boys.
  3. Everything develops simultaneously with the rest.

Teller adds a fourth observation: abilities start out poor, improve steadily, and decline again later in life. Research has confirmed the first two laws, but not the third: the development of binocular vision and stereopsis strongly violates that third law. Infants are disparity-blind until about 4 months of age, and then stereopsis emerges quite suddenly. Stereopsis is not an all-or-nothing phenomenon, however. Stereoacuity is a measure of the smallest binocular disparity that can generate a sensation of depth. Once stereopsis has emerged, stereoacuity improves very rapidly to near-adult levels.

How do we explain the sudden emergence of stereopsis in babies around 4 months of age? Newborns can make converging eye movements to follow a target, but accurate and consistent convergence only appears around 3 to 4 months. We cannot conclude that inaccurate convergence prevents stereopsis from developing earlier, because convergence does not have to be very accurate for large disparities. A second possibility is that stereopsis does not develop earlier because part of the visual system is still immature, such as the disparity-sensitive neurons in V1. However, recent data suggest we should look further: research shows that the neural machinery in V1 can indeed combine signals from the two eyes and is sensitive to disparity. One possible explanation is that the computation of relative disparity, which is necessary for stereoacuity, takes place outside V1, possibly in V2. Another possible explanation is that the problem does lie in V1: the cells are mature in their responses to disparities between the two eyes' images, but not yet mature in other important ways. It may therefore be that the signals these immature V1 neurons emit are too weak to support stereopsis.

Abnormal visual experience can disrupt the binocular vision

We do not have to 'learn' binocular vision: enough of it is present at birth. There is, however, a critical period during development in which binocular visual stimulation is required for normal development of the visual cortex. Strabismus is a misalignment of the two eyes, such that a single object's image falls on the fovea of one eye and on a non-foveal point of the other. In esotropia one eye is turned too far inward toward the nose (cross-eyed); in exotropia one eye is turned too far outward. Esotropia must be treated before the age at which stereopsis normally develops in order to limit the damage. Esotropia and exotropia can be corrected surgically. The tilt aftereffect is the perception that lines appear tilted after adaptation to a pattern of a particular orientation. Suppression is the inhibition of an unwanted image; it often occurs in people with strabismus, and what is suppressed is not consciously experienced. In esotropia, the most common pattern is suppression of the input from the deviating eye. Binocular rivalry is itself a form of suppression, so some degree of suppression is a normal part of visual experience. Strabismus greatly reduces the number of binocular neurons in the visual cortex, which disrupts stereopsis.

How do attention and scene perception work? - Chapter 7 (5th edition)

We cannot process everything we see at once, because we do not have the brain capacity for it: we attend to some stimuli and not to others. Attention is a very large set of selective processes in the brain. Because we cannot process all input at the same time, the nervous system has developed mechanisms that restrict processing to a subset of things, places, ideas or moments in time. Attention is therefore not located at a single locus in the nervous system. There are different types of attention:

  • Attention can be both internal and external. External attention refers to attention to stimuli in the world. Internal attention is our ability to, for example, focus on a particular thought or to choose one response over another.
  • Attention can be overt or covert. Overt attention refers to directing a sense organ at a stimulus. If your eyes are pointed at a word in a book while you are actually watching the attractive boy to your left, that is covert attention.
  • Divided attention is, for example, when you are reading a text, but also remain aware of the fact that music is playing in the room.
  • Sustained attention is, for example, continuing to watch the kettle until the water boils.
  • This chapter focuses on selective attention: the ability to choose one or more stimuli from the multitude of stimuli.

How does selection in space work?

Reaction time (RT) is the amount of time that elapses between the moment the stimulus appears and the moment the participant presses the response button. A cue is a stimulus that may indicate where (or what) the next stimulus will be. Cues can be valid, invalid or neutral. With an invalid cue the response time is longer; with a valid cue it is shorter. An exogenous cue or peripheral cue is a cue presented in the participant's peripheral visual field, at or near the location of the upcoming target. An endogenous cue or symbolic cue is typically presented at or near fixation and must be interpreted, such as an arrow pointing toward the likely target location. The stimulus onset asynchrony (SOA) is the time between the onset of one stimulus and the onset of another. An exogenous cue becomes fully effective 100-150 ms after it appears; an endogenous cue takes longer to become fully effective. The difficulty of focusing attention back on a location that was attended just before is called inhibition of return.

The 'spotlight' of attention

If you shift your gaze from one point in space to another, does your attention literally move with it? On one account, attention moves in a way analogous to our eye movements: when we shift attention, it sweeps across the intervening space, like a spotlight beam. This spotlight metaphor is very popular.

What is a visual search?

Visual search is looking for a target in a display that contains distracting elements. The target is the goal of the visual search; a distractor is any stimulus that is not the target. Set size is the total number of items in the display. The target is harder to find as set size increases. To measure search efficiency, reaction time is recorded as a function of set size, and search tasks are compared on that basis.

Feature searches are efficient

A feature search is a search for a target defined by the presence of a single feature, such as a salient color or orientation. Salience is the vividness of a stimulus relative to the stimuli around it. If the target's color is salient enough, it does not matter how many distractors there are: apparently we can process the color or orientation of all items at once. This is called parallel search, and RT does not change with set size.

Many searches are inefficient

If the target and distractors share the same basic features, the search is inefficient. A serial self-terminating search is then required: items are examined one after another until the target is found.
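
The contrast between the two search modes can be sketched as predicted RT-by-set-size functions. The intercept and slope below are illustrative values, not data; the key structural assumption is that a serial self-terminating search examines, on average, half the items before finding the target.

    def predicted_rt_ms(items_inspected, slope_ms, base_ms=400):
        # Linear RT model: a base time plus a cost per inspected item.
        return base_ms + slope_ms * items_inspected

    for n in (4, 8, 16):
        feature_rt = predicted_rt_ms(0, slope_ms=50)      # parallel: flat, ~400 ms
        serial_rt = predicted_rt_ms(n / 2, slope_ms=50)   # half the items, on average
        print(n, feature_rt, serial_rt)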

In real-world searches, basic features guide visual search

Guided search is a search in which attention can be restricted to a subset of possible items on the basis of information about the target's basic features, such as its color. Conjunction searches are searches for a target defined by the joint presence of two or more attributes, for example a red vertical bar among red horizontal and green vertical bars.

In real-world searches, properties of scenes guide visual search

Scene-based guidance means that our understanding of scenes helps us find specific objects within them. This is related to Bayesian prior probability: how likely it is, a priori, that a given object appears at a given place in the scene.

The binding problem in visual search

The binding problem is the challenge of assigning the different attributes of visual stimuli (such as color, orientation and motion), which are processed by different brain circuits, to the appropriate objects, so that we perceive unified objects. In the pre-attentive stage, a stimulus is processed before selective attention has been directed to it. According to Treisman's feature integration theory, basic features such as color and orientation are available in this pre-attentive stage of processing; however, we do not know how these features are bound together until we focus attention on a specific object. An illusory conjunction is a false combination of two features in a visual scene, for example seeing a red X when the display contains red letters and Xs, but no red X.

What happens if you search in time rather than in space?

Up to this point we have focused on attentional selection in space; now consider what happens when you search in time rather than in space. Rapid serial visual presentation (RSVP) is an experimental procedure in which a stream of stimuli appears in rapid succession at a single location (often the fixation point). The participant reports whether, for example, an X passed by. With reasonably large letters this is easy; the task can be made harder by using photographs instead of letters. When two targets are embedded in a stream of distractors, there is a tendency to miss the second target if the first target appeared about 200-300 ms before it. This phenomenon is called the attentional blink.

What is the physiological basis of attention?

Attention could enhance neural activity

If we have to attend to one location in the visual field, the neurons that respond to stimuli in that part of the field become more active. fMRI has demonstrated that even the earliest stages of cortical processing are influenced by attention, and the effects of attention grow larger further along the visual areas of the cortex. The influence of attention on the earliest cortical stages is probably the result of feedback from later stages of processing; such feedback may be a very important part of visual processing.

Attention could enhance the processing of a specific type of stimulus

The desire to find pennies makes pennies more salient; if we suddenly want to find euros instead, the salience of pennies decreases and that of euros increases. In that case attention is deployed not on the basis of spatial location but on the basis of stimulus features. Selective attention can thus be used to enhance the processing of a particular type of stimulus. fMRI shows that different parts of the brain specialize in processing different stimuli: the fusiform face area (FFA), for example, is important in processing faces, and the parahippocampal place area (PPA) in processing places.

Attention could coordinate the activity of different brain regions

One proposed solution to the binding problem is that binding is achieved by synchronizing the temporal patterns of activity across the brain regions involved. A role of attention would then be to control what is synchronized with what.

Attention and single cells

At a more precise level, we can ask how attention changes the responses of a single neuron. One way is response enhancement: a neuron responding to an attended stimulus gives a larger response. A second way is sharper tuning: attention makes it easier for the neuron to find a weak signal, for instance a vertical line among the noise of other orientations; a neuron that used to respond to lines with orientations from -20 to +20 degrees might then respond only to lines within +/-10 degrees. A third, more radical option is altered tuning, in which attention changes the neuron's preference. If cells are devoted to processing the attended object, sensitivity to nearby items should be reduced, because the resources that would have served them are now devoted to the attended stimulus. The key point is that attention can change the activity of individual cells.

Attention may change the way neurons talk to each other

Earlier we discussed how attention can change the activity of a neuron or of a brain area. In addition, attention may change the interactions between neurons and/or areas: the way neurons talk to each other. One way to do so is by synchronizing the temporal patterns of activity in certain areas. If neurons are synchronized, they are all doing more or less the same thing; a group of synchronized neurons may, for instance, be saying "that is a face" while other neurons, desynchronized from that group by attention, work on different aspects of recognition.

What would happen if you could no longer pay attention?

In a visual field defect, part of the visual field has no vision or abnormal vision, often as a result of damage to the visual nervous system. A person with damage to the primary visual cortex of the right hemisphere will be blind in the left half of visual space, and vice versa. The parietal lobe is the lobe at the top of each cerebral hemisphere, between the frontal and occipital lobes. Damage here causes problems in directing attention to objects and places; these problems manifest themselves in, among other things, neglect and extinction.

Neglect

Neglect patients behave as if part of the world does not exist. As a neurological symptom, patients with neglect (1) do not respond to or consciously experience stimuli in the contralesional visual field and (2) may ignore half of their body or half of an object. The contralesional field is the visual field on the side opposite a brain lesion.

Extinction

Extinction is related to neglect and may even be a milder form of it. Extinction is the inability to detect a stimulus on one side of the fixation point when another stimulus is simultaneously presented on the other side of the visual field. If an object is presented in an extinction patient's contralesional field, he or she can see it, provided it is salient. But when it competes with an equivalent object in the ipsilesional field, the two objects fight for attention and the ipsilesional object wins.

Selective focus in ADHD

Attention deficit hyperactivity disorder (ADHD) is a common disorder in children; symptoms include difficulty in focusing attention and problems controlling behavior. Research with visual search tasks, however, suggests that this difficulty lies outside the domain of visual-spatial attention.

How to observe and understand scenes?

Two pathways to scene perception

To understand scene perception, a two-pathway approach can be used: a selective and a nonselective pathway. Early in visual processing, information about basic features is processed, and grouping and other rules are used to create 'proto-objects', which can then be selected and recognized via the selective pathway.

The nonselective pathway computes ensemble statistics

Ensemble statistics are the average and distribution of properties, such as orientation or color, over a set of objects or within a region of a scene. They represent knowledge about the properties of a group of objects, or rather of an ensemble of proto-objects. The selective pathway is required to remember more specific details.

The nonselective pathway computes scene gist and layout - very quickly

Spatial layout is a description of the structure of a scene (for example, open, closed, rough, smooth) without reference to the identities of specific objects in it. We can grasp the gist of a scene very quickly by analyzing its spatial-frequency components. The selective pathway contributes the identities and relative locations of objects, after which the scene can be described in terms of 'this chicken in this position next to this cow'. The combination of this global and local information could create a representation that gives us the impression of seeing an entire scene.

Memory for objects and scenes is amazingly good

The scene representations generated by the selective and nonselective pathways together are very powerful. First, they give us a world full of coherent objects in a coherent scene, even before we have had the chance to attend to most of those objects. Second, they are easily stored in memory.

But, memory for objects and scenes can be amazingly bad: change blindness

Pictures can be understood very quickly, and given enough time they are encoded in sufficient detail to be recognized days later. Yet there is also the phenomenon of change blindness: the failure to notice a change between two scenes. If the gist or meaning of the scene remains unchanged, even large changes can go unnoticed.

What do we actually see?

As with change blindness, a change is not noticed if the gist of the scene stays the same, unless attention is drawn to it. Failing to notice the unexpected leads to the phenomenon of inattentional blindness: the failure to notice a fully visible stimulus that could be reported without any difficulty if it were attended.

How does the perception of movement work? - Chapter 8 (5th edition)

What is the Motion Aftereffect?

The motion aftereffect (MAE) is the illusion that a stationary object is moving. It occurs after prolonged exposure to a moving stimulus: after adapting to motion in one direction, a stationary pattern appears to drift in the opposite direction.

How does the computation of visual motion work?

The computation of visual motion can be illustrated with the Reichardt model. A moving object first passes through the receptive field of one neuron (A) and a little later through the receptive field of an adjacent neuron (B). A third neuron (M) listens to the two neurons and detects movement. The problem with this arrangement is that M would also fire in response to a stationary object large enough to fall on the receptive fields of both A and B. To solve this, two components are added to the neural circuit. The first addition is a cell (D) that receives input from neuron A only and delays its transmission for a short period. Cell D adapts quickly: it fires immediately when A is stimulated but stops quickly if stimulation persists. Cells B and D connect to cell X, the second addition. Cell X is a multiplication cell that fires only when B and D are both active: it multiplies the simultaneous input from B and D and forwards the product to M. This mechanism is direction-selective; it responds only to movement from left to right. A more realistic circuit also adds receptors that can detect movement over longer distances.
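
A minimal delay-and-multiply sketch of the Reichardt idea follows (all names invented; cell D's fast adaptation is simplified here by using a brief flash as the stimulus). Motion from A to B lines the delayed A signal up with the B signal, so their product is large; for the same flash seen at only one receptor, the product stays zero.

    import numpy as np

    def reichardt_response(signal_a, signal_b, delay=1):
        # Delay the A signal, then multiply with the B signal and sum:
        # a large product means rightward (A-to-B) motion was detected.
        delayed_a = np.roll(signal_a, delay)
        delayed_a[:delay] = 0  # nothing has passed the delay unit yet
        return np.sum(delayed_a * signal_b)

    a = np.array([0, 1, 0, 0, 0])    # stimulus crosses receptor A at t=1 ...
    b = np.array([0, 0, 1, 0, 0])    # ... and receptor B at t=2
    print(reichardt_response(a, b))  # 1: motion from A to B detected
    print(reichardt_response(a, a))  # 0: the same brief flash, no motion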

Apparent movement

One criticism of this neural circuit might be that the visual system does not require continuous motion to perceive motion. That point of criticism, however, turns out to be a virtue rather than a danger for the Reichardt model, because the model explains a visual illusion called apparent motion: the compelling impression of smooth movement produced by the rapid alternation of objects presented in sequence at different locations.

The correspondence problem - viewing through an Aperture

The correspondence problem for motion detection is that the system does not know which feature in frame 2 corresponds to a given feature in frame 1. If an image slides toward the lower right, one detector registers the diagonal movement and another detects the downward component; these detectors compete with each other to determine our overall perception.

The aperture problem is closely related to the correspondence problem. A different detector can win the competition just mentioned when an object is seen through an aperture than when the entire object is seen. If you look at a moving object through an aperture (an opening through which the object is only partly visible), the direction of its motion is ambiguous; without the aperture there is no ambiguity and therefore no problem. To appreciate the wider implications of the correspondence and aperture problems, note that every neuron in V1 has a limited receptive field: every V1 cell sees the world through a small aperture. Therefore none of these cells can say with certainty which visual elements correspond to each other when an object moves, even when no mask is present. The solution is to have another set of neurons listen to the V1 neurons and integrate their potentially conflicting signals.

Detection of global movement in area MT

Lesions of the magnocellular layers of the LGN disrupt the perception of large, fast-moving objects. Information from magnocellular neurons feeds V1 and is transmitted onward to, among other areas, the middle temporal area (MT), an area thought to be important in the perception of motion. The human equivalent of MT is labeled MT+ or V5; this motion-sensitive region comprises at least two separate maps on the lateral surface at the temporal-occipital (TO) border. The majority of MT neurons are selective for motion in a particular direction but show little selectivity for form or color. MT is where the visual system's neurons for global-motion detection are found.

The results of experiments on monkeys can also be applied to humans. When we view a stationary scene, neurons sensitive to downward motion fire at the same rate as neurons sensitive to upward motion, so their signals cancel and no motion is perceived. If you stare at a waterfall for a while, however, the detectors sensitive to downward motion become fatigued. When you then look at the rocks beside the waterfall, the neurons sensitive to upward motion fire more than the fatigued downward detectors, and the rocks appear to rise. There is always a small amount of retinal motion stimulating motion-sensitive neurons, because our eyes constantly make tiny movements. Interocular transfer is the transfer of an effect (such as adaptation) from one eye to the other.

Until now the book has focused on first-order motion: the movement of luminance-defined objects (objects delineated by changes in reflected light), defined by changes in luminance over time. The next section concerns second-order motion: the movement of an object defined by changes in contrast or texture rather than luminance. Texture-defined or contrast-defined objects are objects defined by differences in contrast or texture, but not in luminance.

The man who could see no movement

Akinetopsia is a rare neuropsychological disorder in which the individual has no perception of motion. It appears to be caused by disruption of cortical area MT.

Second-order motion

As in first-order displays, nothing actually moves in second-order motion displays; the only change is that frame 1 transforms into frame 2, frame 2 into frame 3, and so on. Second-order motion proves that matching discrete objects across frames is not a requirement for motion perception. First-order and second-order motion rely on different brain areas, as demonstrated by double dissociations: cases in which one of two functions is impaired while the other is spared, and vice versa.

Motion Induced Blindness (MIB)

Although it may seem hard to believe, motion can make you temporarily blind. In 2001, Bonneh, Cooperman, and Sagi discovered that if you carefully fixate a central target, stationary targets in the periphery simply disappear, as if erased, when a global moving pattern is superimposed. Although there is no clear explanation yet, motion-induced blindness (MIB) seems somewhat related to the well-known Troxler effect, in which an unchanging target in peripheral vision fades and disappears if you steadily fixate a central target.

How to use information about movement?

Going with the flow: Using motion information to navigate

Navigating safely through the world is one of the primary functions of our visual system. The optic array is the collection of light rays that interact with objects in the world in front of the observer; some of these rays reach our retinas, enabling us to see. Gibson stated that we experience patterns of optic flow as we move through our environment: the changing angular positions of points in the perspective image. The visual system uses this optic flow to determine where we are going. The focus of expansion is the point from which all flow vectors appear to radiate outward as we move forward; it marks the direction of heading and is one aspect of optic flow.

Avoiding imminent collision: the Tao of Tau

The time to collision (TTC) is the time remaining before a moving object hits another object. In principle, TTC could be computed from distance and speed (distance divided by speed). However, people are much better at judging TTC than would be predicted from our ability to judge distance. To resolve this discrepancy, an alternative source of information in the optic flow has been identified, one that conveys TTC without requiring estimates of absolute distance or speed. Lee called this source of information tau (τ). Tau works as follows: as a ball approaches your nose, its retinal image grows. The ratio of the retinal image size at any moment to the rate at which that image is expanding is tau: τ = θ / (dθ/dt). The great advantage of using tau to estimate TTC is that it depends only on information directly available in the retinal image.
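
In code, tau is simply the ratio of the current visual angle to its rate of expansion (a small-angle approximation; the numbers are invented for illustration):

    def tau_seconds(theta_deg, expansion_deg_per_s):
        # Lee's tau: image size divided by its rate of expansion
        # approximates time to collision without knowing distance or speed.
        return theta_deg / expansion_deg_per_s

    # An approaching ball subtends 2 degrees and grows at 4 degrees/second:
    print(tau_seconds(2.0, 4.0))  # ~0.5 s to impact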

Something in the way you move: Using motion information to identify objects

Motion can also inform us about the nature of objects. Biological motion is the pattern of movement of living things. It plays an important role in understanding the actions that people perform.

How does the eye move?

As our eyes move, the image on the retina moves too. The brain must therefore determine which image motions correspond to real movement in the world and which are caused by movements of our eyes and head. Smooth pursuit is a type of voluntary eye movement in which the eyes move smoothly to follow a moving object.

Physiology and types of eye movements

Six muscles are attached to each eye, arranged in three pairs. The superior colliculus is a midbrain structure important for initiating eye movements: when a given cell there is stimulated, the eye moves a certain amount in a certain direction, and the same stimulation always produces the same eye movement. The ocular motor system is very active. Even when we try to keep our eyes completely still, small involuntary but important movements called microsaccades occur continuously. If the eye muscles are temporarily paralyzed, the visual world gradually fades from view. Microsaccades are likely important for fine spatial judgments, such as threading a needle.

In addition to involuntary eye movements, there are three types of voluntary eye movements. Smooth-pursuit movements occur when we track a moving object. Vergence eye movements occur when we rotate the eyes inward (converging) to focus on a nearby object or outward (diverging) to focus on a distant one. The third type is the saccade: a rapid jump of the eye that shifts the fixation point from one spot to another. Saccades can be both voluntary and involuntary; when we view a scene, we tend to fixate on interesting places. There are also reflexive eye movements, which are automatic and involuntary, for example when the eyes move to compensate for head and body movement while fixation remains on a specific target. These are called vestibular eye movements and work via the vestibulo-ocular reflex (VOR). In optokinetic nystagmus (OKN), the eyes involuntarily track a continuously moving field.

Eye movements and reading

We make saccades while reading to bring the text onto our fovea; text too far from fixation cannot be read, partly because of visual crowding. The perceptual span is asymmetrical: readers of English pick up information from about 15 letters to the right of fixation but only 3 to 4 letters to the left. For readers of Hebrew the asymmetry is reversed, and readers of both Hebrew and English can switch the asymmetry with the language. The span is therefore determined by attention and is not a product of limitations of the visual system.

Saccadic suppression and the comparator

Saccadic suppression is the reduction of visual sensitivity that occurs during saccadic eye movements; it removes the smeared motion signal from the retinal image during the movement. The visual system solves the problem of why a stationary world does not appear to move during eye movements by duplicating each eye-movement command: one copy goes to the eye muscles, and the other goes to the comparator, an area of the visual system that compares the image-motion signal with the eye-movement signal and can compensate for the image changes caused by the eye movement.

Updating the neural mechanisms to compensate for eye movement

One way this compensation may occur is by 'remapping' of visual receptive fields. Receptive fields normally have fixed locations, always holding the same position relative to the fovea. However, research shows that the receptive fields of some neurons shift location just before a saccade; this is called remapping or updating. Other work suggests the receptive fields change location only temporarily.

How can the development of the perception of movement be described?

Sensitivity to visual motion does not develop all at once. Some aspects are innate: physiological studies show, for example, that neurons in V1 have a sensitivity to motion direction that resembles that of adults. Sensitivity to global motion (MT), on the other hand, develops more slowly and only reaches maturity around 3 to 4 years of age.

What is sound and how can people perceive this? - Chapter 9 (5th edition)

We humans are often called visual animals, perhaps because we take hearing for granted. Our ears are always open, we can hear around obstacles, and we hear what is around us even when we are looking the other way. Deafness deprives a person of the most fundamental of human skills: communication through speech.

What is the function of hearing?

Every sense has an evolutionary history and has been exposed to various challenges. Biology has devised various solutions to the challenges of sensing and interpreting sound.

What is sound?

Sounds are created when objects vibrate. These vibrations set the molecules of the surrounding medium vibrating in turn, causing pressure changes that can be described as waves. These waves spread out and die away as they travel from their source. Sound waves travel at different speeds in different media.

Basic qualities of sound waves: frequency and amplitude

Sound waves are described in terms of amplitude and frequency. The amplitude or intensity is the magnitude of the pressure change in a sound wave. The rate of pressure fluctuation is the frequency: the number of times per second that the pressure pattern repeats. Frequency is measured in hertz (Hz), where 1 Hz equals 1 cycle per second. The amplitude of a sound wave is associated with loudness: the more intense a sound wave, the louder it is perceived. Frequency is associated with pitch: low-frequency sounds correspond to low pitches and high-frequency sounds to high pitches. The decibel (dB) is a unit of measurement for the physical intensity of sound. Decibels define the difference between two sounds as the ratio of two sound pressures: every 10:1 ratio of sound pressure equals 20 dB, and a 100:1 ratio equals 40 dB. The equation for defining decibels is

dB = 20 log(p/p0)

where p is the pressure (intensity) of the sound being described and the constant p0 is a reference pressure, typically defined in auditory research as 0.0002 dyne per square centimeter (dyne/cm²).
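
The definition translates directly into code; the example pressures are chosen to reproduce the 10:1 → 20 dB and 100:1 → 40 dB ratios mentioned above.

    import math

    def decibels(p, p0=0.0002):
        # Sound pressure level in dB relative to p0 (0.0002 dyne/cm^2).
        return 20 * math.log10(p / p0)

    print(decibels(0.002))  # 10:1 pressure ratio  -> 20 dB
    print(decibels(0.02))   # 100:1 pressure ratio -> 40 dB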

Sine waves and complex sounds

A sine wave is often called a pure tone. All sounds can be described as combinations of sine waves. Complex sounds are best described by a spectrum that shows the relative energy at each frequency. Sounds with a harmonic spectrum are typically produced by a single vibrating source, such as a guitar string. Each frequency component of such a sound is called a harmonic. The first harmonic, called the fundamental frequency, is the lowest frequency component of the sound; all other harmonics have frequencies that are integer multiples of the fundamental. The shape of the spectrum is one of the most important characteristics distinguishing different sounds: the characteristics of sound sources determine the spectral shapes of sounds, and those shapes help us identify the sources. Timbre is the term for the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are nevertheless different; it describes the quality of a sound that depends in part on the relative energy levels of its harmonic components.

What does the basic structure of the auditory system of mammals look like?

Now that we know what sound is, we can examine how sounds are detected and recognized by the auditory system. Here, we describe different anatomical structures that are essential to understanding how sequences of tiny pressure changes are turned into meaningful sound perception.

Outer ear

Sounds are first collected by the pinna: the curled outer part of the ear. From there they travel through the ear canal, which extends about 25 mm into the head. The canal conducts vibrations to the tympanic membrane (eardrum) and protects it from damage. The eardrum is a thin layer of skin at the end of the ear canal that vibrates in response to sound. Together, the pinna and the ear canal form the outer ear: the external, sound-collecting part of the ear.

Middle ear

The eardrum is the boundary between the outer ear and the middle ear. The middle ear contains three tiny bones, the ossicles, which amplify sound waves. The first ossicle is the malleus (hammer), connected on one side to the eardrum and on the other to the incus (anvil). The incus connects in turn to the third ossicle, the stapes (stirrup). The stapes transmits the vibrations of sound waves to the oval window: the flexible opening of the cochlea (part of the inner ear) through which the stapes passes vibration to the fluid inside. The oval window forms the boundary between the middle ear and the inner ear. The ossicles amplify vibrations in two ways. First, the bones are connected in such a way that they act as levers: more energy comes out than goes in, increasing the pressure by about one third. Second, they concentrate the transmitted energy from a larger onto a smaller surface: the eardrum is about 18 times the area of the oval window, so the pressure at the oval window is about 18 times greater than the pressure at the eardrum.

The middle ear has two muscles: the tensor tympani, attached to the malleus, and the stapedius, attached to the stapes. Their main purpose is to restrict the movement of the ossicles when very loud sounds arrive, damping the pressure changes so that they cannot damage the structures of the inner ear. Unfortunately, this acoustic reflex follows about one fifth of a second after the onset of a loud sound, so the ear is not protected against abrupt loud sounds.

Inner ear

The inner ear is a collection of fluid-filled chambers in which fine changes in sound pressure are translated into neural signals. It is a hollow region in the temporal bone of the skull that houses the cochlea and the semicircular canals of the vestibular system. The main structure is the cochlea, a small spiral-shaped structure in the temporal bone, filled with watery fluid distributed over three parallel canals: the tympanic canal (scala tympani), the vestibular canal (scala vestibuli), and the middle canal (scala media). The tympanic and vestibular canals are connected by a small opening, the helicotrema. The three canals are separated by two membranes: Reissner's membrane, between the vestibular and middle canals, and the basilar membrane, between the middle and tympanic canals. The basilar membrane is the base of the cochlear partition: a complex structure that converts sound waves into neural signals, consisting of the basilar membrane, the tectorial membrane and the organ of Corti. If sounds are extremely intense, any remaining pressure is transmitted via the helicotrema back toward the base of the cochlea through the tympanic canal, where it is released by stretching a membrane called the round window.

The organ of Corti is a structure on the basilar membrane of the cochlea, consisting of hair cells and the dendrites of auditory nerve fibers. A hair cell is any cell with stereocilia that converts the mechanical movements of the inner ear into neural activity to be sent to the brain; some hair cells also receive input from the brain. A stereocilium is a hair-like projection on the tip of a hair cell in the cochlea that, when bent, initiates the release of neurotransmitter. The auditory nerve is a collection of neurons that carry information between the hair cells of the cochlea and the brainstem. The tectorial membrane is a gelatinous structure that extends over the top of the organ of Corti, hanging above the outer hair cells; its movement bends the stereocilia. Each stereocilium is attached to its neighbors by a thin filament called a tip link. When a stereocilium bends, the tip link pulls on the adjacent, smaller stereocilium, opening an ion channel. Potassium then flows into the cell, causing depolarization, which in turn allows calcium to enter and triggers the release of neurotransmitter at the base of the cell, stimulating the dendrites of auditory nerve cells. The firing of the auditory nerve cell completes the conversion of sound waves into neural activity.

As the amplitude of a sound wave increases, the eardrum and the oval window move farther in and out with each pressure fluctuation, ultimately releasing more neurotransmitter and making the auditory nerve fibers fire faster: the greater the amplitude, the higher the firing rate. Frequency is coded differently. High frequencies cause the greatest displacement of the basilar membrane near the oval window, at the base of the cochlea; low frequencies cause the greatest displacement farther away, toward the apex. Different places along the cochlea are thus tuned to different frequencies, which is called the place code for frequency. Afferent fibers are neurons that carry sensory information to the central nervous system; efferent fibers carry information from the central nervous system out to the periphery.

The auditory nerve

The responses of individual auditory nerve (AN) fibers to different frequencies are related to their place on the cochlear partition. Researchers found that different fibers respond selectively to different sound frequencies. This frequency selectivity is clearest when sounds are very soft: at very low intensities, an AN fiber raises its firing rate only for a very limited range of frequencies. A threshold tuning curve is a graph of the thresholds of a neuron or fiber in response to sine waves of varying frequencies at the lowest intensity that triggers a response. The characteristic frequency (CF) is the frequency to which a given AN fiber is most sensitive. As long as the brain knows which AN fibers have which CFs, it can interpret the firing pattern across all AN fibers to determine the frequency of a tone. There are, however, complications, two of which are two-tone suppression and rate saturation.

Two-tone suppression means that an AN fiber fires less to a given tone when a second tone is added. Isointensity curves plot an AN fiber's firing rate across varying frequencies and intensities. Rate saturation is the point at which a nerve fiber is firing as fast as it can, so that further stimulation cannot increase its firing rate. The brain therefore cannot rely on a single AN fiber to determine frequency. One way the auditory system deals with this is to use AN fibers with different spontaneous firing rates. A rate-intensity function is a graph of an AN fiber's firing rate in response to a sound of constant frequency at increasing intensities. A low-spontaneous fiber is an AN fiber with a low spontaneous firing rate; such fibers need relatively intense sounds before they fire faster. A high-spontaneous fiber has a high spontaneous firing rate and increases its firing at relatively low sound levels; a mid-spontaneous fiber has an intermediate spontaneous rate. Besides having fibers with different spontaneous rates, the auditory system can determine the frequency of sound waves accurately by integrating information across many AN fibers and using the pattern of their firing rates. In addition to the cochlea's place code, the auditory system has a temporal way of coding frequency: many AN fibers tend to fire action potentials at a particular point in the phase of a sound wave, a phenomenon called phase locking. Because of phase locking, the firing pattern of an AN fiber carries a temporal code for the frequency of the sound wave. The volley principle is the idea that multiple neurons can together provide a temporal code for frequency if each neuron fires at a fixed point in the period of the sound wave but does not fire on every period.
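
A toy sketch of phase locking and the volley principle (every parameter invented): each fiber fires on only a random subset of cycles, but always at the same phase, so the pooled spike train of several fibers still marks the period of the tone.

    import numpy as np

    rng = np.random.default_rng(1)

    def phase_locked_spikes(freq_hz, n_fibers=5, n_cycles=20, p_fire=0.3):
        # Each fiber fires at phase zero of a random subset of cycles.
        period_s = 1.0 / freq_hz
        trains = []
        for _ in range(n_fibers):
            cycles = np.nonzero(rng.random(n_cycles) < p_fire)[0]
            trains.append(cycles * period_s)
        return trains

    pooled = np.sort(np.concatenate(phase_locked_spikes(1000.0)))
    # Intervals in the pooled train cluster at multiples of the 1 ms
    # period, carrying a temporal code for the 1000 Hz tone even though
    # no single fiber fires on every cycle.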

Auditory brain structures

All AN fibers first synapse in the cochlear nucleus, which consists of three subnuclei. The superior olive is an area of the brainstem where input from both ears comes together. Neurons from the cochlear nucleus and the superior olive ascend the brainstem to the inferior colliculus. The medial geniculate nucleus of the thalamus is the last stop in the auditory pathway before the cerebral cortex; it relays auditory signals to the temporal cortex and also receives input back from the auditory cortex. All structures of the auditory system show a consistent organizational pattern in which neurons are arranged according to the frequency to which they are most sensitive: neurons that respond best to low frequencies lie along one edge of each structure, and neurons that respond best to high frequencies along the other. This is called tonotopic organization. The primary auditory cortex (A1) is the first cortical area in the temporal lobes responsible for processing acoustic information. Neurons in A1 project to the belt area, a region of cortex whose neurons respond to more complex characteristics of sound. From the belt area, neurons project to the parabelt area, a region whose neurons respond to still more complex sound characteristics and also to input from other senses.

What are the basic operational characteristics of the auditory system?

Psychoacoustics is the study of the psychological correlates of the physical dimensions of acoustics: a sound is played and listeners are asked what they hear. When human listeners report their auditory sensations, their answers are based partly on the acoustic characteristics of the sound and partly on their own psychological characteristics.

Intensity and loudness

The audibility threshold is the lowest sound pressure level that can be reliably detected at a given frequency. Equal-loudness curves are graphs plotting the sound pressure levels at which listeners perceive tones of different frequencies as equally loud; listeners are asked to adjust sounds of different frequencies until they sound equally loud. Doubling perceived loudness is not simply a matter of doubling the amount of acoustic energy. Temporal integration is the process by which a sound at a constant level is perceived as louder when it lasts longer; the term also applies to perceived brightness, which likewise depends on the duration of a light.

Frequency and pitch

The tonotopic organization of the auditory system, from the basilar membrane up to A1, is a strong hint that frequency composition is a fundamental determinant of how we hear sounds. Researchers have used masking experiments to investigate frequency selectivity: a second sound, often noise, is used to make the detection of another sound more difficult. In a classical approach to measuring frequency selectivity by masking, a sine-wave tone is embedded in acoustic noise. White noise contains all audible frequencies in equal amounts; it is analogous to white light, which contains all visible wavelengths. The critical bandwidth is the bandwidth of noise beyond which adding further frequencies no longer affects the detectability of the test tone. The width of the critical band varies with the frequency of the test tone and corresponds to the physical spacing of frequencies along the basilar membrane. Masking effects are also asymmetrical: for a masker narrower than the critical band of a test tone, masking is more effective when the masker is set to a frequency below the test tone's frequency; this is called the upward spread of masking.

What types of hearing loss are there?

When speaking of hearing loss, what is usually meant is not that one can no longer hear at all (deafness), but that sound thresholds are raised. Conductive hearing loss arises when the bones in the middle ear can no longer freely transfer vibrations from the eardrum to the oval window. This often happens when the middle ear fills with mucus during an ear infection, a condition called otitis media. The oval window still vibrates, but without the ossicles amplifying the signal. Otosclerosis is a form of hearing loss caused by abnormal growth of the middle-ear bones; a surgical procedure is required to resolve it. The most common and also the most serious form is sensorineural hearing loss: hearing loss due to defects in the cochlea or, sometimes, the auditory nerve. It usually occurs when hair cells are damaged. Certain antibiotics and anticancer medicines are ototoxic, meaning that they kill hair cells outright. A common cause of hearing loss is damage to hair cells from excessive exposure to noise. Hearing loss is also part of aging, and it is difficult to separate a person's age from the amount of noise the person has been exposed to.

Treating hearing loss

In the past an ear trumpet was used for better hearing: the narrow end was placed in the ear and the broad end gathered sound, so that more acoustic energy entered the ear. Hearing aids are more effective, but they must be designed to do more than simply amplify all sounds, since very loud sounds would then become painfully loud. Most hearing aids are therefore built so that high intensities remain comfortable to hear. One of the newest developments is the placement of implants in the cochlea, which stimulate the AN fibers directly. However, such an implant can never give what nature gives us, because it cannot replace the function of the thousands of hair cells.

How can we locate sound? - Chapter 10 (5th edition)

What are the two types of localization?

There are two different types of information for determining the source of a sound. Firstly, sound waves do not reach the different ears at the same time, and secondly, the intensity of a sound is greatest at the ear closest to the sound source.

Interaural time difference

The first cue is the interaural time difference (ITD): the difference between the time at which a sound arrives at one ear and the time at which it arrives at the other. If the sound comes from the left, it reaches the left ear slightly earlier. The term used to describe locations on an imaginary circle around the listener is the azimuth. The part of the auditory system responsible for calculating ITDs must receive input from both ears. The medial superior olives (MSOs) are the first places where input from both ears comes together. ITD detectors form their connections in the first few months of life, and the interpretation of ITDs is highly dependent on the size of the head.
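
Since the text notes that ITD interpretation depends on head size, a small sketch may help. It uses Woodworth's spherical-head approximation, a standard textbook formula but an assumption here, with an assumed head radius of 8.75 cm:

    import math

    def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound_m_s=343.0):
        # Woodworth approximation: ITD = (r / c) * (theta + sin(theta)).
        theta = math.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta))

    for azimuth in (0, 45, 90):  # degrees away from straight ahead
        print(azimuth, round(itd_seconds(azimuth) * 1e6), "microseconds")
    # 0 deg -> 0 us; 90 deg -> ~656 us. A larger head radius gives larger ITDs.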

Interaural level difference

The second cue is the interaural level difference (ILD): the difference in intensity between the sound arriving at one ear and the sound arriving at the other. ILDs have several characteristics: sounds are more intense at the ear closest to the source; the ILD is greatest at 90° and -90° and absent at 0° and 180°; and between these extremes the ILD correlates with the angle of the source, although, because the head has an irregular shape, the correlation is not as precise as with ITDs. An important difference between the two cues is that the head blocks high-frequency sounds much better than low-frequency sounds; ILDs are essentially absent below about 1000 Hz. Neurons that are sensitive to intensity differences between the two ears lie primarily in the lateral superior olives (LSOs). LSO neurons are sensitive to intensity differences because of the interplay between excitatory inputs from the ipsilateral ear and inhibitory inputs from the contralateral ear. If the sound at one ear is more intense, the connections from that ear are better at exciting LSO neurons on that side and better at inhibiting LSO neurons on the other side.

Cones of confusion

A cone of confusion is a region of positions in space from which all sounds produce the same ITDs and ILDs. Cones of confusion are not just theoretical problems for the auditory system but actual perceptual phenomena. By moving the head, the ITD and ILD of a sound source shift, and the ambiguity of the cone of confusion is resolved.

Pinnae and head cues

Time differences and intensity differences are not the only cues for determining the location of a sound source. Both the pinnae and the shape and size of the rest of the body influence which frequencies reach the ear most easily, and this provides another auditory cue for locating sound.

Relative intensities of different frequencies change constantly with changes in elevation and azimuth. The sum of these intensity changes can be measured and combined to determine the directional transfer function (DTF): a measurement that describes how the pinna, ear canal, head and torso change the intensity of sounds of different frequencies arriving at each ear from different locations.

Auditory distance perception

The simplest cue for judging the distance of a sound source is relative intensity: the louder, the closer. However, the assumptions made here about the source of the sound may be wrong. Moreover, this cue becomes less and less effective as distance increases because of the inverse-square law: the principle that intensity decreases with the square of the distance from the source. A second cue for distance is the spectral composition of sounds: the sound-absorbing characteristics of air attenuate high frequencies more than low frequencies, so high frequencies lose more energy than low frequencies on the way to the ear as distance grows. The final cue comes from the fact that the sound reaching a listener is a combination of direct energy and reflected energy. The relative amounts of the two types of energy inform the listener about distance: a lot of direct energy means the sound source is close, a lot of reflected energy that it is far away.
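
A short worked example of the inverse-square law mentioned above (a minimal sketch; the 1 m reference distance is an arbitrary assumption):

    # Intensity falls with the square of the distance from the source.
    def relative_intensity(distance_m, reference_distance_m=1.0):
        return (reference_distance_m / distance_m) ** 2

    for d in (1, 2, 4, 8):
        print(d, "m:", relative_intensity(d))  # 1.0, 0.25, 0.0625, 0.015625
    # Each doubling of distance quarters the intensity (about -6 dB), so
    # intensity differences become ever weaker cues at greater distances.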

What are complex sounds?

Simple sounds, such as sine waves and bands of noise, are extremely useful for exploring the fundamental operating characteristics of auditory systems. However, pure sine wave tones are rare in the real world, where the objects and events that matter to listeners are more complex, more interesting, and hence more challenging for researchers to examine. In this section, we discuss different types of complex sounds.

Harmonics

Many sounds have a harmonic structure. The fundamental frequency is the lowest frequency of a harmonic spectrum. If it is removed while the other harmonics remain, listeners still hear the missing fundamental. What all the harmonics have in common are fluctuations in sound pressure at fixed intervals corresponding to the fundamental frequency: the pressure pattern of the summed harmonics repeats at the period of the fundamental. For this reason the fundamental frequency is perceived even though it is not physically a component of the sound.
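
A minimal sketch of this missing-fundamental effect using NumPy (the specific frequencies are illustrative assumptions): summing only the second through fourth harmonics of 200 Hz still yields a pressure pattern that repeats every 1/200 s, which is why listeners report a 200 Hz pitch.

    import numpy as np

    fs = 48000                       # sample rate (Hz)
    t = np.arange(0, 0.5, 1 / fs)    # half a second of time points
    f0 = 200.0                       # fundamental that will be left out

    # Sum only the 2nd-4th harmonics: 400, 600 and 800 Hz.
    tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in (2, 3, 4))

    # The summed waveform still repeats with period 1/f0 = 5 ms.
    period = int(fs / f0)            # 240 samples
    print(np.allclose(tone[:period], tone[period:2 * period]))  # True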

Timbre

Timbre is the quality that makes a listener hear two sounds as different even when both sounds have the same pitch and loudness.

Auditory 'color' constancy

High frequencies are amplified by hard surfaces but damped by soft surfaces. This resembles the problem of color constancy: the environment changes the spectral shape that eventually reaches the ear. If the brain interprets a spectral tilt as a consequence of the environment, listeners ignore the tilt.

Attack and decay

Attack is the way a complex sound begins and decay is the way it ends. Two identical instruments can differ in their attack. How quickly a sound dies away depends on how long it takes for the vibrating object to run out of energy.

What is auditory scene analysis?

Environments with multiple sound sources are the rule, not the exception. Source segregation or auditory scene analysis is the transformation of an auditory scene that consists of multiple sounds into separate sound images.

Spatial, spectral and temporal separation

There are different strategies for source segregation. One is spatial separation between sounds: sounds coming from the same location can be treated as coming from the same source, and sounds that move can more easily be separated from background sounds that are relatively still. Auditory stream segregation is the perceptual organization of a complex acoustic signal into separate auditory events, with each stream heard as a separate event. The Gestalt principle of similarity also applies to sound: sounds that resemble each other tend to be grouped together.

Grouping by timbre

Tones that deviate from a certain pattern in a sound stand out. If there are two overlapping patterns and harmonics are added to one of the series, creating a richer timbre, the two overlapping patterns are heard as distinct. Grouping by timbre is especially robust because sounds with the same timbre usually come from the same source. Stream segregation takes place from the first stages of auditory processing through A1 to the belt and parabelt areas. The brainstem plays a role in simple cues, but segregation based on higher perceptual properties of sound takes place in the cortex.

Grouping by onset

Sound components that start at the same time also seem to come from the same sound source. This phenomenon helps to group different harmonics into a single complex sound. This method of grouping corresponds to the Gestalt principle of common fate.

When sounds become familiar

In addition to the Gestalt principles discussed, listeners also use experience and familiarity with the sound to separate sound sources from each other. It is surprising how quickly you recognize a sound if you have only heard it a few times.

What are continuity and repair effects?

One sound is often masked by another for a short period, yet you will usually hear 'through' the interruption. The Gestalt principle used here is that of good continuation; the phenomenon is also referred to as a perceptual restoration effect. It suggests that the restored missing sounds are encoded in the brain as if they had actually been present in the signal.

Restoration of complex sounds

Music and speech can also be restored; here too, missing pieces are perceived as being present.

Auditory attention

The acoustic startle reflex is a very fast motor response to a sudden noise. This reflex is not selective. Auditory attention, however, can be selective, as when one sound source is singled out from among several. Focusing on a certain sound can cause other sounds to be missed, a kind of inattentional deafness. This is an extreme example of the auditory processes that help us listen in a busy environment. You can also switch your attention between different sound streams, but you cannot fully process two streams of speech at once.

How does the perception of music and speech work? - Chapter 11 (5th edition)

Music and speech follow the same laws of acoustics as all other sounds. However, music and speech are created with recipients in mind. Both serve communication purposes and they can convey emotion and deeper meanings.

How does the perception of music work?

For a very long time, people have been using music as a way to express themselves and influence the thoughts and emotions of other people. Although music may not actually explain the known universe, we all appreciate how important music is to culture and, perhaps, to one's personal cultural identity.

Musical notes

One of the most important characteristics of an acoustic signal is frequency. Brain structures for processing sounds are tonotopically organized to match frequency. The psychological aspect of perceived frequency is the pitch.

An octave is the interval between two sound frequencies that have a ratio of 2:1. Pitch has two dimensions. The first is tone height, a sound quality that rises with frequency. The second is tone chroma, a sound quality shared by tones that stand in octave relationships. We can visualize pitch as a helix: frequency and tone height increase as you move up the helix. At the same point in every turn, tones fall on a vertical line, and all tones on that line share the same chroma and are separated by octaves. Both a place code and a temporal code can be used in the perception of pitch. For frequencies above 5000 Hz the temporal code can no longer be used, so pitch discrimination becomes more difficult because only the place code remains.
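
To make the octave and chroma idea concrete: in the equal-tempered tuning used in most Western music (an assumption here, not something the text specifies), each octave is divided into 12 equal frequency steps, so moving 12 steps returns to the same chroma one turn higher on the helix. A minimal sketch, taking the conventional A4 = 440 Hz as reference:

    A4 = 440.0  # Hz, conventional tuning reference (assumption)

    def note_frequency(semitones_from_a4):
        # Each semitone multiplies frequency by 2**(1/12); 12 semitones = one octave.
        return A4 * 2 ** (semitones_from_a4 / 12)

    print(note_frequency(12))   # 880.0 Hz: A5, same chroma, one octave up
    print(note_frequency(-12))  # 220.0 Hz: A3, same chroma, one octave down
    print(note_frequency(3))    # ~523.25 Hz: C5, a different chroma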

Music is further defined by richer, complex sounds called chords: combinations of three or more notes with different pitches played simultaneously. Chords can be consonant or dissonant. Consonant chords combine notes with simple ratios between their frequencies, such as the 2:1 octave. Dissonant chords have less elegant ratios and sound less pleasant.

Musical intervals vary between cultures. Relationships between notes such as octaves are often universal, but within the octave there can be much variation. The division of the octave into notes can also vary: we use the heptatonic (seven-note) scale, but there is also a pentatonic scale with five notes. Some languages are tone languages, in which changes in pitch are used to distinguish different words from each other; changes in pitch are larger and more frequent in tone languages. In scales whose octaves contain fewer notes, the notes are tuned more freely, so that a greater range of pitches passes for the same note. People all over the world listen to different music, and musical notes are heard differently.

Absolute pitch (AP) is a skill whereby some people can accurately name or produce a note without comparing it to other notes. Yet these people do not have more sensitive auditory systems or a better ability to detect differences between sounds. The best explanation is probably that AP is acquired through experience, but that the experience must be gained at a young age.

Music influences mood. It also has clear physiological effects, such as changes in heart rate. In addition, more blood goes to the brain areas that are involved in reward and motivation.

Making music

A melody is a series of notes or chords perceived as a single coherent structure. A melody does not have to be a series of specific pitches: if you shift every tone of a melody by one octave, you still have the same melody. The average duration of a set of notes determines the tempo of the music. Listeners tend to group sounds into rhythmic patterns, even when there is no rhythm. Sounds that are longer, louder and higher in pitch are often perceived as leading. Syncopation is any deviation from a regular rhythm, such as accenting a note that is not expected to be accented. Like rhythm, melody is essentially a psychological entity: our experience with a particular sequence of notes helps us perceive coherence. Learning melodies starts when babies are around 7 to 8 months old.

How does the perception of speech work?

The flexibility to produce an incredible variety of speech sounds is due to the unique structure of the human vocal tract: the airway above the larynx that is used to produce speech. The vocal tract consists of the oral and nasal tracts. Unlike in other animals, the human larynx sits fairly low in the throat; a disadvantage of this is that people can choke on food more easily.

Speech production

Speech production consists of three components: respiration (lungs), phonation (vocal cords) and articulation (vocal tract). First, air is pushed out of the lungs through the trachea to the larynx. At the larynx, the air must pass between the two vocal cords, which consist of muscle tissue and can be adjusted to determine how much air passes between them; this adjustment is called phonation. The rate at which the vocal cords vibrate depends on their stiffness and their mass. If the sound is measured immediately after the larynx, it can be seen that the vibration of the vocal cords creates a harmonic spectrum.

The manipulations people perform to change the shape of the vocal tract, for example with the lips, tongue and other structures, are collectively called articulation. Articulation raises and lowers the energy at different frequencies; these effects are called resonance characteristics. Peaks in the speech spectrum are called formants; a formant is a resonance of the vocal tract. One of the most distinguishing characteristics of speech sounds is that their spectra change over time. To represent a third dimension (time), in addition to the dimensions of frequency and amplitude, auditory researchers use a type of representation called a spectrogram. Time is placed on the horizontal axis, frequency on the vertical axis, and amplitude is indicated by the color of each point in the graph.
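
As a minimal sketch of what a spectrogram computes, here is the idea with SciPy's spectrogram function; the swept tone is a toy stand-in for a speech signal whose spectrum changes over time:

    import numpy as np
    from scipy.signal import spectrogram

    fs = 16000                           # sample rate (Hz), typical for speech
    t = np.arange(0, 1.0, 1 / fs)
    sweep = np.sin(2 * np.pi * (300 + 400 * t) * t)  # tone rising over time

    freqs, times, power = spectrogram(sweep, fs=fs)
    print(power.shape)  # (frequency bins, time frames): frequency vs. time,
                        # with amplitude as the value at each point (the 'color')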

Vowels are formed with a relatively open vocal tract; consonants are formed by obstructing the vocal tract. Consonants can be classified along three articulatory dimensions:

  1. Place of articulation: the airway can be obstructed at the lips (letter b), the alveolar ridge (letter t) or the soft palate (letter g).
  2. Manner of articulation: the airway can be obstructed completely (letter b), partly (letter z) or very little (letter w), or first blocked and then opened (letter j), or blocked in the mouth but opened through the nose (letter m).
  3. Voiced or voiceless: consonants produced with vibrating vocal cords are voiced (letter m); those produced without vibration are voiceless (letter p).

Speech perception

Speech production is very fast. This is possible thanks to coarticulation: articulation overlaps in time and space, and production is adjusted in anticipation of where the articulators need to be next.

Context sensitivity is therefore important for speech. It can be a problem for listeners because it means there are no 'invariants' they can rely on to identify different speech sounds. This is also one of the reasons why training computers to recognize speech has proved so difficult.

Categorical perception means that listeners only report differences in sounds if these differences would result in different labels for the sounds, so the ability to distinguish sounds can be predicted by how listeners label sounds.

The motor theory of speech perception states that the processes used to produce speech can be run 'in reverse' to understand acoustic speech signals. Two problems with this theory are that speech production is at least as complex as speech perception, making the claim difficult to prove, and that non-human animals can be taught to respond to speech signals in much the same way as humans. In addition, categorical perception turns out not to be limited to speech sounds: it also occurs for musical intervals and in vision.

The perception of co-articulated speech appears to be partly explained by fundamental principles of perception. Due to coarticulation, the production of a speech sound always influences the production of the next sound. Auditory processes that improve the contrast between consecutive sounds cancel this assimilation.

The extent to which perception depends on experience makes speech special because (1) people have evolved a unique anatomical instrument for producing it and (2) we spend a great deal of time practicing speech perception. So we do not need individual acoustic invariants to distinguish speech sounds; we only need to be good at recognizing sound patterns. One of the things the brain does best is integrating different sources of information to recognize patterns.

Learning to listen

Experience is very important for auditory perception. This starts when babies are still in the womb. That is why the mother's voice is preferred to other voices after birth.

Children can distinguish irrelevant acoustic differences from relevant ones even before they can produce speech sounds themselves. The longer a person uses only the native language, the longer it takes to learn to produce and perceive the sounds of a second language. Differences between two languages are picked up best when the second language is learned at the same time as the first; the only downside is that it takes children a little longer to master both languages than if only one were learned. Series of speech sounds are not meaningful in themselves; children must learn the words that are specific to their language. Words in sentences often run together, much as sounds do in coarticulation, so children must learn which sounds frequently occur together in order to pick out the words from the sentence.

Speech in the brain

It is difficult to draw strong conclusions about brain processes from brain damage alone. PET and fMRI have made it possible to learn more about how speech is processed in the brain. A1 and its association areas, the belt and parabelt, are activated by speech and music. Because language is mostly lateralized in the left hemisphere, the processing of speech should become more lateralized at some point, since speech perception is part of language comprehension. A challenge for researchers is to create stimuli that are as complex as speech but are not perceived as speech. Research shows that neural activity is present in both the left and right superior temporal lobes when a presented sentence is incomprehensible; activity increases in the left anterior superior temporal lobe only when the sentence is understandable. As sounds become more complex, they are processed in more anterior and ventral areas of the superior temporal cortex, farther from A1. When sounds become part of a linguistic message, they are processed more anteriorly in the left temporal lobe. Cortical processes related to speech perception can be distinguished in two ways from processes that contribute to the perception of other complex sounds. First, hearing speech and seeing the face of the speaker often go together, and visual cues can influence how speech sounds are perceived. Second, listeners hear the sounds they themselves produce and feel the movements they make during speech production; perhaps there is an area where these related activities are combined.

How does the perception of movement and (dis)balance work? - Chapter 12 (5th edition)

There are five vestibular organs: three semicircular canals and two otolith organs. They are located in each inner ear next to the cochlea. They register the orientation and movements of the head with respect to gravity. The vestibular system contributes to the sense of balance, which consists of the perception of spatial orientation combined with reflexive vestibular responses such as postural adjustments. Spatial orientation consists of a sense of linear motion, angular motion and tilt. The vestibular organs appeared very early in evolution and have remained almost unchanged. We usually become aware of this 'sixth sense' only when we experience problems such as vertigo: a sensation of rotation or spinning, although the term is often used more generally for any form of dizziness. The vestibulo-ocular reflex (VOR) helps to keep visual images stable on the retina by compensating for head movements. The vestibular system usually performs its task automatically, in the background, so we are hardly aware of it.

What are the vestibular contributions to balance?

The vestibular system contributes to balance, but it is not the only sensory basis: kinesthesia, the perception of the position and movement of our limbs in space, contributes as well. Balance arises from the neural processes that regulate posture, distributing weight so that we remain stable and upright. Information from the vestibular, kinesthetic and visual systems together improves balance.

Our vestibular sense is active, not passive: it combines information flowing outward from the brain to the muscles with information flowing inward to the brain from various sensory systems, especially the kinesthetic, visual and vestibular systems. Active sensing balances information derived from efferent commands, flowing outward from the central nervous system to the periphery, against information from afferent signals flowing from the sensors inward to the brain.

What is graviception?

From an evolutionary perspective, graviception has been around for a very long time: dinosaurs already had vestibular organs, and going farther back, even some bacteria need(ed) to know up from down. Graviception refers to the physiological structures and processes that sense the orientation of gravity relative to the organism.

What modalities and qualities of spatial orientation are important?

Spatial orientation is based on three sensory modalities, namely:

  1. Angular motion: rotational motion, like the rotation of a spinning top or of saloon doors swinging back and forth.
  2. Linear motion: translational motion such as the predominant movement of a train car.
  3. Tilt: to attain a sloped position like that of the leaning tower of Pisa.

Why do we call these 'modalities', as though they were different senses, rather than 'qualities'? The answer lies in the energy that is transduced (that is, converted from one form of energy to another, for example from light to neural electrical energy, or from mechanical to neural electrical energy). Perceiving angular motion, linear motion and tilt requires that three different stimuli be transduced: angular acceleration, linear acceleration and gravity, respectively.

Sensing angular motion, linear motion, and tilt

These three stimulus energies are sensed by the two types of vestibular organs. The semicircular canals sense angular acceleration, the change in angular velocity. The otolith organs transduce linear acceleration, the change in linear velocity, and they also transduce gravity, the force that pulls everything toward the center of the earth. The perception of tilt arises from the brain's estimate of orientation with respect to gravity, and the perception of linear motion from its estimate of linear acceleration. Linear acceleration is not the same as gravity: the brain does its best to divide the otolith signals into a component representing gravity and a component representing linear acceleration. Hence we have three sensory modalities, a sense of angular motion, a sense of linear motion (also called the sense of translation) and a sense of tilt, parallel to the three sources of stimulus energy: angular acceleration, linear acceleration and gravity.

Basic qualities of spatial orientation: amplitude and direction

All three modalities have two qualities: amplitude and direction. Amplitude is the magnitude of a head movement: a translation can have a high or low speed, and rotation and tilt can likewise have large or small amplitudes.

The direction is the line along which the movement occurs, with the body as reference. To classify direction, a Cartesian coordinate system with three axes is used: rotation around the x-axis is roll, rotation around the y-axis is pitch, and rotation around the z-axis is yaw (a change of course). Tilt has only two directions for each orientation: forward/back or left/right. Yaw does not count as tilt, since it does not change the head's orientation with respect to gravity.

Where and how are motion signals transduced by the vestibular organs?

Neither the otolith organs nor the semicircular canals respond to constant velocity (the speed and direction in which something moves) itself; they respond to changes in velocity, that is, to acceleration. Constant motion, whether angular or linear, therefore does not produce vestibular signals that directly indicate motion.

Hair cells: Mechanical transducers

Hair cells (cells with stereocilia that transduce mechanical movement in the inner ear into neural activity sent to the brain) act as the mechanoreceptors in each of the five vestibular organs, responding to mechanical stimulation such as pressure, vibration or movement. Head movements cause the stereocilia on the hair cells to bend, which changes the voltage of the hair cell. This triggers the release of neurotransmitters, which evokes action potentials in the vestibular afferent fibers that synapse on the hair cells; these afferent neurons carry the action potentials to the brain. Without stimulation, the hair cells are negatively charged and release neurotransmitter at a constant rate, resulting in a constant rate of action potentials. The change in the voltage of a hair cell is called the receptor potential. Receptor potentials are proportional to the bending of the stereocilia and control the rate at which neurotransmitter is released onto the afferent neurons.

Semicircular canals

Each inner ear has three semicircular canals: horizontal (or lateral), anterior (or superior) and posterior. Each canal is slightly swollen near the point where the canals and the vestibule meet. This swollen portion is called the ampulla; it contains the cupula, the crista (any of the specialized detectors of angular motion located in each semicircular canal, in a swelling called the ampulla) and the hair cells of the canal. Transduction takes place in the ampulla. The crista is a specialized detector of angular motion, consisting of a narrow ridge of epithelium bearing hair cells. The stereocilia and kinocilia on the hair cells project into the cupula: an elastic dam spanning the ampulla, surrounded by a fluid, the endolymph. When the head rotates, inertia makes the endolymph lag behind; this bends the cupula and, with it, the stereocilia in the crista, which ultimately changes the firing rate of the afferent neurons.

Without stimulation, the afferent neurons from the semicircular canals fire at a high, constant rate. Because the spontaneous firing rate is so high, rotations in one direction can decrease the firing rate and rotations in the other direction can increase it. The semicircular canals are organized in functional pairs; this is called the push-pull arrangement.

The two horizontal canals form one of the three pairs. If the head turns to the right, afferent neurons in the right horizontal canal increase their firing rate and those on the left decrease theirs. The two vertical-canal pairs work in the same way.
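
A toy model may make the push-pull arrangement concrete (a minimal sketch; the spontaneous rate and gain are illustrative numbers, not values from the text):

    def afferent_rate(cupula_deflection, spontaneous_rate=90.0, gain=40.0):
        # Firing rises for deflection in the preferred direction, falls for
        # the opposite direction, and can never drop below zero.
        return max(0.0, spontaneous_rate + gain * cupula_deflection)

    deflection = 1.0  # a rightward head turn deflects the pair oppositely
    print(afferent_rate(+deflection))  # right canal: 130 spikes/s (excited)
    print(afferent_rate(-deflection))  # left canal:   50 spikes/s (inhibited)
    # The brain can read the difference between the two sides; the high
    # spontaneous rate is what makes signaling in both directions possible.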

The sensitivity of the semicircular canals to rotations in different planes yields a code for head rotations: the brain combines the signals from the canals and thereby perceives the direction of the head movement.

Moving back and forth with a constant rhythm is called oscillatory motion. Oscillatory motion that follows a sine function, like a pure sound wave, is sinusoidal motion. The period of a sinusoidal oscillation is the time required for one full cycle of the back-and-forth movement. The firing rate of an afferent neuron increases and decreases as the angular velocity increases and decreases. The semicircular canals are not equally sensitive to all frequencies of rotation. 'Pure' sine wave movements are rare, yet patterns of oscillatory movement usually have a predominant frequency, and Fourier analysis shows that a complex movement consists of a number of single-frequency components. So if we know the responses to some frequencies, we also know something about the responses to more complex motion.
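
A minimal sketch of that Fourier idea with NumPy (the two component frequencies are arbitrary assumptions): a 'complex' oscillation is built from two sinusoids, and the Fourier transform recovers exactly those single-frequency components.

    import numpy as np

    fs = 100                           # samples per second
    t = np.arange(0, 10, 1 / fs)       # 10 seconds of motion
    motion = 1.0 * np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 2.0 * t)

    spectrum = np.abs(np.fft.rfft(motion))
    freqs = np.fft.rfftfreq(len(motion), 1 / fs)
    strongest = freqs[np.argsort(spectrum)[-2:]]
    print(sorted(strongest.tolist()))  # [0.5, 2.0]: the component frequencies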

Otolith organs

Each inner ear has two otolith organs: the utricle, which contains the utricular macula, and the saccule, which contains the saccular macula. These structures are important for sensing gravity and linear acceleration. Each contains a small oval-shaped, fluid-filled sac in which the macula lies; this is where sensory transduction takes place. Otoconia are tiny calcium carbonate crystals whose mass makes the maculae sensitive to gravity and linear acceleration.

As in the semicircular canals, one aspect of amplitude coding in the otolith organs can be found in the response of a single neuron or hair cell. Remember that (1) the receptor potential of a hair cell increases as the tips of the stereocilia bend toward the tallest stereocilia, (2) the receptor potential decreases for movements in the opposite direction, and (3) the direction of rotation is encoded by excitation of a semicircular canal on one side of the head and inhibition on the other. This works the same way in each otolith macula. Larger accelerations displace the otoconia more than smaller ones: at higher accelerations the hair-cell bundles bend further, which ultimately leads to a higher firing rate of the afferent neurons.

The coding of direction in the otolith organs is partly due to their anatomical organization. The utricle is sensitive to horizontal linear accelerations and horizontal gravitational forces, and the saccule to vertical linear accelerations and vertical gravitational forces. In addition, different hair cells respond maximally to different directions of motion, with the direction of maximum sensitivity varying systematically across the plane of each macula.

How to study the perception of spatial orientation?

There are three techniques to study the perception of spatial orientation.

  1. Threshold tasks: determining the smallest motion that can be reliably detected. For example: What is the minimum movement whose direction we can accurately perceive?
  2. Magnitude estimation. For example: Participants are asked how much they are tilted and must respond in a physical unit such as degrees.
  3. Matching tasks. For example: Participants hold a rod directly in front of them with their eyes closed and must set it vertical by feel.

Rotation perception

Velocity storage is the prolongation of a rotational response by the brain beyond the duration of the rotational signal provided to the brain by the semicircular canals; typically yielding responses that are nearer to the actual rotational motion than the signal provided by the canals.

The vestibular system is tuned to changes in motion. If you rotate at a constant rate, it will eventually feel as if you are not moving at all. If the rotation then stops, you will perceive an angular velocity opposite to the rotation you actually made. The dizziness (any form of perceived spatial disorientation) and imbalance (lack of balance) that we then experience stem from this illusion of self-rotation produced by responses of the semicircular canals. During rotation at a constant rate, the cupula and the endolymph rotate together and the hair cells are hardly bent. If you suddenly stop, however, the cupula stops with the head, but the endolymph retains its velocity and keeps moving, bending the hair cells in the opposite direction. Thresholds for rotation vary with the frequency of the angular stimulus.

Translation perception

The otolith organs transduce linear acceleration, the change in linear velocity. To perceive motion, the brain must integrate this signal mathematically: velocity is the integral of acceleration, and change of position is in turn the integral of velocity. So when the otolith organs detect linear acceleration, the brain converts this information into a perception of linear velocity.
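
A minimal numerical sketch of the integration the text describes (the acceleration profile is an arbitrary assumption): accelerate at 1 m/s^2 for one second, then brake equally hard.

    import numpy as np

    dt = 0.01                                    # seconds per sample
    t = np.arange(0, 2, dt)
    acceleration = np.where(t < 1, 1.0, -1.0)    # speed up, then brake (m/s^2)

    velocity = np.cumsum(acceleration) * dt      # integral of acceleration
    position = np.cumsum(velocity) * dt          # integral of velocity

    print(round(velocity[99], 2))   # ~1.0 m/s after one second at 1 m/s^2
    print(round(position[-1], 2))   # ~1.0 m travelled over the whole movement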

Tilt perception

People are good at estimating how much they are tilted, especially at orientations between 0 and 90 degrees. This sensitivity helps us remain upright.

What is sensory integration and how does it work?

The sensory organs do not function independently, the signals from different senses are combined. This is called sensory integration and ensures that the information is more precise and accurate than when the information only comes from one sense.

Visual-vestibular integration

Vection is an illusory sense of self-motion caused by visual cues. Visual motion signals converge with the signals from the semicircular canals in the vestibular nuclei. In some situations this gives rise to a feeling of movement while you are standing still, which shows that our sense of spatial orientation is not limited to combinations of movement and orientation that are physically possible.

What is active sensing?

The ability of the brain to distinguish between sensory events that are self-generated (sensory reafference) and sensory events that come from the outside world (sensory exafference) is essential for perceptual stability and accurate motor control. For the vestibular pathways that contribute to balance and perception, this distinction is already made at the first central synapse, in the vestibular nuclei.

What reflexive vestibular responses are there?

Some responses of the vestibular system are automatic, or reflexive. There are vestibulo-ocular reflexes (VORs), vestibulo-autonomic reflexes and vestibulo-spinal reflexes. The latter contribute to the regulation of posture through the balance system: the collective name for the senses, neural processes and muscles that contribute to postural control.

Vestibulo-ocular responses

The angular vestibulo-ocular reflex (VOR) is the compensatory eye rotation generated by the semicircular canals when they sense head rotation. The eye movements result from inhibition and excitation of the eye muscles, which are paired to pull in opposite directions and are therefore called agonist-antagonist muscle pairs. The image on the retina is stabilized with the help of counterrotation: the eyes rotate in the direction opposite to the head turn. The neural path for the VOR is a three-neuron arc. The first neurons are the afferents, which carry information from the vestibular periphery to the vestibular nuclei. There they pass the information to interneurons, which in turn pass it to efferent oculomotor neurons; these drive the eye muscles that rotate the eyes.

Vestibulo-autonomic responses

The vestibular system also contributes to the autonomic nervous system: the part of the nervous system that controls the glands, heart and digestive system. Motion sickness is an example of such a response; it results from disagreement between the orientation and motion signals of the semicircular canals, the otolith organs and vision. Other responses often take the form of a compensatory contribution, for example in blood-pressure regulation. When you lie down, it takes less work to pump blood to the brain because the heart does not have to pump against gravity. If you suddenly stand up, the cardiovascular system must abruptly readjust the regulation of the blood supply; until it does, you can feel light-headed or even black out.

Vestibulo-spinal responses

Vestibulo-spinal responses help keep us from falling over; without them our balance would be seriously disturbed, as in spatial disorientation. The vestibulo-spinal response is really an entire family of reflexes. In the vestibular nuclei, the primary afferent neurons pass information to interneurons, which carry it down the lateral and medial vestibulo-spinal tracts. How far down the spinal cord they carry this information depends on their contribution to the balance system.

Is there something like a spatial orientation cortex?

There is no area specifically for the vestibular system. An explanation for this is that different senses contribute to spatial orientation and therefore there is no need for a special area.

Vestibular thalamocortical pathways

Vestibular information reaches the cortex via the so-called thalamocortical pathways. Neurons from the vestibular nuclei bring information to the thalamus, from there the information is processed and passed on to the cortex. Research suggests that the temporo-parietal-insular cortex is involved in spatial orientation perception.

This area of cortex receives input from the semicircular channels and the otolites. There is also a vestibular path that runs through the cortex to the hippocampus. There are neurons in the hippocampus that are called head direction cells because they respond to specific directions of the head.

Cortical influences

The areas that receive information from the vestibular system also project back to the vestibular nuclei. This feedback appears to modulate low-level vestibular processing in the brainstem. This can affect both perceptions and reflexive reactions. Perception can be influenced by knowledge and expectations.

What happens when the vestibular system fails?

If the vestibular system fails, problems arise such as disorientation and imbalance. The good news is that many patients can partially adapt to the situation, for example by using other sensory information.

Falls and vestibular function

Research has shown that failure of the vestibular system is correlated with falls and impaired balance. Moreover, falls are a leading cause of accidental death, and the risk of falling increases when the vestibular system fails.

Mal de Debarquement syndrome

Many travelers are a bit off balance after they have been on a boat for a while: they may feel nauseous and a little unsteady, still feel the rolling motion, and have 'sea legs'. Occasionally people cannot readapt; this is called Mal de Débarquement syndrome, in which the feeling persists for months or years.

Ménière's syndrome

Ménière's syndrome causes patients to experience sudden attacks of dizziness. They become so spatially disoriented that they must lie down quickly or they will fall, and the attacks can lead to repeated vomiting. Some researchers think an excess of fluid in the inner ear is the cause, while others think several different inner-ear disorders produce this constellation of symptoms. Treatment includes medication that lowers the pressure in the inner ear, implants placed in the inner ear, or procedures that destroy the vestibular organ.

How can touch be observed? - Chapter 13 (5th edition)

The term touch refers to sensations caused by mechanical displacements of the skin; the adjective form, tactile, refers to the result of these mechanical interactions with the skin. Internal sensations from muscles, tendons and joints provide information about the positions and movements of our limbs, which is called kinesthesia. Proprioception encompasses this kinesthetic input together with sensory input from other locations in the body. Somatosensation is a collective term for sensory signals from the skin, muscles, tendons, joints and internal receptors. Touch is very important in evolutionary terms: think of pain as a warning and temperature as a signal for seeking a safe environment. Mechanical sensations play an important role in our intimate and sexual activities and are a form of non-verbal communication. Touch is also important because it lets us identify and manipulate objects that cannot be seen or heard.

How can the physiology of touch be described?

Touch receptors

Tactile receptors are located both in the outer layer of the skin, the epidermis, and in the layer below it, the dermis. There are different types of receptors, distinguished by two characteristics:

  1. The size of the receptive field: the region of skin within which stimulation causes the receptor to respond.
  2. The rate of adaptation (fast versus slow). A fast-adapting (FA) receptor responds with bursts of action potentials when a stimulus starts and ends, but not in between; a slow-adapting (SA) receptor remains active throughout the period in which the stimulus is in contact with the skin.

Tactile receptors are called mechanoreceptors because they respond to mechanical stimulation or pressure. All tactile receptors consist of a fiber and an expanded ending; a fiber comprises an axon and its myelin sheath. All tactile receptors fall into the class of A-beta fibers, which have relatively wide diameters that allow rapid neural conduction. There are four types of mechanoreceptors, found in the hairless (glabrous) skin of the palms; the types always work together:

  1. Slowly adapting type I (SA I) fibers respond best to constant pressure, fine spatial details and very low-frequency vibrations. They are important for the perception of texture and pattern. These fibers have Merkel cell neurite complexes as their endings: specialized endings associated with slowly adapting fibers with small receptive fields.
  2. Slowly adapting type II (SA II) fibers respond best to sustained pressure and especially to lateral stretching of the skin. They help determine when the fingers are in the right position to grasp an object. These fibers have Ruffini endings: endings associated with slowly adapting fibers with large receptive fields.
  3. Fast-adapting type I (FA I) fibers respond best to low-frequency vibrations. These fibers help correct the grip when grasping objects. They have Meissner corpuscles as their endings: specialized endings associated with fast-adapting fibers with small receptive fields.
  4. Fast-adapting type II (FA II) fibers respond best to high-frequency vibrations; for example, they help determine how hard you are tapping on something. These fibers have Pacinian corpuscles as their endings: endings associated with fast-adapting fibers with large receptive fields.

The nerve fibers of the various types are assumed to terminate in different expanded endings, and the receptors have come to be named after the anatomists who first described these endings: Merkel cell neurite complexes (SA I), Ruffini endings (SA II), Meissner corpuscles (FA I) and Pacinian corpuscles (FA II).

Kinesthetic receptors

Other mechanoreceptors, located in the muscles, tendons and joints, are called kinesthetic receptors; they play a role in sensing where our limbs are and what movements they make. The angle formed by a limb at a joint is sensed primarily by muscle receptors called muscle spindles, which are sensitive to muscle stretch and monitor changes in the length of the muscle.

Thermoreceptors

Thermoreceptors are sensory receptors, located in both the epidermis and the dermis, that signal information about changes in skin temperature. There are two distinct populations of thermoreceptors: warmth fibers fire when the temperature of the skin surrounding the fibers rises, and cold fibers fire in response to decreases in skin temperature. Cold fibers outnumber warmth fibers by a ratio of about 30:1. The neural fibers that mediate cold and warmth include unmyelinated, and hence relatively slowly conducting, C fibers and faster-conducting, myelinated A-delta fibers.

Nociceptors

A nociceptor is a sensory receptor that responds to painful input, such as extreme heat or pressure, signaling impending or actual damage to the tissue. There are two types of nociceptors: A-delta fibers, which respond primarily to strong pressure or heat, and C fibers, which respond to intense stimulation of various kinds, such as pressure, heat or cold. Pain therefore often arrives in two phases: a fast, sharp burst (A-delta fibers) followed directly by a throbbing sensation (C fibers).

Pleasant touch receptors

Traditionally, bodily sensations were classified into (1) tactile, (2) thermal, (3) pain and (4) itch experiences. Recently a fifth component has been recognized: 'pleasant' or 'emotional' touch. The emotional properties of non-painful bodily touch appear to be mediated in large part by a class of unmyelinated (and thus relatively slow) peripheral C fibers known as C-tactile (CT) afferents, which are related to neither pain nor itch. Researchers believe these CT afferents are found only in hairy skin. There are indications that pleasant touch increases endorphin and oxytocin responses, promoting satisfaction, trust and calm.

From skin to the brain

Axons of tactile receptors are bundled in nerve trunks, comparable to the ganglion-cell axons in the optic nerve and the hair-cell afferents in the auditory nerve. However, whereas there are only two optic and two auditory nerves, there are many somatosensory nerve trunks, and tactile information has to travel a longer way to reach the brain: it passes through the spinal cord. Nerve fibers coming from the skin appear to form labeled lines, with each fiber type encoding a particular touch sensation; these lines are interconnected in the spinal cord. Axons enter the spinal cord at the dorsal horn, which is organized in different layers. The input to the spinal cord is organized somatotopically, that is, mapped in correspondence with locations on the body.

The information goes to the brain from the spinal cord via two routes. The spinothalamic pathway is the slow path and primarily transports the information from thermoreceptors and nociceptors. The dorsal column-medial lemniscal (DCML) pathway transports information much faster and mainly transports tactile and kinesthetic information. From the thalamus a large part of the information goes to the somatosensory area 1 (S1). Neurons in S1 communicate with the somatosensory area 2 (S2). Touch sensations caused by skin stimulation are spatially represented in area S1, partly somatotopically: corresponding to spatial events on the skin. From this comes the sensory homunculus, a representation of areas of the body in the brain.

How people perceive their own body is called their body image. A phantom limb is a perceived sensation of a physically amputated limb. This may be due to overlapping areas in the homunculus: a maplike representation of regions of the body in the brain.

A distinction can also be made between a 'what' and a 'where' pathway in touch. Neural plasticity is the ability of neural circuits to undergo changes in function or organization as a result of previous activity. Pathways that run from the skin to the brain tell only one side of the story of touch: pathways from the brain can change sensations, and the most striking of these effects concern pain.

Pain

The signals from the nociceptors enter the spinal cord in an area called the substantia gelatinosa (a region of interconnecting neurons in the dorsal horn of the spinal cord). There, neurons receive information from the brain and form synapses with the neurons that carry sensory information from the nociceptors to the brain. According to the gate control theory, the bottom-up pain signals from the nociceptors can be blocked by a circuit in the dorsal horn: the transmission of pain works like a gate that is opened by excitatory pain signals and closed by inhibitory input. The anterior cingulate cortex (ACC) is a brain region associated with the unpleasantness of a pain sensation. Pain can also be influenced by cognition: pain can produce a secondary pain affect, the emotional response associated with suffering when painful memories are recalled. This component of pain is associated with the prefrontal cortex, an area involved in cognition and executive control. Reducing the pain sensation while a person remains conscious is called analgesia. This can be achieved with drugs, but also, for example, through religious conviction. Studies emphasize the importance of interpersonal and broader social influences on the emotional component of pain.

Endogenous opiates are substances that are released that block the release or uptake of neurotransmitters involved in transmitting pain sensations to the brain.

The placebo effect means that pain diminishes when people think they are receiving an analgesic drug even though they are not. Nociceptors signal impending or actual damage to the tissue; this is called nociceptive pain.

Once damage has been done, that patch of skin can become more sensitive and experience pain more readily than before; this is called hyperalgesia.

How can the touch system be described from a psychological and psychophysical perspective?

Now that we have discussed the physiological aspect of the touch system, we can move on to the psychological and psychophysical aspects. How sensitive are we to mechanical stimulation? What are the limits on tactile acuity in space and time? What are the smallest details that we can feel?

How sensitive are we to mechanical pressure?

Thresholds for feeling pressure differ in different places of the body. In general, the face is the most sensitive to tactile pressure. This can be measured by, among other things, brushing over the skin with a specific object. FA II receptors can distinguish between smooth and non-smooth surfaces. In addition, people are sensitive to changes in pressure, tactile vibration. The sensitivity always depends on the mechanoreceptors in a certain area.

How finely can we resolve spatial details?

With the two-point touch threshold it is possible to measure the minimum distance at which two stimuli are perceived as separate. Research shows that the face, fingertips and toes are the most sensitive to this. The threshold is low if the receptive fields are small and the density of receptors is high.
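
A minimal sketch of how such a threshold could be estimated with a simple adaptive staircase; the simulated observer and its 3 mm threshold are hypothetical stand-ins for a real participant:

    import random

    def feels_two_points(separation_mm, threshold_mm=3.0):
        # Toy observer: the larger the separation, the likelier "two points".
        p = 1 / (1 + 2.718281828 ** (-(separation_mm - threshold_mm)))
        return random.random() < p

    # 1-up/1-down staircase: step down after "two points", up after "one";
    # the separation hovers around the observer's 50% point.
    separation, step, history = 10.0, 0.5, []
    for _ in range(200):
        separation += -step if feels_two_points(separation) else step
        history.append(separation)

    print(round(sum(history[-50:]) / 50, 1))  # settles near the 3.0 mm threshold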

How finely can we resolve temporal details?

A method to measure temporal differences in tactile stimulation is by asking participants whether two tactile pulses are given simultaneously or sequentially. The smallest observable difference is 5 milliseconds.

Do people differ in tactile sensitivity?

Tactile sensitivity decreases with increasing age. However, this does not apply to blind Braille readers. Another factor that influences tactile sensitivity is predisposition. Genes influence the sensitivity of different senses.

What is haptic perception?

The term haptic perception refers to the perceptual processing of input from multiple sensory subsystems, such as those in the skin, muscles, tendons and joints. Such perception is often active and information-seeking: the person explores the world rather than passively accepting it.

Perception for action

Using our hands to actively explore the world of surfaces and objects is action for perception. Using somatosensation to guide our ability to grasp and manipulate objects is perception for action.

Action for perception

The term exploratory procedure describes a specific way of feeling an object to learn more about its properties. There are different procedures for discovering different traits, which is also reflected in the neural structures. For example, SA I fibers are mainly active when a movement is made over a rough surface and less active when there is no movement over the object. Research suggests that a dual-coding theory applies to perception of roughness.

The ridges of your fingerprints help with the tactile perception of fine textures: they selectively amplify some vibration frequencies while filtering out others. These ridges also appear to help the fingers maintain a stable grip on an object by increasing the friction between skin and object.

The what-system of touch: The observation of objects and their properties

To determine the overall shape of an object haptically, we usually explore it by running our fingers along its contours. The instantly recognizable characteristics of a material are important in haptic recognition. As with visual search tasks, certain features also stand out in touch search tasks. For example, research shows that hard surfaces stand out among soft surfaces, and cold surfaces among warm ones. Even though tactile pattern perception is not very efficient, it can be done, especially if the patterns are small enough to feel with one fingertip. A common decision-making process appears to underlie both haptically and visually perceived patterns. Tactile agnosia is the inability to identify objects by touch; patients with tactile agnosia can often still locate objects.

The where system of touch: locating objects

Haptic object localization requires that we first establish a frame of reference, a coordinate system used to define locations. The ego center is the center of a frame of reference used to represent locations relative to the body. Your haptic ego center can be determined by holding a ballpoint pen at a certain point on top of a table with one hand and then pointing, with the other hand underneath the table, to the location that corresponds to that point. It appears that we are not at all good at determining how objects are oriented in haptic space. This was shown in a study in which people had to set pens parallel to one another with their eyes closed; large errors were made.

Tactile spatial attention

When people anticipate a touch at a certain location, they can voluntarily focus their attention on that location. Attention directed within the tactile modality in this way is called endogenous spatial attention and is a form of top-down control. Exogenous spatial attention is a form of bottom-up control: attention is captured by a stimulus that suddenly touches you somewhere. This capture is reflexive and therefore not voluntary. Touch also shows a change-blindness phenomenon: differences between successive stimuli are noticed slowly, or not at all, when several places on the body are stimulated in between.

Social touch

Touch influences social interactions, and individual differences in social touch are partly genetically determined. Certain touch behaviors are also passed on from parents to children.

Interactions between touch and other modalities

We often touch objects that we see, and we hear the consequences of the contact. Sometimes the different inputs compete and sometimes everything is neatly integrated. Competition can arise when resources are limited and attention is devoted to a specific task. Unlike the competition for attention, intersensory integration can occur when different modalities receive information about the same object. In some cases one modality can dominate the other, but dominance is not fixed: in general, people integrate the signals of the two modalities and take a weighted average of them. Ultimately, vision and touch are complementary. A haptic virtual environment is a synthetic world that is experienced haptically through the action of an electromechanical device that transmits forces to the user's hands; the device also senses the user's actions. Haptic virtual environments can be very educational; they are used, for example, to train new surgeons.

What is olfaction? - Chapter 14 (5th edition)

People have two important chemical detection systems: one for molecules in the air and one for molecules that we put in our mouths. These are, respectively, the sense of smell (olfaction) and the sense of taste (gustation). Smell is very important in our experience of food because there are two routes through which we perceive odors. The first is orthonasal olfaction: odor molecules are sniffed in through the nostrils and travel up to the olfactory epithelium. The second route is retronasal olfaction: odor molecules released in the mouth travel from the back of the mouth up into the nasal cavity and reach the olfactory epithelium from behind.

How can the olfactory physiology be described?

Odors and odorants

Olfactory sensations are called odors. An odor is the translation of a chemical stimulus into the sensation of an odor percept. For example: "the cake has a chocolate odor." However, not every chemical is an odorant. An odorant is a molecule that can be translated by the nervous system into the perception of odor. To be smelled, odorant molecules must be able to move through the air and be small and hydrophobic. However, there are also molecules that meet these criteria but that we cannot smell, such as oxygen and natural gas.

The human olfactory apparatus

The primary function of the nose is to filter, warm, and humidify the air that we breathe. The nose contains small ridges, the turbinates, which cause turbulence in the incoming air, sending part of it to the olfactory cleft: a narrow space at the back of the nose into which air flows and where the olfactory epithelium is located. The olfactory epithelium is a secretory mucous membrane that detects odorants in the inhaled air. The nostrils take in different amounts of air. This is called nasal dominance; the amount of air per nostril changes continuously throughout the day, so the nostrils continuously vary in their sensitivity to odorants as a function of the amount of air inhaled. The olfactory epithelium, the 'retina of the nose' at the back of each nasal passage, contains three types of cells:

  1. Supporting cells: these cells offer metabolic and physical support to the OSNs.
  2. Basal cells: these cells are the precursors of OSNs.
  3. Olfactory sensory neurons (OSNs): the most important cell type in the epithelium. They are small neurons just below a mucus layer in the epithelium.

A cilium (plural: cilia) is any of the hairlike protrusions on the dendrites of olfactory sensory neurons. At the tips of the cilia are the receptors for odorant molecules, called olfactory receptors (ORs), to which odorant molecules bind. Each OSN expresses only one type of OR, and all OSNs with the same OR type project to the same glomerulus (any of the spherical conglomerates that receive the incoming axons of olfactory sensory neurons); the OSNs for each OR type converge onto two glomeruli, one medial and one lateral. The interaction between an odorant and the OR ultimately causes an action potential that travels along the axon of the OSN to the olfactory bulb. We have approximately 5-10 million OSNs distributed over the epithelia. Animals such as dogs have about 100 times as many OSNs; that is why they can detect much lower concentrations of odorants than people. From the OSNs, the information goes to the cribriform plate: a bony structure riddled with tiny holes, at the level of the eyebrows, that separates the nose from the brain. The axons of the OSNs pass through the tiny holes of the cribriform plate to reach the brain. When the cribriform plate is fractured and the olfactory axons are severed, anosmia arises: an inability to perceive odor.

For a long time it was thought that humans are not very good at using their noses; in fact, human olfactory ability is very good. The absolute size of the olfactory bulb in humans is also much greater than that of, for instance, a mouse. An olfactory bulb is a blueberry-sized extension of the brain just above the nose, where olfactory information is first processed. There are in fact two olfactory bulbs, one in each brain hemisphere.

OSNs are regularly replaced, but a fractured cribriform plate forms scar tissue, so that the axons of new OSNs can no longer reach the brain. A sinus infection, chronic sinusitis, and allergies can also lead to anosmia (the complete inability to smell); in these cases the nasal passages are blocked, preventing odorants from interacting with olfactory receptors. Smell also declines with age.

How can the neurophysiology of olfaction be described?

In people with normally functioning olfaction, the axons of the OSNs pass through the cribriform plate, bundle together to form the olfactory nerve (cranial nerve I), and end in the olfactory bulb, where odor information is first processed. In contrast to all other modalities, smell is processed ipsilaterally (on the same side of the body or brain). In the olfactory bulb, sensory nerve endings come together in small globules called glomeruli. Each odorant activates a unique pattern of ORs, which is translated into a specific pattern of activity over the glomeruli and interpreted by the brain as the signature of that odorant. Around each glomerulus there are excitatory and inhibitory cells called juxtaglomerular neurons. These cells respond to more different odorants than the next layer of neurons, the tufted cells. The tufted cells, in turn, respond to more different odorants than the deepest layer of neurons in the olfactory bulb, the mitral cells. This deepest layer also contains the granule cells: inhibitory neurons that probably form the basis for identifying specific odorants. Axons from the tufted and mitral cells together form the olfactory tract, which sends information to the primary olfactory cortex, also called the piriform cortex. The primary olfactory cortex includes the amygdala-hippocampus complex, which is involved in emotional and associative aspects of olfactory cognition. In addition, the primary olfactory cortex interacts with the entorhinal cortex, from which the hippocampus receives sensory association input. The central brain structures that process olfactory information are all part of the limbic system, which is involved in many aspects of emotion and memory. Olfactory sensory neurons differ from all other sensory neurons in that they make direct contact both with the outside world and with the brain.

The genetic basis of olfactory receptors

Pseudogenes are genes that are present in the DNA but whose encoded proteins are never produced. The exact number in humans is not known, and one person has more functional odor receptors than another. The more copies you have of a specific receptor, the more sensitive you are to the corresponding fragrance. It also matters whether a person has pseudogenes or functional genes for a particular receptor; this, too, can change the perception of odorants. The number of receptors a person has of a certain type determines how an odorant is experienced: more receptors lead to a more intense odor, which is often perceived as less pleasant than a somewhat less intense one. There are also factors that can temporarily affect sensitivity to odorants, such as alcohol and drugs. Smell influences food intake: the more intense the perception of the retronasal aroma of food, the less we eat of it. OR variation can thus influence both our food choices and our food intake.

The feel of scent

Because most odorants also stimulate the somatosensory system via nociceptors in the nose, a feeling often accompanies the experience of an odorant; menthol, for example, feels cold. These sensations are transmitted through the trigeminal nerve (cranial nerve V).

How do we move from chemicals to smells?

In this section we focus on the following question: how does the biochemical interaction between an odorant and an OR, and the subsequent neurological processing in the olfactory bulbs and later brain structures, result in the psychological perception of a scent such as wintergreen mint? To put it more simply: how do we move from chemicals to smells?

Theories of olfactory perception

Nowadays, the most accepted biochemical theory is shape-pattern theory. This theory states that molecules and receptors have different shapes; an odorant will be detected by a specific OR to the extent that its molecules fit that receptor. Recent molecular research suggests that odors are detected by a combinatorial code, in which one odorant binds to different receptors, and one receptor binds different odorants, to varying degrees. An alternative theory is vibration theory, which states that every odorant has a characteristic vibration frequency and that molecules with the same vibration frequency smell the same. A specific anosmia is the inability to smell one specific compound in the presence of otherwise normal odor perception; it is due to missing or malfunctioning specific ORs and appears to be a genetic condition. Vibration theory cannot explain specific anosmias, or why the same odorant can produce different odor sensations in different people; shape-pattern theory can. Another advantage of shape-pattern theory concerns stereoisomers: molecules that contain the same atoms but come in mirror-image arrangements. Although such molecules contain the same atoms, they can smell very different. This can be explained by shape-pattern theory, but not by vibration theory.
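
To make the combinatorial code concrete, here is a minimal sketch in Python. The receptor names and binding strengths are invented for illustration (the carvone pair is the classic stereoisomer example: one mirror form smells of spearmint, the other of caraway); only the logic, identification by a pattern of activation across several receptor types, reflects the theory.

```python
import numpy as np

# Hypothetical activation (0..1) of four receptor types OR_A..OR_D:
odorants = {
    "carvone_R": np.array([0.9, 0.1, 0.4, 0.0]),  # spearmint-like
    "carvone_S": np.array([0.1, 0.8, 0.4, 0.0]),  # mirror image: caraway-like
}

def similarity(a, b):
    """Cosine similarity of two activation patterns (1 = same percept)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stereoisomers yield different patterns, hence different smells:
print(similarity(odorants["carvone_R"], odorants["carvone_S"]))  # ~0.37

# A specific anosmia: with OR_A missing, the two patterns become more
# similar and therefore harder to tell apart.
knockout = np.array([0.0, 1.0, 1.0, 1.0])
print(similarity(odorants["carvone_R"] * knockout,
                 odorants["carvone_S"] * knockout))              # ~0.65
```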

The importance of patterns

We detect odorants by the pattern of activation across different receptor types. The timing of the activation also matters: an odorant stimulates the receptors it activates in a specific order and at a specific rate, and another odorant may do so in a different order, so that a different odor is perceived. Patterns are also important in early stages of the olfactory cortex: specific odorants produce specific patterns of activity in the piriform cortex, and odorants that are perceived as smelling the same produce the same patterns.

Is odor perception synthetic or analytical?

We rarely smell 'pure scents'. There are two possibilities for how odor mixtures are processed: by analysis and by synthesis. Both appear to be true, but the synthetic component is what we usually experience. The analytical component of odor can be seen in binaral rivalry: the competition between the two nostrils for the perception of odorants. If two different scents are presented to the two nostrils at the same time, one smell is perceived at one moment and the other at the next; perception keeps alternating, and no mixture of the two scents is observed. Research suggests that smell is primarily a synthetic sense, although analytical skill can be developed to a certain extent. The synthetic character of smell can be deduced from the existence of olfactory white: any mixture of 30 or more odorants tends to produce the same indistinct scent.
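
A toy calculation suggests how something like olfactory white could fall out of pattern coding; this is an illustrative analogy, not a model from the book. If each odorant is a random activation pattern across many receptor channels, and a mixture behaves roughly like the average of its components' patterns, then unrelated many-component mixtures all converge toward the same average pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture(n_components, n_channels=100):
    """Average activation pattern of n randomly drawn odorants."""
    return rng.random((n_components, n_channels)).mean(axis=0)

def dissimilarity(a, b):
    """Euclidean distance between two activation patterns."""
    return float(np.linalg.norm(a - b))

print(dissimilarity(mixture(1), mixture(1)))    # ~4: single odorants differ
print(dissimilarity(mixture(30), mixture(30)))  # ~0.8: big mixtures converge
```

The more components a mixture contains, the closer its pattern lies to the global average, so any two large mixtures end up smelling alike.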

The power of sniffing

Sniffing means deliberately inhaling a larger amount of air. Sniffing increases the ability to detect odorants, and compared with normal breathing it also produces more activation in certain areas of the brain, such as the cerebellum.

Odor imagery

People have little or no ability to imagine scents, and olfactory sensations are also very rare in dreams. However, odor experts do show activation in the piriform cortex when they are asked to imagine odors.

How do we identify, discriminate, and adapt to odors?

Psychophysics is the scientific field that establishes quantitative relationships between physical and psychological events.

Detection, discrimination and recognition

The threshold for odor detection depends on various factors. Odorant molecules with longer carbon chains are easier to detect than molecules with shorter chains. Experience is also a factor: genes can be 'turned on' by environmental factors, which can happen through repeated exposure to a certain substance and leads to higher sensitivity and faster detection. A healthy person can discriminate between many odorants. Discrimination is not the same as recognition, i.e., remembering whether you have smelled an odorant before: recognizing an odorant takes about three times as long as detecting its presence.

Psychophysical methods for detection and discrimination

The staircase method is a psychophysical method for determining the stimulus concentration required for detection at threshold level; it is an example of a method of limits. The technique can be used to determine a person's threshold for odor detection. An odorant is presented in increasing concentrations until the participant reports smelling something; the concentration is then reduced until the participant reports no longer smelling anything, and so on. The threshold is estimated from the concentrations at which these reversals occur.
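
The sketch below simulates such a staircase (a 1-up/1-down rule, which converges on the 50% detection point) in Python. The simulated observer, the step size, and the "true" threshold of 0.5 are invented illustration values; a real experiment would replace detects() with an actual odor presentation and response.

```python
import random

TRUE_THRESHOLD = 0.5  # hypothetical threshold of the simulated observer

def detects(concentration):
    """Simulated yes/no response: noisy comparison with the true threshold."""
    return concentration + random.gauss(0, 0.05) > TRUE_THRESHOLD

def staircase(start=0.1, step=0.05, n_reversals=8):
    """Raise the concentration after each 'no', lower it after each 'yes';
    estimate the threshold as the mean concentration at the reversals."""
    concentration, direction, reversals = start, +1, []
    while len(reversals) < n_reversals:
        new_direction = -1 if detects(concentration) else +1
        if new_direction != direction:          # response trend flipped
            reversals.append(concentration)
            direction = new_direction
        concentration = max(0.0, concentration + direction * step)
    return sum(reversals) / len(reversals)

print(f"estimated threshold: {staircase():.3f}")  # hovers around 0.5
```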

The triangle test is used to determine whether someone can discriminate between odors. The participant receives three scents, two of which are the same, and must indicate which odor is the odd one out.
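
Because a guessing participant is correct on one trial in three, performance over a block of triangle trials can be compared against chance with a binomial tail probability. A minimal sketch (the 12-out-of-20 score is just an example):

```python
from math import comb

def p_value(correct, trials, chance=1/3):
    """Probability of getting `correct` or more right by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

print(f"p = {p_value(12, 20):.3f}")  # ~0.01: better than chance, so this
                                     # participant can discriminate the odors
```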

Identification

The tip-of-the-nose phenomenon means that an odorant smells very familiar but cannot be named. A difference from the tip-of-the-tongue phenomenon is that in the latter the first letter or the number of syllables is sometimes still available, whereas in the tip-of-the-nose phenomenon you know nothing about the word. One possible reason why odor and language are so separate is that odor information does not have to pass through the thalamus before it is processed in the cortex, while the thalamus is important for language. In addition, odor is processed mainly in the right hemisphere and language mainly in the left.

Individual differences

One reason for individual differences in odor perception is that individuals differ in how many and which ORs are expressed in the olfactory epithelium. Age and gender are the two most important factors causing differences. Women are more sensitive to odorants than men at all ages. The ability to detect odorants deteriorates with age because the death of olfactory sensory neurons starts to outpace their regeneration. The ability to identify odorants declines strongly from around age 50; the main reason is that odor identification relies heavily on verbal and semantic processing. This connection can be important in the early identification of neurological diseases such as Alzheimer's.

Adaptation

Smell is a detection system that responds to change. After you have been in the same scent for a while, the odor molecules are bound to the corresponding olfactory neurons in your nose, and the ORs withdraw into the cell body. The receptors are then no longer physically available to respond to the molecules; this is called receptor recycling. This mechanism operates in all receptors of the class to which ORs belong: G protein-coupled receptors (GPCRs), a class of receptors present on the surface of olfactory sensory neurons. The biochemical phenomenon that occurs after longer exposure to an odorant is called receptor adaptation: receptors stop responding to the odorant and detection ceases. Research shows that fear of exposure to an odorant can make people perceive an odorant that is not there, and emotion can affect how odorants are perceived: the same smell can be experienced differently in different contexts. Adaptation has the advantage of helping to filter out stable background odors. Sniffing can counteract it and helps to distinguish the components of an odor scene. In some cases, exposure to one odorant can raise the detection threshold for a second odorant. This is called cross-adaptation, and it happens when components of the two odorants bind to the same type of receptors.

Cognitive habituation

Cognitive habituation is the process whereby a person no longer reacts to an odorant after long-term exposure to it. At least three mechanisms are involved. First, the receptors that are internalized into their cell bodies during adaptation become increasingly sequestered with continuous exposure and take longer to recycle. Second, with continuous exposure odorant molecules are absorbed into the bloodstream, which carries them back to the receptors; as long as the molecules circulate in the blood, we remain adapted. Third, cognitive-emotional factors play a role. Another observation that demonstrates the importance of consciousness in the perception of smell is that we cannot smell while asleep: the conscious perception of smell depends on attention.

How can olfactory hedonics be described?

The most immediate and basic response human beings have to an odor is whether we like it or not. The affective evaluations we perform when perceiving odors are called odor hedonics. It is rather obvious that perceived pleasantness is related to our liking for an odor; but how are familiarity and intensity related to it?

Familiarity and intensity

We often like smells better if we have smelled them often. Intensity has a more complex relationship with liking, represented by an inverted-U function, although the exact shape depends on the odorant: up to a certain point an increase in intensity is pleasant, but beyond that point it becomes unpleasant. An unpleasant odor, by contrast, is often acceptable at low concentration, but the higher the concentration, the more unpleasant the odor; there the function is roughly linear.
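
The two patterns can be written out as simple functions. The curves below are made up purely to show the qualitative difference between the inverted-U and the linear case:

```python
def liking_pleasant(intensity, peak=0.5):
    """Inverted U: liking rises with intensity, peaks at `peak`, then falls."""
    return 1.0 - ((intensity - peak) / peak) ** 2

def liking_unpleasant(intensity):
    """Roughly linear: the stronger the malodor, the more it is disliked."""
    return -intensity

for i in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"intensity {i:.1f}: pleasant odorant {liking_pleasant(i):+.2f}, "
          f"unpleasant odorant {liking_unpleasant(i):+.2f}")
```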

Nature or nurture?

Are hedonic responses innate or learned? Researchers who argue for innateness state that everyone is born with a predisposition to like or dislike certain scents.

Researchers who argue for learning say that everyone is born with a predisposition to learn whether they like a scent or not, determined by the emotional value that becomes attached to it. Research with young children suggests that preferences are largely learned; for example, young children do not consider the smell of feces dirty. A problem with this type of research, however, is that the olfactory system already functions fully in the third month of gestation, so babies are exposed to odors during pregnancy.

An evolutionary argument

Species that live in specific environments do appear to have innate odor responses. For them this is adaptive, because they always deal with the same enemies and food sources. Generalist species do not have such responses, because they encounter many different environments. Learned taste aversions show that learning is an important mechanism by which general reactions to odorants are acquired: a novel taste is avoided after it has been followed by gastric illness. The smell, not the taste, appears to be the crucial element. The first association with an odorant produces a unique signal in the amygdala-hippocampal complex, which predicts later memory for that association.

Caveats

The majority of the evidence suggests that hedonic responses are learned, but there are two caveats. First, irritating odors can trigger pain responses, and all people have an innate drive to avoid pain. Second, variability in receptor genes and pseudogenes can influence the perceived intensity of odorants, which in turn can influence whether something is liked or not.

What is the key to associative learning and emotion?

The key to associative learning of scents lies in the experience when the scent was first encountered, and in particular the emotional association with that experience. If we smell a scent, the corresponding memory also comes up; sometimes the memory is too vague to recall, so that only the feeling surfaces. Neuroanatomy supports the idea that the olfactory system is primarily built to learn the affective significance of odors: the amygdala has direct connections to the olfactory nerve. We consciously experience odors in the orbitofrontal cortex (OFC), which is therefore also called the secondary olfactory cortex. Here a distinction is made between pleasant and unpleasant odors, mainly in the right OFC, which fits with the right hemisphere's dominance for processing emotional stimuli. The oldest part of the brain, the rhinencephalon, arose from tissue specialized in processing odors; this area now comprises the primary olfactory cortex and specific structures of the limbic system.

The vomeronasal organ, human pheromones and chemo signals

The scent system of animals that depend on smell for survival consists of two parts: the main olfactory bulb (MOB) and the accessory olfactory bulb (AOB). The AOB is connected to the rear of the MOB, and the two systems function independently of each other. The AOB receives input from the vomeronasal organ (VNO), an organ that detects chemicals that cannot be processed by ORs, such as large or aqueous molecules, the kind of molecules that make up pheromones.

The most important function of the VNO is the detection of pheromones: chemicals produced by a member of a species that can trigger physiological or behavioral reactions in other members of that species. Pheromones are signals for chemical communication and do not necessarily have an odor. The transfer of pheromones can, for example, induce lordosis, the posture the females of some species must adopt before they can be impregnated: the spine bends and the genitals become exposed. Pheromones have two kinds of effects, caused by two types of pheromones: releaser pheromones provoke a direct behavioral response in conspecifics, while primer pheromones provoke a physiological change that requires longer exposure to the pheromone. The existence of human pheromones is still very uncertain. One way in which human chemical communication is described is through the term chemosignal: chemicals released by humans that are detected by the olfactory system and can affect the mood, behavior, hormonal state, and/or sexual arousal of other people. Odorants are the best cues for memory. In aromatherapy, odorants are used to influence mood, performance, and well-being; physiological correlates of emotion, such as heart rate, blood pressure, and sleep, are also affected.

How does taste perception work? - Chapter 15 (5th edition)

What is the difference between taste and flavor?

Taste is the name for the sensations generated when solutions in the mouth make contact with receptors on the tongue and the palate; these receptors connect with axons in cranial nerves VII, IX, and X. Molecules that we taste are always perceived by both our taste system and our sense of smell. The molecules we taste are dissolved in saliva and stimulate taste receptors on the taste buds. When we chew and swallow food, other molecules are released in the mouth and carried to the nasal cavity by breathing, where they contact the olfactory epithelium and stimulate olfactory receptors. The brain combines these retronasal olfactory sensations with our taste sensations into a kind of metasensation called flavor: the combination of the actual taste (sweet, salty, sour, or bitter) and the retronasal odor.

Localizing flavor sensations

The chorda tympani is the branch of cranial nerve VII (the facial nerve) that carries taste information from the anterior, mobile part of the tongue (the part that can be stuck out). It leaves the tongue together with a branch of cranial nerve V and passes through the middle ear on its way to the brain. The brain processes odors in different ways depending on whether the smell comes from the mouth or from the nostrils.

How can the anatomy and physiology of the taste system be described?

The process of taste perception works as follows: chewing breaks food down into molecules, which dissolve in saliva. The saliva-borne molecules flow into a taste pore that leads to the taste buds, creating neural signals that the taste nerves send to the brain. Taste buds are located in structures called papillae, which are found across the tongue, particularly along its edges. From small to large, the papillae types are: filiform, fungiform, foliate, and circumvallate. Taste buds contain taste receptor cells, each of which responds to a limited number of molecules. When such a cell comes into contact with one of its preferred molecules, action potentials are produced that carry the information to the brain through the cranial nerves.

The filiform papillae have no taste function and give the tongue its bumpy appearance. The fungiform papillae are mushroom-shaped papillae; they lie closest together on the edges of the tongue, particularly the tip, with roughly six taste buds per papilla. Foliate papillae are folds of tissue that contain taste buds; they lie mainly at the back of the tongue, where it attaches to the mouth. Circumvallate papillae are circular papillae that form an inverted V at the back of the tongue and also contain taste buds. Taste buds are also found on the palate.

A supertaster is someone whose perception of taste sensations is the most intense. A variety of factors may contribute to this increased perception. Among the most important is the density of fungiform papillae.

Taste buds and taste receptor cells

Taste neurons are pseudo-unipolar: a single process exits the cell body and then splits into a peripheral and a central limb. The peripheral axons form the nerves that go to the tongue, and the central axons project to the brain. The tips of some taste bud cells end in slender microvilli, which contain the sites where tastants bind.

There are three types of taste bud cells:

  1. Type I responds to salt, and this type is also likely to secrete potassium, so that other cells retain their resting potential.
  2. Type II responds to sweet, bitter, or umami.
  3. Type III cells are also called presynaptic cells because they have synapses. In addition, this type probably responds to acid.

A tastant is any stimulus that can be tasted. There are two classes of tastants. One class consists of small charged molecules that taste salty or sour. Ion channels in the microvillus membranes admit some of these molecules and not others; as soon as charged particles from salty or acidic food enter the salt and sour receptor cells, those cells transmit their taste. The second class of tastants produces sweet or bitter sensations. These are detected through a mechanism similar to that of the olfactory system, namely via GPCRs: when a molecule (the key) fits into the binding site of a GPCR (the lock), the neuron fires.

Taste processing in the central nervous system

The information travels from the taste buds through the cranial nerves to the medulla and the thalamus, after which it reaches the insula. The insular cortex is the primary cortical processing area for taste: the part of the cortex that first receives taste information, also known as the insula or the gustatory cortex. The orbitofrontal cortex (OFC) receives projections from the insula. The OFC is an integration area: it contains multimodal neurons that respond to temperature, touch, and smell as well as taste.

What are the four basic tastes?

A basic taste is any of the four qualities generally agreed to describe human taste experience: sweet, salty, sour, and bitter. One of the most important characteristics of these basic tastes is that our preference for or aversion to them is hardwired in the brain.

Salty

Salty is one of the four basic tastes. Salts consist of two charged parts: a positively charged cation and a negatively charged anion. The cation produces the salty taste. The perception of saltiness is not fixed: a low-salt diet can cause foods someone liked before the diet to taste too salty, because of habituation to the lower sodium intake.

Sour

Sour is one of the four basic tastes and is produced by hydrogen ions; ultimately, the hydrogen-ion concentration triggers the sour taste. Acids can be pleasant at low concentrations, but at high concentrations they damage both external and internal tissue.

Bitter

Bitter is one of the four basic tastes. There are around 25 different receptors for bitterness. Some respond only to specific molecules and others to many different kinds, which makes it possible for about 25 receptors to detect thousands of different bitter molecules. A bitter taste often signals that something is toxic, but some bitter stimuli are good for us, such as the bitter compounds in vegetables. Sensitivity to bitterness is influenced by hormone levels in women; it is, for example, higher during pregnancy.

Sweet

Sweet is one of the four basic tastes. Sugars are carbohydrates; examples are glucose, fructose, and sucrose. There are also biologically useless sugars with structures similar to glucose, fructose, and sucrose. The task of the taste system is to tune the receptors so that the biologically important sugars produce a sweet taste while the others do not. A heterodimer is a chain of two different molecules. An example is the sweet receptor, which consists of two G protein-coupled receptors; this heterodimer has binding sites for both large and small sweet molecules.

Are there more than four basic tastes and does it matter?

Umami has been discussed as a candidate fifth basic taste. Umami is the taste sensation produced by monosodium glutamate (MSG), the sodium salt of glutamic acid. However, umami is not generally accepted as a fifth basic taste: glutamate is a building block of protein molecules, which are themselves too large to stimulate taste or smell, so protein cannot be perceived directly in foods that contain it. In addition, some people like umami and some do not, whereas liking or disliking the four basic tastes is fixed. In large quantities, MSG can be a risk for sensitive individuals.

The special case of fat

Like protein, fat is an important nutrient, and fat molecules are likewise too large to stimulate taste or smell. We do perceive the tactile sensations of fat in the mouth: fat feels oily and creamy.

Are there genetic variations in perceiving bitter?

In 1931, a chemist named Arthur Fox discovered that we do not all live in the same taste world. He was working with phenylthiocarbamide (PTC) when he accidentally spilled some, and PTC dust flew into the air. A colleague noticed a bitter taste, while Fox noticed nothing. A test with more colleagues revealed more nontasters like Fox, while others did taste the bitterness. Later research showed that this difference is genetic.

Supertasters

As a result of cross-modality matching (the ability to match the intensities of sensations from different sensory modalities), we know that there are supertasters: individuals whose perception of taste sensations is the most intense. Various factors may contribute to this heightened perception, of which the most important is the density of the fungiform papillae.

Health consequences of taste sensation

The variation in sensory characteristics of food and beverages influences food preferences. For example, supertasters can better taste the bitter components of certain vegetables and eat less of them, which can lead to a reduced intake of vegetables. Damage to the taste system can also lead to changes in preferences, which in some cases can also lead to weight gain.

How do taste and flavor contribute to the regulation of nutrients?

The omnivore's dilemma refers to the need of modern humans to find a healthy diet among the dizzying choices available. The specific properties of smell and taste help with this: the olfactory system can distinguish many different molecules, and taste helps to pick out the dangerous and the beneficial substances. One of the strongest pieces of support for a 'hardwired affect' for taste comes from research into the facial expressions of newborns: children respond with stereotyped facial expressions to sweet, salty, sour, and bitter solutions. Sweet brings a smile, while sour results in a puckered face.

How do we regulate nutrients? Early belief in 'specific hungers' gave way to the identification of conditioned preferences and aversions. According to specific hunger theory, the need for a nutrient is sufficient for the body to crave it; a craving for salt or sweet is indeed often associated with deficiencies in the related substances. But the theory does not apply to other nutrients, such as vitamins. What it misses is that an unambiguous sensory cue must be associated with the nutrient before a deficiency can be turned into actually seeking out and ingesting that nutrient. Through learning we find out which foods we like and dislike. An example of such learning is evaluative conditioning, in which the positive or negative value of one stimulus is transferred to a neutral stimulus.

Is all olfactory affect learned?

Certain odorants come from important nutrients in fruits and vegetables and serve as cues for those nutrients. An innate liking for these odorants would lead to a healthier diet.

How can the nature of taste characteristics be described?

We now assume that sweet, salty, sour, and bitter are taste qualities, but that has not always been self-evident. In the 19th century it was thought that modality and quality had to be distinguished: if two sensations are so different that there is no overlap, they can be called different modalities. This old debate is echoed in a modern controversy about how taste qualities are coded.

That controversy concerns whether tastes are coded mainly via labeled lines, in which each taste neuron unambiguously signals the presence of a certain basic taste, or via patterns of activity across many different taste neurons. An evolutionary argument for labeled-line coding is that pattern coding would be risky: combinations of tastes would then often produce entirely new taste sensations. A second argument is that we are good at analyzing taste mixtures into their components.
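
The contrast between the two schemes can be sketched in a few lines of Python. Everything here is hypothetical (the neuron labels, firing rates, and stored patterns); the point is only the difference in how a sweet-sour mixture would be read out:

```python
import numpy as np

# Hypothetical firing rates of four taste neurons to a sweet-sour mixture:
rates = {"sweet_line": 40, "sour_line": 35, "salty_line": 2, "bitter_line": 1}

# Labeled-line readout: each active neuron directly signals its own quality,
# so the mixture is analyzed into its components.
print([name for name, r in rates.items() if r > 10])  # sweet + sour

# Pattern readout: the percept is whichever stored population pattern lies
# nearest, so a mixture could be read as a novel, emergent taste.
stored = {
    "sweet":    np.array([45, 3, 2, 1]),
    "sour":     np.array([4, 40, 2, 1]),
    "lemonade": np.array([40, 35, 2, 1]),   # an emergent pattern of its own
}
vec = np.array(list(rates.values()))
print(min(stored, key=lambda k: float(np.linalg.norm(stored[k] - vec))))
```

Under labeled lines the mixture remains analyzable into sweet plus sour, which fits the second argument above.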

Taste adaptation and cross-adaptation

Constant adaptation to salt influences the ability to taste salt. Adaptation to certain components in food can change the perception of the next food eaten. Cross-adaptation can be seen when the sugar in a dessert adapts the receptors for sweet, so that afterwards, for example, lemonade tastes less sweet than normal.

Pleasure and retronasal versus orthonasal olfaction

Little is known about the link between retronasal and orthonasal smell. We know that we learn to like or dislike odors, but do we learn these preferences separately for retronasal and orthonasal smell?

The pleasure of the burn of chili peppers

The preference for chili pepper in certain countries appears to depend on social influences: children are given the peppers to eat at a young age and see their family enjoying them. Chili also has health benefits, such as killing microorganisms in food. The burning sensation experienced when eating chili peppers varies greatly between individuals. First, the more fungiform papillae a person has, the more intense the burn. Second, the pain receptors are desensitized by capsaicin, the substance that causes the burning sensation, so people who often eat chili peppers have chronically desensitized receptors.

 
