Bullet points per chapter for the 6th edition of Sensation and Perception by Wolfe et al.


What do the processes of sensation and perception entail? - Chapter 1

  • Sensation and perception are central to, and often precede, almost all aspects of human behaviour and thought. There are many practical applications of our increased understanding of sensation and perception.
  • Gustav Fechner invented several clever methods for measuring the relationship between physical changes in the world and consequent psychological changes in observers. These methods remain in use today. Using Fechner's methods, researchers can measure the smallest levels of a stimulus that can be detected (absolute threshold) and the smallest differences that can be detected (difference thresholds, or just noticeable differences).
  • A more recent development for understanding performance - signal detection theory - permits us to simulate changes in the perceiver (e.g., internal noise and biases) in order to understand perceptual performance better (see the sketch after this list).
  • We learn a great deal about perception by understanding the biological structures and processes involved. One early observation - the doctrine of specific nerve energies - expresses the fact that people are aware only of the activity of their nervous systems. For this reason, what matters is which nerves are stimulated, not how they are stimulated. The central nervous system reflects specialisations for the senses, from cranial nerves to areas of the cerebral cortex involved in perception.
  • The essential activities of all neurons, including those involved in sensory processes, are chemical and electrochemical. Neurons communicate with each other through neurotransmitters, molecules that cross the synapse from the axon of one neuron to the dendrite of the next. Nerve impulses are electrochemical; voltages change along the axon as electrically charged sodium and potassium ions pass in and out of the membranes of nerve cells.
  • Recordings of individual neurons enable us to measure the lowest level of a stimulus required for a neuron to fire (absolute threshold). Both the rate and the timing pattern of neuronal firing provide additional information about how the brain encodes stimuli in the world.
  • Neuroimaging methods have revolutionised the study of sensation and perception by allowing us to study the brain in healthy, living human observers. Useful methods include electroencephalography (EEG), magnetoencephalography (MEG), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI). Each comes with its own combination of temporal and spatial properties, making one method suitable for researching some questions and other methods more suitable for other questions.
  • Computational models have become more useful than ever for revealing how sensation and perception develop through experience. What all of these models have in common is that the world has predictability that can be exploited to make the senses more attuned to the world.
  • Deep learning algorithms are powerful tools that can take in vast amounts of data (e.g., pictures) and categorise it ('That's a cat'). Such tools may become important in applications like autonomous driving or detecting cancer in medical images.
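
A minimal Python sketch of the signal detection idea referenced above: computing sensitivity (d') and criterion (c) from hit and false-alarm rates. The formulas are the standard equal-variance Gaussian ones; the observer's numbers are invented for illustration.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Signal detection theory: separate sensitivity (d') from response
    bias (criterion c), assuming equal-variance Gaussian distributions."""
    z = NormalDist().inv_cdf            # z-transform (inverse normal CDF)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented observer: 84% hits, 16% false alarms
d, c = sdt_measures(0.84, 0.16)
print(f"d' = {d:.2f}, c = {c:.2f}")     # d' ~ 2.0, c ~ 0 (no bias)
```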

How can our eyes perceive light? - Chapter 2

  • This chapter provided some insight into the complex journey light must take for us to see stars and other spots of light. The path of the light was traced from a distant star through the eyeball and to its absorption by photoreceptors and its transduction into neural signals. In subsequent chapters we'll learn how those signals are transmitted to the brain and translated into the experience of perception.
  • Light, on its way to becoming a sensation (a visual sensation, that is), can be absorbed, scattered, reflected, transmitted, or refracted. It can become a sensation only when it's absorbed by a photoreceptor in the retina.
  • Vision begins in the retina, when light is absorbed by rods or cones. The retina is like a minicomputer that transduces light energy into neural energy.
  • The high degree of convergence in the retinal periphery ensures high sensitivity to light but poor acuity (a toy simulation of this trade-off follows this list).
  • The low degree of convergence in the fovea ensures high acuity but poor sensitivity to light.
  • The one-to-one pathways between cones and ganglion cells exist only in the fovea and account for why images are seen most clearly when they fall on this part of the retina.
  • The visual system deals with large variations in overall light intensity by (a) regulating the amount of light entering the eyeball, (b) using different types of photoreceptors in different situations, and (c) effectively throwing away photons we don't need. 
  • The retina sends information to the brain via ganglion cells, neurons whose axons make up the optic nerves. Retinal ganglion cells have centre-surround receptive fields and are concerned with changes in contrast (the difference in intensity between adjacent bits of the scene).
  • Age-related macular degeneration (AMD) is a disease associated with aging that affects the macula. The leading cause of visual loss among the elderly in the United States, AMD gradually destroys sharp central vision, making it difficult to read, drive, and recognise faces.
  • Retinitis pigmentosa (RP) is a family of hereditary diseases characterised by the progressive death of photoreceptors and degeneration of the pigment epithelium. In the most common form of the disease, patients first notice vision problems in their peripheral vision and under low-light conditions - situations in which rods play the dominant role in collecting light.
  • Several exciting developments are aimed at restoring sight in individuals with blinding retinal diseases.
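
The sensitivity/acuity trade-off flagged above can be made concrete with a toy simulation (the photon counts and pool sizes are invented, not from the book): a model ganglion cell that pools Poisson photon counts over many receptors gains signal-to-noise roughly with the square root of the pool size, but averages away all spatial detail within the pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_snr(mean_photons, n_receptors, trials=10_000):
    """Signal-to-noise ratio of a model ganglion cell summing Poisson
    photon counts from n_receptors photoreceptors (convergence)."""
    counts = rng.poisson(mean_photons, size=(trials, n_receptors)).sum(axis=1)
    return counts.mean() / counts.std()

print(pooled_snr(0.1, 1))     # fovea-like one-to-one wiring: SNR ~ 0.3
print(pooled_snr(0.1, 100))   # periphery-like pooling: SNR ~ 3, 10x better
# The cost of pooling is that detail within the pool is lost - hence the
# sensitive periphery has poor acuity, and the sparse fovea has fine acuity.
```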

How do people perceive spatial figures? - Chapter 3

  • In this chapter we followed the path of image processing from the eyeball to the brain. Neurons in the cerebral cortex translate the array of activity signalled by retinal ganglion cells into the beginnings of forms and patterns. The primary visual cortex is organised into thousands of tiny computers, each responsible for determining the orientation, width, colour, and other characteristics of the stripes in one small portion of the visual field. In Chapter 4 we will continue this story by seeing how other parts of the brain combine the outputs from these minicomputers to produce a coherent representation.
  • Perhaps the most important feature of image processing is the remarkable transformation of information from the circular receptive fields of retinal ganglion cells to the elongated receptive fields of the cortex.
  • Cortical neurons are highly selective along a number of dimensions, including stimulus orientation, size, direction of motion, and eye of origin.
  • Neurons with similar preferences are often arranged in columns in the primary visual cortex.
  • Selective adaptation provides a powerful, noninvasive tool for learning about stimulus specificity in human vision.
  • The human visual cortex contains pattern analysers that are specific to spatial frequency and orientation (see the sketch below).
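
One standard way to make the 'pattern analyser' idea concrete is a Gabor function, the textbook model of a V1 simple cell's receptive field. The sketch below (parameter values are illustrative) shows joint tuning to orientation and spatial frequency.

```python
import numpy as np

def gabor(size=64, spatial_freq=0.1, orientation_deg=45, sigma=8):
    """Gabor patch: a sinusoidal grating (carrier) under a Gaussian
    envelope, tuned to one orientation and one spatial frequency."""
    theta = np.deg2rad(orientation_deg)
    y, x = np.mgrid[-size//2:size//2, -size//2:size//2]
    xr = x * np.cos(theta) + y * np.sin(theta)            # rotated axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * spatial_freq * xr)
    return envelope * carrier

# A model neuron's response: dot product of receptive field and stimulus
rf = gabor(orientation_deg=45)
matched = (rf * gabor(orientation_deg=45)).sum()      # preferred grating
orthogonal = (rf * gabor(orientation_deg=90)).sum()   # non-preferred
print(matched > orthogonal)    # True: the unit is orientation-selective
```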

How does object recognition work? - Chapter 4

  • A series of extrastriate visual areas continue the work of visual processing. Emerging from V1 (primary visual cortex, striate cortex) are two broad streams of processing: one going into the temporal lobe and the other into the parietal lobe. The temporal pathway seems specifically concerned with what a stimulus might be. This chapter follows that pathway. The parietal 'where' pathway will be considered in later chapters.
  • After early visual processes extract basic features from the visual input, it is the job of mid-level vision to organise these features into the regions, surfaces, and objects that can, in turn, serve as input to object recognition and scene-understanding processes.
  • 'Perceptual committees' serve as an important metaphor in this chapter. The idea is that many semi-independent processes are working on the input at the same time. Different processes may come to different conclusions about the presence of an edge or the relationship between two elements in the input. Under most circumstances, we see the single conclusion that the committees settle upon. Bayesian models are one way to formalise this process of finding the most likely explanation for input. Deep neural networks may be a way to build members/parts of the committee.
  • Multiple processes seek to carve the input into regions and to define the edges of those regions, and many rules are involved in this parsing of the image. For example, image elements are likely to group together if they are similar in colour or shape, if they are near each other, or if they are connected. Many of these grouping principles were first articulated by members of the Gestalt school.
  • Other, related processes seek to determine whether a region is part of a foreground figure (like this black O) or part of the background (like the white area around the O). These rules of grouping and figure-ground assignment are driven by an implicit understanding of the physics of the world. Thus, events that are very unlikely to happen by chance (e.g., two contours parallel to each other) are taken to have meaning. (Those parallel contours are likely to be part of the same figure.)
  • The processes that divide visual input into objects and background have to deal with many complexities. Among these are the fact that parts of objects may be hidden behind other objects (occlusion) and the fact that objects themselves have structure. Is your nose an object or a part of a larger whole? What about glasses or hair or a wig?
  • In addition to perceiving the shapes of objects and their parts, we are also adept at categorising the material that an object seems to be made of - glass, stone, cloth, and so on. We use material perception to estimate physical properties. What would the object feel like? Can it be grasped like a bottle, or would it slip through our fingers like sand?
  • It is common to talk about the role of 'templates' in object recognition. The idea is that an object in the world is recognised when its image fits a particular representation in the brain in the way that a key fits a lock. It has always been hard to see how naïve template models could work, because of the astronomical number of templates required: we might need one 'lock' for every object in every orientation in every position in the visual field (see the sketch after this list).
  • Deep neural networks (DNNs) are modern efforts to create computer algorithms that can categorise objects very well. Unlike a literal, lock-and-key template, a DNN tries to match an image with a complex pattern of activity in a network that arises when the network encounters an object in a specific category. This allows the network to categorise an infinite set of images as 'cat', 'coffee cup', and so on.
  • Faces are a special case of object processing. Viewpoint is very important. Upright faces are much easier to recognise than inverted faces. Moreover, some regions of the brain seem to be specifically interested in faces. Different regions may be important for different aspects of face processing ('Who is it?' versus 'How is he feeling?').
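
The template-matching problem described above can be shown in a few lines (toy random 'images', illustrative names): a literal lock-and-key match scores high only when the input lines up exactly with the stored template, which is why naive templates would need a separate 'lock' for every position, orientation, and size.

```python
import numpy as np

def template_match(template, patch):
    """Naive lock-and-key matching: normalized correlation between a
    stored template and an equal-sized image patch."""
    t = template.ravel() - template.mean()
    p = patch.ravel() - patch.mean()
    return float(t @ p / (np.linalg.norm(t) * np.linalg.norm(p) + 1e-9))

rng = np.random.default_rng(1)
template = rng.random((8, 8))              # stand-in for an object image

print(template_match(template, template))                      # ~1.0
print(template_match(template, np.roll(template, 3, axis=1)))  # ~0.0
# A mere 3-pixel shift destroys the match - the combinatorial explosion
# that hierarchical feature models such as DNNs are meant to avoid.
```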

How do we observe colour? - Chapter 5

  • Probably the most important fact to know about colour vision is that lights and surfaces look coloured because a particular distribution of wavelengths of light is being analysed by a particular visual system. Colour is a mental phenomenon, not a physical phenomenon. Many animal species have some form of colour vision. It seems to be important for identifying possible mates, possible rivals, and good things to eat. Colour vision has evolved several times in several different ways in the animal kingdom.
  • Rod photoreceptors are sensitive to low (scotopic) light levels. Humans have only one type of rod photoreceptor; it yields one 'number' for each location in the visual field. Our rods can support only a one-dimensional representation of colour, from dark to light. Thus, human scotopic vision is achromatic vision.
  • Humans have three types of cone photoreceptors, each having a different sensitivity to the wavelengths of light. Cones operate at brighter light levels than rods, producing three numbers at each location; the pattern of activity over the different cone types defines the colour. Some animals have many more types of photoreceptors, but we know rather little about their colour experience.
  • If two regions of an image produce the same response in the three cone types, they will look identical; that is, they will be metamers. And they will look identical even if the physical wavelengths coming from the two regions are different (see the sketch after this list).
  • In additive colour mixture, two or more lights are mixed. Adding a light that looks blue to a light that looks yellow will produce a light that looks white (if we pick the right blue and yellow). In subtractive colour mixture, the filters, paints, or other pigments that absorb some wavelengths and reflect others are mixed. Mixing a typical blue paint and a typical yellow paint will subtract most long and short wavelengths from the light reflected by the mixture, and the result will look green.
  • Colour blindness is typically caused by the congenital absence or abnormality of one cone type - usually the L- or M-cone, usually in males. Most colour-blind individuals are not completely blind to differences in wavelength. Rather, their colour perception is based on the outputs of two cone types instead of the normal three.
  • A single type of cone cannot be used, by itself, to discriminate between wavelengths of light. To enable discrimination, information from the three cones is combined to form three cone-opponent processes. In the first, cones sensitive to long wavelengths (L-cones) are pitted against medium-wavelength (M) cones to create an L-M process that is roughly sensitive to the redness or greenness of a region. In the second cone-opponent process, L- and M-cones are pitted against short-wavelength (S) cones to create an (L + M) - S process roughly sensitive to the blueness or yellowness of a region. The third process is sensitive to the overall brightness of a region.
  • Colour appearance is arranged around opponent colours: red versus green, and blue versus yellow. This colour opponency involves further reprocessing of the cone signals from cone-opponent processes into colour-opponent processes.
  • The visual system tries to disentangle the properties of surfaces in the world from the properties of the illuminants, even though surface and illuminant information are combined in the input to the eyes. Mechanisms of colour constancy use implicit knowledge about the world to correct for the influence of different illuminants and to keep that strawberry looking red under a wide range of conditions.
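
The metamer bullet above has a tidy linear-algebra reading, sketched here with made-up Gaussian stand-ins for the real cone fundamentals: because three cone types reduce an entire spectrum to three numbers, any physical difference that lies in the null space of the cone matrix is invisible.

```python
import numpy as np

wl = np.linspace(400, 700, 31)                   # wavelengths, nm
bump = lambda peak, w=40: np.exp(-((wl - peak) / w) ** 2)
S = np.stack([bump(565), bump(535), bump(440)])  # toy L, M, S curves

def cone_responses(spectrum):
    """All the visual system keeps: three numbers per retinal location."""
    return S @ spectrum

rng = np.random.default_rng(0)
light = 1.0 + rng.random(31)                     # arbitrary positive spectrum

# Perturb the spectrum along a null-space direction of S: every cone
# responds identically, yet the physical spectra differ -> a metamer.
_, _, Vt = np.linalg.svd(S)
metamer = light + 0.5 * Vt[-1]
print(np.allclose(cone_responses(light), cone_responses(metamer)))  # True
print(np.allclose(light, metamer))                                  # False
```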

How does the perception of space and binocular vision work? - Chapter 6

  • Reconstructing a three-dimensional world from two non-Euclidean, curved, two-dimensional retinal images is one basic problem faced by the brain.
  • A number of monocular cues provide information about three-dimensional space. These include occlusion, various size and position cues, aerial perspective, linear perspective, motion cues, accommodation, and convergence.
  • Having two eyes is an advantage for a number of reasons, some of which have to do with depth perception. It is important to remember, however, that it is possible to reconstruct the three-dimensional world from a single two-dimensional image. Two eyes have other advantages over just one: expanding the visual field, permitting binocular summation, and providing redundancy if one eye is damaged.
  • Having two laterally separated eyes connected to a single brain also provides us with important information about depth through the geometry of the small differences between the images in each eye. These differences, known as binocular disparities, give rise to stereoscopic depth perception (see the sketch after this list).
  • Random dot stereograms show that we don't need to know what we're seeing before we see it in stereoscopic depth. Binocular disparity alone can support shape perception.
  • Stereopsis has been exploited to add, literally, depth to entertainment - from nineteenth-century photos to twenty-first-century movies. It has also served to enhance the perception of information in military and medical settings.
  • The difficulty of matching an image element in one eye with the correct element in the other eye is known as the correspondence problem. The brain uses several strategies to solve the problem. For example, it reduces the initial complexity of the problem by matching large 'blobs' in the low-spatial-frequency information before trying to match every high-frequency detail.
  • Single neurons in the primary visual cortex and beyond have receptive fields that cover a region in three-dimensional space, not just the two-dimensional image plane. Some neurons seem to be concerned with a crude in-front/behind judgement. Other neurons are concerned with more precise, metrical depth perception.
  • When the stimuli on corresponding loci in the two eyes are different, we experience a continual perceptual competition between the two eyes, known as binocular rivalry. Rivalry is part of the effort to make the best guess about the current state of the world based on the current state of the input.
  • All of the various monocular and binocular depth cues are combined (unconsciously) according to what prior knowledge tells us about the probability of the current event. Making the wrong guess about the cause of visual input can lead to illusions. Bayes' theorem is the basis of one type of formal understanding of the rules of combination.
  • Stereopsis emerges suddenly at about 4 months of age in humans, and it can be disrupted through abnormal visual experience during a critical period early in life.
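
To put numbers on the disparity bullet above (the interocular distance and viewing distances are merely typical values): disparity is the difference in vergence angle between fixated and non-fixated points, and for a fixed depth step it shrinks roughly with the square of viewing distance.

```python
import numpy as np

def disparity_arcmin(fixation_m, object_m, ipd_m=0.063):
    """Binocular disparity (arcminutes) of a point at object_m while
    fixating at fixation_m, for interocular distance ipd_m."""
    vergence = lambda d: 2 * np.arctan(ipd_m / (2 * d))   # radians
    return np.degrees(vergence(object_m) - vergence(fixation_m)) * 60

print(disparity_arcmin(0.50, 0.55))   # ~ -39 arcmin (uncrossed disparity)
print(disparity_arcmin(5.00, 5.05))   # ~ -0.4 arcmin: the same 5 cm step
# at 5 m is roughly 100x smaller - stereopsis is mainly a near-space cue.
```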

How do attention and the perception of a scene work? - Chapter 7

  • Attention is a vital aspect of perception because we cannot process all of the input from our senses. The term attention refers to a large set of selective mechanisms that enable us to focus on some stimuli at the expense of others. Though this chapter talked almost exclusively about visual attention, attentional mechanisms exist in all sensory domains.
  • In vision, it is possible to direct attention to one location or one object. If something happens at an attended location, we will be faster to respond to it. It can be useful to refer to the 'spotlight' of attention, though deployments of attention differ in important ways from movements of a physical spotlight.
  • In visual search tasks, observers typically look for a target item among a number of distractor items. If the target is defined by a salient basic feature, such as its colour or orientation, search is very efficient and the number of distractors has little influence on the reaction time (the time required to find the target). If no basic feature information can guide the deployment of attention, then search is inefficient, as if each item needed to be examined one after the other. Search can be of intermediate efficiency if some feature information is available. A toy model of these search slopes follows this list.
  • Search for objects in real scenes is guided by the known features of the objects, by the salient features in the scenes, by your history of searching, and by a variety of scene-based forms of guidance. For example, if you're looking for your coffee mug, you will guide your attention to objects with the right colour, shape, and size, and located in physically plausible locations and in logically sensible places.
  • Attention varies over time as well as space. In the attentional blink paradigm, observers search for two items in a rapid stream of stimuli that appear at the point of fixation. Attention to the first target makes it hard to find the second if the second appears within 200-500 ms of the first. When two identical items appear in the stream of stimuli, a different phenomenon makes it hard to detect the second instance.
  • The effects of attention manifest themselves in several different ways in the brain. In some cases, attention is marked by a general increase in neural activity or by a greater correlation between activity in different brain areas. In other cases, attention to a particular attribute tunes cells more sharply for that attribute. And in still other cases, attention to a stimulus or location causes receptive fields to shrink so as to exclude unattended stimuli.
  • Damage to the parietal lobe of the brain produces deficits in visual attention. Damage to the right parietal lobe can lead to neglect, a disorder in which it is hard to direct attention into the contralesional (in this case, the left) visual field. People with neglect may ignore half of an object or half of their own body. 
  • Scene perception involves both selective and nonselective processing. Tasks like visual search make extensive use of selective processing to recognise specific objects. Nonselective processing allows observers to appreciate the mean and variance of features across many objects (or proto-objects). Thus, you know the average orientation of trees in the woods (vertical) before knowing whether any particular tree is oriented perfectly vertically. Using these ensemble statistics, even without segmenting the scene into regions and objects, the nonselective pathway can provide information about the gist of a scene (e.g., whether it's natural or man-made). 
  • Picture memory experiments show that people can remember thousands of images after only a second or two of exposure to each. In contrast, change blindness experiments show that people can miss large changes in scenes if those changes do not markedly alter the meaning of the scene.
  • Our perceptual experience of scenes consists of nonselective processing of the layout and ensemble statistics of the scene, combined with selective processing of a very few objects at each moment. Our experience can be thought of as an inference based on all of the preceding processing. Usually this inference is adequate because we can rapidly check the world to determine whether the chair, the book, and the desk are still there. In the lab, however, we can use phenomena like inattentional blindness and change blindness to reveal the limits of our perception, and it is becoming increasingly clear that those limits can have real-world consequences.
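
The efficient/inefficient contrast in the visual search bullet above is usually summarised as the slope of reaction time against set size. The toy model below (all timing constants invented) reproduces the classic flat-versus-sloped pattern for target-present trials.

```python
import numpy as np

rng = np.random.default_rng(2)

def search_rt(set_size, efficient, base_ms=400, per_item_ms=40, noise_ms=30):
    """Toy RT model: efficient (feature) search jumps straight to the
    target; inefficient search inspects items one by one, finding the
    target after a random number of checks."""
    checked = 1 if efficient else int(rng.integers(1, set_size + 1))
    return base_ms + per_item_ms * (checked - 1) + rng.normal(0, noise_ms)

for n in (4, 8, 16):
    feat = np.mean([search_rt(n, True) for _ in range(2000)])
    ineff = np.mean([search_rt(n, False) for _ in range(2000)])
    print(f"set size {n:2d}: feature ~{feat:.0f} ms, inefficient ~{ineff:.0f} ms")
# Feature search stays flat; inefficient search grows ~20 ms per item
# (half the 40 ms/item cost, since search stops at the target on average
# halfway through the display).
```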

How does the perception of movement work? - Chapter 8

  • Like colour or orientation, motion is a primary perceptual dimension that is coded at various levels in the brain. Motion information is used to determine where objects are going and when they're likely to get there.
  • We can build a simple motion-detecting circuit by using linear filters that delay and sum information (and are followed by nonlinearities); a minimal sketch of such a circuit follows this list.
  • V1 neurons view the world through a small window, leading to the well-known aperture problem (that is, a V1 neuron is unable to tell which elements correspond with one another when an object moves through its receptive field).
  • Strong physiological and behavioural evidence suggests that the middle temporal area (MT/V5) is involved in the perception of global motion.
  • Aftereffects for motion, like those for orientation or colour, can provide important insights into the underlying mechanisms of perception in humans.  
  • Luminance-defined (first-order) motion and contrast- or texture-defined (second-order) motion appear to be analysed by separate systems.
  • The brain has to figure out which retinal motion arises in the world, and which arises because of eye movements. Moreover, the brain must suppress the motion signals generated by our eye movements, or the world will be pretty 'smeared'.
  • Motion information is critically important to us for navigating around our world, avoiding imminent collision, and recognising the movement of animals and people.
  • Eye movements that provide motion information include reflexive movements such as optokinetic nystagmus; involuntary microsaccades; and movements over which we have varying levels of control, including vergence movements, smooth-pursuit movements, and saccades.
  • Saccades are rapid-fire eye shifts that occur constantly throughout our waking hours and during REM sleep. Saccadic suppression keeps the smeared retinal images produced during these shifts from being processed, so that we do not perceive the world as a blur.
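
A minimal version of the delay-and-compare motion circuit mentioned above is the classic Reichardt detector, sketched here on a toy one-dimensional 'retina': each unit multiplies the delayed signal from one location with the current signal from its neighbour, and subtracting the mirror-image pairing yields a signed direction response.

```python
import numpy as np

def reichardt_response(frames, delay=1):
    """Opponent Reichardt detector for a sequence of 1-D frames.
    Positive output = net rightward motion, negative = leftward."""
    f = np.asarray(frames, dtype=float)
    delayed, current = f[:-delay], f[delay:]
    rightward = delayed[:, :-1] * current[:, 1:]   # A(t-1) * B(t)
    leftward = delayed[:, 1:] * current[:, :-1]    # B(t-1) * A(t)
    return (rightward - leftward).sum()

# A bright spot stepping rightward one pixel per frame
frames = [np.roll([1, 0, 0, 0, 0, 0], t) for t in range(6)]
print(reichardt_response(frames))                     # positive: rightward
print(reichardt_response([f[::-1] for f in frames]))  # negative: leftward
```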

What is sound and how do we perceive it? - Chapter 9

  • Sounds are fluctuations of pressure. Sound waves are defined by the frequency, intensity (amplitude), and phase of fluctuations. Sound frequency and intensity correspond to our perceptions of pitch and loudness, respectively (see the sketch after this list).
  • Sound is funnelled into the ear by the outer ear, made more intense by the middle ear, and transformed into neural signals by the inner ear.
  • In the inner ear, cilia on the tops of inner hair cells pivot in response to pressure fluctuations in ways that provide information about frequency and intensity to the auditory nerve and the brain. Auditory nerve fibres convey information through both the rate and the timing patterns with which they fire.
  • Different characteristics of sounds are processed at multiple places in the brainstem before information reaches the cortex. Information from both ears is brought together very early in the chain of processing. At each stage of auditory processing, including primary auditory cortex, neurons are organised in relation to the frequencies of sounds (tonotopically).
  • Humans and other mammals can hear sounds across an enormous range of intensities. Not all sound frequencies are heard as being equally loud, however. Hearing across such a wide range of intensities is accomplished by the use of many auditory neurons. Different neurons respond to different levels of intensity. In addition, more neurons overall respond when sounds are more intense.
  • Series of channels (or filters) process sounds within bands of frequency. Depending on frequency, these channels vary in how wide (many frequencies) or narrow they are. Consequently, it is easier to detect differences between some frequencies than between others. When energy from multiple frequencies is present, lower-frequency energy makes it relatively more difficult to hear higher frequencies.
  • Hearing loss is caused by damage to the bones of the middle ear, to hair cells in the cochlea, to neurons in the auditory nerve, or to the stria vascularis that provides nourishment and ions to the hair cells. Although hearing aids are helpful to listeners with hearing impairment, there is only so much that can be done to help when damage to hair cells cannot be repaired.
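
To make the frequency/intensity/phase bullet above concrete (sample rate and example levels are illustrative), here is a sketch that synthesises a pure tone and converts sound pressure to the standard dB SPL scale (referenced to 20 micropascals).

```python
import numpy as np

def pure_tone(freq_hz, duration_s=0.5, phase_rad=0.0, amplitude=1.0,
              sample_rate=44_100):
    """Pure tone: frequency maps to perceived pitch, amplitude to
    loudness, and phase shifts the waveform in time."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t + phase_rad)

def level_db_spl(pressure_rms, reference=20e-6):
    """Sound pressure level in dB SPL, re the standard 20 uPa reference
    (roughly the threshold of hearing near 1000 Hz)."""
    return 20 * np.log10(pressure_rms / reference)

tone = pure_tone(440)          # concert A; ready to write to a sound card
print(level_db_spl(2e-5))      # 0 dB SPL: barely audible
print(level_db_spl(2e-2))      # 60 dB SPL: conversational speech
```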

How do we locate sound? - Chapter 10

  • Listeners use small differences in time and intensity across the two ears to learn the direction in the horizontal plane (azimuth) from which a sound comes (see the sketch after this list).
  • Time and intensity differences across the two ears are not sufficient to fully indicate the location from which a sound comes. In particular, they are not enough to indicate whether sounds come from the front or the back, or from higher or lower (elevation).
  • The pinna, ear canal, head, and torso alter the intensities of different frequencies for sounds coming from different places in space, and listeners use these changes in intensity across frequency to identify the location from which a sound comes.
  • Perception of auditory distance is similar to perception of visual depth because no single characteristic of the signal can inform a listener about how distant a sound source is. Listeners must combine intensity, spectral composition, and relative amounts of direct and reflected energy of sounds to estimate distance to a sound source.
  • Many natural sounds, including music and human speech, have rich harmonic structure with energy at integer multiples of the fundamental frequency, and listeners are especially good at perceiving the pitch of harmonic sounds.
  • Important perceptual qualities of complex sounds are timbre (conveyed by the relative amounts of energy at different frequencies) and the onset and offset properties of attack and decay, respectively.
  • Because all the sounds in the environment are summed into a single waveform that reaches each ear, a major challenge for hearing is to separate sound sources in the combined signal. This general process is known as auditory scene analysis. Auditory stream segregation succeeds by using multiple characteristics of sounds, including spatial location, similarity in frequency and timbre, onset properties, and familiarity.
  • In everyday environments, sounds to which a person is listening often are interrupted by other, louder sounds. Perceptual restoration is a process by which missing or degraded acoustic signals are perceptually replaced.
  • Auditory attention has many aspects in common with visual attention. It is a balance between being able to make use of sounds one needs to hear in the midst of competing sounds and being on alert for new auditory information.
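
The interaural time differences in the first bullet of this list can be approximated with Woodworth's spherical-head formula; the head radius and speed of sound below are typical textbook values.

```python
import numpy as np

def itd_microseconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference: ITD = (r / c) * (theta + sin(theta))."""
    theta = np.deg2rad(azimuth_deg)
    return 1e6 * (head_radius_m / speed_of_sound) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg azimuth -> ITD ~ {itd_microseconds(az):.0f} us")
# ~0 us straight ahead, growing to ~650 us at 90 deg: the sub-millisecond
# timing differences listeners use to judge a sound's azimuth.
```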

How do we perceive music and speech? - Chapter 11

  • Musical pitch has two dimensions: tone height and tone chroma (see the sketch after this list). Musical notes are combined to form chords. Notes and chords vary in duration and are combined to form melodies.
  • Melodies are learned psychological entities defined by patterns of rising and falling musical pitches, with different durations and rhythms.
  • Rhythm is important to music, and to auditory perception more broadly. The process of perceiving sound sequences is biased to hear rhythm.
  • Humans evolved to be able to produce an extremely wide variety of sounds that can be used in languages. The production of speech sounds has three basic components: respiration, phonation, and articulation. Speech sounds vary in many dimensions, including intensity, duration, periodicity, and noisiness.
  • In terms of articulation and acoustics, speech sounds vary according to other speech sounds that precede and follow (coarticulation). Because of coarticulation, listeners cannot use any single acoustic feature to identify a vowel or consonant. Instead, listeners must use multiple properties of the speech signal.
  • In general, listeners discriminate speech sounds only as well as they can label them. This is categorical perception, which also has been shown for the perception of many other complex familiar auditory and visual stimuli.
  • How people perceive speech depends very much on their experience with speech sounds within a language. This experience includes learning which of the many acoustic features in speech tend to co-occur. Because of the role of experience in how we hear speech, it is often difficult to perceive and produce new speech sounds from a second language following experience with a first language.
  • One of the ways that infants learn words is to use their experience with the co-occurrence of speech sounds.
  • Speech sounds are processed in both hemispheres of the brain much as other complex sounds are, until they become part of the linguistic message. Then, speech is further processed in anterior and ventral regions, mostly in the left superior temporal cortex, but also in posterior superior temporal cortex.
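
The two pitch dimensions in the first bullet of this list map neatly onto the MIDI note convention, as this illustrative sketch shows: chroma is the note name, which repeats every octave, while height is the octave it sits in.

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def pitch_description(midi_note):
    """Split a musical pitch into tone chroma (note name, cyclic) and
    tone height (octave), plus its equal-tempered frequency in Hz."""
    chroma = NOTE_NAMES[midi_note % 12]           # repeats every octave
    octave = midi_note // 12 - 1                  # MIDI convention: 60 = C4
    freq = 440.0 * 2 ** ((midi_note - 69) / 12)   # A4 = 440 Hz
    return chroma, octave, freq

print(pitch_description(69))  # ('A', 4, 440.0)
print(pitch_description(57))  # ('A', 3, 220.0): same chroma, lower height
```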

What does vestibular sensation entail? - Chapter 12

  • The vestibular organs are the inner-ear organs that sense head motion and gravity and contribute to our equilibrium sense.
  • The vestibular organs include three semicircular canals (horizontal, anterior, and posterior), which sense angular motion, and two otolith organs (utricle and saccule), which sense both gravity and linear acceleration.
  • Vestibular hair cells are the mechanoreceptors that convert both orientation with respect to gravity and head motion into signals that are sent to the brain.
  • Spatial orientation includes three perceptual modalities: linear motion, angular motion, and tilt. Direction and amplitude are qualities that define each of these three perceptual modalities.
  • We are exquisitely sensitive to head motion even in the dark, recognising the directions of rotation, linear motion, and tilt at very low thresholds.
  • We do not have vestibular perception isolated from the other senses. Spatial orientation perception combines information from multiple sensory systems (i.e., multisensory integration) - with the vestibular and visual systems making predominant contributions.
  • The brain processes the vestibular information to yield perceptions that differ substantially from the signals found on the afferent neurons.
  • In addition to their contributions to spatial orientation perception, the vestibular organs contribute to postural, vestibulo-autonomic, and vestibulo-ocular reflexes. Vestibular-evoked postural reflexes help us maintain balance. Vestibulo-autonomic reflexes help regulate blood flow, especially to the brain. Vestibulo-ocular reflexes are compensatory eye movements that help us see clearly even when the head moves.
  • Vestibular problems are widespread, and treatments are limited. For patients with Ménière's syndrome, for example, the symptoms may become so disabling that these patients accept treatments that yield permanent disability just to be rid of the symptoms.

How do we perceive touch? - Chapter 13

  • The sense of touch produces a number of distinct sensory experiences. Each type of experience is mediated by its own sensory receptor system(s). Touch receptors are responsive not only to pressure, but also to vibration, changes in temperature, and noxious stimulation. The kinaesthetic system, which also contributes to our sense of touch, is further involved in sensing limb position and the movement of our limbs in space. Pleasant or emotional touch is another form of sensory specialisation. 
  • Four classes of pressure-sensitive (mechano-)receptors have been found within hairless skin, and another five classes within hairy skin. The organs used to sense limb position and movement (namely our muscles, tendons, and joints) are more deeply situated within the body. Thermoreceptors respond to changes in skin temperature that occur, for example, when we contact objects that are warmer or cooler than our bodies. Nociceptors signal tissue damage (or its potential) and give rise to sensations of pain. TRP channels have been found that respond to thermal pain and to the 'hot' and 'cold' sensations produced by chemicals such as capsaicin and menthol.
  • The pathways from touch receptors to the brain are complex. Two major pathways have been identified: a fast one (the dorsal column-medial lemniscal pathway), which carries information from mechanoreceptors, and a slower one (the spinothalamic pathway), which carries thermal and nociceptive information. Both enter the dorsal horn of the spinal cord, which itself has dense neural connectivity. The pathways project to the thalamus and from there to the primary somatosensory area, located in the parietal lobe just behind the central sulcus. This area contains several somatotopically organised subregions, in which adjacent areas of the body project to adjacent areas of the brain. The neural organisation of the brain for touch has been shown to be remarkably plastic, even in adults.
  • Downward pathways from the brain play an important role in the perception of pain. According to the gate control theory, signals along these pathways interact at the spinal cord with those from the periphery of the body. Such interactions can block the pain signals that would otherwise be sent forward to the brain. The sensation of pain is further moderated by areas in the cortex.
  • Investigators have measured sensitivity to mechanical force by applying nylon hairs of different diameters to the skin. They determine spatial acuity of the skin by measuring the two-point touch threshold. More precise determination can be made by measuring discrimination of the orientation of gratings applied to the skin. Tactile pressure sensitivity and spatial acuity vary with body site because of varying concentrations of different types of mechanoreceptors; similar (but not identical) variations are found with pain. The minimum depression of the skin needed to feel a stimulus vibrating at a particular rate (frequency) provides a measure of vibration sensitivity. A simulated threshold-measurement procedure follows this list.
  • The sense of touch is intimately related to our ability to perform actions. Signals from the mechanoreceptors are necessary for simple actions such as grasping and lifting an object. Conversely, our own movements determine how touch receptors respond and, hence, which properties of the concrete world we can feel. Touch is better adapted to feeling the material properties of objects than it is to feeling their geometric features (e.g., shape), particularly when an object is large enough to extend beyond the fingertip.
  • Like other sensory modalities, touch gives rise to internal representations of the world, which convey the positions of objects using the body as a spatial reference system. Touch-derived representations are inputs to higher-level functions like allocation of attention and integration with information from other modalities.
  • The psychological study of touch is useful for a number of applications. Creating glass surfaces that 'touch you back' opens the door to new uses for this modality on the phones and tablet computers in common use.
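
One standard way to measure the tactile thresholds described above is an adaptive staircase. The simulation below is a sketch with an invented logistic observer: a simple 1-up/1-down rule lowers the stimulus after each detection and raises it after each miss, homing in on the 50%-detection point.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_trial(stimulus, true_threshold=1.0, slope=8.0):
    """Hypothetical observer whose detection probability rises
    logistically as intensity passes the true threshold."""
    p = 1 / (1 + np.exp(-slope * (stimulus - true_threshold)))
    return rng.random() < p

def staircase(start=2.0, step=0.1, n_trials=200):
    """1-up/1-down staircase: converges on the 50%-detection level."""
    level, track = start, []
    for _ in range(n_trials):
        level += -step if simulate_trial(level) else step
        track.append(level)
    return np.mean(track[n_trials // 2:])   # average the converged half

print(staircase())   # ~1.0: recovers the observer's true threshold
```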

How does taste perception work? - Chapter 14

  • Flavour is produced by retronasal olfaction (olfactory sensations produced when odorants in the mouth are forced up behind the palate into the nose by swallowing). Flavour sensations are perceptually localised to the mouth, even though the retronasal olfactory sensations come from the olfactory receptors.
  • Taste buds are globular clusters of cells (like the segments in an orange). The tips of some of the cells (microvilli) contain sites that interact with taste molecules. Those sites fall into two groups: ion channels that mediate responses to salts and acids, and G protein-coupled receptors that bind to sweet and bitter compounds as well as amino acids.
  • The tongue has a bumpy appearance because of structures called papillae. Filiform papillae (the most numerous) have no taste buds. Taste buds are found in the fungiform papillae (front of the tongue), foliate papillae (rear edges of the tongue), and circumvallate papillae (rear centre of the tongue), as well as on the roof of the mouth.
  • Taste projects ipsilaterally from the tongue to the medulla, thalamus, and cortex. It projects first to the insula in the cortex, and from there to the orbitofrontal cortex, an area where taste can be integrated with other sensory input.
  • Taste and olfaction play very different roles in the perception of foods and beverages. Taste is the true nutritional sense; taste receptors are tuned to molecules that function as important nutrients. Bitter taste is a poison detection system. Sweet taste enables us to respond to the sugars that are biologically useful to us: sucrose, glucose, and fructose. Salty taste enables us to identify sodium, a mineral crucial to survival because of its role in nerve conduction and muscle function. Sour taste permits us to avoid acids in concentrations that might injure tissue.  
  • Umami, the taste produced by monosodium glutamate, has been suggested as a fifth basic taste that detects protein. However, umami lacks one of the most important properties of a basic taste: hardwired affect. Some individuals like umami, but others do not. Taste receptors are not only in the mouth but also in the gut. Digestion breaks down proteins into their constituent amino acids, and the glutamate released stimulates gut glutamate receptors, leading to conditioned preferences for the sensory properties (largely retronasal olfaction) of the foods containing protein.
  • The importance of taste to survival requires that we be able to recognise each taste quality independently, even when present in a mixture. Labelled-line coding preserves this independence.
  • Foods do not taste the same to everyone. We carry 25 genes for bitter taste. The most studied bitter receptor responds to PROP and shows allelic variation in humans, leading to the designations 'PROP nontaster' for those who taste the least bitterness and 'PROP taster' for those who taste the most. In addition, human tongues vary in the number of fungiform papillae (and thus taste buds) they possess. Those with the most taste buds are called supertasters and live in a 'neon' taste world; those with the fewest taste buds live in a 'pastel' taste world. Psychologists discovered these differences by testing people's ability to match sensory intensities of stimuli from different modalities. For example, the bitterness of black coffee matches the pain of a mild headache to nontasters but resembles a severe headache to supertasters. The way foods taste affects palatability, which in turn affects diet. Poor diet contributes to diseases such as cancer and cardiovascular disease.
  • For taste, unlike olfaction, liking and disliking are hardwired: babies are born liking sweet and disliking bitter; salty taste is liked when its receptors mature a few weeks after birth. When we become deficient in salt or sucrose, liking for them increases. Junk foods are constructed to appeal to these preferences. Liking the burn of chili peppers, on the other hand, is acquired and, with the exception of some pets, is essentially limited to humans. Taste buds are surrounded by pain fibres; thus supertasters perceive greater burn from chilis than do nontasters. In addition, fungiform papillae, structures that house taste buds, are innervated by touch fibres; thus supertasters perceive greater touch sensations from fats (e.g., creamy, viscous, thick) in foods.

What is olfaction and how does it work? - Chapter 15

  • Olfaction is one of the two chemical senses; the other is taste. The primary function of olfaction is to detect volatile chemicals in order to respond to the environment most effectively. These responses include safety, social, and navigational dimensions. To be perceived as a scent, a chemical must possess certain physical properties.
  • Contrary to longstanding beliefs that human olfactory capabilities are poor compared to those of other animals, recent research indicates that human odour detection is similar to many other mammalian species. Human olfaction also has some unique physiological properties, one of which is that only 35% to 40% of the genes that code for olfactory receptors in humans are functional. Another unusual feature is that most odorants also stimulate the somatosensory system via the trigeminal nerve, and it is often impossible to distinguish the contribution of olfactory sensation from trigeminal stimulation.
  • Anosmia is the complete absence of a sense of smell. It is most frequently caused by sinus disease, which can usually be treated. However, if anosmia is caused by head trauma, it is likely to be permanent. Anosmia can lead to severe disturbances in an individual's quality of life. Gradual loss of olfaction is a normal consequence of aging; however, sudden olfactory loss can be the first sign of various diseases and should be investigated.
  • The dominant biochemical theory of odour perception - shape-pattern theory - contends that the fit between a molecule and an olfactory receptor (OR) and the spatial and temporal combinatorial code that is then activated determines the scent that is perceived. However, this theory is not universally accepted, and alternate explanations exist (e.g., vibration theory).
  • Researchers have demonstrated close connections between the visual system and olfaction. Two examples are binaral rivalry and the discovery of 'olfactory white'. Recent evidence further suggests that smelling odours that are congruent with what we see alters visual perception.  
  • There is a difference between active sniffing and passive inhalation of odours at both neurological and functional levels. Active sniffing may have therapeutic applications for individuals suffering from extreme physical disabilities. There are also new olfactory therapies to help recovery from anosmia and decrease addictive behaviours.
  • Almost all the odours that we encounter in the real world are mixtures, and we are generally not very good at analysing the discrete chemical components of scent mixtures. Olfaction is thus primarily a synthetic, as opposed to analytical, sense. True odour imagery is also weak (or nonexistent) for most people, but training, as in the case of odour experts (e.g., perfumers), appears to facilitate this ability.
  • The psychophysical study of smell has shown that features of odorant intensity and various cognitive functions are required for odour detection, discrimination, and recognition. Identification differs from odour recognition in that, in the former, one must come up with a name for the olfactory sensation. It can be difficult to name even familiar odours. This state is known as the tip-of-the-nose phenomenon - one of several indications that linguistic processing is disconnected from olfactory experience. However, at least some cultures possess an enhanced verbal connection with odours, which seems to be due to their need to use olfactory information. Regardless, unlike with other sensory experiences, we do not need to access semantic information in order to respond to an odour appropriately, as long as the odour is familiar.
  • Another important discrepancy between the physical and the psychological experience of odours is the difference between receptor adaptation and cognitive habituation. Receptor adaptation occurs after continuous odorant exposure over a number of minutes, can be undone after a few minutes away from the odorant, and is explained by a basic biochemical mechanism. Cognitive habituation occurs after long-term exposure (e.g., in a living or work environment) to an odour, takes weeks away from the odour to undo, and has not been conclusively defined in terms of mechanism. Psychological influences can have strong effects on both perceived odour adaptation and habituation.
  • The most immediate and basic response to an odour is whether we like it or not; this is called hedonic evaluation. Odour hedonics are measured by pleasantness, familiarity, and intensity ratings. Pleasantness and familiarity are linearly related to odour liking; odour intensity has a more complex relationship with hedonic perception. Substantial evidence suggests that hedonic responses to odours are learned, even for so-called stenches, though there are certain caveats. That we learn to like or dislike various odours rather than being born with hardwired responses is evolutionarily adaptive for generalist species such as humans. The key to olfactory associative learning is the emotional value of the context in which the odour is first encountered. If the emotional context is good, the odour will be liked; if it is bad, the odour will be disliked. Previously acquired emotional associations with odours also underlie 'aromatherapy' effects. Emotional potency distinguishes odour-evoked memories from memories triggered by other sensory cues. The neuroanatomy of the olfactory and limbic systems and their neuroevolutionary development illustrate how emotional processing and olfactory processing are uniquely and intimately interrelated.
  • Pheromones are chemicals emitted by individuals that affect the physiology and/or behaviour of other members of the same species; they may or may not have any smell. In all mammals that have been shown to use pheromones for communication, detection is mediated through the vomeronasal organ (VNO) and processed by the accessory olfactory bulb (AOB). Humans do not possess a functional VNO or AOB, and empirical evidence for human pheromones is lacking. Nonetheless, human chemosignals that are processed through the olfactory system appear to have some influence on hormonal and socio-sexual responses.