Psychology and behavioral sciences - Theme
When cognitive psychology first emerged, it was defined as the scientific study of acquiring, retaining, and using knowledge. Cognitive psychologists addressed questions such as "how do people acquire knowledge?" and "how do people use their knowledge?".
Topics such as maintaining attention (concentration) and making decisions are also important within cognitive psychology. Cognitive psychology, however, does not only study our intellectual functioning; it is much broader, because many of our actions, thoughts, and feelings depend on our cognition (knowledge). Most (and perhaps all) experiences in the world depend on the knowledge you have and use. Even when trying to understand a simple story, you rely on the knowledge you have. So even activities that are not intellectual in nature are important within cognitive psychology.
However, knowledge is only useful if it is stored in memory: there is no use in learning something if you cannot remember the information later. That is why memory is also a very important topic within cognitive psychology.
The modern form of cognitive psychology is around fifty years old. Although this is a short history, this field of psychology has had an enormous influence, often referred to as the "cognitive revolution". This revolution introduced a new research method, which also influenced other research areas.
There were two important ideas during the cognitive revolution. The first was that mental processes cannot be studied directly. The second was that mental processes must nevertheless be studied if we want to understand behavior.
Before the emergence of the cognitive revolution, there were other traditions within the field of psychology. The first tradition was that of Wilhelm Wundt and his student Edward Bradford Titchener. They argued that researchers in the field of psychology should focus primarily on conscious mental events such as feelings, thoughts, perceptions, and memories. They also stated that no one but the individual could experience his or her mental processes. That is why Wundt and Titchener proposed the use of introspection: 'looking inside' to observe the content of the mental events of individuals.
However, not everyone could apply introspection: people had to be trained to do it. As part of this training, they studied a vocabulary of words with which they could describe their thoughts and feelings. In addition, they were instructed to report objectively what they felt or thought, without adding any interpretations.
This way of doing research was popular for a number of years. Nevertheless, there were several problems with introspection. The first is that people are not conscious of all of their mental processes, so it is impossible for them to report everything that takes place in their mind. The use of introspection as a method to study mental processes is therefore limited. It was also impossible to call this research method scientific (empirical), because the self-reports could not be independently confirmed or rejected through testing. For these reasons, introspection was eventually rejected as a valid research method.
As said before, introspection as a research method was rejected. Instead, a new tradition emerged called behaviorism. Behaviorists wanted to study only objective data: observable behavior and external stimuli (noise, light). These data can be used to examine how behavior changes over time, and thus to chart someone's learning history.
Things that cannot be directly observed, such as someone's beliefs, wishes, goals, preferences, and experiences, are not part of the field of behaviorism. The explanation for this is that these things can only be investigated with the help of introspection, which was no longer seen as a suitable method for doing scientific research.
Behaviorism was very successful and yielded a lot of knowledge, among other things about how behavior changes in response to the stimuli one receives (rewards, punishments). However, around 1950 psychologists discovered that many behaviors could not be explained in terms of stimuli and observable behavior alone. To understand why people behave in a certain way, it matters how they interpret and understand the stimuli they are offered before the behavior is exhibited. Behaviorism offered no way to study these interpretations, so it could not address the underlying reasons or motives of the observable behavior, and therefore could not fully explain that behavior.
The discovery that not all behavior could be explained by observable behavior and external stimuli created a problem: another research method was needed to study mental processes, and, as said before, introspection was not suitable. A solution was found in the work of the philosopher Immanuel Kant: the transcendental method. Researchers first observe the effects or consequences of processes and then ask themselves how these effects (behaviors) came about; in other words, what are the causes of the observable behavior? Physicists use this method too when they explain visible effects in terms of invisible causes.
The transcendental method is a common research method in contemporary psychology. This means that mental processes are not studied directly, but indirectly. The reasoning behind this method is that mental processes are invisible, but their effects or consequences (behavior) are observable. Examples are accuracy measures, error measures, and response times. Using such measures, hypotheses can be formulated and tested about what the mental processes must have been that led to specific effects (behavior).
The greatest effect the cognitive revolution has had on psychology is that contemporary researchers constantly re-evaluate the best way to interpret their data. This differs from the situation before the revolution, when psychologists were mainly concerned with finding an explanation that matched their data, without considering other possibilities (other processes) that might have yielded the same data (behavior).
There were several causes for the emergence of the cognitive revolution. The first was the work of Edward Tolman, a behaviorist. Before Tolman, the main idea in behaviorism was that 'learning' meant an observable change in behavior; without such a change, there was no learning. Tolman rejected this idea. According to him, learning was something more abstract: the acquisition of knowledge. He demonstrated this with an experiment in which he placed rats in a maze for eleven days. For the first ten days they received no food, and there were no observable changes in their behavior. According to traditional behaviorists, this meant that the rats had learned nothing in those ten days. However, the rats were actually learning: they were learning the layout of the maze. The proof came on the eleventh day, when food was placed somewhere in the maze and the rats immediately ran to that specific location.
What Tolman's work demonstrated was that the rats probably made a "cognitive map" of the maze. The reason why this was not visible was that before the eleventh day, the rats had no motivation to show that they knew the maze. However, when food became available, they suddenly had a goal and thus motivation to show that they had created a map of the maze. The conclusion of this experiment is that even rats have mental processes (they get to know the maze and develop a cognitive map) that can explain their behavior (they run to a specific location; the food).
Another cause of the emergence of the cognitive revolution was strong criticism of behaviorism. An example is the criticism addressed at B.F. Skinner, a radical behaviorist, who claimed that language is learned through behavior (observable data) and rewards (external stimuli). This idea was strongly rejected by the linguist Noam Chomsky, who argued that Skinner was incorrect and that another explanation was needed to understand how people acquire language.
There were also European influences on the emergence of the cognitive revolution, such as Gestalt psychology in Berlin. The Gestalt psychologists argued that behaviors, ideas, and perceptions cannot be seen as independent of each other; instead, these elements must be viewed as a whole. An individual, combining these elements, forms his or her own unique experience. This idea became important within cognitive psychology.
Another important European influence on the cognitive revolution was the British psychologist Frederic Bartlett. He was not a Gestalt psychologist, but he too emphasized that everyone forms their own experience. He also argued that people place their experiences in a mental framework or 'schema' that ultimately determines how they interpret those experiences.
Around 1950, new perspectives on mental processes emerged, made possible in part by rapid developments in computer technology. Psychologists were impressed by the possibilities of computers and saw a parallel between computer operations and mental processes. It was also expected at the time that computers would soon become truly intelligent, which led to the emergence of the field of artificial intelligence (AI).
Donald Broadbent was one of the first researchers to use computers to explain human cognition.
There are various research methods used within cognitive psychology. First, we can look at how well people perform on a task (for example: how well does someone remember these words?) and how accurate someone is (does someone report the word 'banana' when it was not actually on the list?). Second, we can look at how performance changes when the input changes (instead of words, someone now has to remember a story). In addition, the effectiveness of learning strategies can be determined.
Another common way of doing research within cognitive psychology is to look at response times (RT). This is the time it takes people to perform a certain response (such as clicking, naming, or pressing the spacebar) after seeing (or hearing) a stimulus. For example, if you see a picture and have to click left or right, then the time from the moment you see the picture to the moment you respond (click) is your response time. Response times can tell a lot about mental processes: the longer someone's response time, the longer that person needed to process the stimulus (a mental process).
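To make the logic of RT research concrete, here is a minimal sketch of how a response time could be measured in a simple, hypothetical console task. Real experiments use dedicated software with more precise timing; the task and numbers here are illustrative assumptions:

```python
import random
import time

# Minimal sketch of response-time (RT) trials: a stimulus appears after a
# random foreperiod, and RT is the interval between stimulus onset and the
# participant's response.

def run_trial() -> float:
    """Present one 'stimulus' and return the response time in seconds."""
    time.sleep(random.uniform(1.0, 3.0))   # random wait so onset is unpredictable
    print("*** STIMULUS ***")
    start = time.perf_counter()            # stimulus onset
    input("Press Enter as fast as you can: ")
    return time.perf_counter() - start     # RT = response time minus onset time

if __name__ == "__main__":
    rts = [run_trial() for _ in range(3)]
    print(f"Mean RT: {sum(rts) / len(rts) * 1000:.0f} ms")
    # Longer RTs are interpreted as more, or slower, mental processing.
```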
In addition, information can be obtained from research in cognitive neuroscience, the field concerned with understanding mental functioning through the study of the brain and the nervous system. Observations from clinical neuropsychology, which deals with the way brain dysfunction affects performance and behavior, can also be used.
Capgras syndrome is a rare syndrome that can occur after brain damage. Someone suffering from Capgras syndrome is able to recognize significant others in his or her life (such as parents, friends, or a partner), but believes that these people are not the real people. For example, patients may think their partner has been abducted and that someone else is pretending to be the loved one.
According to some researchers, Capgras syndrome is caused by a disruption of the facial processing systems in the brain. These consist of two separate systems: a cognitive and an emotional system. The cognitive system is involved in recognizing people ("this is what my father looks like") and the emotional system is important for the emotions one experiences when seeing familiar faces ("I feel happy when I look at my mother"). In patients with Capgras syndrome there is probably a disruption in the emotional processing system. As a result, these patients are able to recognize significant others (they do see that someone looks like their father, mother, or partner), but they do not experience the (positive) emotional response that should accompany this recognition.
Using neuroimaging techniques, it has been determined that Capgras syndrome can arise from damage to various areas of the brain. First, the syndrome can occur due to damage in the temporal lobe. Such damage also disrupts the functioning of the amygdala, an almond-shaped structure that enables people with undamaged ('normal') brains to recognize danger. The amygdala is also important for recognizing positive stimuli. Damage to this area therefore means that patients do not experience the (positive) emotional response that one normally has when seeing familiar others.
Patients with Capgras syndrome also have damage in the frontal lobe, specifically in the prefrontal cortex. The prefrontal cortex is active when people are thinking carefully or analyzing situations. When someone is dreaming, for example, the prefrontal cortex is not very active, which is why dreams can sometimes be bizarre: their content is not being analyzed by the prefrontal cortex. For people with Capgras syndrome, damage to the prefrontal cortex may therefore mean that they are less able to distinguish between reality (real) and fiction (fake). This can lead to bizarre ideas, such as the belief that someone is pretending to be a patient's acquaintance. The prefrontal cortex is also less active during the hallucinations that some patients with schizophrenia experience.
Capgras syndrome teaches us a lot about the brain, for example that several different brain areas are needed even for seemingly simple accomplishments, such as recognizing acquaintances. These areas normally work together; when they do not communicate efficiently, things go wrong. This applies to many functions, and Capgras syndrome is a good example of the consequences.
The human brain weighs around 1.4 kilograms and is about the size of a small melon. There are approximately 86 billion neurons in the brain, and each neuron is connected to roughly 10,000 other cells, which means there are approximately 860 trillion connections. In addition, the brain contains a large number of glial cells.
The consequences of brain damage depend heavily on the area where the damage is located. A well-known example is the case of Phineas Gage, who suffered damage to specific areas of the prefrontal cortex. This led to major changes in his personality and emotions, demonstrating that the prefrontal cortex is important for these functions. Another example comes from Paul Broca, who in 1861 discovered that damage to a region of the left hemisphere leads to disturbances in language. Édouard Claparède discovered that patients with damage in yet other areas mainly experience problems with memory. All these discoveries show that different areas of the brain have different functions. To study the brain, it is therefore important to know its anatomy and the specific functions of each area.
The human brain is generally divided into three parts: the forebrain, the midbrain, and the hindbrain. The hindbrain is located directly above the spinal cord and contains various structures that are essential for controlling vital functions, including heartbeat, breathing, balance, posture, and alertness. The largest part of the hindbrain, the cerebellum, has for many years been regarded as the center for the coordination of physical movement and balance, although recent research shows that this area has other functions as well. The midbrain has various functions, such as coordinating movement, processing auditory information, and regulating pain experiences. For cognitive psychologists, the most interesting part of the brain is the forebrain, and in particular the cerebral cortex. The outer layer of the brain contains convolutions, a kind of folds or wrinkles. The grooves (fissures) between these folds divide the brain into different parts. The longitudinal fissure is a deep groove that separates the left hemisphere from the right hemisphere. Other grooves divide the cortex into four lobes: the frontal lobe, the parietal lobe, the temporal lobe, and the occipital lobe.
The so-called subcortical areas lie below the cortex. One of these structures, the thalamus, plays an important role in transmitting almost all sensory information to different areas of the brain. Located under the thalamus is the hypothalamus, a structure that is important for regulating motivation and behaviors such as eating, drinking, and sexual activity. There are various structures around the thalamus and hypothalamus that are part of the limbic system, such as the amygdala and the hippocampus, which are both involved in learning from experiences and memory. In addition, the amygdala plays an important role in emotional processing.
The brain can thus be separated into a left and a right hemisphere. Almost all parts of the brain come in pairs, with a similar shape and pattern of connections in the two hemispheres but (small) differences in function. These functions are generally integrated thanks to commissures, thick bundles of fibers that transmit information between the two hemispheres. The best-known commissure is the corpus callosum, the tract that connects the left and right hemispheres.
Sometimes it is necessary to sever the corpus callosum. For a long time this was done in patients with severe epilepsy, when it was the only treatment that helped. Patients who underwent this surgery are called "split-brain" patients. Research with these patients has yielded a great deal of knowledge about the two hemispheres. One example is that certain aspects of language are processed in the left hemisphere and other aspects in the right hemisphere: there are language processing systems in both hemispheres, but they differ in function.
We can learn more about the different areas of the brain in several ways. For example, we can use knowledge from clinical neuropsychology, which tries to learn about the functioning of healthy brains by studying damaged brains. Here it is important to know that the consequences (symptoms) of brain damage depend strongly on the brain area in which the damage is located. For example, a lesion (damage) in the hippocampus leads to memory problems, but not to problems with language. A lesion in the occipital cortex, on the other hand, leads to problems with vision, but not with the other senses (hearing, smell, etc.).
Second, different neuroimaging methods can be used, such as computed tomography (CT) scans, PET (positron emission tomography) scans, MRI (magnetic resonance imaging) and fMRI (functional magnetic resonance imaging). There is a difference between structural imaging (CT scans) and functional imaging (PET scans). Structural imaging shows the structure (shape, size and position) of the brain areas. Functional imaging examines brain activity (functions of the brain).
MRI scans provide a detailed image of the brain. Standard MRI scans are structural, but fMRI is functional (hence the 'f'). During fMRI scans, the amount of oxygenated blood flowing to a brain site is measured; the degree of oxygenation indicates the degree of brain activity.
The results obtained with structural imaging are very stable (and therefore reliable) and only change when the structure of the brain changes (due to brain damage or a tumor, for example). The results obtained through the use of PET or fMRI scans, on the other hand, are very variable, because the brain activity depends on what an individual is doing at that time. PET and fMRI scans therefore measure activities in the present. For example, when someone is solving a puzzle, an fMRI scan can be used to see which areas of the brain are currently active and therefore determine what the function of a specific brain area is.
Neurons communicate to a large extent through chemical signals called neurotransmitters. Communication consists of two processes: communication within and between neurons. Within a neuron, a signal travels electrically from one end of the cell to the other; this electrical activity forms the basis for electroencephalography (EEG). With EEG, changes in voltage are measured by means of electrodes placed on the scalp. The changes in voltage before, during, and after the presentation of a stimulus are called event-related potentials.
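As an illustration of how event-related potentials are obtained in practice, here is a small sketch using synthetic data: many EEG epochs are time-locked to stimulus onset and averaged, so random background activity cancels out while the stimulus-evoked voltage change remains. All numbers are invented for the demonstration:

```python
import numpy as np

# Sketch of ERP extraction: average many EEG epochs time-locked to
# stimulus onset. Random noise averages toward zero; the evoked
# response survives. Data here are synthetic.

rng = np.random.default_rng(0)
fs = 250                         # sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs) # epoch: 200 ms before to 800 ms after onset

# Simulated evoked response: a positive deflection peaking around 300 ms
evoked = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

n_trials = 200
epochs = evoked + 20e-6 * rng.standard_normal((n_trials, t.size))  # noisy trials

baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)  # pre-stimulus baseline
erp = (epochs - baseline).mean(axis=0)                   # average over trials

peak_ms = t[np.argmax(erp)] * 1000
print(f"ERP peaks at ~{peak_ms:.0f} ms after stimulus onset")
```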
Researchers also use techniques to manipulate the functions of the brain, for example by introducing chemicals that interfere with or promote the activity of specific neurotransmitters. Other studies use electrical input to stimulate specific sites in the brain. There are also studies involving direct manipulation of genes, in order to see how adding or removing genetic material affects the workings of the brain; these are almost always done with mice and rats.
With these studies ethical considerations are especially important.
All of these techniques have strengths as well as weaknesses. For example, CT and MRI scans reveal the shape and size of brain structures, but provide no information about the activity levels within those regions. PET and fMRI scans tell a lot about brain activity, such as where activity occurs, but are less precise about when it took place (the activity is summarized over a few seconds). EEG data, on the other hand, are more precise in timing, but less precise in determining where the activity took place.
Because the techniques complement each other in this way, researchers often use a combination of them.
A limitation of brain scans is that their results reveal correlations between brain activity and behavior, but say nothing about causality.
Another technique that can be used to map brain functions is transcranial magnetic stimulation (TMS). This is a technique that temporarily disrupts activity in a brain area. By looking at which functions (behaviors) change, a lot can be discovered about the brain area.
Determining the functions of brain regions is also called localization of function.
The cerebral cortex is the largest part of the brain and it is the thin layer over the cerebrum. The cerebral cortex is very important for researchers, because a lot of information processing takes place here. The cerebral cortex has multiple areas, each with its own function: motor areas, sensory areas and association areas.
There are specific areas in the cerebral cortex that serve as "departure points", where signals leave the cortex to control the muscles (primary motor projection areas). Other areas serve as "arrival points", where sensory information arrives (primary sensory projection areas). These connections are largely contralateral: each hemisphere controls, and receives input from, the opposite side of the body.
Information from the senses is transmitted to the somatosensory area, located in the parietal lobe. The primary auditory cortex is located in the temporal lobe, and the primary visual cortex in the occipital lobe. Although the sensory areas differ in various respects, there are a number of important similarities. First, all of these areas contain a "map" of the sensory environment. Second, the amount of cortical space assigned to a body part or region is determined not by its anatomical size, but by its functional importance or sensitivity. Third, the mapping in these areas is contralateral with regard to body parts or physical space.
The remaining 75 percent of the cerebral cortex is called the association cortex. Today this term is used less often, because this part of the cortex can be further subdivided on the basis of function and anatomy. The existence of such subparts is illustrated by the various disorders caused by damage to specific locations, such as apraxia, agnosia, aphasia, and neglect. Damage to the prefrontal areas leads to a variety of problems.
The human brain contains roughly 86 billion neurons and even more glial cells. Glial cells have different functions: they support the development of the nervous system, help repair the nervous system when there is damage, and provide neurons with energy. In addition, certain specialized glial cells form an insulating layer around parts of neurons, so that neurons can transmit information faster. These layers are called myelin sheaths.
The cell body of a neuron contains the nucleus and all the elements necessary for the cell's normal metabolic activities. The dendrites receive input from other neurons, whose axons deliver chemical signals (neurotransmitters) from one neuron to the next. If a neuron is stimulated enough, it releases neurotransmitters itself. The part of the neuron that releases neurotransmitters is called the presynaptic membrane, and the receiving part of the next neuron the postsynaptic membrane. When the incoming signal reaches a certain limit (threshold), the cell starts firing: it produces an action potential, which travels along the axon and causes the release of neurotransmitters into the synaptic cleft.
There are a number of other important factors that influence the transfer of information between neurons. First, there are two distinct information flows: a chemical and an electrical one. In addition, the initial responses of the postsynaptic neuron may differ in size. It is important to keep in mind, however, that the outgoing signal does not differ in strength: there is an all-or-nothing law. When a signal (an action potential) is sent, it is sent completely, and the process cannot be reversed. An analogy is a car horn: pressing harder on the horn does not produce a louder sound. The same holds for action potentials: a stronger stimulus does not produce a stronger action potential. A neuron fires, or it does not. There are also many different neurotransmitters, with a distinction between inhibitory (dampening) and excitatory (enhancing) neurotransmitters. The synaptic gap is also important. This gap is very small, yet it slows the transmission of neuronal signals. This has advantages, however: it allows a neuron to compare and integrate information from many different sources. In addition, experiences can change the strength of a synaptic connection, which is seen as the biological basis of learning.
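The threshold and the all-or-nothing law can be illustrated with a toy "leaky integrate-and-fire" model, a standard simplification rather than a biologically detailed account. Note how a stronger input makes the cell fire more often, never "harder":

```python
# Toy leaky integrate-and-fire neuron: inputs accumulate in the membrane
# potential, and once the threshold is crossed the cell fires a full
# spike (all-or-nothing), no matter how strong the input was.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes for a sequence of input strengths."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x      # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)                  # always a full spike, never a partial one
            potential = 0.0                   # reset after firing
        else:
            spikes.append(0)
    return spikes

weak   = simulate([0.3] * 10)   # weak input: the cell fires less often
strong = simulate([0.8] * 10)   # strong input: the cell fires MORE OFTEN, not harder
print("weak:  ", weak)
print("strong:", strong)
```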
It is a remarkable idea that neurons represent specific ideas or events. For example, if you are listening to your favorite song, how do neurons represent or encode this information? There are several possible answers to this question. The first is that a specific group of neurons represent your "favorite song". This would mean that if you listen to your favorite song, these specific neurons will become active. Another answer to the question is that a general pattern of neural activity is triggered while listening to your favorite song. That would look something like this: Neuron X fires, when Neuron Y is weak and Neuron Z does not fire at all. Both answers are correct.
Vision, or visual perception, starts with light. Light is produced by various sources in our environment (the sun, lamps, candles) and reflects off objects. Part of this reflected light reaches the eye, passes through the cornea and the lens, and then hits the retina.
The retina is a piece of tissue that is very sensitive to incoming light and is located at the back of the eyeball. The cornea and lens focus the incoming light, resulting in a sharp image on the retina. Around the lens are muscles that adjust its focus: if the lens muscles contract, vision is better for nearby objects, and if they relax, focus is better for objects farther away. There are two types of photoreceptors within the retina: neural cells that respond directly to incoming light. One type is the rods. Rods are color-blind but sensitive to low levels of light, and therefore important for distinguishing different light intensities. The other type is the cones. Cones are less sensitive than rods and need more light to function, but they are sensitive to differences in color. There are three types of cones, each with a different sensitivity to wavelengths (colors). In addition to color, the cones are important for discerning fine detail, also referred to as acuity. This explains why we point our eyes toward a target whenever we want to perceive it in detail: by doing so, the image of the target falls onto the fovea, the center of the retina. This is where most cones are located and where vision is therefore sharpest. The rods are concentrated more toward the periphery of the retina, which in turn explains why we can see very dim lights out of the corner of our eyes.
Rods and cones do not report directly to the cortex. Instead, the photoreceptors stimulate bipolar cells, which in turn excite ganglion cells. The ganglion cells are spread uniformly across the entire retina, but all of their axons converge to form a bundle of nerve fibers called the optic nerve. This nerve carries information to different parts of the brain. The information is first sent to an area in the thalamus called the lateral geniculate nucleus (LGN); from there, it is transmitted to the occipital lobe, the primary projection area for vision.
The optic nerve is not simply a cable that passively connects the eye to the brain: the cells that link the retina to the brain are already analyzing the visual input. One example is 'lateral inhibition', whereby activated cells inhibit their neighboring (lateral) cells. Lateral inhibition leads to edge enhancement: neurons in the visual system respond more strongly to the edges of objects, so the contrasts at the edges of a figure are perceived more strongly. This is important because the shape of an object matters for recognizing it.
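A small numerical sketch can show how lateral inhibition produces edge enhancement. Each cell's response is taken to be its own input minus a fraction of its neighbors' input; the inhibition strength and stimulus values are illustrative assumptions:

```python
import numpy as np

# One-dimensional lateral inhibition: each cell's response is its own
# input minus a fraction of its two neighbors' inputs. Cells just on the
# bright side of an edge receive less inhibition than other bright cells,
# and cells just on the dark side receive more than other dark cells,
# so the contrast at the edge is exaggerated (cf. Mach bands).

stimulus = np.array([1, 1, 1, 1, 1, 5, 5, 5, 5, 5], dtype=float)  # a light edge

k = 0.2                                                  # inhibition per neighbor
left = np.roll(stimulus, 1);   left[0] = stimulus[0]     # pad the borders
right = np.roll(stimulus, -1); right[-1] = stimulus[-1]

response = stimulus - k * (left + right)
print(response)
# Uniform regions give flat responses (0.6 and 3.0 here), but the two
# cells flanking the edge undershoot and overshoot (-0.2 and 3.8).
```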
Single-cell recording is a technique that has yielded a great deal of knowledge about the visual system. It is a procedure in which researchers record the electrical changes within a single neuron from moment to moment, looking mainly at the neuron's firing frequency. In this way, the receptive field of a cell can be determined.
An example of using a single cell recording is as follows. An animal is paralyzed and then electrodes are placed just outside a neuron within the optic nerve of the animal. The animal then looks at a computer screen on which different forms are displayed (circles, lines, angles). Researchers can then look at which specific forms cause specific neurons to fire.
Different cells have different firing patterns. Some cells (center-surround cells) fire maximally when light falls in a specific region of the visual field. Other cells (edge detectors) fire only when a figure with a specific edge comes into view: some fire when something with a horizontal edge is detected, others at vertical edges. It is important to remember that cells have preferences. They also fire at multiple types of edges, but respond most strongly to their preferred edge; the farther the input is from the cell's preference, the weaker the firing. There are also cells (motion detectors) that fire strongly when an object is moving, and these too have preferences for specific movements.
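The idea of a "preferred" edge with graded fall-off can be captured in a tuning curve. The sketch below uses a Gaussian shape, a common idealization rather than a measured profile; the preferred orientation, width, and peak rate are illustrative assumptions:

```python
import math

# Sketch of an orientation tuning curve: an edge-detector cell fires most
# for its preferred orientation and progressively less as the edge
# deviates from that preference. The Gaussian shape is an idealization.

def firing_rate(orientation_deg, preferred_deg=90.0, width_deg=20.0, max_rate=50.0):
    """Spikes per second for an edge at a given orientation (degrees)."""
    diff = orientation_deg - preferred_deg
    return max_rate * math.exp(-(diff ** 2) / (2 * width_deg ** 2))

for angle in (90, 70, 45, 0):
    print(f"edge at {angle:3d} deg -> {firing_rate(angle):5.1f} spikes/s")
# 90 deg (the preference) gives the maximum; responses fall off smoothly.
```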
Our visual perception is based on a "divide and conquer" principle: different types of cells, located in different parts of the cortex, specialize in specific types of analysis. This is especially evident in Area V1, located in the occipital lobe, where the axons from the LGN first arrive. Together, the cells ensure that each part of the stimulus is perceived. Each area of the brain (including areas within a single lobe) has a specific function: in Area V4 the cells are specialized in processing color and aspects of form, and in Area MT in detecting motion. These brain areas are all active at the same time, which is why this kind of processing is called parallel processing: a system in which many different steps go on simultaneously.
The cones and rods are therefore also an example of parallel processing, because both receptors function simultaneously. This is also true for the optic nerve. Here you have two types of cells: P cells and M cells. The P cells provide the input to the parvocellular cells in the LGN and are specialized in processing shapes and forms. The M cells provide the input to the magnocellular cells in the LGN and are specialized in processing motion and depth.
A certain part of the activation in the occipital lobe also leads to activation in the cortex of the temporal lobe. This pathway is called the "what system", because it is important for recognizing objects. In addition, activation in the occipital lobe leads to activation in the parietal lobe; this pathway is called the "where system", because it enables one to determine where an object is located.
So several areas of the brain are active at the same time, each processing different elements of the visual world. The question is how all these different elements lead to one coherent picture. Researchers call this the 'binding problem'.
The visual system registers colors, but also shapes. For example, if you look at a coffee mug, you see both colors and shapes. How is it then possible that you see the coffee mug as a coffee mug, and not as distinct shapes and colors? There is a lot of discussion about this topic, but roughly there are three answers:
Spatial position: The part of the brain that identifies the shape is a different part than the part that processes color and movement. What all areas of the brain have in common is that each keeps track of which forms are where, which colors are where and which motion patterns are where.
Neural synchronization: Our brains use a special rhythm to identify which sensory elements belong where. There is evidence that by means of neural synchronization the brain merges different attributes into one object. This happens because different neurons in the different areas "fire" at the same time. Firing together creates synchronization, which means that various aspects are registered together and observed together.
Conjunction errors: These occur when the various elements of a visual display are detected correctly, but mistakes are made in determining how these features are bound together (conjoined). Individuals who have problems maintaining and sustaining attention (concentration) are particularly impaired in tasks in which they must judge how features are conjoined.
Gestalt psychologists state that the whole is different from the sum of its parts. An example is the Necker cube, an ambiguous figure that can be perceived one way at one moment and a different way at another. These different perceptions do not come from the figure itself: the lines drawn on the paper are completely neutral and do not dictate how you should interpret them, not even which figure/ground organization you should see. This shows that your perception of something need not be the same as what is on the paper.
In the examples that have just been given, the figure remains neutral or constant regardless of how you interpret it. All changes in perception are therefore due to your interpretation and not due to a change in the figure. Our interpretation process is so fast that we do not realize that we created an interpretation. All these studies tell us that our perception goes beyond the given information. Gestalt psychologists state that we perceive through two simple principles, namely the principle of proximity and the principle of similarity. People perceive objects that are close together (proximity) or similar (similarity) as being one object.
Two broad and important topics in recognizing objects are the detection of the elements of a stimulus and the way these elements are organized. You first receive information about a stimulus (input); once this information has been collected, it is analyzed, and the form, location, and content are determined. Perception can thus be divided into two steps: information collection and interpretation (organization). Although this sounds logical, this order is not strictly followed in many situations: people often begin interpreting before all the information has been collected. In other words, thoughts, expectations, or the context sometimes influence perception.
Another crucial aspect of perception is perceptual constancy. This means that we perceive the constant properties of objects, even though the sensory information we receive about those properties changes whenever our viewing circumstances change. For example, an object far away projects a small image on your retina, and the image becomes larger as the object gets closer. Although the object seems to change size, the change is due to the distance, not to the object itself (size constancy). The same applies to the perception of shapes (shape constancy): we know what shape an object has even when we see it from a different angle, just as we know how bright an object is even when we see it in different lighting (brightness constancy).
These forms of constancy are achieved by focusing on the relationships between objects and their background, rather than on the objects themselves. This is why it is easier to estimate the size of something when you compare it with other objects. The German physicist Hermann von Helmholtz noted that there is an inverse relationship between distance and the size of a figure on the retina: if an object is twice as far from the viewer, its retinal image is half as large; if it is three times as far away, the image is three times smaller. Because we are not aware of making this calculation, Helmholtz called the process 'unconscious inference'.
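In compact form, with S the physical size of the object, D the viewing distance, and the visual angle θ standing in for retinal image size (the standard small-angle approximation of the relationship just described):

```latex
\theta \approx \frac{S}{D},
\qquad\text{so}\qquad
\frac{\theta_{\text{at }2D}}{\theta_{\text{at }D}} = \frac{S/(2D)}{S/D} = \frac{1}{2}.
```

Doubling the distance halves the visual angle, which is exactly the inverse relationship described above.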
The process of comparing information is crucial to achieving constancy. In addition, it shows that people do not just receive information, but that the information is first interpreted. Illusions are the result of misinterpretation (of depth).
Depth perception depends on distance cues: characteristics of a stimulus that indicate where an object is located. Our two eyes receive slightly different views of the world; the difference between what the two eyes see is called binocular disparity. Even without other clues about an object's distance, this disparity allows us to perceive a good deal of depth. We can also perceive depth with one eye closed, thanks to monocular distance cues. One such cue comes from the adjustment of the lens: the lens muscles tighten when an object is close and relax when it is far away. Other cues are pictorial cues, cues that are visible even in pictures, such as linear perspective (two parallel lines seem to converge as they recede into the distance) and the texture gradient (texture appears finer the farther away it is). The movement of objects can also serve as a cue: objects in the distance seem to move more slowly than objects nearby, which is called motion parallax. Finally, as you approach an object, its image expands, which is called optic flow.
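The geometric reason why binocular disparity signals depth can be sketched with a simplified two-pinhole-camera model of the eyes. The formula and the numbers (eye separation, focal length) are standard stereo geometry used here as an illustrative assumption, not physiological detail:

```python
# Simplified stereo geometry: with eye separation B and focal length f,
# a point at depth Z produces a disparity d = f * B / Z between the two
# retinal images, so nearer objects produce larger disparities.

def depth_from_disparity(disparity_m: float, baseline_m: float = 0.065,
                         focal_m: float = 0.017) -> float:
    """Depth (m) from disparity (m); eye separation ~6.5 cm, focal ~17 mm."""
    return focal_m * baseline_m / disparity_m

for d in (1e-3, 1e-4, 1e-5):
    print(f"disparity {d:.0e} m -> depth {depth_from_disparity(d):7.2f} m")
# Larger disparities correspond to nearer objects; as disparity shrinks
# toward zero, the cue carries less and less information about distance.
```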
The diversity of cues lets us perceive distance in many different circumstances.
We perceive the world around us through different modalities, although vision is the most important one for people. How do we perceive and recognize the objects we see every day? First, through form perception, the process that determines the basic shape and size of an object. Second, through object recognition, the process of identifying what an object is.
Object recognition can be influenced by the object (the stimulus) itself. The recognition process starting from the stimulus or the object itself is called bottom-up processing (from the stimuli to the brain). The effect of context, however, shows that recognition is also influenced by one's knowledge and expectations. This is called top-down processing (from the brain or mind to the stimuli).
The process of object recognition starts with the observation of simple visual features (shape, size). However, the Gestalt psychologists noted in the twentieth century that our perception of the visual world differs from the objective stimulus. This can be seen when observing ambiguous figures such as the Necker cube: the input (the object, the stimulus) is the same for everyone, but the perception differs between people. Many other, everyday stimuli also depend on our interpretation, although this is less noticeable because interpretation is a very fast, almost automatic process. Our perception is guided by several simple organizing principles: similarity, proximity, good continuation, closure, and simplicity. Everyone uses these principles when perceiving, which is why people often perceive the world in the same way.
In many studies, participants are briefly presented with stimuli, nowadays usually on a computer. Each stimulus is followed by a mask, which prevents the participants from continuing to process the stimulus. Whether people can recognize briefly presented stimuli depends on several things. First, it matters how familiar people are with the stimuli, in other words, how often they have seen them before (frequency). Another factor is recency: the more recently participants have seen a word, the faster they recognize it, an effect called 'priming'. If a word or stimulus is shown twice, it is recognized faster on the second presentation; this is repetition priming. It is also easier to recognize letters within whole words than as individual letters. This is called the word superiority effect (WSE) and is usually demonstrated with a "two-alternative forced-choice" task. Nonwords like FIKE or LAFE show a related effect: these letter strings look like English words and are easy to pronounce, and they produce a context effect that promotes recognition. This is called well-formedness: the more well-formed (English-like) the string, the easier it is to recognize.
To recognize a word, you use various letter detectors, which in turn activate word detectors: together they form a network of detectors. This network is organized in layers, in which each subsequent layer detects larger, more complex units. The information flow starts with an analysis of the word's features, an example of bottom-up processing, which is why such a model is referred to as a 'feature network'. At any given moment, a detector has a certain activation level. If this level reaches a certain threshold, the detector "fires", meaning that it sends a signal to other detectors. This resembles the behavior of neurons, as described in Chapter 2; however, a detector is not the same as a neuron, but is instead realized by a combination of neural tissue. Some detectors are activated more easily than others: the activation level of a detector depends on recency (having seen something recently) and frequency (having seen something often).
A weak signal is enough to trigger a well-primed detector. For example, suppose the letter pair AT is presented very briefly and the participant picks up only part of the letter A, such as the horizontal line, which by itself is not enough to distinguish among A, F, and H. If the participant did see the second letter, the T, it is much more likely that he or she will perceive AT, because the AT detector is much better primed than the detectors for FT or HT. The same mechanism can also produce errors. If CQRN is shown briefly and the Q is unclear, the weak signal may not be enough to activate the rarely used CQ detector, but it will be enough for the well-primed CO detector, so the string is likely to be read as CORN. This bias pulls the network toward errors when inputs are uncommon, but because most inputs are common, it helps perception more often than it hurts.
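The priming logic of the CORN/CQRN example can be made concrete in a toy detector network. The threshold and activation values below are invented for illustration; the point is only that resting activation (raised by frequency and recency) plus input evidence determines whether a detector fires:

```python
# Toy word detectors: each has a resting activation level that reflects
# priming (frequency and recency of past encounters). Input evidence is
# added to that level, and a detector "fires" when the sum crosses a
# fixed threshold, just as in the feature-network account above.

THRESHOLD = 1.0

def fires(resting_activation: float, input_strength: float) -> bool:
    """A detector fires when resting activation + input reaches threshold."""
    return resting_activation + input_strength >= THRESHOLD

detectors = {
    "CORN": 0.7,   # frequent, recently seen: well primed (illustrative value)
    "CQRN": 0.1,   # rare letter sequence: poorly primed (illustrative value)
}

weak_input = 0.4   # brief, degraded presentation ("the Q was unclear")
for word, resting in detectors.items():
    print(f"{word}: fires = {fires(resting, weak_input)}")
# The well-primed CORN detector fires on the weak input while CQRN does
# not, so the network reports CORN: usually helpful, occasionally an error.
```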
The "knowledge" of the networks is not represented locally: it is not stored at a certain location or built into a certain process. In other words, the knowledge of the network is distributed: the knowledge is spread over the entire network.
A number of variations on and improvements to the described network have been devised. In the McClelland and Rumelhart model, activation of one detector can lead to inhibition of another: detectors have both excitatory and inhibitory connections. The model also allows two-way communication between layers, which makes it similar to the functioning of neural connections.
The recognition by components (RBC) model contains a level of detectors that are sensitive to geons, basic "building blocks" from which all recognizable objects are assembled. Like the other networks discussed, this model uses a hierarchy of detectors: geons are merged into geon assemblies, which in turn activate object models. This approach has several advantages. First, recognition based on geons is largely independent of viewpoint. In addition, most objects can be recognized from just a few geons.
There are various theories that try to explain how object recognition works. For example, some theories state that people have stored various remembered viewpoints of an object, and that, because this set of viewpoints is limited, in some situations you need to mentally "rotate" the current view to match a stored one. According to this idea, processing speed depends on viewpoint.
Evidence based on studies with prosopagnosia patients, patients who have difficulties in recognizing faces, implies that there is a special neural structure involved almost exclusively in the recognition and discrimination of faces. Some people even seem to be 'super-recognizers' and are extremely skilled in recognizing faces. They can remember faces that they have viewed briefly very well.
The networks that have been discussed so far all start with analyzing parts that later get assembled into larger wholes. During face recognition, however, there is not only the processing of different parts, but instead there is a holistic perception of the face, so face recognition depends on the overall configuration of the face. Of course, a face's features (nose, mouth, eyes) still play a role in face recognition. However, these characteristics cannot be observed separately from the face (the context). Some of the evidence for holistic processing comes from the composite effect in face recognition. This effect is demonstrated when the top half of one face is combined with the bottom half of another, and participants need to identify only the top half. The task is difficult when the two halves are properly aligned: participants see the top of the face as part of the whole.
Object recognition is not as simple as described so far, because there are also various top-down influences. Factors other than the stimulus itself (noise, lighting, expectations, individual differences; in short, the context) also influence perception. To describe and explain this properly, an interactive model is needed, one that captures the interactions between bottom-up and top-down influences.
William James is a very famous psychologist who described selective attention. Selective attention is the skill through which a person focuses on one input or one task while ignoring other stimuli that are also on the scene.
Early studies of attention often used the dichotic listening task. This means that participants receive headphones and then receive two different types of input on the left and right. Participants are then instructed to focus their attention on one ear (the attended channel) and to ignore the input in the other ear (the unattended channel). To be sure that the participants adhered to this, they also had to 'shadow'. This means that they had to repeat what they heard in the attended channel.
When asked about what they have been told in the unattended channel, participants say they didn't hear anything. Yet they can often identify the physical characteristics of what they have heard. This means that they can say whether they heard a human or a musical instrument. And if they heard a person, they could also indicate whether it was a man or a woman. In one study, participants shadowed speech in the attended channel, while in the unattended channel they heard a text in Czech, read with English pronunciation. The individual sounds therefore resembled English, but the message itself was gibberish. After one minute of shadowing, only four out of thirty participants detected the peculiar character of the unattended message.
The same goes for visual input. An example is a video clip in which a few young people play basketball. There are two teams: one wearing white shirts and one wearing black shirts. Viewers are instructed to ignore the team in black and to count how often the players in white pass the ball to each other. Participants often perform well on this task, but they fail to notice unusual events in the video: at a certain moment, a person in a gorilla suit clearly walks through the scene, and the participants do not notice it at all. A related finding from the dichotic listening task is that about one-third of people hear their own name in the unattended channel, even though they report hearing nothing else from that channel.
The question is how these research findings can be explained. Older theories of attention proposed that people use a filter: input that receives no attention, often input people are not interested in, is filtered out and not processed, while input that people are interested in passes through. You could also say that this filter has an inhibitory effect: people inhibit their response to distractors (irrelevant stimuli). However, research shows that this is not the whole story: people not only use inhibition to ignore distractors, they also ensure that more attention goes to what they do want to attend to (which could be called excitation).
In an experiment in which participants had to look at a fixation target, almost ninety percent of the participants did not notice that this target changed into different shapes. According to some researchers, the participants had seen the shapes but did not notice them. This is called inattentional blindness: a pattern in which people do not notice something even while looking at it. A related effect is 'inattentional deafness', in which participants do not hear stimuli they do not expect, and there is also 'inattentional numbness', in which people do not feel unexpected stimuli. When something is unexpected, no attention is paid to it. All of these findings emphasize the role of attention: attention influences our perception, which means that perception is an active process. This is also reflected in 'change blindness', where people do not notice changes in scenes they are watching.
Another finding related to attention is the cocktail party effect. At a party, people are often able to focus on one conversation and filter out all the other noise. However, when their name is mentioned in another conversation, they often notice it immediately.
According to the early selection hypothesis, the attended input is privileged from the start, so that the unattended input receives little analysis and is therefore never perceived.
According to the late selection hypothesis, all inputs receive relatively complete analysis, and selection occurs after the analysis is finished. This selection may occur before the stimuli reach consciousness, or perhaps later, so that all inputs make it briefly into consciousness but only the attended input is remembered.
Priming also affects attention. The explanation for inattentional blindness was that people do not expect the stimulus and therefore do not notice it. In other words, when the stimulus is shown, the participants' detectors are not primed, not responsive and therefore the stimulus is not perceived. During selective hearing (dichotic listening task) this is the same: one is instructed to ignore a certain ear, which becomes the unattended channel. Therefore, no attention is paid to the input or stimuli, which means that the detectors are not primed. This makes it difficult to hear the distractor (the input in the unattended channel).
The detectors for hearing your name have been primed your whole life and are therefore very responsive. Because of this, people often notice when their name is heard, as it appears during the dichotic listening task and during a party. This explains the cocktail party effect.
Selection by priming is based on three ideas. First, perception depends on the degree to which detectors are primed. Second, priming is sometimes stimulus-driven, meaning that it is produced by a prior encounter with the stimulus; this is repetition priming, and it requires no effort and no resources. Third, there is another type of priming that is expectation-driven. This type is controllable and occurs when people expect certain inputs. It is not applied to inputs someone has no interest in, and it cannot be applied to inputs someone cannot anticipate.
The two types of priming can be distinguished in several ways. First, expectation-driven priming takes longer to take effect. Second, they differ in 'costs': expectation-driven priming involves costs, meaning that if what arrives is not what one has prepared for, performance suffers (for example, people make more mistakes when unexpected stimuli appear). This can be seen in studies of spatial attention, in which participants focus on a certain location in the visual field (for example above, below, left, or right of fixation). If one is instructed to focus on a stimulus on the left, less attention is available for a stimulus shown on the right, and a stimulus appearing there is often missed or processed more slowly. This shows that people have a limited-capacity system for processing stimuli. Stimulus-driven priming, by contrast, is "cost-free": priming one detector does not come at the expense of other detectors.
Studies of (spatial) attention sometimes create the idea that visual attention can be compared to a 'spotlight' that can shine on any place in the visual field. The beam then falls on the area to which attention is paid, so that the input in this area is processed better. Note that this is about attention, not about one's eye movements! Directing this beam is done by specific areas of the brain, mainly located in the frontal cortex and the parietal cortex. According to one theory, there is an orienting system that moves attention from one stimulus or object to another; an alerting system, which ensures that people stay alert and focused; and an executive system, which controls people's actions. This theory also states that there is no literal 'spotlight'. Instead, there are neural mechanisms that adjust your sensitivity to certain inputs. This fits the idea of priming: by preparing yourself for things, you make yourself more sensitive to them and can focus your attention more easily.
In addition to the question of how people focus their attention, there is the question of whát people focus their attention on. Multiple answers are possible. First, people focus on input that is prominent or clearly present and on elements that they find interesting or important. Beliefs also matter: if someone is watching a scene that is completely predictable, there is no need to pay much attention to it, because it will not yield much extra information. But something unpredictable and unexpected may also receive little attention, precisely because it was not anticipated (think of inattentional blindness). Related to this is the 'ultra-rare item effect': rare items are less likely to be noticed. People also differ in what they pay attention to. For example, women tend to pay more attention to how people are dressed, while men focus more on how people look (their figure).
There are also differences between cultures, often drawn as a contrast between Western and Asian cultures. Western cultures are seen as 'individualistic' and Asian cultures as 'collectivistic'. This means that people in Western cultures focus more on individual people and objects, whereas people in Asian cultures think in a more holistic way, with an emphasis on the entire context and on the relationships between people and objects.
You can also look at what the target of the attention 'spotlight' is. Is attention paid to everything in a certain region of space, or is it focused on specific objects? Both seem to be true. Patients with unilateral neglect syndrome ignore all input from one side of space: if someone has neglect on the left side, this person no longer notices things on the left. A study with these patients examined whether they focus their attention on a position (left or right) or on a target object (a circle). The question was whether, if they attended to an object on their right (where they could perceive it), they would continue to perceive it when it moved into the left side of their visual field. On a purely space-based account this would not be expected, because the patients only perceive things on one side.
The patients were shown a red circle on the right (which they could perceive) and were instructed to focus their attention on this circle. The researchers then moved the circle. The patients were still able to see the circle, even though it was now on the 'neglected' side. The symptoms of unilateral neglect indicate that attention can be space-based; this research shows that it can also be object-based. The same mixed pattern is visible in people with healthy brains: the dorsal attention system seems especially important for spatial attention, while the ventral attention system appears important for non-spatial tasks.
Feature binding happens when you search for a target defined by a combination of different features. According to feature integration theory, the early evaluation of the input, the preattentive stage, involves parallel processing of the entire display. Then comes the focused attention stage, in which expectation-based priming allows you to prepare the detectors for just one location. If you prime your attention for a specific input, such as the combination orange + horizontal, you will only receive information from that input, so you know that the features orange and horizontal come from the same object. Priming makes it easier to pay attention to things; as such, it is easier to attend to materials you know. Attention can therefore be thought of as an achievement rather than a single process.
Sometimes people want to multitask: do multiple things at the same time. Multitasking is about paying attention to two (or more) tasks at once. Sometimes this is easy, such as knitting while listening to a podcast. However, listening to a lecture while reading a book is very difficult. This can be explained by looking at what attention requires: you need resources to sustain attention. The proposition is that people can only perform tasks at the same time if enough resources are available. If the two tasks together cost more resources than you have, dividing attention will not work.
Tasks that resemble each other (for example, two tasks that both involve language) probably require the same resources, and this makes multitasking difficult. So the idea is that multitasking two different tasks should be easier than multitasking two similar ones. However, there are exceptions. An example is phoning while driving. These are two very different tasks, but research has shown that people who phone while driving are more likely to cause accidents. So even two tasks that differ from each other can draw on the same kind of resources. Several types of resources have been described. Some researchers state that there is one general energy supply; according to them, tasks differ in the load they place on someone. The higher this load, the more energy the task costs and the more it will interfere with other tasks.
Other researchers state that there are "mental tools" that serve as resources. For example, there is executive control: the mechanisms needed to control one's thoughts and keep an eye on goals. Executive control ensures that what you do fits your goals, and if actions do not match the goals, it ensures that people adjust their strategy or their goals. People with damage to the prefrontal cortex, an area that is important for executive control, can lead relatively normal lives but have a hard time changing their strategy when they know or are told that they are not doing well on a task. This is called the 'perseveration error': when they are told that they are performing poorly, they nevertheless keep repeating the same wrong approach. These patients also exhibit 'goal neglect', which means that they cannot properly adjust their behavior to work towards their goal and therefore neglect the goal.
Through practice, activities can become habits, and this means that they place fewer demands on the available resources. If something becomes habitual through practice, executive control is no longer so important in carrying out that activity. Practice therefore makes dividing your attention easier.
However, when something becomes so routine that one can speak of automaticity, this can also be disadvantageous. An example is the Stroop task, in which words are shown and participants must name the color of the ink in which each word is printed. The words themselves, however, are color names: for example, the word ORANGE is shown in blue ink, and participants must say 'blue' instead of 'orange'. Students, for whom reading is automatic, perform very poorly on this task. This shows that automaticity can also have negative consequences.
The conclusions about divided attention are, first, that tasks cost energy and you therefore need resources for them and, second, that you cannot pay more attention than your resources allow.
Acquisition, storage and retrieval are terms that relate to human memory. Acquisition is about obtaining information, which is then saved and later retrieved.
Cognitive psychology used to focus primarily on how information was perceived and then stored in memory. One of these models is the modal model. According to this model, different types of memory are involved during information processing. When information arrives for the first time, it travels to the sensory memory, where the content is stored in a "raw" sensory form. For visual information this is the iconic memory and for auditory information this is the echoic memory. After selection and interpretation, the information is sent to the short-term memory, where information can be retained while you are working on it. Some information is then stored in the long-term memory, a larger and more permanent repository where all your knowledge and beliefs are stored.
Although this model contains some important truths, it needs to be adjusted on some points. In the first place, sensory memory plays a much smaller role in modern memory theories. In addition, these theories use the term working memory instead of short-term memory to emphasize the function of this memory. Moreover, working memory is not so much seen as a (storage) location as a status. These modern theories use the modal model to describe how long-term memory and working memory differ. First, long-term memory has a larger capacity than working memory. Second, it is difficult to get something into long-term memory, while it is relatively easy to get something into working memory. Third, it is easy to retrieve information from working memory, because the information is being actively used, whereas retrieving information from long-term memory can take longer. Fourth, the content of working memory is more fragile than that of long-term memory.
In many studies of memory, participants must remember a list of words and then list as many of these words as possible in the order they prefer, which is also referred to as a free recall procedure. These participants usually remember about twelve to fifteen words in a specific pattern: they remember the first few words (the primacy effect) and the last few words in the list (the recency effect). The resulting pattern has a U-shape that describes the relationship between the position in the series of words and the probability of retrieval. The recency effect is caused by the fact that the items at the end of the list are still in the working memory and therefore easier to remember.
The primacy effect has a different explanation, which has to do with the rehearsal of the first few words in a series. When participants read or hear words, they repeat them in order to remember them. This is easy when there is only one word, but the more words there are, the harder this rehearsal becomes. The first few words are therefore remembered well, because they are rehearsed most often, while words in the middle of the list must share rehearsal time with many others and are remembered less well.
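The interplay of the two effects can be illustrated with a small simulation. This is a toy model of my own, not one from the text: the buffer size, rehearsal probability, and displacement rule are arbitrary assumptions, but the output shows the characteristic U-shape.

```python
import random

def free_recall_trial(list_length=15, buffer_size=4, p_per_rehearsal=0.2):
    buffer, long_term = [], set()
    for item in range(list_length):
        if len(buffer) >= buffer_size:
            buffer.pop(random.randrange(len(buffer)))   # displace a random item
        buffer.append(item)
        for held in buffer:                  # one rehearsal cycle per new item
            if random.random() < p_per_rehearsal:
                long_term.add(held)          # rehearsal may copy item into LTM
    return long_term | set(buffer)           # recency: buffer is still available

counts = [0] * 15
for _ in range(5000):
    for item in free_recall_trial():
        counts[item] += 1
print([round(c / 5000, 2) for c in counts])  # roughly U-shaped recall curve
```

Early items enjoy many rehearsal cycles before the buffer fills (primacy), and the final items are simply read out of the still-active buffer (recency).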
Almost all mental activities require coordination of different parts of information that can be processed simultaneously in the working memory. People differ in the amount of information they can have in their working memory at the same time, which is called the working memory capacity.
For a long time the capacity of the working memory was measured with the 'digit span'. Based on these measurements, it was stated that the working memory has the capacity to remember at least five but no more than nine digits. It was therefore stated that the working memory has the capacity for seven plus-or-minus two digits or items. However, questions arose among researchers such as: does the same apply to sentences? Can people remember seven sentences at the same time?
Some scientists assume that people can keep seven plus-or-minus two chunks in their working memory. Instead of four separate items, for example 'p a s s', you can chunk these letters into the word 'pass', which counts as one chunk. This chunking process is useful because it helps you store more information in your memory.
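As a toy illustration (my own, assuming a capacity of seven slots), the same twelve letters overflow the span as single items but fit comfortably as three chunks:

```python
CAPACITY = 7   # "seven plus-or-minus two" slots

letters = "p a s s w o r d s a f e".split()   # 12 single-letter items
chunks = ["pass", "word", "safe"]             # the same 12 letters as 3 chunks

print(len(letters) <= CAPACITY)    # False: 12 separate letters overflow the span
print(len(chunks) <= CAPACITY)     # True: 3 chunks fit easily
print(sum(len(c) for c in chunks)) # 12: the letters stored inside the chunks
```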
Because working memory should be seen as a status and not as a storage location, measurements must also reflect this idea: they must measure the activity of working memory in the here and now. One example of such a task is a reading-span task, in which participants read sentences and must remember and report the last word of each sentence. If someone can do this with two sentences, more and more sentences are shown until the person is no longer able to report all the words. This indicates the working memory capacity (WMC).
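The procedure itself is easy to sketch. The outline below is my own simplification (the sentences, the recall function, and the stopping rule are invented for illustration): keep enlarging the sentence set until the final words can no longer be reported, and take the largest successful set size as the span.

```python
def measure_span(recall, trials):
    # trials: sentence sets of increasing size; recall: maps the list of final
    # words to the words the participant actually reports back
    span = 0
    for sentence_set in trials:
        final_words = [s.split()[-1] for s in sentence_set]
        if recall(final_words) == final_words:   # all last words, in order
            span = len(sentence_set)
        else:
            break                                # capacity exceeded
    return span

# hypothetical participant who can hold at most three words at once
holds_three = lambda words: words if len(words) <= 3 else words[:3]

trials = [
    ["Dogs chase cats", "The sun sets late"],
    ["Dogs chase cats", "The sun sets late", "Rain fell all day"],
    ["Dogs chase cats", "The sun sets late", "Rain fell all day", "Birds sing now"],
]
print(measure_span(holds_three, trials))   # 3
```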
To find out whether this is a valid measurement, a number of hypotheses have been tested. For example, people with a higher span were expected to be better at coordinating different tasks. This appears to be correct, and therefore this appears to be a valid method for studying working memory.
The active nature of working memory is also reflected in its structure. In Chapter 5 we saw that the central executive determines the selection and sequence of our thoughts. One of the working memory's most important helpers is the articulatory rehearsal loop, a system with which people retain material by repeating it. Related to this is the phonological buffer: a passive storage system for a representation of sound, an auditory image (for example, of digits) that quickly starts to fade. Through subvocalization (silent speech, repeating what is in the phonological buffer), the information is refreshed and kept in the articulatory rehearsal loop.
The working memory has other helpers as well, such as the visuospatial buffer, which plays a similar role for visual and spatial materials.
When explaining the primacy effect, the importance of rehearsal was already demonstrated. Rehearsal simply means actively thinking about an item. In general, a distinction is made between two types: maintenance rehearsal and relational or elaborative rehearsal. With maintenance rehearsal, the items are simply repeated without further thought about their meaning. Relational or elaborative rehearsal does reflect on the meaning of the items: with the number 26051998, for example, someone can see this number as 26 May 1998. By using relational or elaborative rehearsal, one can remember much more, as is shown by studies measuring brain activity during learning: higher levels of activity, especially in the hippocampus and prefrontal cortex, are associated with a greater chance of remembering things.
Research also shows that it does not matter much whether you have the intention to learn something (intentional learning). Learning without intention, incidental learning, is just as effective if you approach the materials properly. What matters is the distinction between shallow (surface) processing, in which you pay no attention to the meaning of the material, and deep processing, in which you pay attention to deeper characteristics of the stimuli, such as their meaning. Deep processing leads to better memory of the studied material.
So it is clear that when you want to learn something, it is useful to pay attention to the meaning of what you are trying to learn. Why does this help to remember information? The answer has to do with the connections that you create by using this kind of (deep) processing.
By paying attention to the meaning of something, it gets placed somewhere in the memory. In this way, connections are established between this information and other information, such as with related concepts. For example, a banana is a type of fruit and an apple too. There is now a connection between banana and fruit, apple and fruit, and therefore also between apple and banana. By giving the word 'banana' a meaning, you make it easier for yourself to remember it later, through the connections you have created. These connections enable you to remember things better.
Elaborative encoding also creates more connections and therefore better storage in memory. Elaborative sentences are 'rich' sentences, such as: 'The cat walks to the house and jumps over the fence.' A non-elaborative sentence is: 'The chicken is white.' Elaborative sentences are remembered better because they provide more connections (cat, house, fence) instead of just one (chicken, white).
George Katona stated more than seventy years ago that by discovering a structure in material (organization), it can be remembered better.
Mnemonics are techniques to aid memory. An example is ROY G BIV, used to remember the colors of the rainbow (red, orange, yellow, green, blue, indigo, violet). There are hundreds of variations on these techniques (first-letter mnemonics, visualization strategies, peg-word systems, etc.), and they all work: they help you remember individual items and their specific sequence. There is a downside to mnemonics, however. When using these techniques you focus on just a single aspect of the material, so you will not fully understand the material or find multiple connections between the material and other things you know.
With complex material, the best way to memorize is through understanding. Students who got a high grade on a test are much more likely to remember the subject matter ten years later than students who performed just adequately. Because memory depends on connections, the memorizer's own contribution is important to consider: a sports fan will find it much easier to remember sports facts. In this way the memorizer's prior knowledge affects their success in memorizing.
Various studies have shown that the learning environment can influence the recall of material (context-dependent learning). For example, if you study in your room at home every day, you are expected to perform best when you take the test in that same room. This can be explained by encoding specificity: while learning something, someone also stores information about the context. When the person is then placed in the same context, this context activates connections, which makes it easier to remember the things he or she has learned. Importantly, it is not so much the physical context that matters as the mental context: things like the color, smell, and size of a room, or whether or not you listened to music, are stored while learning. In one study, participants had to take a test in a context other than the one in which they had studied, but just before the test they were instructed to imagine the study context. They then performed as well as the people who took the test in the same context. Mentally reinstating the context therefore also improves performance. This is important to remember!
Memory can be seen as a collection of ideas represented as nodes in a memory network, interconnected through associations. A node is activated if it receives a strong enough signal; in other words, a node has a certain activation threshold, and activity must exceed this threshold for the node to fire. An input that alone is too weak to reach the threshold produces sub-threshold activation; two sub-threshold inputs can summate, so that together they do make the node fire. This fits with what we know about neurons and recognition detectors. These ideas can be tested using the lexical decision task, in which participants see combinations of letters, some of which are words and some of which are not, and must decide which are words. In a variant of this task, two strings of letters are presented and participants must indicate whether both are words. If the two words are related to each other (doctor, nurse), semantic priming (meaning-based priming) occurs and participants respond faster than when the two words are unrelated (cake, shoe).
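A toy associative network makes the mechanism concrete. This sketch is illustrative only (the node names, link strengths, and threshold are assumptions of mine): input spreading from a related, already-active node summates with weak stimulus input and pushes the target node past its threshold.

```python
THRESHOLD = 1.0
links = {"doctor": {"nurse": 0.6, "hospital": 0.5},
         "cake":   {"shoe": 0.0}}

def activation(target, stimulus_input, primed_by=None):
    # total activation = direct input plus whatever spreads from a primed node
    spread = links.get(primed_by, {}).get(target, 0.0) if primed_by else 0.0
    return stimulus_input + spread

# a weak input alone stays below threshold; with a related prime it summates
print(activation("nurse", 0.5) >= THRESHOLD)                      # False
print(activation("nurse", 0.5, primed_by="doctor") >= THRESHOLD)  # True
print(activation("shoe", 0.5, primed_by="cake") >= THRESHOLD)     # False
```

On this picture, "doctor" gives "nurse" a head start, which is why related pairs are verified faster in the lexical decision task.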
In a number of memory studies, participants were asked to read a list of words without being told that they should remember them. Then, with the help of the lexical decision task, it was examined whether these words had primed the participants. Participants indeed responded faster to the words from the list, even when they could not remember those words! This was tested using a recognition task, in which participants were shown a number of words and had to say whether they had seen them before or not. The participants thus seem 'primed' by the word list, but are not aware of this and cannot recognize the words. The same was apparent from the word-stem completion task, in which one gets a number of letters as the start of a word and has to complete the word: if participants have seen a word recently, they are more likely to produce that word!
These results lead researchers to think that there are two types of memory: an explicit and an implicit memory. Explicit memory can be tested with tests that probe memory directly (recall, recognition). Implicit memory, on the other hand, can only be tested indirectly and often reveals itself in priming effects; it can be tested using the lexical decision task and the word-stem completion task.
In a classic study, participants were shown a list of names. They were instructed to read these names aloud and were told nothing else about the study. A while later, the participants were shown a new list of names and had to indicate how famous the people on this list were. Some names on this list were indeed celebrities, some were common people, and some were entirely made up. The made-up names were of two types: some had appeared on the first list read by the participants, and others were new.
Some participants were shown the second list immediately after the first, and others had to wait a whole day. The first group, shown the second list immediately, knows that if a name feels familiar, they have just seen it on the first list. So if they saw the name Pjauey on the first list and then on the 'celebrity' list, the name feels familiar to them, but they also remember the list, which explains the feeling of familiarity. However, when participants see the second list a whole day later, they also experience a sense of familiarity, but they can no longer remember that this is because they recently saw the name. These participants are therefore more inclined to judge the familiar names as celebrities. Familiarity, and the interpretation of the feeling of familiarity, thus determines the participants' judgments. If participants could trace the feeling of familiarity to its source, there were no 'false celebrities'; if they had the feeling but did not know its source, they were more likely to judge people as famous. In that case they make a misattribution. Which attributions people make naturally depends on their interpretation. Implicit memory is about something 'ringing a bell', a feeling of familiarity; it then differs per person and per situation how this feeling is ultimately interpreted.
How great is the potential for misattribution? Sentences that we have heard before are more likely to be judged true, so familiarity increases credibility. This is called the illusion of truth. In fact, this effect occurs even when people are explicitly warned in advance not to believe the sentences. The misattribution can go further still, by assigning the feeling to the wrong stimulus: if a familiar sentence is presented simultaneously with a loud noise, people perceive the noise as softer than when it is presented with an unfamiliar sentence. Implicit memory can also be attributed to the wrong source. An example is selecting a perpetrator from a line-up after having seen "mug shots". Here too the feeling of familiarity is important: if people have seen someone's mug shot before, that person feels familiar, but they often cannot trace the source of this feeling and therefore misattribute it. These are all examples of implicit memory: people have memories of something that they are not aware of. Implicit memory can therefore, without people being aware of it, have a lot of influence on their lives (such as wrongly identifying perpetrators). In short, people tend, almost always, to prefer what (sentences, people, objects) they have seen before.
Implicit memory can be understood by looking at the steps of perception. When a stimulus is detected, it activates certain detectors, which in turn activate other detectors, until you recognize the stimulus. This stream of activation flowing from detector to detector is called a 'processing path'. Recognizing a stimulus again activates the same path, and using a path often strengthens the connections between the detectors (or nodes). Having and using a well-worn processing path therefore increases processing speed.
So when someone in a lexical decision task has seen the word "bubble" on a word list, this processing path has already been active and has been "warmed up", or primed. If the person sees the word bubble again, processing will be faster along the primed path and the word will be recognized sooner.
Other effects can be explained by the fact that people are sensitive to their own processing fluency: they register whether recognizing something went easily or with difficulty. However, when they find that something takes very little effort, they do not think, "I have a fast processing speed". Instead, they think, "Gee, that was very easy! I must know this word / object / etc." This feeling is not driven so much by the current processing speed as by noticing changes in ease of processing (for example, that a calculation that used to be difficult now goes easily) and by discrepancies between how difficult something was and how difficult it was expected to be. Such changes and discrepancies give people a sense of 'specialness', which they then seek to explain and attribute to a source. They often do this correctly, attributing the feeling to the right source ("oh, that's the woman I saw at the dentist yesterday"). In other situations (such as identifying a perpetrator after seeing mug shots), people misattribute the feeling of specialness or familiarity.
Research shows that a stimulus will seem familiar if the following steps occur:
The stimulus has been encountered before.
That earlier encounter primed the relevant processing path.
Processing of the stimulus is now fast and fluent.
This fluency is detected.
The fluency is attributed to a past encounter with the stimulus.
Only the last step happens consciously; the rest happen automatically. It can also happen that a stimulus is objectively familiar (someone's own father) but does not feel familiar (think of Capgras syndrome); in that case, the last two steps are skipped. It may also be that a stimulus feels familiar but is not: the person has never seen the stimulus before and has not been primed. This is called the 'illusion of familiarity'.
So there are two types of memory: an explicit, conscious memory and an implicit, unconscious memory. These types can be subdivided further. Explicit memory can be subdivided into episodic memory (memories of specific events) and semantic memory (general knowledge). Implicit memory is often divided into four subcategories: procedural memory, priming effects, perceptual learning, and classical conditioning.
The strongest evidence for the distinction between explicit and implicit memory comes from studies of patients with different types of brain damage.
Some types of brain damage can lead to memory loss, also called amnesia. There are different types: retrograde and anterograde amnesia. Retrograde amnesia means that memory for things that happened before the damage is disrupted. Anterograde amnesia means that memory for things that happen after the damage is disrupted. Retrograde amnesia is usually caused by blows to the head.
Research shows that there is a double dissociation between episodic and semantic memory: after brain damage, some people forget many things that belong to semantic memory (words, celebrities, historical events) but can still recall specific events from their episodic memory (such as their marriage or the death of a loved one), and the reverse pattern also occurs.
Anterograde amnesia can also arise from excessive alcohol consumption. Alcoholics often have a poor diet and may get most of their calories from beer, which does not contain vitamin B1 (thiamine); this eventually leads to a vitamin B1 deficiency, which in turn can lead to Korsakoff's syndrome, in which patients often show symptoms of anterograde amnesia. What is striking is that patients with anterograde amnesia often still have an implicit memory of events they cannot explicitly recall: implicit memory appears to be intact. The reverse also occurs: an intact explicit memory without implicit memory.
If you want to remember something (for a test, say), it is advisable to study the material in many different ways and contexts. In this way you create many connections, which makes it easier to retrieve the content later. This is also called using multiple perspectives.
There are a number of well-known cases of people having incorrect memories. For example, after the Bijlmer disaster in 1992, researchers asked a number of people about the crash, including whether they had seen the 'film' of the crash on television. There was no such film, yet half of the participants responded that they had seen it. In a subsequent, similar study, researchers asked whether participants had seen the film and asked them for details. Two-thirds of the participants responded that they had seen the film, and most of them also mentioned details (such as whether the aircraft was on fire). Other studies, such as one on a bombing at a club in Bali, also showed that people report having seen videos that did not exist, and that they can even give details of these non-existent videos.
Even when participants are interviewed just after an event, there are errors in their memory. In one experiment, students were invited to participate in a study and had to wait in a waiting room; after about thirty seconds they were told that there was actually no experiment and that the researchers were interested in the participants' memories of the waiting room. This waiting room contained no books, yet almost one third of all participants reported having seen books. This can be explained by expectations: an experiment suggests an academic environment, and academic environments often contain books, so participants assume this environment did too. Their memory is thus shaped by their expectations.
Memory errors can occur at different times and in different forms. In the previous chapter, memory was described as a network of nodes connected by associations, which serve as retrieval paths for memories. By adding connections (for example, through information you hear from others), information is added and a memory can become distorted. A transplant error can also occur, in which information that belongs to a different memory is imported into the specific memory. The connections can thus cause intrusion errors, in which other knowledge is added to the event. When participants read a prologue before reading a text, they understand and remember the text much better, but at the same time they make four times as many intrusion errors as participants who did not read a prologue.
Similar effects occur when remembering word lists and during recognition tests. A well-known paradigm here is the DRM (Deese-Roediger-McDermott) procedure: participants are shown a list of words such as bed, pillow, dark, night, and dreams, and later state that they have seen the word "sleep", because of the associations they have with this word. Even when people are warned in advance, they make these mistakes, which shows that some of these mechanisms are so automatic that people are unable to inhibit them.
People use schemas: ideas about what 'normal' or 'typical' behavior is in a certain situation. For example, people have schemas about going to school, going to the dentist, meeting a new person, and so on. Schemas are very useful because they help you understand the world around you faster. If you are in a restaurant, you do not find it strange that someone brings you a menu and asks whether everything is to your liking, because this fits your schema of restaurants. Schemas can also help you remember things. Human memories often contain gaps, things that are no longer remembered well, and these gaps can be filled in with the most likely content. So if you go to a restaurant and someone later asks whether you received a menu, you say yes, even if you no longer actually remember, because this fits your restaurant schema. However, schemas can also cause memory errors, because not all situations fit the typical pattern.
Memory errors can have major consequences. An example is the mistakes people make in eyewitness testimony: eyewitness errors account for at least three quarters of documented false convictions.
Memory errors can arise in several ways.
People are therefore susceptible to false memories. This can be seen in a study in which a single different word in a question made a big difference in the answers ("how fast was the car driving" vs. "how fast was the car racing"). The use of suggestion can thus cause people to form erroneous memories. Complementary information is also more easily incorporated than contradictory information. Finally, it is easier to plant false memories when participants are instructed to imagine something.
The misinformation effect refers to memory errors that occur after people receive (mis)information once an event has taken place. Creating such errors is fairly easy. In one study, students were told that researchers were investigating how well different people remember the same events. The students were given a list of events that (they were told) had been reported by their parents and were asked to recall these events as well as they could, so that the investigators could compare the students' recall with their parents' reports. Some of the events on the list actually had been reported by the participants' parents; others were bogus (made up by the experimenters). The students easily remembered the genuine events, and at first none of them recalled the bogus events. After repeated attempts at recall, however, this pattern changed: after three interviews, twenty-five percent of the students 'remembered' the bogus events.
Another researcher went a step further and used various techniques that encourage false memories. The interviewer repeatedly asked participants to recall an event that she said she had learned about from their parents. She assured participants that she had detailed information about the event, and she applied social pressure with comments like "Most people are able to retrieve lost memories if they try hard enough." She offered smiles and encouraging nods whenever participants showed signs of remembering, and showed disappointment when they could not. She also encouraged participants to use a memory retrieval technique (guided imagery) that is known to foster false memories. With these factors, the interviewer persuaded many of her participants that just a few years earlier they had committed a crime that led to police contact. This crime never happened, but the participants recalled it anyway.
Even though memory errors seem very common, most of the memories people have are not wrong. The question, then, is how to tell whether a memory is correct or not.
When people evaluate memories, they rely on expressions of certainty. People tend to trust memories that are expressed with confidence. However, evidence suggests that a person's degree of certainty is not a good indicator of whether a memory is true or false. In fact, researchers suggest that there are simply no indicators that can reliably guide us in deciding which memories are true. It seems that memory errors may often be undetectable.
People often use the word 'forgetting' loosely. They typically refer to a situation like this: you meet someone at a party, they tell you their name, and a few minutes later you cannot remember it. People then say they have 'forgotten' the name, but this is really a problem in the acquisition phase: most likely the name was never properly attended to in the first place. Real forgetting is about having acquired information but being unable to retrieve it later.
One of the best predictors of forgetting is the passage of time. The term "retention interval" refers to the time between learning information and trying to retrieve it; the longer this interval, the more you are likely to forget. The 'decay theory of forgetting' explains this effect by stating that memories fade or disappear with time, perhaaps because relevant brain cells die, or because connections that are not used often enough grow weaker. Another explanation is that new information interferes with old information, as interference theory states. A third hypothesis holds that forgetting is a problem of retrieval: the memory is still in long-term storage, but it cannot be reached. As mentioned in Chapter 7, retrieving memories is easier if you are in the same context as when the memory was formed (encoding specificity); the more time goes by, the greater the chance that the context has changed and the harder retrieval becomes.
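Decay is often illustrated with an exponential forgetting curve of the kind Ebbinghaus made famous. The sketch below is an idealization of my own (the formula and the numbers are illustrative, not taken from the text), with a "strength" parameter standing in for how well the material was learned:

```python
import math

def retention(t_days, strength):
    # fraction of material still retrievable after t_days; a larger "strength"
    # stands in for better-learned material that decays more slowly
    return math.exp(-t_days / strength)

for t in (0, 1, 7, 30):
    print(t, round(retention(t, 5), 2), round(retention(t, 20), 2))
```

The second column falls off much faster than the third, anticipating the point made later that well-learned material is forgotten more slowly.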
All of these hypotheses turn out to be partly correct. There is also sometimes partial recall, as in the tip-of-the-tongue (TOT) phenomenon, in which people remember the first letter of a word and roughly how it sounds, but cannot retrieve the whole word.
One technique often used to recall old memories is hypnosis. The idea is that under hypnosis a person can 'return' to an earlier event and remember virtually everything about it, including aspects the person did not notice at the time. Reality is different, however. Hypnotized participants often do give detailed reports of the target event, but not because they remember more: they are simply willing to say more in order to comply with the hypnotist's instructions. As a result, their reports are a mix of recollection, guesses, and inferences.
A technique that does seem to help is the cognitive interview. This interview is designed to help police in their investigations and is aimed specifically at maximizing the quantity and accuracy of information obtained from eyewitnesses. The cognitive interview has several elements, including an effort toward context reinstatement: steps that put witnesses back into the mindset they were in at the time of the crime. It also builds on the simple fact that retrieval from long-term storage is more likely if a suitable cue is provided; the interview therefore offers a diverse set of retrieval cues, on the idea that the more cues provided, the greater the chance that one of them triggers the target memory. The cognitive interview is quite successful, both in the laboratory and in real crime investigations.
Instead of trying to recall memories afterwards, one can also try to prevent forgetting. One way to do this is to periodically think back to the event, so that the memory is maintained. An example is the testing effect: students show better long-term retention for materials they were tested on than for materials they were not tested on. That is also why many textbooks contain 'test yourself' questions: the intention is that readers remember the information better.
Three factors that are central to autobiographical memory influence how well someone remembers something: involvement in the event, emotion, and decay. People are better at remembering material that relates to themselves, which is called the self-reference effect. Information about yourself is often quite accurate, because your self-schema reflects your typical behavior. People want consistency in their self-schema, and they also want to maintain a positive self-image, which can distort their memories. For example, when students were asked to recall their high-school grades as accurately as they could, the data showed a clear self-serving pattern. When students forgot a good grade, their self-serving reconstruction led them to the correct belief that the grade must have been good: eighty-nine percent of the A's were correctly remembered. But when students forgot a poor grade, reconstruction led them to the false belief that the grade must have been acceptable: as a result, only twenty-nine percent of the D's were correctly recalled.
Another important part of autobiographical memory is emotion. In general, emotions make it easier for you to remember information, because emotional arousal (excitement) contributes on a biological level to the process of memory consolidation, which is the process in which memories become permanent. But emotions also play an important role through other mechanisms. In the first place, people pay more attention to an emotional situation. In addition, emotional events often involve repetition, because people often think about emotional events (such as during worrying). Another theory is that emotions determine what people pay attention to.
Flashbulb memories are a special kind of emotional memory. They are special because they are often very clear and detailed: people remember these events "as if it were yesterday". Flashbulb memories usually concern special, emotional events; for example, many people remember the moment they heard that Michael Jackson had died. Factors such as how often someone talks about the memory, and with whom, affect how accurate a flashbulb memory is. Sometimes people adjust the original memory based on what they have heard from others, so that the original information is replaced by newer information. People also sometimes adjust their memory to fit a 'good story', omitting certain details or adding others to improve the tale.
Traumatic events sometimes leave vivid and long-lasting memories in the memory of victims, which can be explained by the way (physical) arousal contributes to memory consolidation. However, this does not mean that all traumatic events are remembered well. This depends, among other things, on the person's age, sleep deprivation, head injuries, substance use, or extreme stress associated with the event. Stress experienced at the time of an event seems to enhance memory for material directly relevant to the source of the stress, but has the opposite effect for other aspects of the event. Stress experienced during retrieval also interferes with memory, especially if the target information was itself emotionally charged.
A controversial claim about traumatic memories is that they can be "repressed": the memories are 'buried' and not consciously available, but still exist in long-term storage and can be recovered under the right circumstances. Most memory researchers are skeptical about this claim. Recovered memories may appear to have been lost, but their 'recovery' may simply mean that the person is finally willing to talk about them, that an earlier retrieval failure has been overcome, or even that the memories are false. These are all well-documented memory mechanisms, so we should be cautious about the further claim that some self-protective mechanism such as repression can hide memories.
We have seen that longer retention intervals usually lead to more forgetting. In some cases, however, memories from very long ago are remarkably accurate. One explanation is that the better someone has learned something, the more slowly that person forgets it.
There are also general patterns related to age. For example, most people recall very little from their early childhood years, but they can often clearly remember their late adolescence and young adulthood, which is referred to as the 'reminiscence bump'. Furthermore, as mentioned earlier, all memories depend on connections, which make retrieval easier but also make interference possible. These connections can deteriorate over time, creating gaps that are often filled with general knowledge or schemas. All of these characteristics apply whether we are talking about recent or distant events, emotional or calm events, and complex or simple memories. However, these principles do not seem to apply to all types of memory in the same way; that is why memory must be described at different levels.
Ordinary concepts, such as 'shoe' or 'spoon', are the building blocks out of which all knowledge is created. As we have seen in previous chapters, you depend on your knowledge in many aspects of day-to-day functioning: you know what a restaurant is because you understand the basic concept of 'restaurant'. The idea, then, is that you need concepts in order to have knowledge, and you need knowledge in order to function. The simplest hypothesis is that understanding a concept is like knowing a dictionary definition. However, this hypothesis quickly runs into problems, so other, more complicated proposals are needed.
Wittgenstein (1953) argued that simple terms do not actually have definitions; instead, members of a category have a family resemblance. There are no 'defining features' in a family, but there are features that are common in the family. You could think of an 'ideal member' of the family, someone who has all the family features. Each member shares at least some features with this ideal and therefore has some feature overlap with the other members, making family members resemble each other.
Prototype theory characterizes a concept by specifying the prototype of the category. In most cases, this prototype will be an average of the various category members you have encountered: the prototype dog, for example, has the average color and the average size of the dogs you have seen. Whenever you use your conceptual knowledge, your reasoning is done with reference to this prototype.
In a sentence verification task, participants are shown a series of sentences and must indicate whether each sentence is true or false. Responses are slower for sentences like "A penguin is a bird" than for sentences like "A robin is a bird". According to the prototype perspective, participants choose their response by comparing the thing mentioned (penguin) to their prototype for the category. When there is close similarity between the test case and the prototype, participants can decide quickly; judgments about items distant from the prototype take more time.
In the production task, people are asked to name as many birds or dogs as they can. According to a prototype view, they will do this task by first locating their bird or dog prototype in their memory and then asking themselves what resembles this prototype. So birds close to the prototype should be mentioned first and birds farther from the prototype later.
Rosch and colleagues argue that there is a 'natural level of categorization' that people prefer, neither too general nor too specific. This categorization is often based on a single word, such as "chair" or "dog". When people have to describe an object, they typically use this basic-level categorization.
The data from research into conceptual knowledge can also be explained in a different way than by prototype theory alone. In some cases, categorization relies on specific members of a category rather than on general information about the whole category. This is called exemplar-based reasoning, in which the standard is a specific remembered example of the category. As in prototype theory, you compare a new case to a mentally represented standard, but here the standard varies: according to exemplar-based theory, it is whichever example comes to mind first.
Which theory is correct, the prototype theory or the exemplar theory? Research shows that people rely on both prototypes and exemplars when thinking about categories. Prototypes provide an economical summary of what is typical for the category, which is useful in many cases; exemplars provide information that the prototype loses, including information about the variability within the category.
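The difference between the two strategies can be sketched in a few lines of code. This is a toy example of my own (the feature vectors and the distance measure are invented for illustration): compare a new case either to the category's average or to the single most similar stored example.

```python
def distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# stored birds as (size, flying-ability) feature vectors
exemplars = [(2, 9), (3, 8), (9, 1)]     # robin, sparrow, penguin

# prototype theory: compare the new case to the category average
prototype = tuple(sum(f) / len(exemplars) for f in zip(*exemplars))

new_case = (8, 2)                        # a large, flightless bird

print(round(distance(new_case, prototype), 2))   # 7.33: far from the prototype
# exemplar theory: compare the new case to the closest stored example
closest = min(exemplars, key=lambda e: distance(new_case, e))
print(closest, distance(new_case, closest))      # (9, 1) 2: close to the penguin
```

The averaged prototype has lost the penguin's atypical features, while the exemplar comparison preserves that within-category variability, which is exactly the trade-off described above.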
However, some results do not fit the picture of prototypes and exemplars. In the theory we have been developing, both category membership and "typicality" stem from resemblance to an exemplar or prototype. Sometimes, however, typicality and category membership come apart: robins seem closer to the typical bird than penguins do, yet we are certain that both are birds. Such results indicate that judgments can be based on information other than typicality. For example, if a lemon were painted, flattened, and injected with sweetener, people would still consider it a lemon, even though it now looks and behaves completely differently. People seem to base such judgments on "deeper" characteristics, such as their understanding of the objects. What is essential for a certain category therefore depends on understanding.
Sometimes, then, category membership is independent of typicality. In these cases you focus on attributes that are essential for the category, and which attributes count as essential depends on your beliefs about that category. Resemblance depends on whether objects share important, essential properties, and what is considered essential is based on your beliefs about the concept in question. Your background knowledge thus gives crucial guidance when judging resemblance.
It seems logical to use strategies that are as accurate as possible in daily life. However, such strategies are often slow and effortful. That is why you are sometimes better off with heuristics: rules of thumb that sacrifice some accuracy in order to categorize more efficiently (faster). Categorization via resemblance is itself such a heuristic strategy.
Four claims can be made about conceptual knowledge. One is that category knowledge guides your thinking about new cases. For example, when you are at a party and you see someone jump into the water, you probably assume that this person is drunk. Jumping into the water is not part of the definition of being drunk, nor of the prototype of "being drunk", but because of the knowledge you have about drunkenness (the theories that you have), you can conclude that this person is most likely drunk. Your beliefs about a category give coherence to how the characteristics of a concept are related, and this coherence makes the concept easier to learn.
The idea that people are guided by their beliefs about categories has another implication: people think about different kinds of concepts in different ways. People reason differently about natural objects and artifacts. In one study, children agreed that with the right adjustments a toaster could become a coffee machine, but insisted that a skunk could never become a raccoon. In general, people assume more stability for natural concepts.
The difference between types of concepts can also be seen in brain scans. fMRI scans show that different brain regions are active when people think of living things than when they think of non-living objects, which suggests that different concepts are represented in different brain regions. Some people with brain damage exhibit anomia, meaning they are no longer able to name objects or to answer simple questions such as "does a whale have legs?". Anomia is often category-specific: some patients can name living but not non-living things. This may be because recognizing living things depends mainly on perceptual characteristics, while recognizing non-living things depends mainly on functional characteristics. Brain scans also show activity in motor and sensory areas when people think about certain concepts, which indicates that conceptual knowledge is linked to information about how we perceive an object and how we interact with it.
In earlier chapters we saw that information is represented in a network in which associative connections link the nodes. These associations themselves help to represent knowledge. Knowledge is stored in this memory network, so when you use your knowledge, you use this network.
Propositions are the smallest units of knowledge that can be either true or false. For example, "children love candy" is a proposition; "children" by itself is not. Propositions can be used to capture the difference between "Sam has a dog" and "Sam is a dog".
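A small illustration (a representation of my own devising, not one from the text) shows how the same concepts in different relations yield different propositions:

```python
from collections import namedtuple

# a proposition = a relation applied to arguments; it can be true or false
Proposition = namedtuple("Proposition", ["relation", "subject", "object"])

p1 = Proposition("owns", "Sam", "dog")   # "Sam has a dog"
p2 = Proposition("is-a", "Sam", "dog")   # "Sam is a dog"

print(p1 == p2)   # False: the same concepts, but different propositions
# "children" on its own is just a concept node, not a proposition:
# it names no relation and cannot be true or false
```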
According to some models, individual ideas are represented with local representations: each node represents one idea, and when you think about that idea, that node is activated. Connectionism takes a different approach: it is based on distributed representations, in which an idea is represented by a pattern of activation spread across many nodes. This processing and representation take place in parallel (parallel distributed processing, PDP).
In PDP models, knowledge refers to a potential rather than a state: what the network "knows" lies in how activation will flow through it. According to connectionism, learning consists of adjustments in the strengths of the connections between nodes, so that after learning, activation spreads in a way that fits the newly learned information. In other words, learning arises from changes in connection weights, which in turn change the network's activation patterns.
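As a minimal illustration of "knowledge in the weights" (a bare delta-rule sketch of my own, not a model presented in the text), repeated encounters adjust connection strengths until the network's output fits the learned material:

```python
inputs = [(1.0, 0.0), (0.0, 1.0)]   # two input patterns
targets = [1.0, 0.0]                # desired output for each pattern
weights = [0.0, 0.0]                # connection strengths: the "knowledge"
rate = 0.5

for _ in range(20):                 # repeated encounters with the material
    for pattern, target in zip(inputs, targets):
        output = sum(w * x for w, x in zip(weights, pattern))
        error = target - output
        # adjust each connection in proportion to its contribution
        weights = [w + rate * error * x for w, x in zip(weights, pattern)]

print([round(w, 2) for w in weights])  # ~[1.0, 0.0]: the learning is in the weights
```

Nothing is stored as an explicit fact here; what was learned exists only as a potential, namely how activation will flow the next time an input arrives.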
Using language is important for people, because language is essential for the transfer of ideas: an idea must be expressed and then correctly understood by a recipient. Language is essential for many types of human action. Language depends on clear patterns: patterns for the way words are used, and patterns for the way words are combined into sentences. It also has a hierarchical structure. At the highest level are the ideas the speaker wants to convey. These ideas are generally expressed in sentences, which consist of words from different word classes. Words in turn are formed from morphemes, the smallest meaningful language units. There are also "bound morphemes", which attach to words and change their meaning; think of 'ed' and 's' at the end of English words.
Phonemes are the smallest sound units that can be used to distinguish words. Language is also organized in another way: within each of these levels (from high to low, from ideas to phonemes), people can invent new sentences or new words.
Sound is produced when the airflow is interrupted or altered. Within the larynx there are two flaps of muscular tissue called the 'vocal folds'. These folds can be rapidly opened and closed, producing a buzzing sort of vibration known as voicing. When you put your palm on your throat while producing the sound 'z', you can feel this vibration. You can also produce sounds by narrowing the air passageway within the mouth itself, as when producing the sound 's'. We can distinguish sounds, first, according to how the airflow is restricted: this is referred to as manner of production. Second, we can distinguish between sounds that are voiced and those that are not. Finally, sounds can be categorized according to where the airflow is restricted: this is referred to as place of articulation.
These production characteristics correspond to what listeners perceive when they listen to speech. Yet speech perception is more complex than it appears. In the first place, speech is fast, and there are no markers within the stream of speech that indicate where one phoneme ends and the next begins; the listener has to divide this stream into the right units, which is called speech segmentation. Speech perception is further complicated by a process called co-articulation: overlapping the production of neighboring phonemes makes speech smoother and faster, but it also means that the same phoneme has a different acoustic pattern in different contexts. Given these complications, how is it possible that we seem to perceive speech so easily? Part of the answer lies in the fact that the speech you encounter, day by day, is surprisingly limited in its range. Most of the words that we know, we rarely use. In fact, it has long been known that the fifty most commonly used words in English make up roughly half of the words you actually hear.
Speech perception is also supported by a process called categorical perception. This refers to the fact that people are much better at distinguishing sounds between categories than within categories. In other words, one is better able to hear the difference between a 'k' and a 'g' than the difference between different variants of the 'k'.
There are rules for combining phonemes to produce thousands of different morphemes, which can themselves be combined to create word after word.
Language contains different parts:
Sound (the order of the phonemes that make up the word)
Orthography (the sequence of letters with which you can spell a word)
Syntax (the way you should use words in sentences; grammar rules)
Semantic representation (the meaning of the word)
Phonological representation (the sound of a word).
People are able to create new words. As a result, the human vocabulary is always subject to change. Changes in social habits and politics also influence vocabulary. Often new words arise by combining existing words or by modifying existing words. Language is therefore generative: it has the capacity to create many new combinations and thus create new words and sentences.
That language is generative becomes even more apparent when looking at the higher levels within the hierarchy of language. Here too, there are limitations on which combinations are acceptable and which are not. Syntax is the set of rules that governs the structure of a phrase or sentence. An example of such a rule is that a sentence consists of a noun phrase and a verb phrase. Exactly how a sentence is organized can be shown in a tree structure.
Some rules are prescriptive: they state how something is supposed to be. Phrase-structure rules are not prescriptive but descriptive, which means that they characterize language as it is ordinarily used by fluent speakers and listeners.
People usually parse sentences while they are still listening to them. This leads to more errors than when people first hear the whole sentence and only then try to parse and understand it. This strategy can also lead to garden path sentences: sentences that initially lead the listener toward one interpretation, which then turns out to be wrong. An example of such an ambiguous sentence is: "I saw people with those big binoculars".
Another mistake people make when listening to sentences is that they usually assume that a sentence is formulated in the active voice. This means that they interpret the first noun in the sentence as the "doer" (agent) rather than the "recipient" (which would be a passive formulation of the sentence).
Parsing is also guided by background knowledge; in general, people try to parse sentences in a way that makes sense to them.
Prosody is also important in understanding sentences. Prosody is the pattern of pauses and changes in intonation. Prosody can be used to emphasize certain words in a sentence, to mark the structure of a sentence, or to signal the difference between a statement and a question.
Pragmatics is the knowledge you have about how people communicate with each other. Consider this conversation: Woman: "I am leaving you." Man: "Who is he?" Using pragmatics, you understand what is meant even though it is never stated. Another example that demonstrates the importance of pragmatics is the question: "Do you know what time it is?" Thanks to your pragmatic knowledge, you know that this question is not meant literally and that you should not answer with just yes or no.
Brain damage that leads to problems with language is called aphasia. Damage to the left frontal lobe, and in particular to Broca's area, leads to non-fluent aphasia: people with non-fluent aphasia can understand the language they hear, but have great difficulty writing or speaking. Damage to Wernicke's area leads to fluent aphasia: people can usually speak fluently, but the language they produce is meaningless. Brain areas other than Broca's and Wernicke's areas also seem to play an important role in language.
In addition to a biological basis, language learning also depends on the environment in which someone grows up: a baby in Turkey learns Turkish, a baby in Beijing learns Chinese, and a baby in the Netherlands learns Dutch. Children appear to be extremely sensitive to the patterns and regularities they hear in a language, and they derive principles from what they hear (for example, that English verbs end in -ed in the past tense). What often goes wrong is that children overgeneralize these rules, applying them to almost all words (such as "runned"), which is not correct. Children also use their existing vocabulary to understand new, more difficult words. In addition, children use 'semantic bootstrapping', which means that they use their knowledge of semantics to work out the syntax of a sentence.
Research shows that animals also use forms of language, but far less complex than human language. Even after extremely extensive training, animals' abilities to produce language do not come close to those of humans.
Studies of 'wolf children' (feral children who grew up without normal human contact) also show that people indeed have a biological basis for language development, but that the right environment (an adequate caregiver) is just as important. Without it, it is highly unlikely that children will develop language.
The linguist Benjamin Whorf proposed the "Whorfian hypothesis": when people use a different language, they also think differently. This is called linguistic relativity. Studies of this hypothesis have shown that people who use a language with a rich color vocabulary perceive colors differently and make finer distinctions between colors. The language people use also influences how they remember things. In addition, languages differ in how they describe events: the active voice is often used in English, for example, while this is less common in Japanese or Spanish. These results allow different conclusions. One is that language has a direct influence on our knowledge, which would mean that a Japanese speaker could think about things in a way that an English speaker could not. Another, more realistic explanation is that language determines what you focus on; in that case, the use of another language has only an indirect influence on your thoughts.
Another interesting aspect of language is that some people speak several languages. In the first years of their lives, bilingual children have a smaller vocabulary, but they quickly catch up. In addition, they seem to develop good executive control, allowing them to avoid confusion between languages and to switch easily between tasks. These findings can have important implications for education and policy.
People describe their thoughts in different ways. Some say their thoughts are formulated in words, others experience their thoughts as something "abstract", and still others state that their thoughts consist of images or pictures. This chapter examines these visual images.
People have written about imagery and the mind's eye for hundreds of years. Galton was the first to study this imagery systematically. He was particularly interested in how people differ with regard to these visual images. To investigate this, he asked people to describe the images in their minds and to judge their vividness. This is an example of introspection: people must 'look inside' and then report what they see or think. Many of the participants in Galton's study said the images in their minds were as clear as photographs.
What was striking in this study is that participants differed a great deal: some stated that the images they saw were very clear, while others reported much less, or nothing at all. These claims are difficult to study, because they require translating an inner experience into words, and everyone has their own interpretations. The data from Galton's study may therefore show that people differ in how they talk about their visual imagery, rather than reflecting real differences in the imagery itself. Because of this, other, more objective research methods are needed.
Because of these concerns, imagery researchers rarely ask participants to describe their images. Instead, imagery experiments require people to do something with their images, which is usually making judgments based on the images. Researchers can then examine how fast people are in making these judgments, and these measurements can be used to test hypotheses. In other words, many such studies are chronometric (time-measuring) studies.
It has been found that when describing an object, people identify its most striking features. When drawing an object, however, they pay more attention to its size and position. Visual imagery shows that people follow the same rules as when drawing something, rather than the rules used when describing something.
Image-scanning procedures, in which people move their attention across a mental image, show that people scan images at a constant rate: when the image becomes twice as large, scanning takes twice as long. The same applies when people are asked to zoom in or out on an image: response times are proportional to the amount of zooming required. This indicates that "traveling" or searching through mental images resembles traveling in the real world. These results tell us something about the nature of mental images: images are 'natural', meaning that they reflect the layout of the real world.
Other results point to a similar conclusion with regard to the transformation of images. In a mental rotation task, participants seem first to imagine one of the forms rotating into alignment with the other. Then, once the forms are oriented in the same way, participants can make their judgment. This step of imagined rotation takes some time, and the amount of time depends on how much rotation is needed. The response times are clearly influenced by how far apart the two forms were in their initial orientation, so imagined "movement" resembles actual movement.
We have seen that you need more time when you "travel" further in mental rotation and image scanning, and we have interpreted this as showing that images represent a spatial layout in a natural way. However, there is another way to explain these data. Perhaps participants deliberately adjust their timing to create a "normal" response pattern. This can be caused by the demand characteristics of the experimental situation: participants want to do whatever they can to give the experimenter "good" data.
It appears that imagery and perception overlap. Research by Segal and Fusella shows that while people are holding a visual image in mind, they are less able to detect a faint visual stimulus, and while they are imagining a sound, it is harder for them to detect a real sound at the same time. Because they rely on overlapping mechanisms, imagery and perception can also prime each other.
The structures needed for vision are the same types of structures that are important for imagery. The occipital cortex is involved both in vision and in visual imagery. The brain regions that become active during movement also become active when participants imagine movement, and the same brain areas are active when people see faces as when they imagine faces. Based on this information, researchers have even been able to tell from brain activity alone what participants were thinking about.
Researchers have also learned more about visual imagery with the help of transcranial magnetic stimulation (TMS), in which a strong magnetic field is created at a specific location in the brain, temporarily disabling that brain area. This causes problems with vision, but also problems with imagery; area V1, for example, appears to be important for both. In addition, patients with certain brain damage are unable both to perceive and to imagine fine detail.
Research with blind people yields the same results as with sighted people with regard to these imagery tasks. However, this can be explained by spatial information: it is assumed that blind people use spatial rather than visual information. This means that a distinction can be made between visual imagery and spatial imagery.
When do we use visual images and when do we use spatial images? This depends on the task. If the task is to think of colors, then one has to imagine what something looks like. It is then not sufficient to think about the shape or spatial position of an object. When the task is to navigate in thought, it is not realistic to expect to be able to imagine the entire map. Spatial processing would therefore be better suited for this task.
With other tasks it does not matter much which form of imagery someone uses. Someone's skills also influence which type of imagery is best applied: some people are good visualizers but have poor spatial insight, and the opposite also occurs.
People with vivid imagery report that they see their mental images as photographs, while others report that they see nothing at all in their imagination. About ten percent of the population report seeing nothing. This phenomenon has been known for some time, but it was not until 2015 that a term for it was coined: 'aphantasia'. The degree of vividness that people report corresponds to brain scans measuring the activity of the visual cortex.
Now that the term "aphantasia" has been mentioned, it is also useful to see what exactly this means for these people. What is the difference between people who have visual imagery and people who don't have visual imagery? Research shows that this difference can be quite large and that there is also a difference between visual and spatial imaginers. For example, visual imaginers are more often successful in the arts, while spatial imaginers are more often successful in science or architecture. Research also shows that scientists who conduct research into visual imagination often formulate their hypotheses based on their own experiences.
In addition, it appears that visual imagery also influences people's autobiographical memory: 'non-imaginers' are less able to relive their memories. Research also shows that severe amnesia (memory loss) can occur as a result of damage to brain areas that are important for visual memory. However, the authors of the book argue that more research is needed to determine the consequences of being an imaginer or non-imaginer.
Lay people use the term 'photographic memory', but this is usually not accurate: often these are people who have developed good memory and learning strategies, such as mnemonics, so the feat lies in well-applied strategies rather than in the memory itself. However, there are people with an extremely detailed picture memory of what they have seen; researchers usually call this eidetic memory. It is often individuals with autism who have this eidetic memory: they can look at a scene (picture, image) very briefly and then draw it very accurately, as if they had taken a photograph of it. An example of a person with an eidetic memory is a woman (without autism) who was able to memorize a poem in a language unknown to her and read it aloud, even years after she had seen the text. Another example is that of a boy who was shown a picture for thirty seconds. Afterwards, he was asked unexpected questions, such as "how many stripes did the cat have?" or "how many leaves did the flower have?", and he answered these questions accurately. Not much is known about this type of memory, but researchers do know that it is rare.
In this chapter 'mental images' is often referred to as 'mental pictures'. It has also been mentioned that mental images can be three-dimensional. In addition, a distinction has been made between visual and spatial images. Research shows that the images that people have in their mind are unambiguous and often not open to re-interpretation.
As mentioned in Chapter 10, words are stored as nodes in a network. The same might be expected of visual memory, but this appears to be incorrect. Instead, visual images seem to be constructed piece by piece. When an image is created, a node called the "image frame" is activated first; this frame contains the general information about the image, such as its overall shape. This also means that images that are larger or contain more parts take longer to create, and images with more detail take more time as well. Imaginers also have control over how complete and detailed their images are, which causes differences between images (vague or very detailed). Mental images are created on the basis of 'image files' in long-term memory; one suggestion is that these files contain 'instructions or a recipe' for creating the image.
Visual information is therefore not stored in memory in a directly visual form. Instead, there is a kind of 'recipe' that is used to construct images. Sometimes visual information is represented by means of a verbal label, a simpler format. Related to this, research has shown that individuals with a richer vocabulary for colors have better memory for colors. An explanation is that they remember the name of the color rather than the color itself, which is easier. Other evidence for this is that when participants see a figure of two circles connected by a line and are told that it is a pair of glasses, they later draw the figure as glasses, while participants who are told that it is a 'barbell' more often draw it as a barbell.
Images have a significant impact on memory and generally improve it: materials that evoke images are easier to remember than materials that do not. "Bizarre" scenes can also produce a stronger memory, but only if not all images are presented that way. The advantage of images is probably caused by dual coding: material that evokes images is stored both verbally and visually, leaving two memory records instead of one. Based on this hypothesis, at least two types of information are to be expected in long-term memory. Although some researchers argued that there are separate memory systems for these types, others believe a single system contains both. On that view, the different types of information should share many characteristics, and this appears to be true. For example, visual knowledge is influenced by schemas, just as verbal knowledge is: when people try to understand a story, they place it within a schematic frame, and this can often lead to errors.
One pattern that can arise as a result is 'boundary extension', in which participants remember more of an image than was actually depicted. Intraub stated that this shifting boundary arises because people create their own perception of images: they understand an image through a schema, and this schema places the image in a larger context. People therefore store their experience (perception) of an image in memory, rather than the objective image itself.
People can learn a lot from experience, but this form of learning has limits. Sometimes the information the world gives us is ambiguous or incomplete, and our memories can be selective or distorted.
Frequency judgments are often crucial for forming judgments. If you have difficulty estimating frequencies, you are likely to use attribute substitution: you use easily available information and hope that it is a good basis for judgment. One form of this is the availability heuristic, which is based on how quickly and easily relevant examples come to mind. When you want to judge a probability, you use the representativeness heuristic. As we have seen before, heuristics can lead to errors, although usually they do not.
Certain events, such as very emotional ones, are generally well stored in memory and therefore easy to remember, so they are readily available. The availability heuristic is about the ease with which you can come up with examples.
Similar points can be made about the representativeness heuristic. Most of the categories you encounter are relatively homogeneous, and the representativeness heuristic exploits this homogeneity: we expect each individual to resemble the other individuals in the category, and we judge accordingly. This often leads to good judgments, because many categories are indeed largely homogeneous. However, it can also lead to errors, as in the gambler's fallacy, where the assumption of homogeneity leads us to expect that even a small sample of a category will have the same properties as the category overall, which is not the case. The assumption can also lead to the opposite error, in which an entire category is expected to have the same characteristics as specific category members. This effect occurs even when people are explicitly warned about it. People are also persuaded by "I know someone who…" arguments, where the error lies in not taking sample size into account.
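A small simulation can make clear why the gambler's fallacy is a fallacy. The sketch below is my own illustration (assuming a fair coin): after a run of five heads, the chance of heads on the next flip is still about fifty percent, because the flips are independent:

```python
# Simulate many fair coin flips and check what happens right after a streak
# of five heads: tails is not "due", the next flip is still roughly 50/50.
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(1_000_000)]   # True = heads

next_after_streak = []
for i in range(5, len(flips)):
    if all(flips[i - 5:i]):                  # the five previous flips were all heads
        next_after_streak.append(flips[i])   # record the flip that follows the streak

print(sum(next_after_streak) / len(next_after_streak))      # ~0.5
```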
Errors made through the use of heuristics can also feed other errors, such as errors in judging covariation. Covariation is the extent to which two variables (stimuli, inputs) are related to each other; it can be strong or weak, and positive or negative. Covariation is important for a variety of reasons, including the fact that it is what you need to consider when checking a belief about cause and effect. But how accurately do people judge covariation? Sometimes people perceive relationships that do not exist, even after extensive training and experience. These errors can be caused by biases in judgment. For example, people are often guided by the confirmation bias: the tendency to notice and accept evidence that fits one's existing beliefs, and to neglect evidence that does not.
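To make the notion of covariation concrete, the short sketch below (with made-up numbers) computes the covariance and correlation of two variables: a positive value means they rise together, a negative value means they move in opposite directions:

```python
# Covariation between two hypothetical variables: study time and exam score.
import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6])
exam_score = np.array([52, 58, 61, 70, 74, 80])

print(np.cov(hours_studied, exam_score)[0, 1])       # covariance: positive
print(np.corrcoef(hours_studied, exam_score)[0, 1])  # correlation: close to +1
```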
Problems with judging covariation also arise when people neglect base-rate information: information about how frequently something (a disease, a syndrome) occurs in general. Although this information is very important, people often ignore it when making judgments. Instead, they rely on short descriptions that fit certain stereotypes rather than on the base rate. Part of this problem is caused by the use of the representativeness heuristic.
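A worked example shows why ignoring the base rate is costly. The numbers below are hypothetical, and the calculation is phrased in the frequency format that people generally find easier: a rare condition combined with a fairly accurate test still yields mostly false positives:

```python
# Base rates in frequency format: out of 10,000 people, a 1% base rate and a
# 90%-accurate test still produce far more false positives than true positives.
population = 10_000
sick = int(0.01 * population)            # base rate: 100 people actually sick
healthy = population - sick              # 9,900 healthy people

true_positives = 0.90 * sick             # 90 sick people test positive
false_positives = 0.10 * healthy         # 990 healthy people also test positive

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(round(p_sick_given_positive, 2))   # ~0.08, far below the test's "90% accuracy"
```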
So there are many sources of error, and even experts make mistakes; moreover, people make mistakes even when they actively try to avoid them. How can this be explained? One proposal is that people have two forms of thinking: a fast, simple, automatic system (System 1) and a slower, more effortful but more accurate system (System 2). This is called a dual-process model. As we have seen, people seem unable to prevent errors, so it is unlikely that System 2 can simply be engaged at will. It is more likely that System 2 is used only when certain cues are present and the circumstances are right.
System 1 thinking can sometimes be quite sophisticated on its own. Sensitivity to base rates can be demonstrated even in System 1 thinking: if the role of chance is noticeable in a problem, for example, people may realize that the "evidence" may just be a fluke. The quality of someone's thinking can also be shaped by education; thinking about chance can be improved by training in statistics. Some people are also more likely to resist the obvious answer and think about why it may be wrong, a tendency measured by the Cognitive Reflection Test (CRT). Nobody is immune to judgment errors, but people who do well on the CRT are less likely to make them.
Many errors can be prevented if the base rate is kept in mind. It is easier for many people to take base rates into account when they are described in terms of frequencies.
The use of System 2 also becomes more likely when people perceive cues that invite closer scrutiny, and when data are described in statistical terms, people are less likely to make mistakes. The use of the two systems thus seems to depend on situational factors, but also on the knowledge and skills that people possess, such as their statistical training.
There is a big difference between what people should do and what they actually do. This is reflected, for example, in the confirmation bias: the tendency to neglect disconfirming evidence. The confirmation bias is an umbrella term, because it can take many forms, but what all forms have in common is a tendency to protect one's beliefs from challenge.
Even when people cannot refute counter-arguments, they sometimes do not change their beliefs; this is called belief perseverance: holding on to one's own beliefs. Even when participants are told that certain feedback about them was made up, they still use this feedback to judge themselves. This is probably caused by the confirmation bias.
Some have suggested that thought proceeds logically. In practice, this often does not seem to be the case, as research using categorical syllogisms shows. In a syllogism, a valid or invalid conclusion is drawn from certain premises (propositions). Logic errors are caused, for example, by the belief bias: people judge a syllogism's conclusion by whether they themselves find it plausible or normal, and then assume that the conclusion follows logically from the premises. Similar conclusions can be drawn from another area of logic: reasoning about conditional statements.
We all have our own values and goals, which we use to make decisions. Every decision carries certain costs and benefits, and in general you weigh these against each other. In some cases, however, it is unclear exactly what the costs and benefits of a particular choice are. In that case, one can use the principle of subjective utility: what an outcome is worth to that particular person. Most decisions also involve another difficulty, namely uncertainty. To deal with this uncertainty, one can calculate the expected utility of each option (the probability of each outcome multiplied by its utility) and choose the option with the highest value; such choices are based on utility maximization.
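As a minimal sketch of utility maximization (with invented payoffs, purely for illustration), each option's expected utility is the sum over its outcomes of probability times utility, and the decision maker picks the option with the highest value:

```python
# Expected utility: sum of probability * utility over an option's outcomes.
options = {
    "safe bet": [(1.0, 40)],                  # (probability, utility) pairs
    "risky bet": [(0.5, 100), (0.5, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda name: expected_utility(options[name]))
print({name: expected_utility(o) for name, o in options.items()}, "->", best)
# safe bet = 40, risky bet = 50, so utility maximization picks the risky bet
```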
However, it is very easy to find examples where decision making is not guided by utility maximization. Part of this can be explained by other powerful factors that influence decisions. For example, how a decision is framed has a huge influence: if the emphasis is placed on losses, people tend to take more risk, but if the emphasis is on gains or profit, people tend to show risk aversion, meaning that they prefer to avoid risks. This framing of choices can also be used for manipulation (making people choose what you want them to choose).
We have just concluded that the way choices are framed must be taken into account. Similar effects occur when the question itself is framed differently. In addition, people are influenced by the way evidence is framed: whether something is described as having a 50% success rate or a 50% failure rate affects the choices that are made.
A related pattern is found in how decisions are presented, for example the opt-in versus opt-out format of organ donation. Relatively few Americans decide to be organ donors. The reason may be that in the United States decisions about organ donation are 'opt-in': the potential donor has to state explicitly that they want to donate. In countries that use an 'opt-out' system, many more citizens agree to be donors.
How can we explain the choices of people? One possibility is that people try to maximize utility, as mentioned earlier. Another possibility is that people are not led by utility at all, but make decisions that seem reasonable and justifiable.
Another factor that has an important influence on decision making is emotion. For example, people are regularly influenced by feelings of regret. Another way in which emotions influence decisions is through somatic markers, certain physical sensations. A crucial area of the brain that is involved in this is the orbitofrontal cortex.
When emotions influence decisions, this usually follows from a prediction about one's own future feelings. However, research on affective forecasting (predicting one's own future emotions) shows that people are surprisingly bad at it.
Some researchers argue that people may not know what makes them happy and are therefore better off if someone else makes decisions for them. There is a lot of discussion about this. In this way, the study of how people make decisions leads to research into how people should make decisions.
Researchers describe problem solving as a process of searching, just like a rat in a maze. People are looking for a way to achieve a goal. An example of this is the Hobbits and Orcs problem: "Five Orcs and five Hobbits are on the east bank of the Muddy River. They need to cross to the west bank and have located a boat. In each crossing, at least one creature must be in the boat, but no more than three creatures will fit in the boat. And, of course, if the Orcs ever outnumber the Hobbits in any location, they will eat the Hobbits! Therefore, in designing the crossing we must make certain that the Hobbits are never outnumbered, either on the east bank of the river or on the west. How can the creatures get across without any Hobbits being eaten?"
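Treating problem solving as search can be made literal in code. The sketch below is my own implementation (not from the book): it solves the Hobbits and Orcs problem with a breadth-first search over states, where a state records how many of each creature are on the east bank and where the boat is:

```python
# Breadth-first search through the state space of the Hobbits and Orcs problem.
from collections import deque
from itertools import product

TOTAL = 5        # five Hobbits and five Orcs
BOAT_MAX = 3     # the boat holds at most three creatures

def safe(h, o):
    """Hobbits may never be outnumbered on either bank (unless absent there)."""
    return (h == 0 or h >= o) and (TOTAL - h == 0 or TOTAL - h >= TOTAL - o)

def solve():
    # A state is (hobbits on east bank, orcs on east bank, boat on east bank?).
    start, goal = (TOTAL, TOTAL, True), (0, 0, False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (h, o, east), path = queue.popleft()
        if (h, o, east) == goal:
            return path
        sign = -1 if east else 1          # boat moves creatures off or onto the east bank
        for dh, do in product(range(BOAT_MAX + 1), repeat=2):
            if not 1 <= dh + do <= BOAT_MAX:
                continue                  # the boat needs 1 to 3 passengers
            nh, no = h + sign * dh, o + sign * do
            if not (0 <= nh <= TOTAL and 0 <= no <= TOTAL):
                continue                  # cannot move more creatures than are present
            state = (nh, no, not east)
            if safe(nh, no) and state not in seen:
                seen.add(state)
                queue.append((state, path + [(dh, do)]))

print(solve())  # a shortest sequence of (hobbits, orcs) boat loads
```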
A frequently applied problem-solving strategy is "hill climbing". Imagine hiking through the woods and trying to figure out which trail leads to the mountaintop. You need to climb uphill to reach the top, so whenever you come to a fork in the trail, you select the path that goes uphill.
The hill-climbing strategy works the same way: at every point, you choose the option that carries you closer to your goal. However, this strategy has limited use, because sometimes it is necessary to move away from the goal first; only from that new position can the problem be solved. Yet many people apply this heuristic, and they often have difficulty moving backward even when this is necessary to reach their goal.
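A toy hill climber (my own example) makes the limitation visible: it always takes the higher neighboring position and therefore gets stuck on the first local peak, because reaching the true summit would require stepping downhill first:

```python
# Hill climbing on a one-dimensional "trail" of heights: always step to the
# higher neighbor, and stop as soon as no neighbor is higher.
heights = [1, 3, 2, 4, 8, 7, 5]   # position 1 is a local peak; the summit (8) is at position 4

def hill_climb(heights, position=0):
    while True:
        neighbors = [p for p in (position - 1, position + 1) if 0 <= p < len(heights)]
        best = max(neighbors, key=lambda p: heights[p])
        if heights[best] <= heights[position]:
            return position        # stuck: every step from here goes downhill
        position = best

print(hill_climb(heights))  # -> 1: the climber never reaches the summit at position 4
```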
Another problem-solving strategy is means-end analysis. In this strategy, people compare their current state with their end state (their goal) and ask: what means do I have to reduce the difference between my current state and my goal state?
There are also other ways to solve problems. For example, it is often helpful to translate a problem into concrete terms, relying on a mental image or a picture.
A new problem often resembles a problem from the past. Then you can solve the new problem through the experience you have with the old problem. The old problem then serves as an 'analogy' for the new problem.
An example is 'the tumor problem': "Suppose you are a doctor confronted with a patient who has a malignant tumor in the stomach. It is impossible to operate on the patient, but if the tumor is not destroyed, the patient will die. There is a kind of ray that can destroy the tumor, but only if the rays reach the tumor with a sufficiently high intensity; unfortunately, at that intensity the rays also destroy the healthy tissue they pass through. At lower intensities the rays are harmless to healthy tissue, but they also have no effect on the tumor. What procedure could be used to destroy the tumor with the rays without destroying the healthy tissue at the same time?"
The subjects were also told a story, an analogy: "A dictator ruled a country from a strong fortress, and a rebel general, hoping to liberate the country, vowed to capture the fortress. The general knew that an attack by his entire army would capture the fortress, but he also knew that the dictator had planted mines on each of the many roads leading to the fortress. The mines were set so that small groups of soldiers could pass over them safely, since the dictator needed to move his own troops to and from the fortress. However, any large force would detonate the mines, blowing them up and also destroying the neighboring villages. The general knew, therefore, that he could not just march his army up one of the roads to the fortress. Instead, he devised a simple plan. He divided his army into small groups and dispatched each group to the head of a different road. When all were ready, he gave the signal and each group marched up a different road to the fortress, with all the groups arriving at the fortress at the same time. In this way, the general captured the fortress and overthrew the dictator."
People are more likely to solve this problem if they are encouraged to use this analogy. Also, using analogies in general seems to make problem-solving easier for most people.
Experts often seem to tackle problems by looking at the deeper, underlying structure of the problem. They also make more use of analogies. To do this, one needs to have specific expertise in a relevant domain. Experts are also good at using sub-goals to solve a problem and they are good at organizing relevant information and knowledge, so that they can remember more. For example, they focus their attention on the relationships between certain units, which gives them an idea of the broad structure. Also, experts simply know more about a domain.
Many problems are well-defined, which means that the goal and the options are clear. However, there are also ill-defined problems, for which it is not clear exactly what the goal is or how it could be achieved. Examples of ill-defined problems are "saving money for your studies" or "having a good time on vacation": it is not clear what the end state looks like or how it could be reached. In other words, these are vaguely defined goals and wishes.
To deal with ill-defined problems, it is useful to create sub-goals. This is because poorly defined problems often have different components and by solving these components one by one, an overall solution can be worked towards.
There are often several ways to solve a problem, and sometimes solving a problem requires thinking further than you initially did. An example is the 'candle problem': you have two candles, countless pins and a box of matches, and the instruction is to use these items to attach the candles to a wall.
Most people immediately try to use the pins to attach the candles to the wall: because of functional fixedness, they think of only one way to use the objects. However, there is another solution: use the matches to melt the bottom of each candle, stick the candles to the matchbox with the hot wax, and then use the pins to attach the box to the wall. To arrive at this solution, one must break free of functional fixedness and view the matchbox as a usable 'box', rather than merely as a container for the matches.
The beliefs and assumptions that someone has about a problem are called his or her problem-solving set, or Einstellung (German for 'attitude'). A problem-solving set limits the number of options considered for solving a problem. Useless or far-fetched options are excluded in advance, which usually makes it easier to reach a solution.
However, assumptions about how a problem should be solved can also be limiting, and finding the solution may require adjusting the problem-solving set. A set can blind you to important options and be an obstacle, but a set can also blind you to futile strategies and thus enable you to focus on options that are more likely to work out.
By studying extremely creative individuals such as Pablo Picasso, Charles Darwin, Johann Sebastian Bach and Marie Curie, researchers have learned a lot about creativity. These people turn out to have many things in common, and these shared characteristics can be seen as "requirements" for creativity. First, such individuals generally have a great deal of knowledge and skill in their field (to be a good chemist, you need to know a lot about chemistry). Second, one must possess certain personal characteristics: a willingness to take risks, the ability to ignore criticism, tolerance for ambiguous findings or situations, and a tendency to stand apart from the crowd. Third, highly creative individuals are usually motivated by the pleasure they take in their work rather than by external rewards (such as money), which makes them work extremely hard and produce a lot. Fourth, these individuals have generally been "in the right place at the right time". These findings therefore point to two things: the influence of the person and the influence of the environment, which offers knowledge and resources. Based on this observation, researchers have argued that a systematic "socio-cultural approach" is needed to study creativity, one that takes into account both the social and historical context and the processes within the individual.
According to Wallas, creative thinking follows four phases. In the first phase, the preparation phase, the problem solver gathers information about the problem and works on it a little, but makes little progress. In the second phase, the incubation phase, the problem solver sets the problem aside and does not consciously work on it; according to Wallas, however, the problem is still being worked on unconsciously. This ultimately leads to phase three, the illumination phase, in which the individual gains a new insight or idea. That in turn leads to the fourth and final phase, the verification phase, in which the person confirms that the solution is right and implements it.
However, research has shown that many creative breakthroughs do not follow these four phases, and the illumination phase Wallas describes appears to be more myth than reality. Discoveries often arise through 'mini-insights' that eventually add up to a discovery.
According to Wallas, the person who puts the problem aside is unconsciously still solving the problem, which is called incubation. However, the results of studies into this have been mixed. For example, some authors state that incubation does not take place if people experience a lot of pressure. Others argue that the way someone spends his or her time during the incubation period is important. For example, it would be better to let your mind wander during the incubation period. This 'mind wandering' instead of directing thoughts could lead to spreading activation, which in turn could lead to new insights and ideas.
The highly creative individuals described above all made use of analogies, hints and heuristics, and all worked very hard. Sometimes people arrive at very creative insights simply through trial and error. Research also suggests that creative individuals search through their memory in a different way. For example, some authors argue that convergent thinking is important: seeing and recognizing patterns and connections between ideas. Other authors emphasize the importance of divergent thinking: the ability to come up with new, unfamiliar thoughts or ideas.
Alfred Binet and his colleagues were among the first to try to measure intelligence. They described it as a capacity that matters for many different aspects of cognitive functioning, and designed a test based on this idea, containing many different tasks: copying a drawing, remembering a sequence of numbers, understanding a story, and so on. Performance was summarized in a composite score: the sum of the scores on the different tasks. The test score was then expressed as the ratio between someone's 'mental age' and his or her chronological age, multiplied by one hundred, which yields a quotient. This is the origin of the term IQ: intelligence quotient. Modern tests no longer compute this ratio, but are still called IQ tests.
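The classical ratio IQ is easy to state as a formula: mental age divided by chronological age, times one hundred. A tiny sketch (illustrative only; as noted, modern tests no longer use this ratio):

```python
# Ratio IQ from Binet-style testing: mental age / chronological age * 100.
def ratio_iq(mental_age, chronological_age):
    return 100 * mental_age / chronological_age

print(ratio_iq(10, 8))   # a child performing like a 10-year-old at age 8 -> 125.0
print(ratio_iq(8, 10))   # the reverse -> 80.0
```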
Today, the Wechsler Intelligence Scale for Children (WISC) and the Wechsler Adult Intelligence Scale (WAIS) are often used as a test. These tests also consist of subtests, which measure, for example, general knowledge, vocabulary and understanding.
There are also intelligence tests with other formats, such as Raven's Progressive Matrices. This test is intended purely to measure someone's skill in analyzing figures and discovering patterns.
When designing tests, the terms reliability and validity are important. Reliability refers to how consistent a test is over time: a test is reliable if you get (roughly) the same score each time you take it. This is called test-retest reliability. Intelligence tests have high test-retest reliability, although IQ scores can change, especially when the environment changes.
The validity of intelligence tests is also important: does the test really measure what it is supposed to measure (is intelligence being measured, or something else)? Predictive validity concerns whether scores on an intelligence test predict how a person will do in certain areas or situations (school, work). There appears to be a correlation of .50 between test scores and academic performance, which is a high correlation. IQ scores also predict performance at work, although differences between professions must be taken into account: the correlation between IQ score and performance is small for non-complex professions, but high for complex ones.
IQ scores are also correlated with other outcomes in life. For example, people with a higher IQ score often end up in higher-level careers, and they also seem to live longer. The latter could be explained by the fact that they have fewer car accidents and follow their doctors' recommendations more closely.
There are two kinds of answers to the question "what exactly does an IQ test measure?". One answer is that it measures general intelligence: a capacity that contributes to every mental task. The other answer is that there is no general intelligence; instead, each person has a collection of specific talents, so that someone may be very skilled mathematically but much less skilled verbally. On this view, an IQ score is simply a summary of the specific talents someone possesses.
To find out which perspective is correct, the subtests of intelligence tests are examined. Charles Spearman introduced factor analysis, which is used to determine whether the subtests share a common factor (one type of intelligence) or measure many different factors (multiple types of intelligence). Factor analysis has confirmed that there is a common factor, because the subtests all correlate with each other. Spearman called this "general intelligence" and designated it with the letter g; he stated that people with a high g are smart in all sorts of ways.
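A toy version of this logic can be simulated (hypothetical data, assuming NumPy and scikit-learn are available): subtest scores that all draw on one underlying factor g correlate positively with each other, and a one-factor model approximately recovers the loadings:

```python
# Simulated subtest scores sharing one common factor g, then a one-factor
# factor analysis in the spirit of Spearman's argument.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)                       # the common factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])    # how strongly each subtest draws on g
subtests = np.outer(g, loadings) + rng.normal(scale=0.5, size=(n, 4))

print(np.round(np.corrcoef(subtests.T), 2))  # all subtests correlate positively
fa = FactorAnalysis(n_components=1).fit(subtests)
print(np.round(fa.components_, 2))           # recovered loadings, possibly sign-flipped
```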
Yet g is not everything. People also have specialized skills, such as verbal and numerical skills. This creates a hierarchical structure of intelligence. At the top of the hierarchy is g, which contributes to all tasks. At the next level down are the specific abilities (linguistic and numerical). Then, at the next level are more specific capacities, each useful for a narrow and specialized set of tasks. This hierarchical conception leads to a prediction: if we choose tasks from two different categories, we should still find a correlation in performance, because these tasks have g in common. However, if we choose tasks from the same category, then we should find a higher correlation, because these tasks have two things in common: they both draw on g, and they both draw on the more specialized capacity needed for just that category. Data confirms both of these predictions.
There is also a distinction between fluid and crystallized intelligence. Fluid intelligence is about the extent to which someone can deal with new, unknown problems. Crystallized intelligence, on the other hand, is about the knowledge that someone has (through education, parents, books, etc.) about things that he or she has experienced before. These forms of intelligence are highly correlated. There are, however, important differences. First, crystallized intelligence increases with someone's age. Fluid intelligence, on the other hand, peaks in young adulthood and then decreases. There are also factors (alcohol, smoking, depression) that have more influence on fluid intelligence than on crystallized intelligence.
There are several answers to the question of what g is and what determines whether someone's g is high or low. One answer is simple: mental processes are fast, but they still take time, and perhaps the people we see as 'intelligent' are people whose mental processes are especially fast. Evidence for this view comes from tasks measuring inspection time: the time someone needs to decide which of two lines is longer, or which of two tones is higher. The longer the inspection time, the lower the IQ score, and thus the shorter the inspection time, the higher the IQ score: a negative correlation.
Another answer points to working memory, which involves executive control: the extent to which people can monitor and control their own thoughts. People with a higher working memory capacity score better on intelligence tests. On this view, intelligent people have better control over their thoughts and can therefore suppress impulses and arrive at better solutions.
Practical intelligence is also called 'street intelligence'. This is about intelligence in handling everyday situations.
Sometimes people are very intelligent according to an IQ test, but ignore facts and make decisions too quickly. They also have too much confidence in their decisions. As a result, Stanovich has said that a distinction must be made between rationality and intelligence. According to him, rationality is the capacity to deal critically with information that someone receives.
Other researchers point to emotional intelligence: the ability to understand one's own emotions and those of others, and to regulate one's own emotions. People who score high on emotional intelligence often create a positive working atmosphere and have more leadership potential.
The biggest challenge to general IQ tests comes from Gardner, who proposed the theory of multiple intelligences. He states that there are eight types of intelligence, of which three are measured by standard intelligence tests: linguistic intelligence, logical-mathematical intelligence, and spatial intelligence. According to Gardner, there are also musical, bodily-kinesthetic, interpersonal, intrapersonal and naturalistic intelligence. Evidence for Gardner's theory comes from savants: people who score extremely low on IQ tests, and who would therefore not be considered gifted, but who nevertheless have enormous talents in a specific domain. For example, some savants can answer a question like "What day of the week was the 12th of March in 1540?" very quickly.
Both genes and environment influence how intelligent someone is. However, these are not separate influences. Instead they interact with each other and depend on each other. An example of this is living in poverty. The longer a child lives in a poor environment, the more negatively this will influence their IQ. A positive finding is that stimulating the environment can increase the IQ score.
Further evidence of environmental influence comes from the Flynn effect: the worldwide phenomenon that IQ scores have risen by about three points every ten years. This can be explained by improvements in the environment, such as better nutrition or increased access to education, but it cannot be explained genetically. In recent years the effect appears to have slowed or even reversed.
There is a difference between the average IQ scores of white Americans and black Americans. Part of the explanation may be racial bias in intelligence tests, built into the vocabulary used and the assumptions about which facts are known. Another cause may be that black Americans do not have the same opportunities or access to the same resources: the median net worth of white families in the US is much higher than that of black families. In addition, black people are often treated differently by the people they encounter, which leads to stereotype threat. A black person knows that he or she is expected to perform poorly on an intelligence test; while taking the test, awareness of this stereotype produces stress that is likely to lower the score compared to white Americans. Stereotype threat also affects other groups: the stereotype that women are worse at math than men can itself shape women's performance.
We can improve intelligence by making sure everyone gets adequate nutrition and healthcare, or through directed training. Shifting expectations about what people are good at will also affect intelligence. This will not be easy, as stereotypes are held and reinforced by many social institutions, but considering the great benefits of higher intelligence, efforts on all these fronts are very important.
People with a higher IQ live longer. One reason is that people with a higher IQ tend to end up with more education and better jobs, and thus more wealth, which in turn brings health benefits. Another reason is that people with a higher IQ tend to avoid risky behaviors. They also seem to understand the importance of medical care and are more likely to make use of it, to remember and follow the doctor's advice, and to follow instructions about how to take medications.
Intelligence influences health, but health also influences intelligence, and it is likely that both are byproducts of the overall functioning of the body. As both health and IQ are influenced by the body's functioning, it is no surprise that IQ and longevity are related. It is important to note, however, that IQ scores do not determine someone's destiny: many people with a low IQ live a long time, and many people with a high IQ get diseases or engage in risky behavior.
Psychology emerged as a separate discipline alongside philosophy and biology at the end of the 1800s. In those years, consciousness was an important topic: Wilhelm Wundt tried to discover the elements of consciousness in his laboratory in Germany, while in America, William James tried to understand the "flow" of consciousness. Yet this emphasis on consciousness disappeared quickly, because researchers argued that it was a subjective and unscientific topic, and by the early 1900s virtually no research on consciousness was being done. In the past decades, however, consciousness has been studied again, and much progress has been made in understanding what it is. Still, questions about consciousness are very complicated, because it is a phenomenon that is invisible to everyone except the individual himself. Nevertheless, we now know quite a lot about consciousness.
However, there are still a number of ambiguities, such as the definition of consciousness. For now, this definition is used: "Consciousness is a state of awareness of sensations or ideas, which means that one can reflect on these sensations and ideas, that one can know what it feels like to experience these sensations or ideas and that one can tell others that he or she is aware of the sensations and ideas".
Activities such as thinking, remembering and categorizing feel quick and effortless. However, they can only be carried out because processes and mechanisms are working "behind the scenes". Psychologists call this activity the cognitive unconscious: the broad set of mental activities that people are not aware of, but that are needed for everyday interaction with the world.
A distinction must be made between the products that one creates (beliefs, conclusions) and the processes that led to these products: people are aware of the products, but not of the processes. For example, if you have had dinner in a restaurant and think back to this dinner later, the memory is a product, but the way you constructed this memory is a process of which you are unaware. As a result, you cannot know what is really in your memory and which parts you have filled in yourself (think: was there a menu? Even if there was not, chances are you think so, because menus are usual in restaurants). Partly because of this, memory errors often go unnoticed: the process that produces the memory is unconscious, so you cannot immediately tell whether your memory is wrong or what exactly went wrong (because that happened in the process).
Another example is that of two groups of participants. One group was shown the word CORN and the other group was shown the word CQRN. However, through unconscious processes, both groups of participants reported that they had seen the word "CORN." They really thought that this was the case, which in turn reflects the influence of unconscious processes.
There is also unconscious reasoning. An example involves an eyewitness to a crime. The police show this witness a lineup, and suppose the witness picks the second person as the perpetrator. If the witness then receives confirming feedback (Chapter 8), for example the police saying "well, this person is also our suspect", seventy-one percent of witnesses become certain of their choice, compared to forty-one percent when the police say nothing. This can be explained by unconscious reasoning: the eyewitness probably thought something like "The police say I gave the right answer, so I can put my doubts aside". However, the eyewitnesses themselves are not aware of this change in their certainty: it is an unconscious thought process. In addition, this feedback leads eyewitnesses to adjust their memory: they now genuinely think they saw the perpetrator longer, more clearly and better than they actually did.
Experiments also reveal the influence of unconscious thinking. One of these experiments was conducted by Nisbett and Schachter. In this experiment, participants were asked to undergo a series of electric shocks, each one stronger than the previous. The goal of the researchers was to investigate how far the participants would go: what was the maximum shock they would accept? Before the experiment began, some participants were given a pill and were told that this pill would reduce the pain, but had side effects such as shaking hands, the feeling of butterflies in the abdomen, irregular breathing, and so on. None of this was true. Instead, the pill was a placebo with no analgesic (pain-reducing) properties.
Nevertheless, taking the pill was enormously effective: the participants who took the pill accepted shocks four times as strong as those accepted by the participants who did not take the pill. The reason that this placebo was so effective is that the participants in the control group (without the pill) noticed that their hands were shaking and that their stomach felt uneasy. These are expressions of fear, which is normal, because the participants were anticipating shocks. The participants interpreted these somatic markers (shaking hands, a churning stomach, basically everything that goes on in the body at that moment) as an indication that they should be afraid of the shocks. Because of this fear, they accepted weaker shocks.
However, the group receiving the placebo (the pill) also noticed these somatic markers. Think back to the side effects of the pill: exactly the same sensations as those of the other group. Because they had been told that the pill could cause these side effects, they noticed the somatic markers too, but attributed them to the pill. As a result, they were less likely to conclude that they were scared, which led them to accept stronger shocks. This reasoning, however, happened unconsciously.
A distinction must therefore be made between unconscious processes and the conscious products that arise from these processes. In other words, you come to a conclusion (an idea), but the way in which you came to this conclusion is often not clear. However, it sometimes feels as though you have consciously thought matters through and consciously reached a conclusion. If someone asks you how you came to a certain choice, you are sometimes able to give reasons and arguments for it. Yet it seems that this "consciousness" is an illusion. An example is an experiment in which participants read a short story and later had to indicate what emotional impact the text had had on them, and why. One group of participants read a version that included a passage about a baby crying; another group read the same story without this passage. The first group indicated that the story had touched them and attributed this to the passage about the baby's crying. However, the other group responded to the story in exactly the same emotional way. So, the reasons that people give for their (emotional) reactions seem to be incorrect: people's introspections are not reliable.
The question is how people can make such mistakes when determining their own thoughts or feelings. The answer is that the processes that people use to reach these conclusions are unconscious. People cannot inspect and assess these processes (introspection is not possible). So when they judge their own behavior, they need a different source of information. This goes as follows: after an event, people wonder: "Why did I behave this way? I don't know exactly, but maybe I can use the knowledge I have about how people normally behave in this situation. That way I can try to understand why I behaved the way I did." This often leads to plausible reasoning. An example is: "Why am I angry with Zara? She just insulted me, and I know that people often get angry when they are insulted. That is why I think I am angry with Zara because she insulted me." However, this reconstruction can also be wrong.
Everyone would like to believe that he or she knows themselves well. However, researchers contradict this idea. They state that we do not know where our beliefs, emotions, and actions come from, and that we do not know which memories are real and which are constructed (inferences). People are sometimes very aware of their thoughts and make decisions based on an "inner dialogue" with themselves. Nevertheless, mechanisms, a "supporting system", are needed to make this inner dialogue possible, and even during this inner dialogue people are influenced by unconscious processes. Consider the problem-solving set that people have. This set is often helpful because it helps them stay focused, but it can sometimes be an obstacle to solving problems. Because this set is unconscious, it is very difficult to overcome this obstacle. It was also stated in Chapter 12 that it matters how decisions are framed. A person can be enormously focused on his or her decision and be aware of his or her options, yet still be influenced by the way in which the choice is framed, often without being aware of this.
Other evidence for unconscious processes comes from patients with Korsakoff's syndrome. These patients have no conscious memories of things they have seen or done. If they are asked about past events, they state that they have no memories, and when they are asked to perform tasks (such as walking to a store), this often goes wrong. However, these patients score normally on tests that measure implicit memory. So they seem to remember things when tested indirectly. This is also called "memory without awareness".
Something similar occurs in perception, for example in patients with blindsight. Patients with blindsight say they do not see anything, and to them it genuinely feels as if they do not: they do not respond to flashes of light, and they feel anxious while walking because they are afraid of bumping into obstacles. Yet in these patients, too, indirect tests show that they can in fact see things, even though it feels to the patient as if he or she cannot see anything. One explanation is that there are "islands" of intact tissue within the damaged areas of the brain. Another explanation for blindsight is that there are multiple neural pathways that transport information from the eye to the brain. Damage to one of these pathways is the reason that patients seem to be blind, but information can still be transported along other pathways, so that patients receive visual information without consciously seeing it. So it seems possible to perceive without being conscious of it.
Research shows that people can be influenced by visual input that they do not consciously see. This is called subliminal perception. A classic example is the very brief display of a message (such as "Eat Popcorn") in a cinema, which supposedly led to a fifty percent increase in popcorn sales. Another example is an experiment in which participants were shown three words. The first two words were shown briefly and were followed by a "mask", so that these words were not consciously seen. A third word was then shown for longer and without a mask, so that participants consciously saw it. The experiment measured the N400 wave. This wave was mentioned in Chapter 10 (What is language?), where it was noted that it occurs when participants read nonsense sentences, such as: "He drinks his coffee with milk and dog". In this study, the N400 wave was larger when the word "war" was shown as the third word after the word "happy" as the second word. So, even though people did not consciously perceive the word "happy", it did have an impact: the mismatch between "happy" and "war" produced an N400 wave, which shows that the subliminal words were detected and processed and influenced perception.
So it appears that people do many things unconsciously. The question is why. In other words: what are the functions of unconscious processing, and what can people not do unconsciously?
Unconscious judgments and inferences are quick, efficient, and often well suited to the situation in which people find themselves. However, this also points to a limitation of the unconscious. Unconscious processes can be complex and effective, but they are strongly shaped by the situation and by people's habits. This means that when people (unconsciously) draw conclusions or make a selection, these conclusions are often based on familiarity or driven by the situation. It also means that unconscious processes cannot be controlled. This seems to be correct and often leads to errors, and knowing this does not reduce the errors. Just as people cannot choose not to see an illusion, they cannot choose to avoid a memory error or an unwanted inference, because the process of making inferences is unconscious. This can be inconvenient. An example is proofreading your own thesis: it is very difficult to read the text from someone else's perspective, because you know exactly what you intended to write and therefore automatically make inferences while reading.
Habits are also important. This is seen in "action slips": situations in which you intend to do one thing, but instead do something completely different (what is habitual, or what you always do). An example is driving to the hospital. You do not go there often, so in a moment of inattention you take the route you normally take (for example, the route to your school).
So the idea is that unconscious processes are a kind of mental reflexes. These reflexes are shaped by the situation and therefore often fit the situation, but they are also quite inflexible. This inflexibility has an advantage: because you do not have to pay attention to these unconscious processes, many of them can take place simultaneously, which ensures speed and efficiency.
John Flavell noted that as children develop, they also develop metacognition: the ability to monitor and control one's own thoughts. Metacognition is needed for many different things, but especially for memory. As a result, researchers focus on "meta-memory": the knowledge, awareness, and control that people have over their own memory. Metacognition is also important for adults. An example is studying for an exam: as you study, you notice that some things are easy to remember and others are more difficult, and you then choose to devote more attention and time to the things you find more difficult. Meta-memory also shapes your beliefs, such as whether you believe that mnemonics are useful or that deep processing leads to better memories.
In the last ten years, much research has been done into consciousness and brain function. Some studies have focused on brain damage (amnesia or blindsight). Other studies have focused on the intact brain and on what happens in the brain when people become aware of a stimulus. This involves looking at the neural correlates of consciousness.
With the help of PET, fMRI, or EEG, researchers have looked at how brain activity changes when someone shifts their attention, and at what changes take place in brain activity when people become aware of a stimulus that has been visible to them for some time. These studies have shown that many different areas of the brain are crucial for consciousness. So, there is not one group of neurons or one area in the brain that can be labeled "the area of consciousness".
However, there are two categories of brain regions that fit two aspects of consciousness. First, there are areas of the brain that are important for someone's level of alertness (ranging from sleepy or drowsy to extremely alert). This aspect of consciousness is impaired when someone has damage to the thalamus or to the reticular activating system, a system that regulates the overall level of arousal in the forebrain and is important for the distinction between being asleep and being awake.
Second, there are areas of the brain that are important for the content of consciousness. Sometimes you think about your immediate environment, sometimes you think about things that happened in the past and sometimes you think about things that still have to happen. This content depends on different areas of the brain. For example, brain areas in the visual system are especially active when you observe things in your immediate environment. Areas in the forebrain are especially active when you think of things that are not in your immediate environment (past, future).
There are various theories about what makes consciousness possible in the brain. The neuronal workspace hypothesis states that different areas of the brain are highly specialized in their functions. For example, there are different brain areas for hearing and for seeing, and there are further specializations for different elements within these modalities. To arrive at perception and awareness, the different areas of the brain must communicate with each other, so that all these different elements (for example, what is heard and what is seen) are combined. In other words, all the different elements must be integrated into one whole. For example, a stimulus that moves right in front of your eyes activates a certain brain area, while a red stimulus activates another area. Attention is crucial here. If you do not pay attention to these stimuli, the result is two independent neural responses. However, if you pay attention to a single stimulus that is both moving and red, the neurons fire simultaneously (in synchrony). When this synchronization takes place, the brain links these different areas together, which causes you to perceive the stimulus as a whole.
Before this synchronization can take place, neurons in one area must be able to communicate with neurons in another area. This communication is made possible by "workspace neurons", which literally connect one brain area to another. However, this process is selective: not everything is broadcast everywhere. Instead, there is competition between the different processes, and only the "winner" (the most active process) is communicated to the other brain regions; other information is not. Attention determines the winner: if you pay attention to a stimulus, activity in the prefrontal cortex boosts the corresponding activity in other neural systems, and this determines which stimulus will be the winner. The transfer of information is therefore limited, and it is controllable, because someone can choose what to pay attention to. So the hypothesis is that there is integrated activity, made possible by the workspace neurons, which "glue" all the pieces of information together.
Every idea one has is represented in the brain by a specific pattern of activity, in which each brain region represents an element of the idea. However, people only become aware of the idea when the various elements are connected to each other. This fits a few facts about consciousness. Conscious experience feels unitary and coherent: you are not aware of "orange, red, and round", but of "an apple". This is due to the work of the workspace neurons. Conscious experience is selective: people are aware of only a limited number of things at a time. And conscious experience is controllable: it is possible to choose what you want to pay attention to.
The workspace serves as a basis for executive functioning. In addition, the workspace makes it possible for people to compare what goes on in their mind and to detect conflict. There even seems to be a specific place for detecting conflict: the anterior cingulate cortex (ACC). This structure is connected to the frontal cortex and to structures that affect emotion, motivation, and feelings of reward (such as the limbic system, which contains the amygdala, the nucleus accumbens, and the hypothalamus).
The difference between being awake and being asleep is also influenced by the neuronal workspace. It is not that people experience nothing during sleep because the brain is inactive; the sleeping brain is still active. Instead, communication between the different brain regions largely stops, so that these regions are no longer in contact with each other. The idea is that this communication is required for consciousness, and that people are therefore unconscious during their sleep.
In short, the neuronal workspace allows comparison between different processing systems, and these comparisons enable the executive to monitor mental processes and make sure that there are no conflicts. Based on these comparisons, the processes that fit one's goals are selected. In addition, the neuronal workspace ensures that neural activity is sustained.
Various researchers believe that we must distinguish between different types of conscious experience. This chapter has mainly addressed access consciousness, which is defined as someone's sensitivity to certain types of information (and thus the person's access to that information). Less has been said about the subjective experience of consciousness (what it actually feels like to have certain experiences), which is called phenomenal consciousness. Philosophers use the term qualia to refer to these subjective experiences. However, much remains unclear about this topic.
Qualia are therefore difficult to study, because no one but the individual can experience these subjective experiences. An example of such an experience is processing fluency. Some mental processes run quickly, while others take more time and effort. People notice this degree of fluency: they know when they have come to a conclusion easily and when they have not. This fluency strongly influences how people interpret things. When they notice that they have processed something quickly (for example, a person's name), they often give this a special meaning, such as concluding that the person must be famous or that they know him or her. However, this interpretation is not always correct.
There is still a lot that researchers do not know about consciousness. For example, it is unclear what exactly changes in consciousness during meditation or when taking drugs. In addition, the question is how something weighing about 1.4 kilograms (the brain) can create conscious experiences. The brain is a physical entity, but thoughts are not, so the question is how a physical entity can give rise to non-physical thoughts and feelings. It is also unclear how thoughts and feelings influence the brain and the body, and vice versa. This is the mind-body problem: the mind seems very different from the physical body, yet the two influence each other very strongly.