Handbook of Qualitative Research Methods - Mortelmans

This summary of Handbook of Qualitative Research Methods - Mortelmans is written in 2015 and donated to WorldSupporter


Chapter 1. The Basics of Qualitative Research

Qualitative research takes as one of its central motifs the way in which the people being studied understand and interpret their social reality.

Key characteristics of qualitative research:

  • Aims that are directed at providing an in-depth and interpreted understanding of the social world of research participants.
  • Small samples that are purposively selected based on a salient criterion.
  • Data collection methods which involve close contact between the researcher and the research participants.
  • Detailed and rich data.
  • Analysis which is open to emergent concepts and ideas.
  • Outputs which tend to focus on the interpretation of social meaning.

Historical development of qualitative research:

1.  Development of empiricism and positivism

Empiricism = all knowledge about the world originates in our experiences and is derived through the senses.

Positivism = social world can be studied in terms of invariant laws. Characteristics of positivism are:

  • Methods of the natural sciences are appropriate for the study of social phenomena.
  • Only those phenomena which are observable can be counted as knowledge.
  • Knowledge is developed inductively through the accumulation of verified facts.
  • Hypotheses are derived deductively from scientific theories to be tested empirically.
  • Observations are the final arbiter in theoretical disputes.
  • Facts and values are distinct, making it possible to conduct objective research.

2.  Development of interpretivism

This concept states that knowledge about the world can be acquired in ways other than direct observation. Arguments in favor of this view are:

  • Perception relates not only to the senses but to human interpretations of what our senses tell us.
  • Our knowledge of the world is based on ‘understanding’ which arises from thinking about what happens to us.
  • Knowing and knowledge transcend basic empirical research.
  • Distinctions exist between ‘scientific reason’ (causal determinism) and ‘practical reason’ (moral freedom).

Weber argued that the key difference between the natural and the social sciences is that the purpose of natural science is to produce law-like propositions, whereas the aim of the social sciences is to understand subjectively meaningful experiences. Interpretivism stresses the importance of both interpretation and observation in understanding the social world.

3.  Development of qualitative research methods and challenges to the scientific method

Because positivism became dominant in quantitative research, qualitative research was criticized as ‘unscientific’. In response, some qualitative researchers tried to formalize their methods; this period is called the modernist phase. However, in the 1970s positivism was questioned concerning:

  • Whether it is possible to control variables in research involving human subjects.
  • Whether the elimination of contextual variables in controlled experimental conditions is an appropriate way to study human behaviour.
  • Whether it is appropriate to disregard the meaning and purpose of behaviour in controlled experimental studies.
  • Whether overarching theories of the world have any relevance to human lives.
  • Whether emphasis on hypothesis testing neglects the importance of discovery through alternative understandings.

These challenges encouraged the use of qualitative research. In the last decade of the twentieth century qualitative methods were more widely accepted within British psychological research practices. Since then, there has been an explosion of interest in qualitative research.

Ontology = a philosophy that tries to categorize all existing things. The main ontological debate concerns whether there is a social reality that exists independently of human conceptions and interpretations, and how it should be construed. Three distinct positions:

  1. Realism: there is a distinction between the way the world is and the meaning and interpretation of that world held by individuals.
  2. Materialism: there is a real world but only material features of that world hold reality. Thus, values, beliefs or experiences do not shape the material world.
  3. Idealism: reality is only knowable through the human mind and through socially constructed meanings.

Epistemology = ways of knowing and learning about the social world. Three main issues are:

  1. The relationship between the researcher and the people being researched. In the natural science view, phenomena are independent of and unaffected by the behaviour of the researcher; thus the researcher can be objective and the investigation is value free. In the social science view, the relationship between the researcher and the people being researched is interactive; thus the researcher cannot be objective and findings are value-mediated. ‘Empathic neutrality’ is a position that recognizes that research cannot be value free but which advocates that researchers should make their assumptions transparent.
  2. Issues around ‘truth’. In the natural science view, there is a match between observations or readings of the natural world and an independent reality. In the social science view, if several reports confirm a statement then it can be considered true as a representation of a socially constructed reality. In the pragmatic view, an interpretation is true if it leads to actions that produce the desired or predicted results.
  3. The way in which knowledge is acquired. In the inductive view, knowledge is acquired by looking for patterns and associations derived from observations of the world (using evidence to reach a conclusion). In the deductive view, propositions or hypotheses are reached theoretically through a logically derived process and then tested empirically (using evidence in support of a conclusion).

For the differences between ontological and epistemological stances see Box 1.3 (p. 18).

Pragmatism = qualitative and quantitative research should not be seen as competing and contradictory, but should instead be viewed as complementary strategies appropriate to different types of research questions or issues.

‘Multi-method, trans-disciplinary’ research = employs a range of different methods and draws on expertise from a range of different disciplines, as appropriate to the research questions.

Chapter 2. Social Research And Qualitative Methods

Theoretical research is concerned with the aim of testing, generating or enhancing thinking within a particular discipline. Applied research is concerned with using the knowledge acquired through research to contribute directly to the understanding or resolution of a contemporary issue.

At first, theoretical research was more popular. However, researchers began to see the benefits of applied research in the latter part of the twentieth century, when they sought a fuller understanding of the nature of the problems that social policies had to address.

Functions of qualitative research:

  • Contextual: describing the nature of what exists.
  • Explanatory: examining the reasons for what exists.
  • Evaluative: appraising the effectiveness of what exists.
  • Generative: aiding the development of theories, strategies or actions.

CONTEXTUAL research helps to:

  1. Map the range of elements, dimensions, classes or positions within a social phenomenon (e.g. what perceptions of politics do young people hold?).
  2. Display the nature or features of a phenomenon (e.g. how does racism manifest itself?).
  3. Describe the meaning that people attach to an experience, event, circumstance or other phenomenon (e.g. what does it mean to be a grandparent?).
  4. Identify and define typologies (e.g. what are the different models for organizing ‘self-help’ groups?).

EXPLANATORY research helps to identify:

  1. The factors that underlie a particular attitude, belief or perception (e.g. what shapes people’s beliefs about poverty?).
  2. The motivations that lead to decisions, actions or non-actions (e.g. why do people gamble?).
  3. The origins of events, experiences or occurrences (e.g. why does homelessness occur?).
  4. The contexts in which phenomena occur (e.g. in what environments does traffic blight occur?).

EVALUATIVE research helps to:

  1. Identify the factors that contribute to the (un)successful delivery of a program, service or intervention.
  2. Identify the effects of taking part in a program and how they occur (e.g. how do environmental conservation schemes change behaviours?).
  3. Examine the nature of the requirements of different groups within a target population (e.g. how do different groups of older people respond to home security initiatives?).
  4. Explore a range of organizational aspects surrounding the delivery of a program, service or intervention.
  5. Explore the contexts in which interventions are received and their impact on effectiveness (e.g. how does the nature of relationship breakdown affect receipt of family mediation services?).

Patton’s list of evaluative functions for qualitative research:

  • Goal-free: looking at actual rather than intended effects.
  • Responsive: to diverse stakeholder perspectives.
  • Connoisseurship: using the evaluators’ perceptions to draw conclusions.
  • Utilization-focused: oriented to the intended use and users of the evaluation.

Formative evaluation = designed to provide information that will help to change or improve a program or policy, either as it is being introduced or where there are existing problems with its implementation.

Summative evaluation = the impact of an intervention or policy in terms of effectiveness and the different outcomes that have resulted.

GENERATIVE research helps to:

  1. Develop new conceptions of social phenomena (e.g. the nature of ‘social exclusion’).
  2. Develop hypotheses about the nature of the social world and how it operates.
  3. Generate new solutions to persistent social problems.
  4. Identify strategies to overcome newly defined phenomena or problems (e.g. ways of restoring declining rural economies).
  5. Determine actions that are needed to make programs, policies or services more effective (e.g. changes that are needed to help reduce hospital waiting lists).

The factors that determine whether qualitative methods should be the sole method used are centrally related to the objectives of the research. Certain conditions or factors are:

  • Ill-defined/not well understood: qualitative research is useful when the subject needs to be more clearly understood before it can be measured.
  • Deeply rooted: the phenomena that need to be studied are deeply set within the participants’ personal knowledge. These might be related to the origins of long-standing values or beliefs.
  • Complex: the subject is complex and there is a need to understand the phenomena which are conceptually difficult to relate. The complexity may lie in the nature of the subject itself, or in the level of unpacking that is needed to formulate a position, view or belief.
  • Specialist: the collection of information from individuals or groups that have a highly specialized role in society.
  • Delicate or intangible: certain subjects in social research are difficult to capture because they are so fragile in their manifestation.
  • Sensitive: it is hard to predict the subject matters that might prove distressing or emotive to individual participants.

There exist two approaches for collecting qualitative data: 1) naturally occurring data, and 2) data generated through the interventions of the researcher.

Naturally occurring data is a research method developed to allow investigation of phenomena in their natural settings. They provide data which is an enactment of social behaviour in its own social setting, rather than a recounting of it generated specifically for the research study.

Approaches:

  1. Participant observation: researcher joins the study population to record actions, interactions or events that occur. The researcher experiences the phenomena for themselves.
  2. Observation: record and analyse behaviour and interactions as they occur, although not as a member of the study population.
  3. Documentary analysis: study of existing documents, either to understand their substantive content or to illuminate deeper meanings which may be revealed by their style and coverage. This is particularly useful when the history of events is relevant.
  4. Discourse analysis: the construction of texts and verbal accounts to explore systems of social meaning. It examines ways in which versions of the world, of society, events and inner psychological worlds are produced in discourse.
  5. Conversation analysis: a detailed examination of talk in interaction to determine how conversation is constructed and enacted. The aim is to describe people’s methods for producing orderly social interaction.

Generated data involves reconstruction and requires re-processing and re-telling of attitudes, beliefs, behaviour and other phenomena.

Ways in which data can be generated:

  1. Biographical methods: use life stories, narratives and recounted biographies to understand the phenomena under study. It allows participants a high degree of freedom to shape and order the reconstructions in their own way.
  2. Individual interviews: a detailed investigation of people’s personal perspectives, for in-depth understanding of the personal context within the research phenomena are located, and for very detailed subject coverage.
  3. Paired (or triad) interviews: in-depth interviews carried out with two people at the same time. This allows participants to reflect on and draw comparisons with what they hear from others.
  4. Focus groups or group discussions: several respondents brought together to discuss the research topic as a group. It provides an opportunity for reflection and refinement, and for direct and explicit discussion of differences as they emerge in the group.

Mixing qualitative approaches can be useful, for example when using interviews in combination with observation methods. When mixing qualitative and quantitative methods it must be emphasized that the purpose of bringing the different approaches together is to yield different types of intelligence about the study subject rather than simply to fuse the outputs. In social policy research combining the methods can be very useful since many of the questions that need to be addressed require measurement of some kind but also greater understanding of the nature of an issue.

Three ways of using qualitative and quantitative research together are shown below.

Preceding statistical research

Using a preliminary qualitative research before a quantitative research can be of value when the subject matter is complex and where some identification of the underlying constructs is needed before relevant questions can be structured. Besides that, it can be used to generate hypotheses for statistical testing. Due to the in-depth investigation, qualitative research can discover possible connections between phenomena. Lastly, preliminary qualitative research can define the dimensions of sample segmentation.

Alongside statistical research

Qualitative and quantitative methods can be used to study the same or different phenomena. There are also occasions where qualitative and quantitative research are brought together in the study of the same phenomenon but then divide in terms of what is explored. Furthermore, qualitative and quantitative research combined can be used to study different phenomena in the same field of research. The need to use both methods is particularly evident in evaluative studies.

As a follow-up to statistical research

Doing qualitative research after quantitative research is useful when the quantitative findings need further explanation, detail or in-depth information. When using both approaches, it is important to recognize that each offers a different way of knowing about the world. Therefore, it should not be expected that the two forms of evidence will replicate each other.

Triangulation involves the use of different methods and sources to check the integrity of inferences drawn from the data. In this case there is a debate about the extent to which triangulation offers qualitative researchers a means of verifying their findings.

Chapter 3. Problems Concerning Design

A good qualitative research study design is one which has a clearly defined purpose, in which there is coherence between the research questions and the methods or approaches proposed, and which generates data which is valid and reliable. Design in qualitative research is a continuing process which calls for constant review of decisions and approaches.

Five key aspects of research design are:

  • The development of research questions
  • Building design around research settings and populations
  • Selecting the time frame for data collection
  • Choice of data collection
  • Negotiation of research relationships

The development of research questions

Requirements of research questions, they need to be:

  • Clear, intelligible and unambiguous
  • Focused, but not too narrow
  • Capable of being researched through data collection
  • Relevant and useful
  • Informed by and connected to existing research or theory, but with the potential to make an original contribution or to fill a gap.
  • Feasible concerning the resources
  • Of interest to the researcher

Building design around research settings and populations

Selecting research settings and populations involves identifying those which are able to provide the most relevant, comprehensive and rich information.

Two main issues related to the way in which design needs to be built around the research settings and populations selected are 1) comparisons in qualitative research, and 2) the role of case studies.

  • Building comparison: comparison should inform the selection of research locales and populations, aid theory building, and enhance the solidity of research findings. An often discussed issue is control, a feature in which groups are constructed to differ in respect of one key variable so that the effect of that variable can be understood. Comparison in qualitative research can contribute by:

§     Identifying the absence or presence of particular phenomena in the accounts of different groups.

§     Exploring how the manifestations of phenomena vary between groups.

§     Exploring how the reasons for phenomena vary between groups.

§     Exploring the interaction between phenomena in different settings.

§     Exploring more broadly differences in the contexts in which phenomena arise or the research issue is experienced.

Control groups are more strongly associated with quantitative designs and with measurement of the effect of the intervention.

  • Building case studies and structural linkage: the term case study is sometimes used as a synonym for qualitative research. Features associated with case studies are:

§     The fact that only one case is selected, although it is also accepted that several may be.

§     The fact that the study is detailed and intensive.

§     The fact that the phenomenon is studied in context.

§     The use of multiple data collection methods.

Case studies raise a number of questions at the design stage. Decisions should be made about which cases to select, the composition of the case, and the amount of consistency between cases.

Selecting the time frame for data collection

Determining the appropriate timing of research in relation to an event involves considering what perspectives on that event are implied by the study objectives. In investigations of new initiatives, services or policies, the appropriate timing will depend on the focus of the study.

  • Single research episodes: many studies involve only one episode of fieldwork, even when what is being studied is subject to change. This is often sufficient, since qualitative research involves probing and clarification. However, if the process of change is an important aspect of what is being researched, involves complex processes or spans a substantial period, a single episode of data collection might not be enough.
  • Longitudinal qualitative research studies: this involves more than one episode of data collection. There are two broad forms:
  1. Panel studies: people are interviewed more than once. Used to explore micro-level change, where the focus of change is the individual.
  2. Repeat cross-sectional studies: subsequent samples of new participants are interviewed.  Used to explore macro-level change, where the focus of change is on the wider context within which they are situated.

Questions concerning the design stage in longitudinal studies:

  • The number of research episodes and their timing
  • Initial sample selection
  • Field work methods
  • Selection for follow-up interviews: the follow-up stage can be designed to allow intensive study of particular groups or issues, returning to a purposively selected sub-sample of those interviewed at the first stage. Alternatively, the follow-up sample can be shaped around events or experiences that have occurred since the first stage fieldwork.
  • Analysis: the analysis of the first stage fieldwork needs to be organized in a way which will make it possible to integrate later stages of data, to make comparisons and to identify change.

Choice of data collection

Three considerations should be made:

  1. Choosing between naturally occurring and generated data

The researcher will need to consider the following:

  • The importance of context: generated data allows participants to describe the personal context in which the research issue is located and how they relate to it. With naturally occurring data, context matters even more, since observing the research phenomenon in its natural setting is critical for understanding.
  • Whether a recounting of the research phenomenon is likely to be sufficiently detailed, accurate or complete: if people are unlikely to give a full account because they are unwilling to talk frankly about something, or because their accounts are bound up with social rules and expectations, naturally occurring data may be useful.
  • Whose interpretation is paramount: naturally occurring data relies on the researcher’s interpretation of what is observed. Generated data collection gives participants a direct and explicit opportunity to convey their own meanings and interpretations through the explanations they provide.
  2. Choosing between in-depth interviews and focus groups

The selection depends on three key factors:

  • The type of data sought: a key feature of in-depth interviews is their depth of focus on the individual. Focus groups are used where the interaction between participants (the group process) is important.
  • The subject area: very complex systems, processes or experiences are best addressed in in-depth interviews as well as understanding motivations and decisions. Intangible or conceptual topics are well suited to focus groups as well as studies focusing on attitudes and views.
  • The nature of the study group: in-depth interviews allow the participants to choose the location. Furthermore, in-depth interviews are more appropriate if the participants have nothing in common. Although focus groups benefit from diversity, it is helpful to have some commonality between the participants related to the research topic. Small groups might provide a good balance between the group and the individual context.

Combining in-depth interviews with focus groups can be very useful.

Focus groups might be used as an initial stage to begin to explore relevant issues, which can then be taken forward through in-depth interviews. Furthermore, focus groups could be used after in-depth interviews to discuss the issues at a more strategic level.

  3. Secondary data analysis

This can be a valuable resource, providing an opportunity to bring a new perspective to existing data, to use elements of the data that have not been fully analysed, or to form a base for comparison with newly collected data. However, the adequacy of the original data for the new research needs to be considered carefully.

Points of attention:

  1. It may be that certain subject areas were not central to the original objectives, and that this is reflected in the data available.
  2. The sample may not be ‘comprehensive’ for the purposes of the secondary analysis and may have important constituencies missing.
  3. The original data needs to be of high quality in terms of the conduct of the original data collection.

Negotiation of research relationships

The way in which access (approval of research) is negotiated on the ground can be critical to the success of a study. Engaging in research settings can be aided in a number of ways:

  1. Being sensitive to the hierarchy.
  2. Providing clear information about the objectives of the study and why that setting has been chosen.
  3. Being open and consistent about what is required.
  4. Being clear about how the findings will be used.
  5. Anticipating and being responsive to concerns and sensitivities raised.
  6. Having a single point of contact within the organization.
  7. Being flexible about shaping the study approach in response to the precise setting and accepting advice.
  8. Considering how findings can be shared and at what stage.

Making studies accessible to the researched groups involved requires consideration of the appropriate language to use in approaching them, anticipation of the possible barriers to participation, and provision to help to overcome them. The particular questioning techniques that will make the study accessible to the participant should also be considered.

It is also useful, at the design stage, to think about some reward in return for the assistance, time and thought given by the participants. However, the requirement to remain objective and neutral stays important.

Researchers try to present themselves objectively and neutrally. However, this is not the only thing that is relevant: a broader cross-perception between participant and researcher also takes place. A solution might be to match researcher and participant on key socio-demographic criteria, such as gender, social class or ethnicity. A limitation of this approach is that insufficient explanation or clarification may be sought by the researcher because of assumptions created by their shared experience. One must weigh up the relative risk of cultural collusion versus unhelpful power dynamics.

In doing research one should consider the amount of information provided to the participants. This must be balanced, since giving too much information may deter potential participants, while giving too little leads to inadequately prepared participants. Furthermore, the proposed conditions for anonymity and confidentiality should be given particular thought, and made very clear to participants. Confidentiality means avoiding the attribution of comments to identified participants.

It is important to give consideration to ways in which taking part may be harmful to sample members, and to take action to avert such harm. This may be relevant in studies on sensitive topics. A particularly difficult ethical dilemma arises when information is disclosed which indicates that the participant is at risk of harm. An appropriate response would be for the researcher to encourage the participant after the interview to report it themselves, or to seek help in some other way.

Researchers who conduct fieldwork also place themselves at risk. In public places, this will involve decisions about appropriate modes of transport. In private, this involves the interview content and dynamic which might spark anger or raise risk.

Most research benefits from teamwork as working in teams provides more opportunities for reflection and review. The time involved in collaboration needs to be considered realistically and built into the research budget. A final consideration is the importance of managing the timetable and budget once work begins.

Chapter 4.  Sample Issues

When sampling strategies for social research are described, a key distinction is made between probability and non-probability samples. In a probability sample elements in the population are chosen at random and have a known probability of selection. Often the probability of units being selected is equal. However, unequal probability samples exist as well and then the data has to be re-weighted during the analysis. In a non-probability sample units are deliberately selected to reflect particular features of the sampled population. The sample is not intended to be statistically representative but instead the characteristics of the population are used as the basis of selection.
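The re-weighting idea for unequal-probability samples can be sketched in a few lines of Python. This is an illustration only, not from the text: the population values and selection probabilities are invented, and the Poisson-style draw (each unit included independently with its known probability) is one of several ways such a sample can be taken.

```python
import random

# Hypothetical population: each unit has a known but unequal selection probability.
population = [
    {"value": 10, "p": 0.5},
    {"value": 20, "p": 0.1},
    {"value": 30, "p": 0.1},
    {"value": 40, "p": 0.5},
]

def draw_sample(units, seed=0):
    """Include each unit independently with its known selection probability."""
    rng = random.Random(seed)
    return [u for u in units if rng.random() < u["p"]]

def reweighted_mean(sample):
    """Re-weight during analysis: each sampled unit counts 1 / p times."""
    return sum(u["value"] / u["p"] for u in sample) / sum(1 / u["p"] for u in sample)

sample = draw_sample(population)
print(reweighted_mean(sample))
```

Units with a low selection probability that do make it into the sample are weighted up, so the analysis compensates for their rarity in the sample rather than in the population.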

Main sampling approaches:

  1. Criterion based or purposive sampling: the sample units are chosen because they have particular features or characteristics which will enable detailed exploration and understanding of the central themes the researcher wishes to study (e.g. socio-demographic characteristics). This approach has two aims 1) to ensure that all the key constituencies of relevance to the subject matter are covered, and 2) to ensure that some diversity is included so that the impact of the characteristic concerned can be explored. Different types of purposive sampling are:
  • Homogeneous samples: to give a detailed picture of a particular phenomenon.
  • Heterogeneous samples: to include phenomena which vary widely from each other in order to identify the central themes that cut across the variety of cases.
  • Extreme case/deviant sampling: cases are chosen because they are unusual or special and therefore potentially enlightening.
  • Intensity sampling: extreme cases of specific phenomena of interest.
  • Typical case sampling: cases which characterize positions that are normal or average are selected to provide detailed profiling.
  • Stratified purposive sampling: to select groups that display variation on particular phenomena but each of which is fairly homogeneous, so that subgroups can be compared.
  • Critical case sampling: cases are chosen on the basis that they demonstrate a phenomenon, dramatic position or are central in the delivery of a process.
  2. Theoretical sampling: a type of purposive sampling in which the researcher samples incidents, people or units on the basis of their potential contribution to the development and testing of theoretical constructs. It is an iterative process, which means that the researcher picks an initial sample, analyses the data, and then selects a further sample in order to refine the emerging categories and theories. This goes on until the ‘data saturation’ stage has been reached (no new data is obtained through expanding the sample).
  3. Opportunistic sampling and convenience sampling: opportunistic sampling involves the researcher taking advantage of unforeseen opportunities as they arise during the course of fieldwork. Convenience sampling lacks any clear sampling strategy, the researcher chooses the sample according to ease of access.
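The iterative sample-analyse-extend loop of theoretical sampling can be sketched as follows. Everything here is a hypothetical stand-in: in a real study `analyse` would be the researcher's coding of interview data and `pick_next` the purposive choice of new units, not mechanical functions.

```python
def theoretical_sampling(pool, analyse, pick_next, max_rounds=10):
    """Sample, analyse, extend the sample until no new categories emerge (saturation)."""
    sample, categories = [], set()
    for _ in range(max_rounds):
        batch = pick_next(pool, sample)
        if not batch:
            break                      # pool exhausted
        sample.extend(batch)
        new_categories = analyse(batch)
        if new_categories <= categories:
            break                      # data saturation: nothing new emerged
        categories |= new_categories
    return sample, categories

# Toy illustration with fabricated units and codes.
pool = [
    {"id": 1, "codes": {"a"}},
    {"id": 2, "codes": {"a", "b"}},
    {"id": 3, "codes": {"b"}},
    {"id": 4, "codes": {"c"}},
]
analyse = lambda batch: set().union(*(u["codes"] for u in batch))
pick_next = lambda pool, sample: [u for u in pool if u not in sample][:2]

sample, categories = theoretical_sampling(pool, analyse, pick_next)
print(len(sample), sorted(categories))
```

The loop stops either when a new batch of units yields no categories beyond those already identified, or when the pool runs out, mirroring the saturation criterion described above.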

Although the two main approaches used in qualitative research, purposive and theoretical sampling, differ, they also have much in common. The characteristics are:

  1. The use of prescribed selection criteria: samples need to be selected to ensure the inclusion of relevant processes that can inform that understanding. Units are chosen because they typify a circumstance or hold a characteristic that is expected or known to have salience to the subject matter under study. Furthermore, the sample must be as diverse as possible within the boundaries of the defined population. This optimizes the chances of identifying the full range of factors associated with a phenomenon and allows some investigation of interdependency between variables.
  2. Sample size: qualitative samples are usually small in size because, if data is properly analysed, very little new evidence is obtained from each additional fieldwork unit. Besides, statements about incidence or prevalence are not the concern of qualitative research, and the type of information that qualitative studies yield is rich in detail. Lastly, qualitative research is highly intensive in terms of the research resources it requires. Issues that need to be considered in determining the sample size:
  • The heterogeneity of the population: if the population is very diverse, the sample size must increase.
  • The number of selection criteria: the more criteria, the larger the sample.
  • The extent to which nesting of criteria is needed: if criteria need to be interlocked (nested), the sample size will increase.
  • Groups of special interest that require intensive study: if groups require intensive study, the sample size will increase.
  • Multiple samples within one study: for reasons of comparison or control, this increases the sample size.
  • Type of data collection method: moving from single interviews to paired interviews to group discussions, the sample size will increase.
  • The budget and resources available
  3. Additional and supplementary samples: this may occur when it is found that important constituencies are not sufficiently well represented to derive sound qualitative evidence.

Theoretical sampling is particularly appropriate for exploratory studies in unfamiliar areas, since it may be difficult to identify in advance the groups and characteristics that need to be included in the sample.

The first stage in sample design involves identifying exactly what it is that is to be sampled. In social research this usually involves people. Therefore, it is necessary to define the parent population (the population from which the sample is to be drawn).

Three key questions in defining the population for study:

  • Which group or subpopulation is of central interest to the subject matter of the study? The population that will provide the richest and most relevant information.
  • Are there subsets of the central population that should be excluded?
  • Are there additional groups or subpopulations that should be included because their views, experiences would bring contrasting or complementary insights to the research?

Defining a population involves two stages, specifying the characteristics of the collective units required (1) and then specifying those of the individuals required within them (2).

Key criteria by which any potential sample frame will need to be judged:

  1. Does the sample frame provide the details required to inform selection?
  2. Does the sample frame provide a comprehensive and inclusive basis from which the research sample can be selected?
  3. Will the sample frame provide a sufficient number of potential participants to allow for high quality selection, particularly given that not all will be eligible or willing to participate in the study? The sample frame will need to generate a group of eligible potential participants which is around three to four times the size of the required study sample to allow scope for selection.
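
The three-to-four-times rule above is simple arithmetic. As an illustrative sketch (the multiplier and the study sample of 30 are assumptions for the example, not figures from the text):

```python
def required_frame_size(study_sample: int, multiplier: int = 3) -> int:
    """Estimate how many eligible potential participants a sample frame
    should yield for a given study sample size. The guideline above
    suggests a frame three to four times the required study sample."""
    return study_sample * multiplier

# A hypothetical study needing 30 participants would want a frame
# of roughly 90 to 120 eligible people.
low = required_frame_size(30, 3)
high = required_frame_size(30, 4)
```
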

Two key types of sample frames:

  • Existing sources: most convenient type of sample frame. Key types are:

§     Administrative records: principal shortcoming is that, because they are not generally designed for research purposes, they are unlikely to contain all the information that qualitative research sampling requires.

§     Published lists: it is important to investigate the criteria for inclusion, and to consider whether the list is sufficiently comprehensive.

§     Survey samples: very useful if it is available since it will meet all the requirements for comprehensiveness, diversity and lack of bias. Offers the opportunity to know how certain variables are distributed within the study population before sample composition decisions are finalized. However, one should avoid over-complex categorizations or making assumptions about likely attitudes based on survey responses.

  • Generated sampling frames: more time-consuming to construct. Key types are:

§     A household screen: conducting a short interview in households in order to identify whether the household contains an individual who belongs to the study group and if so to collect further information relevant to sample selection.

Often the researchers are unable to do the interviews themselves and therefore hire others, who need to be fully briefed and provided with all documentation (p. 94).

§     Through an organization: useful when populations are too scattered or too small to be identified easily through a household screen. One difficulty is how to include people who are not in contact with organizations. Another is that the selection criteria may be complex or may involve information that individuals do not want to disclose to the organization.

§     Snowballing or chain sampling: this involves asking people who have already been interviewed to identify other people they know who fit the selection criteria. A shortcoming is that the diversity of the sample frame might be affected.

§     Flow populations: samples are generated by approaching people in a particular location or setting. This might be effective in identifying a specific population.

Choosing a sample frame:

  • General population samples → household screen (less useful for very rare groups).
  • Groups that have an administrative significance → administrative records.
  • Groups which are rare or hard to find → survey sample (1), generating a sample through organizations (2), snowballing (3).
  • Specific minority ethnic groups → e.g. a household screen in areas with a high density of the ethnic community, snowballing in low-density areas.
  • Samples of organizations or professionals → published lists.

If the sample is being generated by someone other than the researcher, it will be necessary to consider carefully what arrangements will be required for asking permission from potential sample members for their details to be passed on to the research team. A solution might be an ‘opt-out’ approach, which gives individuals an opportunity to indicate that they do not want their details to be passed on, but treats inaction as consent.

Steps in designing a purposive sample:

  1. Identifying the population for study (p. 13 of summary)
  2. The choice of purposive selection criteria
  3. Prioritizing the selection criteria
  4. Deciding on the locations for the study
  5. Designing a sample matrix
  6. Setting quotas for selection
  7. Area allocations
  8. Sample size
  9. Purposive sampling for group discussions

The choice of purposive selection criteria

Within the parent population, which constituencies need to be represented and with what level of diversity?

Factors that need to be considered:

  • A review of relevant literature will identify characteristics that are shown to have an impact on the subject being investigated.
  • There will be variables which need to be covered simply to achieve a balanced sample.
  • There may be hypotheses that the research is exploring that will require coverage of particular subgroups.
  • There may be subgroups about which little is known and whose circumstances or views need to be explored.

Prioritizing the selection criteria

The selection criteria should be prioritized in some way because otherwise the sample scale would be beyond what is manageable.

The first step is prioritizing the criteria based on which are the most important in terms of achieving a sample that is inclusive of the demographic structure of the population being studied, that contains the key constituencies, and that is sufficiently diverse for comparative analysis. Secondly, the extent to which the variables should interlock (nested) should be considered. Furthermore, it is useful to consider whether any of the criteria identified are highly correlated with each other.

Deciding on the locations for the study

Locations are usually selected because of their salience to the subject under research.

Designing a sample matrix

This is the most useful way to convert decisions about the remaining sampling criteria into a sample design. An example is shown on page 103. The matrix will include a number of items relating to the primary sampling criteria. These will yield a number of cells, each of which will be assigned a number of sample units to be selected (a quota). The secondary criteria will then be considered in relation to each of the dimensions or cells to identify ways in which these variables can be controlled. Before placing the selection criteria items in the matrix, each needs to be divided into categories that are meaningful to the subject.

Setting quotas for selection

Quotas specify the precise number of people that will be needed with each of the characteristics set out in the sample matrix. They are used to control the final selection of participants, so that the study sample matches the sample design set out in the sample matrix. Quotas can be specified as exact numbers, but it is more usual to use ranges. Two general features of quota specification are 1) the order in which the variables are listed takes account of the way that data might be collected during screening, and 2) the ranges given in each of the categories (for patterns of attendance) are slightly different, so that they add up to achieve the right numbers overall.
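
A sample matrix with quota ranges can be represented as a small data structure. The sketch below is purely hypothetical (the criteria, categories and numbers are invented for illustration, not taken from the text): cells combine the primary criteria, each cell holds a quota range, and a check confirms the ranges can add up to the intended overall sample size.

```python
# Hypothetical sample matrix: primary criteria are sex and age band;
# each cell holds a (min, max) quota range of participants to recruit.
sample_matrix = {
    ("male", "18-34"):   (3, 4),
    ("male", "35+"):     (3, 4),
    ("female", "18-34"): (3, 4),
    ("female", "35+"):   (3, 4),
}

def quota_totals(matrix):
    """Sum the lower and upper bounds of all cell quotas, so the
    designer can check that the ranges achieve the right overall
    sample size (here, anywhere between 12 and 16 participants)."""
    low = sum(lo for lo, hi in matrix.values())
    high = sum(hi for lo, hi in matrix.values())
    return low, high

overall = quota_totals(sample_matrix)
```
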

Area allocations

If there are features that vary between areas, quotas may need to be specified separately for each area.

Sample size

If having drawn up the sample matrix the sample size falls outside the manageable range, some questions arise. Have too many variables been included or too many given top priority? Is the level of nesting proposed necessary?

Purposive sampling for group discussions

The sample size will be larger, but all the steps described need to be followed as well. There is one further decision to make, which involves specifying the composition of each of the groups. The group composition needs to be translated into a specification of quotas for each of the discussion groups.

Steps for implementing the sample design:

  1. Selection to meet quota requirements: unless the sample frame provides all the information relevant to selection and assignment to quotas, a screening exercise with a short screening questionnaire will be needed. People who drop out need to be replaced by others with characteristics as similar as possible.
  2. Documenting outcomes: recording the outcomes of approaches to potential participants shows whether there is attrition among specific groups or constituencies in the sample frame. Furthermore, the reasons people drop out must be recorded under categories:
  • Ineligible or out of scope: fall outside the detailed definition of the study sample.
  • Non-contacts
  • Not meeting quota requirements
  • Refusals to participate
  • Agreement to participate
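
Recording outcomes under the categories above can be as simple as a tally. A minimal sketch (the category labels paraphrase the list above, and the example counts are invented):

```python
from collections import Counter

# One outcome recorded per person approached during screening;
# the labels mirror the outcome categories listed above.
outcomes = [
    "agreed", "refused", "ineligible", "agreed", "non-contact",
    "agreed", "quota full", "refused", "agreed",
]

# Tallying by category reveals attrition patterns, e.g. how many
# approaches ended in refusal versus non-contact.
tally = Counter(outcomes)
```
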

Chapter 5.  Fieldwork

All qualitative data collection will have some structure; however, the extent to which the structure of data collection can usefully be fixed in advance depends on the specific purposes of the study. Although qualitative research often follows the direction taken by participants, there will be a strong sense in advance of the issues that need to be explored.

Data collection is more structured in an evaluative or investigative study and less structured in a very exploratory study. In general, too, focus group data collection is less structured than in-depth interviewing, since data emerges through interaction within the group.

Two main types of qualitative interview are distinguished:

  • Unstructured, non-standardized/in-depth interviews (exploratory), more probing
  • Semi-structured/semi-standardized interviews (evaluative), less probing

An important aspect of an interview is the order in which issues and topics might be approached. This involves mentally picturing the interview or group discussion and working out the most natural way to structure it. An order is useful for several reasons:

  • Interviews are processes with their own dynamic, which means that different issues are best addressed at different stages of the process.
  • Understanding something of the participant’s personal context early in the interview will be important for making sense of what they say later, and for probing effectively.
  • It will be easier for the research team to become familiar with the topic guide.
  • In general, a chronological ordering will aid in-depth exploration.

Useful general principles in ordering an interview, Box 5.1 (p.116):

  1. The opening topics should be easy and unthreatening.
  2. The opening topic is an opportunity to collect information that will be important for later stages of the interview.
  3. Move from general to more specific topics to create an unthreatening atmosphere.
  4. Questions about experiences, circumstances and behaviours should precede motivational or attitudinal questions, since people talk more easily about the former.
  5. Introduce a discussion of definitions at an early stage, in order to hear participants’ initial reflections rather than asking these questions later, when their definitions have been influenced by the discussion.
  6. The interview should have a clear and expected end and should finish on a positive note (e.g. questions about the future).
  7. Towards the end, it can be helpful to include questions which summarize somebody’s attitudes or experiences.

A topic guide provides documentation of subjects to investigate that serves as an interview agenda. A well-designed topic guide will provide flexible direction to field-work process and essential documentation of a central aspect of the research. This ensures that the relevant issues are covered systematically.

Designing topic guides consists of several processes:

  1. Establishing subject coverage
  2. The structure and length of the guide
  3. Language and terminology
  4. Specification of follow-up questions and probes
  5. Making the guide easy to use

Establishing subject coverage

Which subjects will be covered in the data collection is often clear to the researcher from the stated objectives of the research and the existing literature in the field. It is important to maintain clarity about the central objectives of the study and not to allow specific questions to shift the focus too far. Therefore, part of this process involves ruling certain topics as outside the scope of the research.

The structure and length of the guide

Concerning follow-up issues, Rubin and Rubin distinguish two types of interview: 1) the tree-and-branch model (the branches are issues pre-specified for follow-up), and 2) the rivers-and-channels model (the researcher follows channels wherever they lead).

A very detailed topic guide may include:

  • Suggested wording for opening and closing
  • Specific subjects to be covered
  • Suggestions for prompts and direction for probing
  • Suggested wording for questions addressing sensitive topics.

The amount of detail in a topic guide reflects the extent of pre-structuring that is desirable. Differences in length are also determined by individual styles of creating topic guides. Shorter guides generally encourage more in-depth data collection, though more detailed topic guides are preferred by advisers and research commissioners. For focus groups, fewer key topics are desirable than for an interview, in order to allow time for all group members to be drawn into each topic.

Language and terminology

In general, topic guides work best when items are not worded as actual questions, but instead use single words to indicate the issues and leave the formulation to the researchers. Since the researcher needs to be responsive to the language used by participants, the items should be phrased in language which is as neutral as possible.

Specification of follow-up questions and probes

Topic guides usually include some indication of issues for follow-up questions and probing. These follow-up questions are vital to ensure full exploration of the issues. One way in which topic guides can vary considerably is the degree to which follow-up questions are included in the guide or left to the researcher.

Making the guide easy to use

  1. Objectives: begin with a brief statement of the objectives as a reminder of the underlying purpose of the study.
  2. Introduction: to remind the interviewers what to say at the start. This may include details of the research team or the reason for conducting the research.
  3. Summary of topics: provides a quick and easy-to-read reminder.
  4. Layout: a lot of space makes it easier to read and to annotate if needed.
  5. Instructions: the following can be useful to note on the guide:
  • Suggested wording for questions that are difficult to introduce.
  • How to carry out a task or when to show a visual aid.
  • Description of the rationale behind a particular question.
  • Indication of the relative priority of different topics (e.g. ‘key topic’ and ‘briefly’).
  • Instructions for where certain sections are relevant to subgroups only.
  • Instructions to explore particular dimensions
  6. Ending: a reminder to talk about confidentiality and how the data will be used.
  7. More than one topic guide: creating more than one topic guide may be needed if a study involves subgroups whose circumstances differ and who therefore need to be asked a separate set of questions. One must ensure that both guides cover the common areas in the same way.

In the early stage of the topic guide structure one must consider incorporating other, additional research instruments. Other research instruments are:

  1. Collecting structured data: this can be important when relatively complex and detailed background information is needed in order to have a clear understanding of participants’ situations. Furthermore, using calendars or diaries can enhance data collection, since they act as a memory aid and support greater precision in the dating of events.
  2. Using case illustrations and examples: rooting discussion in specific examples can add depth and richness to data collection. It ensures that information obtained is not general, but a description of actual behaviour.
  3. Enabling and projective techniques: techniques that are used to aid expression and refinement of views. They can help to focus the discussion, enabling people to consolidate their views, or promote further thought.
  • Vignettes: these are short descriptions of particular circumstances, persons or events, which might be described verbally by a researcher or a written version shown. They introduce an element of consistency which can be useful, allowing comparison between the reactions of different participants to the same hypothetical example.
  • Card-sorting: participants are shown a number of written or visual examples of an issue, and asked to order them. This facilitates discussion of the reasons for choices and priorities.
  • Giving information or showing written material: there is a need to introduce information when reflections on different proposals are required, when discussion must be stimulated further, or when the topic is one about which participants’ knowledge is likely to be limited.
  • Mapping emergent issues: this is more appropriate for focus groups, as it displays what the group has already generated and encourages participants to take ownership of it and move it forward.
  • Projective techniques: a range of strategies designed to stimulate discussion and communication, and to access thinking or beliefs that are less conscious or that may be difficult to speak about. Five types of projective techniques are distinguished.

§     Association: word or brand association

§     Completion: complete sentences or stories

§     Construction

§     Expressive: drawing and role enactment

§     Choice-ordering: selecting or ranking items

Projective techniques provide a means of cutting through self-consciousness and can draw out views that are otherwise less acceptable, less conscious or based on strong underlying emotions. Disadvantages are that they are time-consuming, disruptive to the flow of discussion and open to misinterpretation.

  4. Field notes: field notes provide an opportunity to record what researchers see and hear outside the immediate context of the interview.

A full briefing for the whole research team is one of the most critical elements for successful data collection. It is a very good opportunity to discuss how the topic guide will work in practice, to identify any potentially difficult areas, and to think about different ways in which questions might be phrased or issues approached.

Written information will also be an important aspect of the briefing of team members. It may also be necessary to obtain comments on the topic guide from a research commissioner or advisory group.

After briefing one should study the guide and become familiar with its structure, detailed contents, thinking about how different issues might be addressed, the type of responses they might yield and how they will need to be followed up.

Initial interviews and focus groups will be an important test of the scope of the topic guide. It is important to review whether the topic guide allows participants to give a full and coherent account of the central issue, and to incorporate emergent issues. A useful aid in the refinement of fieldwork strategies and topic guides is for members of the team to conduct the initial interviews, as this helps discussion of how well the guide is working. Furthermore, it is very valuable to review the topic guide after several interviews, since this gives researchers the chance to refine the guide.

Chapter 6. Interviews

In-depth interviews can be regarded as ‘conversations with a purpose’: they appear naturalistic yet bear little resemblance to everyday conversation. Two metaphors can describe the phenomenon. The miner metaphor (knowledge is given, and the interviewer must ‘mine’ this information to the surface) resembles a mine worker who must extract the raw materials. The traveller metaphor (knowledge is created, and the interviewer must ‘accompany’ the interviewee on their route) holds that the interviewee’s stories are interpreted by the researcher, which may lead to new insights.

There are six key features of in-depth interviewing. Firstly, it is intended to combine structure with flexibility. This way, there is space for structured questions but also for spontaneous probing. Secondly, the interview is interactive in nature.  Based on the interventions of the researcher, the interview can be guided. Thirdly, the researcher uses a range of techniques to acquire depth.  This generally refers to follow-up and probing questions. Fourthly, the interview is generative so that new knowledge is created. Both the researcher and the participant can discover original avenues of thoughts. Fifthly, interview data must be captured in its natural form. Therefore, the interview is ordinarily taped since note-taking would alter the form of data.  Lastly, qualitative interviews are generally conducted face-to-face.  Because of its intensity, a physical encounter is essential for it is flexible, interactive and generative.

Researchers must be skilled to conduct these kinds of interviews. They must be able to listen very well in order to be able to probe, and they must have a good memory so that they can come back to a previous point at any time. During the interview, however, the researcher must avoid analysing the information immediately and must focus solely on interviewing.

The staging of the interview must meet the purposes of the research. The researcher must be aware of these stages, must understand the terms of contract between the interviewee and himself, and can decide on the role of the participant during the interview. Generally, the stages are as follows:

  1. Arrival: establish immediately a relationship and comfortable atmosphere
  2. Introducing the research: what is the research topic about?
  3. Beginning the interview: start neutrally to distill contextual information about the participant
  4. During the interview: guide the participant through key themes
  5. Ending the interview: introduce the end and make sure everything has been said
  6. After the interview: thank the participant and reassure confidentiality issues

The contract between both parties is vital. Interviewees must have given their consent to be interviewed within the terms both agreed upon. Those terms generally consist of the length, the venue, the topic, and confidentiality conditions. Participants have the right to change their mind at any time. Terms are continuously re-negotiable even during the interview.

The researcher and the participant have different roles, and both should be clear prior to the interview. The interviewer can be seen as a facilitator or enabler, who participates actively by choosing questions and structuring the interview. The interviewee’s role is to give full answers, reflect and think, answer probing questions, and raise other issues they are unsure about.

In order to achieve breadth and depth, the researcher can ask content mapping questions and content mining questions. The content mapping questions entail the following types:

§     Ground mapping questions: open up the subject

o    ‘Have you ever applied for a benefit?’

§     Dimension mapping questions: signpost, structure, and direct the interview. It usually refers to the process or experience the participant underwent.

o    ‘What happened next?’

§     Perspective-widening questions: let interviewees give more than their first thoughts and widens their initial perspectives.

o    ‘Are there any other factors that would influence your decision?’

On the other hand, the content mining questions deal with the following types:

§     Amplificatory probes: enable the interviewee to provide a full description of a subject

o    ‘Can you tell me a bit more about…?’

§     Exploratory probes: exploring impacts, effects and consequences

o    ‘How did you respond when…?’

§     Explanatory probes: understand the reasoning of the participant

o    ‘What makes you say that?’

§     Clarification probes: achieving high degree of clarity and precision

o    Clarify terms and language

o    Clarify details and sequences

o    Clarify through expressing a position

o    Challenge inconsistency

With respect to question formulation, the researcher should seek to use both broad and narrow questions, avoid leading questions, and ask clear questions. The balance between content mapping and content mining questions varies; both broad (how did you…) and narrow (yes/no) questions constitute an in-depth interview. The researcher should seek to avoid asking questions that suggest a possible answer, like ‘Were you furious when he said that?’ The last criterion, asking clear questions, refers to not prefacing a question, not asking double questions, and avoiding abstract or theorized questions.

Obviously, listening is very important and can serve to increase depth, together with remembering the answers. To further enhance that depth, the researchers should facilitate a relationship with the interviewee by expressing interest and attention; emphasizing there cannot be right or wrong answers; being sensitive to non-verbal characteristics; allowing the participant time to reply; and handling extraneous information. A researcher should furthermore turn assumptions and interventions into questions. That is, he must not assume; refrain from commenting on, summarizing, and finishing off an answer; and avoid extraneous remarks. Lastly, the interviewer must be neutral.

From page 163 onwards, a number of scenarios are presented that explain how the interviewer can respond to different situations, among them sensitivity, emotion, and anxiety.

Practical considerations prior to and during the interview are appointment scheduling, venues, recording, and audience. Generally, an interview should last for at least an hour. This duration can vary, however, so both interviewer and interviewee must be flexible regarding appointments. Moreover, the participant generally chooses the venue of the interview, which should be conducive to concentration. It is highly desirable to audiotape the interview, so that the interviewer does not have to take many notes and can be entirely devoted to the answers given. For the sake of clarity, the interview should preferably be carried out primarily by one researcher.

Chapter 7. Focus Groups

A focus group is not just a collection of individual interviews with comments directed solely through the researcher. That is better described as a ‘group interview’, and it lacks both the depth of individual interviews and the richness that comes with using the group process. Instead, focus groups are synergistic in the sense that the group works together: the group interaction is explicitly used to generate data and insights.

A further feature of focus groups is the spontaneity that arises from their stronger social context. In responding to each other, participants reveal more of their own frame of reference on the subject of study. Because all this emerges from discussion within the group, the perspective is less influenced by interaction with the researcher than it might be in a one-to-one interview. In a sense, the group participants take over some of the interviewing role, and the researcher is at times more in the position of listening in.

Typically, focus groups involve around six to eight people who meet once, for a period of around an hour and a half to two hours. Since the last decade of the twentieth century, there has been an emphasis on using research for consultative purposes. This led to some innovations in the application of research methods, and particularly of group discussion methods. Consultative panels have been conducted in different forms and involve drawing people together in a series of sessions to deliberate and contribute to decision making. Focus group research is an interesting application to decision-making, particularly useful in more unfamiliar, technical or complex areas where information provision is more important.

Although group based research usually involves a physical coming together of participants, this is not always the case. Nominal groups have been used for some time. Here, views are gathered from group members individually and collated and circulated for comment. The Delphi technique is a particular application of this. A panel of experts is asked individually to provide forecasts in a technical field, with their views summarized and circulated for iterative forecasting until consensus is reached.

Group process

Based on an examination of studies of small groups (mainly therapy and training groups), Tuckman and Jensen identified five stages of small-group development, a sequence that groups tend to pass through.

In the forming phase, individuals may be guarded, tense and anxious and concerned about inclusion and acceptance. They tend to address comments solely to the moderator, not yet engaging with other group members. This is the stage at which background information is usefully collected so that participants are on familiar ground.

Storming is a period of tension or criticism that may show up in a number of ways. It may be typified by dominance or one-upmanship from some individuals, by silent aloofness from others, or by the adoption of particular roles as a defensive position. Strong differences may emerge in this phase of the group, which may provide useful material to return to, but these differences may diminish later as people express themselves with more complexity and subtlety.

This is followed by the group settling down to a calmer phase of sharing, similarity and agreement, or norming, in which the norms of the group are established. The group begins to work cooperatively and may be particularly keen to find common ground, to agree with each other and to reinforce what others say. Participants may in this phase begin to put into practice the ‘ground rules’. This is the stage at which social norms will be most influential, revealing what are seen as socially acceptable views or behaviours.

The performing phase which follows finds the group working interactively in open discussion on the research issues. This is likely to be with energy, concentration, enjoyment and a less guarded stance, allowing both agreement and disagreement between participants. This is the most productive phase of the group process, but it takes time to reach it.

Finally, in the adjourning phase, the group works towards ending. Participants may take the opportunity to reinforce something they have said earlier or to give their final thoughts. There may be a circular process, with the group dynamic perhaps reverting back from the performing stage to the storming stage.

Stages of a focus group:

  1. Scene setting and ground rules
  2. Individual introductions
  3. The opening topic
  4. Discussion
  5. Ending the discussion

The researcher's aim is to allow as much relevant discussion as possible to be generated from within the group while at the same time ensuring that the aims of the research are met. The researcher will remain as non-directive as possible but will nevertheless be pacing the debate to ensure that all the key issues are covered as fully as possible.

How to control the balance between individual contributions:

  • Create space for everyone to contribute
  • Address dominant participants to reduce their influence on the discussion
  • Actively encourage 'silent' participants to join the discussion
  • Avoid simultaneous dialogue

A good focus group is more than the sum of its parts. The researcher harnesses the group process, encouraging the group to work together to generate more in-depth data based on interaction.

To encourage the group to build on what they have generated, the researcher can apply several approaches:

  • If a potentially interesting issue has been raised by one group member, the researcher may allow the discussion to continue, to see whether others pick up on it
  • The researcher may draw attention more directly to the point, asking for more comments on it or asking the group a specific question about it
  • They may encourage the group to reflect on the links or relationships between what individual participants are saying
  • If divergent views are being expressed, the researcher may ask whether these are in conflict with each other or can be reconciled
  • They may encourage respondents to focus on the implications or consequences of what has been raised in individual examples

Group composition

The size and composition of a group will be critical in shaping the group dynamic and determining how, and how well, the group process works. As a general rule, some diversity in the composition of the group aids discussion, but too much can inhibit it. Very heterogeneous groups can feel threatening to participants and can inhibit disclosure. If the group is too disparate, it is difficult to cover key topics in depth. The ideal is therefore usually a point of balance between the two extremes of heterogeneity and homogeneity, with as much diversity as the group can take but no more.

Three further issues need to be considered in weighing up the extent of diversity to build into group composition.

  • First, it is usually necessary for respondents in each group to have broadly the same proximity to the research subject.
  • Second, the socio-demographic make-up of the group can influence how frank and full discussions will be.
  • A third consideration is that it may be a specific requirement of the research to look at differences between subgroups within the sample.

Group size

Focus groups typically involve around six to eight participants, but the optimum group size will depend on a number of issues:

  • The amount participants are likely to have to say on the research topic
  • The sensitivity or complexity of the issue
  • The extent to which the researcher requires breadth or depth of data
  • The population group involved
  • The structure and tasks involved in the session

If the group is larger, above about eight participants, not everyone will be able to have their say to the same extent. In groups smaller than about five or six, the researcher may similarly need to be more active, but in the sense of energizing or challenging the group. If the group is very small, with fewer than four participants, it can lose some of the qualities of being a group, particularly if there is a lot of difference between respondents.

Practical arrangements are also key to the success of group discussions: the time, the venue, the layout of the room and the quality of the recording equipment are all important.

Chapter 8. Focus on Investigation

Observation in a research context is more systematic and formal. Ethnographic research is predicated on the regular and repeated observation of people and situations. Observation is defined as 'the act of noting a phenomenon, often with instruments, and recording it for scientific purposes.' In everyday usage, observation is restricted to the visual, but a good ethnographer must be aware of information coming in from all sources. Ethnographic observation is conducted in the field, in natural settings.

According to Gold (1958) there are four categories of roles adopted by the ethnographer:

  • Complete observer role: observers are neither seen nor noticed
  • Observer-as-participant role: the researcher is known and recognized but relates to the ‘subjects’ of study solely as a researcher.
  • Participant-as-observer: the researcher is fully integrated into the life of the group under study, but his or her activities as a researcher are still acknowledged
  • Complete participant: he or she disappears completely into the settings and is fully engaged with the people and their activities
Most ethnographers position themselves somewhere within the two middle roles.

Roles can also be discussed in terms of membership:

  • Adopting peripheral membership: the researcher observes and interacts closely with the people under study, and thereby establishes an identity as an insider, but does not participate in the activities constituting the core of group membership.
  • Adopting an active membership role: the researcher engages in core activities, although he or she tries to refrain from committing to the group's values, goals and attitudes.
  • Adopting complete membership: researchers study settings in which they are active and engaged members. They are often advocates for the positions adopted by the group.

Observational techniques are suitable for research dealing with:

  • Specific settings
  • Events
  • Demographic factors

It is necessary to have the following qualities: language skills, explicit awareness, a good memory, cultivated naiveté and writing skills.

The process of observational research:

The first step is site selection. Once the site is selected, it is necessary to gain entrée into the community. In less inviting settings, added preparations must be made: gatekeepers must be approached and their approval and support gained. Then the researcher may begin observing immediately. The more exotic the locale, the more likely it is that the researcher will suffer from culture shock. An understanding of what is and is not central comes only after repeated observations. The bottom line is that the best method is whichever one helps the individual researcher retrieve and analyse whatever has been collected. As the research progresses, observations will gradually fall into discernible patterns.

James Spradley (1980) has referred to the stages of observation as a ‘funnel’ because the progress gradually narrows and directs researchers’ attention more deeply into the element of the setting. Observations continue until a point of theoretical saturation is achieved.

Reliability is the measure of the degree to which any given observation is consistent with a general pattern and not the result of random chance. There are some ways in which observation-based researchers can achieve something approaching the criteria of scientific reliability: for instance, observations that are conducted in a systematic fashion or that are repeated regularly over time.

Validity is the measure of the degree to which an observation actually demonstrates what it appears to demonstrate. Qualitative ethnographic researchers are not usually concerned with reliability, because there is no expectation that one researcher observing a community at one time will exactly duplicate the findings of a different researcher. Validity, however, is essential: if there is no basis for trusting the observation, the research is meaningless. Observations are susceptible to bias from subjective interpretations, and observational findings are rarely confirmable. The most commonly deployed means of achieving validity are:

  • Work with multiple observers or teams (represent various viewpoints)
  • Methodology of analytic induction (emergent propositions are tested in a search for negative cases; the goal is to achieve assertions that can be taken as universal)
  • Techniques of verisimilitude (a style of writing that draws the reader into the world that has been studied, using descriptive language to evoke a mood of recognition)

More than other types of scientific 'data', ethnographic observations only become 'valid' when they have been rendered into some sort of coherent, consistent narrative.

There are five categories that help us to judge the quality of research:

  • Objectivity/confirmability ('external reliability'): degree to which conclusions flow from the information that has been collected
  • Reliability/dependability/auditability: degree to which the process of research has been consistent and reasonably stable over time
  • Internal validity/credibility/authenticity (‘true value’): degree to which the conclusions of a study make sense
  • External validity/transferability/fittingness: degree to which the conclusions of a study have relevance to matters beyond the study itself
  • Utilization/applications/action orientation (‘pragmatic validity’): degree to which programs or actions result from a study’s findings and/or the degree to which ethical issues are forthrightly dealt with

Observer bias

‘Observer effects’= tendency of people to change their behaviour because they know they are being observed. Ways to minimize the bias:

  • Naturalness of observation provides some inoculation against bias
  • Observation research is emergent: to avoid predetermined categories, the researcher can shift the question(s) he or she is pursuing.
  • Combination with other techniques for the collection of information: the process of triangulation is a good hedge against the biases that result from 'pure' observation.

Observations in public spaces

Given the nature of the setting, observation is almost always the preferred technique. Some public spaces are fairly clearly delineated (e.g. an airport), others less so, but all provide the context for studies involving moral order, interpersonal relations, and norms for dealing with different categories of individuals. Observational studies in public spaces allow researchers to gather data on large groups of people and thereby to identify patterns of group behaviour.

The most famous example of public-space observation is Humphreys (1975), who adopted a covert observer-as-participant role in a public bathroom. He observed men engaging in impersonal homosexual encounters and concluded that men in this setting adopt one of several possible roles.

Ethics: a researcher can be at fault for entering places that can be construed as private even though they have a public character. The usual answer is that studying sensitive subjects is not taboo, but doing so without the express permission of the participants is ethically wrong. In general, it is:

  • Unethical to deliberately misrepresent his or her identity for the purpose of entering a private domain
  • Unethical to deliberately misrepresent the character of the research in which he or she is engaged.

Chapter 9.  The Concept of Analysis

Two chapters are devoted to analysis. This chapter covers the different approaches and practices of analysis, the features analytic methods need for effective and penetrating investigation, and the stages and processes involved in analysis.

Approaches to qualitative analysis are compared according to the way they address a number of different issues such as:

  • The status of the data: whether data are treated as referring to phenomena such as feelings, perceptions and events, which exist apart from the setting in which the data were captured.
  • The primary focus of analysis: analyses are different in what they focus on. Discourse analysis, conversation analysis and narrative analysis focus on the language and the construction and structure of talk, text and interaction. Content analysis, grounded theory and policy analysis are concerned with interpreting common sense and substantive meaning in the data.
  • The way data are reduced: for example paring down statements to their core meaning, thematic summaries, collective analytical categorization (categorizing the data), identification of the overall structure in the data or graphic displays of synthesized data.
  • The kinds of concepts generated: identification of key themes, concepts or categories. The nature of these concepts and the way in which they are generated varies between approaches.
  • The way concepts are applied to the data: two main ways; cross-sectional ‘code and retrieve’ methods and non-cross-sectional analysis. The first means devising a common system of categories and applying it to the data. The second means looking at different parts of the data separately and conceptualizing categories on these different parts.
  • The extent to which data are retained in context: the way in which approaches treat the context and retain links to the original data. Code and retrieve approaches, for example, group and compare data chunks outside the context. This is often criticized as it does not provide the full picture.
  • The way ‘analysed’ data are accessed and displayed: researchers take different approaches to accessing their already analysed data. Code and retrieve methods view the data in textual chunks, sorted by category. Other methods work directly from raw data.
  • The explicit level of abstraction: the way in which the researcher describes the data and patterns. The extent to which this is detailed varies among approaches.
  • The status of categories and the logic of explanation: the way in which categories are explained. Some researchers treat categories as variables and explain causal relationships with those variables (X leads to Y). Others say this is not possible with qualitative data.
  • The place of the researcher in the analytical account: the way in which the researcher is considered in the analysis. Some approaches treat the researcher as not being of influence to the evidence while others see the researcher as an integral part of the interpretation.

Weitzman and Miles (1995) categorized five different types of computer-assisted qualitative data analysis software (CAQDAS).

  1. Text retrievers: searches large amounts of data for words or phrases.
  2. Text base managers: provide a structure to the data and are searchable like text retrievers.
  3. Code and retrieve programs: allow you to label or tag passages to make them easier to find.
  4. Code based theory builders: allow one to create links between different aspects of the data.
  5. Conceptual network builders: facilitate graphic display of the data set concepts.

It is important to note that there is no single best computer program; the researcher should consider how well each program fits his or her needs. The use of CAQDAS offers the researcher more speed and consistency and eases the linking of data. A downside is that these programs take segments of text out of their context and may encourage the researcher to take shortcuts.
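The 'code and retrieve' idea behind several of these program types can be sketched in a few lines. All passages, case names and codes below are invented examples, not material from any real study.

```python
# A minimal sketch of 'code and retrieve': passages of transcript text are
# tagged with analytic codes, then pulled together by code across cases.
# Every passage, case name and code here is an invented illustration.

passages = [
    {"case": "respondent_1", "text": "I felt excluded at first.",
     "codes": {"inclusion", "anxiety"}},
    {"case": "respondent_2", "text": "We found common ground quickly.",
     "codes": {"norming"}},
    {"case": "respondent_1", "text": "Later everyone had their say.",
     "codes": {"inclusion"}},
]

def retrieve(passages, code):
    """Return every (case, passage) pair tagged with `code`."""
    return [(p["case"], p["text"]) for p in passages if code in p["codes"]]

hits = retrieve(passages, "inclusion")
print(hits)
```

Note that the retrieved chunks arrive stripped of their surrounding talk, which is precisely the loss-of-context criticism raised in the text.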

Researchers need certain tools to carry out their analysis as the data collected is likely to be very raw and difficult to interpret at first. These tools make the task of managing the evidence in a proper way easier for the investigators. It is important to choose a tool or ‘analytical support’ that maximizes the full potential of a qualitative analysis. Therefore, there are certain crucial characteristics of tools in any method used to investigate qualitative data:

  • Remains grounded in the data: concepts should be rooted within the data rather than simply imposed on them. Every method should therefore provide a structure that allows concepts or patterns to emerge from the data; the tools should let patterns show themselves rather than the researcher devising patterns and fitting them to the data.
  • Permits captured synthesis: reducing the raw data to usable concepts needs to be done carefully in order to preserve the original views of the participants. The synthesis should be captured so that it can always be checked back against the original material.
  • Facilitates and displays ordering: ordering the evidence should be done in such a way that it can be inspected in related blocks at any time during the study.
  • Permits within and between case searches: the researcher should be able to easily search through the data to find relationships. This requires tools for three different types of search, namely (1) thematic categories and patterns across different cases, (2) associations between phenomena within one case, and (3) associations in phenomena between groups of cases.
  • Allows systematic and comprehensive coverage of the data set: the analysis needs to be systematically applied to all cases. This means all cases are analysed in the same way.
  • Permits flexibility: the method should allow the model to be amendable if new ideas emerge.
  • Allows transparency to others: the content should be accessible to others in order to perform a follow-up study. Others should be able to review how the data was analysed.

In order to make the data set more manageable for the researcher, an analytical hierarchy was developed. It is made up of different stages involving various analytical tasks, making it easier for the researcher to gain an overview and make sense of the data. He or she can move between the stages of the hierarchy, which helps produce more reliable analysis. There are three main stages: data management, descriptive accounts and explanatory accounts.

At the start of the analytic process is data management. Here, the researcher sorts the data to make it more manageable. It involves generating concepts to which data are labelled and sorted. It may be carried out manually or through CAQDAS programs.

Secondly, the researcher makes use of this ordered data to identify key dimensions. Two features of qualitative data are essential to the analysis: the language participants use (which shows how strongly they feel about a phenomenon) and the substantive content of people's accounts (the importance of the content of each case). After the data have been classified into refined categories, typologies can be developed. These are specific forms of classification and can be divided into two major forms: indigenous and analyst-constructed. The former are classification systems devised by the participants themselves; the latter are created during the analytical process and classify patterns emerging from the data.

Finally, explanatory accounts are developed. The analyst must try to find patterns of association within the data to move from descriptive to explanatory accounts. This means that, after finding the main concepts described in the data, the researcher will investigate the patterns and relationships among concepts and try to explain why these patterns occur. Some researchers hold that qualitative analysis cannot be used to establish causal relationships (X leads to Y). It is nevertheless an important method for clarifying the nature of different factors and the interrelationships between concepts.

Chapter 10. Grounded Theory Approach

Grounded theory is a relatively new approach to research. The process is described as the discovery of theory in social research: theory is discovered, developed and verified through systematic collection and analysis of data concerning a phenomenon. Data collection, analysis and theory are thus interrelated.

Theory is discovered empirically, through induction. Grounded theory focuses on contextual values rather than the values of the investigator. This may lead to influences from contextual factors such as time and culture, but grounded theory nevertheless produces generalizable results. Preconceived notions should not be brought to the data; a general understanding of the phenomenon is enough. Evidence is then gathered by the researcher, resulting in an 'emerging' theory.

Theoretical sensitivity

Researchers must pay attention to the 'theoretical sensitivity' of the data: the relevance of the categories as they emerge from the data. These categories should make sense in comparison to already existing theories. Theoretical sensitivity involves repetition in data collection and analysis, and a refusal to commit to any theoretical perspective in advance of the concepts generated by the evidence alone. In the end, the discovered concepts and hypotheses can be combined with the existing literature.

Process of Grounded Theory Research

The research is holistic, naturalistic and inductive. Several assumptions about the research are widely shared. First, the aim of research is to generate a theory. Next, it focuses on how individuals interact in relation to the phenomenon. Furthermore, theory is derived from data through fieldwork, interviews, observations and documents. Moreover, data analysis proceeds by identifying categories and connecting them, moving from open coding to axial coding to selective coding. Next, theoretical ideas have to be set aside so that a substantive theory can emerge. Theory asserts a plausible relation between concepts and sets of concepts. Data analysis is systematic, and further data collection (sampling) is based on emerging concepts. Finally, the resulting theory can be reported as a framework or as a set of propositions.

Process of grounded theory research:

1.  Initiating Research

This first step involves the selection of an area of interest by the researcher and of a suitable site for the study. It is important, as mentioned before, that the researcher avoids preconceptions about the subject. The researcher should focus on recording initial observations and maintaining theoretical sensitivity.

2.  Data Selection

Data selection involves the location and identification of potential data sources. At first, a broad, unstructured approach to selecting a sample is used. Subsequent samples depend on the emergence of categories and the theory: further samples are chosen based on their potential to offer important variation in comparisons.

3.  Initiation and Data Collection

Interviews are most often used, but some argue that a combination of methods, including observation and documentary sources, should be used.

Data collection is combined with data analysis until saturation has been reached. The direction of data collection gains focus over time, as the theory emerges. Throughout the collection phase, the methods of collecting data become also more specific (interviews shorten).

4.  Data Analysis

Data analysis in grounded theory research consists of a constant comparative method for generating and analysing data. It is interwoven with data collection.

See figure 1 on page 239. Data analysis has nine sub-steps, which can be divided into two categories. The first category, consisting of steps a, b and c, follows the interview, when coding starts. Coding involves the processes of naming, comparing and memoing.

Naming: attempts to conceptualize and develop abstract meaning for the observations in data

Comparing: development of a common category for multiple observations

Memoing: act of taking notes for elaboration. Has two forms: 1) notes that capture insights gained in the field and 2) recording of ideas generated later in the research process.

The second part of the data analysis starts with searching for the emergence of categories. If they emerge, they are organized into sets. Naming of categories and their properties follows.

Then the level of elaboration that is needed is defined, based on the clarity of the categories. As the data collection becomes more focused, clarification of the concepts that are already found becomes the priority. Then time is spent clarifying the analytical rationale for the research process.

Property = part of a category; may vary in degree of abstractness. Categories are not representative of the data but are indicated by the data.

5.  Concluding the Research

Grounded theory research is concluded when the point of saturation has been reached and sufficient theory has emerged from the data. Data saturation = data collection no longer contributes to the elaboration of the phenomenon being studied.
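The stopping rule implied by data saturation can be illustrated with a loose computational analogy. The code stream, the `patience` threshold and the helper name are assumptions of this sketch, not part of the method as the text describes it.

```python
# A loose analogy for data saturation: keep analysing interviews and stop
# once a few consecutive interviews yield no codes that have not been seen
# before. The interview codes and the `patience` threshold are invented.

def collect_until_saturated(interviews, patience=2):
    """Return the set of codes gathered and how many interviews were used."""
    seen, unproductive, used = set(), 0, 0
    for codes in interviews:
        used += 1
        new = set(codes) - seen
        seen |= set(codes)
        unproductive = 0 if new else unproductive + 1
        if unproductive >= patience:  # nothing new: treat as saturated
            break
    return seen, used

stream = [{"coping"}, {"coping", "support"}, {"support"}, {"coping"}, {"stigma"}]
codes, n = collect_until_saturated(stream)
print(sorted(codes), n)
```

In this toy run the fifth interview, which would have contributed a new code, is never reached, which illustrates why judging saturation too early is a real risk.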

Once saturation is evident, a structural framework is developed through the clarification of associations between the central categories and the supporting categories and properties. The framework is likely to contain relationships, which can lead to propositions.

Using Grounded Theory Research for Theory Building

Figure 2 on page 246 shows the potential roles that grounded theory research can play in the context of the general method of theory-building research.

Evaluating Grounded Theory

There are four key areas for consideration when evaluating grounded theory research efforts. These are:

  • Judgments about the validity, reliability and credibility of the data
  • Judgments about the theory itself
  • Decisions regarding the adequacy of the research process
  • Conclusions about the empirical grounding of the research

There are seven criteria for evaluating the research process. First, the rationale for the selection of the original sample. Second, elaboration of the major categories that emerged. Third, the events pointing to the major categories identified. Fourth, an explanation of how theoretical formulations influenced or guided the data collection. Next, elaboration of the hypotheses and justification for the establishment of relationships. The accounting for discrepancies in the data and the resulting theoretical modifications is a criterion as well. Last, the rationale for the selection of the core or central category.

Seven further criteria hold for the assessment of the grounding of a study. First, the systematic relationships between the concepts. Next, the quality of the concepts generated. Third, the clarity and density of conceptual linkages. Fourth, the inclusion of variation in the theory. Next, a clear description of the conditions under which variations can be found. Furthermore, an account of the research process. Finally, the significance of the theoretical findings.

It has been argued that a theory built from the grounded theory approach will prove its value in practical applications. From this perspective, a theory is viewed as adequate when it is a good guide to understanding and directing action; this de-emphasizes the importance of a theory's truth or accuracy.

Grounded Theory Research in HRD

Grounded theory is important to HRD because of its potential contribution to the overall research agenda being established. The most salient link is its connection between theory and practice: HRD can leverage the strengths of grounded theory research to inform practice and the ongoing theory-building process. Grounded theory can also be used by both positivists and naturalists, making it a trans-disciplinary approach. This trans-disciplinary character, together with its aim of capturing tacit knowledge, is very important when considering the use of the grounded theory approach in HRD.

Challenges and Limitations

There is a lot of controversy about grounded theory research. Benoliel suggested that only a small percentage of research articles claiming to have used this approach have truly used it. Often these articles did not account for the social structural influences on respondents. The grounded theory approach is also often confused with other research methods, such as phenomenology.

An additional criticism is that grounded theory research is underdetermined and not viable, because the raw data used are in fact facts taken from within the framework of some other theory or theory-in-use that is not understood by the researcher.

If the group is larger, above about eight participants, not everyone will be able to have their say to the same extent. In groups that are small than about five or six, the researcher may similarly need to be more active, but in the sense of energizing or challenging the group. If the group is very small, with fewer than four participants, it can lose some of the qualities of being a group, particularly if there is a lot of difference between respondents.

  • Practical arrangements are also key to the success of group discussions: the time, the venue, the layout of the room and the quality of recording equipment are all important.

Chapter 8. Focus on Investigation

Observation in research content is more systematic and formal. Ethnocentric research is predicated on the regular and repeated observation of people and situation. Observation is defined as ‘the act of noting a phenomenon, often with instruments and recoding it for scientific purposes.’ In everyday usage, observation is restricted to the visual but a good ethnographer must be aware of information coming in from all sources. Ethnocentric observation is conducted in the field of natural settings.

According to Gold (1958) there are four categories of roles adopted by the ethnographer:

  • Complete observer role: observers are neither seen nor noticed
  • Observer-as-participant role: the researcher is known and recognized but relates to the ‘subjects’ of study solely as a researcher.
  • Participant-as-observer: his or her activities as a researcher are still acknowledged by being fully integrated into the life of group under study
  • Complete participant: he or she disappears completely into the settings and is fully engaged with the people and their activities
  • Most ethnographers position themselves somewhere within the second two roles.

Roles can also be discussed in terms of membership:

  • Adopting peripheral membership: researcher observes and interacts closely with the people under study and thereby establish identities as insiders but do not participate in those activities constituting the core of group membership.
  • Adopting an active membership role: researcher engages in core activities although they try to refrain from committing themselves to the group’s values, goals and attitudes.
  • Adopting complete membership: study settings in which researchers are active and engaged members. They are often advocates for the positions adopted by the group.

Observational techniques suitable for research dealing with

  • Specific settings
  • Events
  • Demographic factors

It is necessary to have the following qualities: language skills, explicit awareness, a good memory, cultivated naiveté and writing skills.

The process of observational research:

First step is site selection. As the site is selected, it is necessary to gain entrée into the community. In less inviting settings, added preparations must be made and gatekeepers must be approached and their approval and support gained. Then, researcher my begin observing immediately. The more exotic the locale, the more likely will it be that the researcher suffers from culture shock. An understanding of what is and is not central comes only after repeated observations. The method is best that helps the individual researcher retrieve and analyse whatever has been collected is the bottom line. With the progress, observations will gradually fall into discernible patterns.

James Spradley (1980) has referred to the stages of observation as a ‘funnel’ because the process gradually narrows and directs researchers’ attention more deeply into the elements of the setting. Observations continue until a point of theoretical saturation is achieved.

Reliability is the measure of the degree to which any given observation is consistent with a general pattern and not the result of random chance. There are some ways in which observation-based researchers can achieve something approaching criteria of scientific reliability. For instance, observations can be conducted in a systematic fashion or repeated regularly over time.

Validity is the measure of the degree to which an observation actually demonstrates what it appears to demonstrate. Qualitative ethnographic researchers are not usually concerned with reliability because there is no expectation that one researcher observing a community at one time will exactly duplicate the findings of a different researcher. If there is no basis for trusting the observation, the research is meaningless. Observations are susceptible to bias from subjective interpretations, and observational findings are rarely confirmable. The most deployed means of achieving validity are:

  • Work with multiple observers or teams (represent various viewpoints)
  • Methodology of analytic induction (emergent propositions are tested in a search for negative cases; the goal is to achieve assertions that can be taken as universal)
  • Techniques of verisimilitude (a style of writing that draws the reader into the world that has been studied, using descriptive language to evoke a mood of recognition)

→ More than other types of scientific ‘data’, ethnographic observations only become ‘valid’ when they have been rendered into some sort of coherent, consistent narrative.

There are five categories that help us to judge the quality of research:

  • Objectivity/confirmability (‘external reliability’): degree to which conclusions flow from the information that has been collected
  • Reliability/dependability/auditability: degree to which the process of research has been consistent and reasonably stable over time
  • Internal validity/credibility/authenticity (‘true value’): degree to which the conclusions of a study make sense
  • External validity/transferability/fittingness: degree to which the conclusions of a study have relevance to matters beyond the study itself
  • Utilization/applications/action orientation (‘pragmatic validity’): degree to which programs or actions result from a study’s findings and/or the degree to which ethical issues are forthrightly dealt with

Observer bias

‘Observer effects’= tendency of people to change their behaviour because they know they are being observed. Ways to minimize the bias:

  • Naturalness of observation provides some inoculation against bias
  • Observation research is emergent: to avoid predetermined categories, the researcher can shift the question(s) he or she is pursuing.
  • Combination with other techniques for the collection of information. The process of triangulation is a good hedge against the biases that result from ‘pure’ observation.

Observations in public spaces

Given the nature of the setting, observation is almost always the preferred technique. Some public spaces are fairly clearly delineated (e.g. an airport), others less so, but all provide the context for studies involving moral order, interpersonal relations, and norms for dealing with different categories of individuals. Observational studies in public spaces allow researchers to gather data on large groups of people and thereby to identify patterns of group behaviour.

Most famous example of space observation: Humphreys (1975) who adopted a covert observer-as-participant role in a public bathroom. He observed men engaging in impersonal homosexual encounters and concluded that men in this setting adopt one of several possible roles.

Ethics: a researcher can be guilty of intrusion by entering places that can be construed as private even though they have a public character. The usual answer is that studying sensitive subjects is not taboo, but doing so without the express permission of the participants is ethically wrong. Generally, the following applies:

  • It is unethical to deliberately misrepresent one’s identity for the purpose of entering a private domain.
  • It is unethical to deliberately misrepresent the character of the research in which one is engaged.

Chapter 9.  The Concept of Analysis

There are two chapters devoted to analysis. This chapter covers the different approaches and practices of analysis, the features analytic methods need for effective and penetrating investigation, and the stages and processes involved in analysis.

Approaches to qualitative analysis are compared according to the way they address a number of different issues such as:

  • The status of the data: data refer to phenomena such as feelings, perceptions and events, which exist apart from the setting in which the data were captured.
  • The primary focus of analysis: analyses are different in what they focus on. Discourse analysis, conversation analysis and narrative analysis focus on the language and the construction and structure of talk, text and interaction. Content analysis, grounded theory and policy analysis are concerned with interpreting common sense and substantive meaning in the data.
  • The way data are reduced: for example paring down statements to their core meaning, thematic summaries, collective analytical categorization (categorizing the data), identification of the overall structure in the data or graphic displays of synthesized data.
  • The kinds of concepts generated: identification of key themes, concepts or categories. The nature of these concepts and the way in which they are generated varies between approaches.
  • The way concepts are applied to the data: two main ways; cross-sectional ‘code and retrieve’ methods and non-cross-sectional analysis. The first means devising a common system of categories and applying it to the data. The second means looking at different parts of the data separately and conceptualizing categories on these different parts.
  • The extent to which data are retained in context: the way in which approaches treat the context and retain links to the original data. Code and retrieve approaches, for example, group and compare data chunks outside the context. This is often criticized as it does not provide the full picture.
  • The way ‘analysed’ data are accessed and displayed: researchers take different approaches to accessing their already analysed data. Code and retrieve methods view the data in textual chunks, sorted by category. Other methods work directly from raw data.
  • The explicit level of abstraction: the way in which the researcher describes the data and patterns. The extent to which this is detailed varies among approaches.
  • The status of categories and the logic of explanation: the way in which categories are explained. Some researchers treat categories as variables and explain causal relationships with those variables (X leads to Y). Others say this is not possible with qualitative data.
  • The place of the researcher in the analytical account: the way in which the researcher is considered in the analysis. Some approaches treat the researcher as not being of influence to the evidence while others see the researcher as an integral part of the interpretation.

Miles (1995) categorized five different types of computer-assisted qualitative data analysis software (CAQDAS).

  1. Text retrievers: searches large amounts of data for words or phrases.
  2. Text base managers: provide a structure to the data and are searchable like text retrievers.
  3. Code and retrieve programs: allow you to label or tag passages to make them easier to find.
  4. Code based theory builders: allow one to create links between different aspects of the data.
  5. Conceptual network builders: facilitate graphic display of the data set concepts.
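The third and fourth categories, ‘code and retrieve’ and code-based theory building, can be illustrated with a minimal sketch. All class, method and passage names below are invented for illustration; real CAQDAS packages offer far richer functionality than this.

```python
# Minimal sketch of the "code and retrieve" idea: passages of text are
# tagged ("coded") with category labels and can later be retrieved by
# label. Data and names are illustrative, not from any real package.

from collections import defaultdict

class CodeAndRetrieve:
    def __init__(self):
        self.index = defaultdict(list)  # category -> list of passages

    def code(self, passage, *categories):
        """Tag a passage with one or more categories."""
        for cat in categories:
            self.index[cat].append(passage)

    def retrieve(self, category):
        """Return all passages tagged with a given category."""
        return self.index[category]

store = CodeAndRetrieve()
store.code("I felt excluded at meetings", "belonging", "work")
store.code("My manager never listens", "work")

print(store.retrieve("work"))
# → ['I felt excluded at meetings', 'My manager never listens']
```

Note how one passage can carry several codes; the downside mentioned below, that retrieved chunks are viewed out of their original context, is visible here too.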

It is important to note that there is no single best computer program. The researcher should assess how well each program fits his or her needs. The usage of CAQDAS offers the researcher more speed and consistency and eases the linking of data. A downside is that these programs take segments of text out of their context and encourage the researcher to take shortcuts.

Researchers need certain tools to carry out their analysis as the data collected is likely to be very raw and difficult to interpret at first. These tools make the task of managing the evidence in a proper way easier for the investigators. It is important to choose a tool or ‘analytical support’ that maximizes the full potential of a qualitative analysis. Therefore, there are certain crucial characteristics of tools in any method used to investigate qualitative data:

  • Remains grounded in the data: concepts should be rooted within the data, rather than simply being imposed on the data. Therefore, a structure that allows concepts or patterns to emerge from the data should be present in every method. The tools should thus allow patterns to be shown rather than a researcher devising patterns himself and forming them to the data.
  • Permits captured synthesis: reducing the raw data to usable concepts still needs to be done carefully in order to capture the original views of the participants. Therefore, the synthesis should be captured so that it can always be checked back against the original material.
  • Facilitates and displays ordering: ordering the evidence should be done in such a way that it can be inspected in related blocks at any time during the study.
  • Permits within and between case searches: the researcher should be able to easily search through the data to find relationships. This requires tools for three different types of search, namely (1) thematic categories and patterns across different cases, (2) associations between phenomena within one case, and (3) associations in phenomena between groups of cases.
  • Allows systematic and comprehensive coverage of the data set: the analysis needs to be systematically applied to all cases. This means all cases are analysed in the same way.
  • Permits flexibility: the method should allow the model to be amendable if new ideas emerge.
  • Allows transparency to others: the content should be accessible to others in order to perform a follow-up study. Others should be able to review how the data was analysed.

In order to make the data set more manageable for the researcher, an analytical hierarchy was developed. It is made up out of different parts which involve various analytical tasks. This makes it easier for the researcher to gain an overview and make sense of the data. He or she can move between the stages of the hierarchy which helps to produce more reliable results of the analysis. There are three main stages, namely data management, descriptive accounts and explanatory accounts.

At the start of the analytic process is data management. Here, the researcher sorts the data to make it more manageable. It involves generating concepts to which data are labelled and sorted. It may be carried out manually or through CAQDAS programs.

Secondly, the researcher makes use of this ordered data to identify key dimensions. Two features of qualitative data are essential to the analysis, namely the language participants use (which shows how strongly they feel about a phenomenon) and the substantive content of people’s accounts (the importance of the content of each case). After the data have been classified into refined categories, typologies can be developed. These are specific forms of classification and can be divided into two major forms; indigenous and analyst constructed. The first are classification systems devised by the participants themselves. The latter are created during the analytical process and classify patterns emerging from the data.

Finally, explanatory accounts are developed. The analyst must try to find patterns of associations within the data to move from descriptive to explanatory accounts. This means that, after finding the main concepts described in the data, the researcher will investigate the patterns and relationships among concepts and try to explain why these patterns occur. Some researchers find that qualitative analysis cannot be used to find causal relationships (X leads to Y). However, it is an important research method for clarifying the nature of different factors and the interrelationship between concepts.

Chapter 10. Grounded Theory Approach

Grounded theory is a relatively new approach to research. The process is described as the discovery of theory in social research: theory is discovered, developed and verified through systematic collection and analysis of data concerning a phenomenon. Data collection, analysis and theory thus stand in an interrelated relationship with each other.

Theory is discovered empirically, through induction. Grounded theory focuses on the contextual values and not the values of the investigator. This may lead to influences of contextual factors, such as time and culture, but grounded theory nevertheless produces generalizable findings. Preconceived ideas should not be taken into account; a general understanding of the phenomenon is enough. Evidence is then gathered by the researcher, resulting in an “emerging” theory.

Theoretical sensitivity

Research must pay attention to the “theoretical sensitivity” of the data, that is, the relevance of the categories as they emerge from the data. These categories should make sense in comparison to already existing theories. Theoretical sensitivity involves repetition in data collection and analysis and refuses to commit to any theoretical perspective in advance of the concepts generated by the evidence alone. In the end, the discovered concepts and hypotheses can be combined with existing literature.

Process of Grounded Theory Research

The research is holistic, naturalistic and inductive. Several assumptions about the research are widely shared. First, the aim of research is to generate a theory. Next, it focuses on how individuals interact in relation to the phenomenon. Furthermore, theory is derived from data through fieldwork, interviews, observations and documents. Moreover, data analysis proceeds by identifying categories and connecting them, moving from open coding, to axial coding, to selective coding. Next, theoretical ideas have to be set aside so that a substantive theory can emerge. Theory also asserts a plausible relation between concepts and sets of concepts. The data analysis is systematic, and further data collection (sampling) is based on emerging concepts. Finally, the resulting theory can be reported in a framework or as a set of propositions.

Process of grounded theory research:

1.  Initiating Research

This first involves the selection of an area of interest by the researcher and of a suitable site for the study. It is important, as mentioned before, that the researcher avoids preconceptions about the subject. The researcher should focus on recording initial observations and maintaining theoretical sensitivity.

2.  Data Selection

This involves locating and identifying potential data sources. First, a broad, unstructured approach to selecting a sample is used. The remaining samples depend on the emergence of categories and the theory: further samples are chosen based on their potential to offer important variation in comparisons.

3.  Initiation and Data Collection

Interviews are most often used, but some argue that a combination of methods, including observation and documentary resources must be used.

Data collection is combined with data analysis until saturation has been reached. The direction of data collection gains focus over time, as the theory emerges. Throughout the collection phase, the methods of collecting data become also more specific (interviews shorten).

4.  Data Analysis

Data analysis in grounded theory research consists of a constant comparative method for generating and analysing data. It is interwoven with data collection.

See figure 1 on page 239. Data analysis has 9 sub-steps. These steps can be divided into 2 categories. The first category, consisting of a, b and c, follows after the interview, when the coding starts. Coding involves the process of naming, comparing and memoing.

Naming: attempts to conceptualize and develop abstract meaning for the observations in data

Comparing: development of a common category for multiple observations

Memoing: act of taking notes for elaboration. Has two forms: 1) notes that capture insights gained in the field and 2) recording of ideas generated later in the research process.

The second part of the data analysis starts with searching for the emergence of categories. If they emerge, they are organized into sets. Naming of categories and their properties follows.

Then the level of elaboration that is needed is defined, based on the clarity of the categories. As the data collection becomes more focused, clarification of the concepts that are already found becomes the priority. Then time is spent clarifying the analytical rationale for the research process.

Property = a part of a category; it may vary in degree of abstractness. Categories are not representatives of the data, but are indicated by the data.
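The naming–comparing–memoing cycle can be sketched very roughly in code. This is illustrative only: the observations, category names and the “two indicators” memo trigger are invented, and grounded theory coding is interpretive craft, not a mechanical procedure.

```python
# Rough sketch of naming, comparing and memoing in constant comparison.
# All data are invented; the real process is interpretive, not automatic.

observations = [
    ("respondent withdraws when criticised", "withdrawal"),
    ("respondent avoids eye contact after a reproach", "withdrawal"),
    ("respondent jokes to deflect criticism", "deflection"),
]

categories = {}   # name -> observations grouped under it (comparing)
memos = []        # analytic notes taken along the way (memoing)

for text, name in observations:
    # naming: each observation is given an abstract concept label
    # comparing: observations with the same label collect in one category
    categories.setdefault(name, []).append(text)
    if len(categories[name]) == 2:
        # memoing: record an insight for later elaboration
        memos.append(f"'{name}' now has multiple indicators; candidate category")

print(sorted(categories))   # ['deflection', 'withdrawal']
print(memos)
```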

5.  Concluding the Research

Grounded theory research is concluded when the point of saturation has been reached and sufficient theory has emerged from the data. Data saturation = data collection no longer contributes to elaboration of phenomenon being studied.

Once the saturation is evident, a structural framework is developed through the clarification of associations between central categories and the supporting categories and properties. The framework is likely to contain relationships, which can lead to propositions.

Using Grounded Theory Research for Theory Building

Figure 2 on page 246 shows the potential roles that grounded theory research can play in the context of the general method of theory-building research.

Evaluating Grounded Theory

There are four key areas for consideration when evaluating grounded theory research efforts. These are:

  • Judgments about the validity, reliability and credibility of the data
  • Judgments about the theory itself
  • Decisions regarding the adequacy of the research process
  • Conclusions about the empirical grounding of the research

There are seven criteria for evaluating the research process. First, the rationale for selection of the original sample. Second, elaboration of the major categories that emerged. Third, identification of the events pointing to the major categories. Fourth, an explanation of how theoretical formulations influenced or guided the data collection. Next, elaboration of the hypotheses and justification for the establishment of relationships. The accounting for discrepancies in the data and resulting theoretical modifications is a criterion as well. Last, the rationale for the selection of the core or central category.

Seven other criteria hold for assessing the grounding of a study. First, the systematic relationships between the concepts. Next, the quality of the concepts generated. Third, the clarity and density of conceptual linkages. Fourth, the inclusion of variation in the theory. Next, a clear description of the conditions under which variations can be found. Furthermore, an account of the research process. Finally, the significance of the theoretical findings.

It has been argued that a theory built from the grounded theory approach will prove its value in practical applications. From this perspective, a theory is viewed as adequate when it is a good guide to understanding and directing action, though this de-emphasizes the importance of a theory’s truth or accuracy.

Grounded Theory Research in HRD

Grounded theory is important to HRD because of its potential for contribution to an overall agenda being established. The most salient link is its connection between theory and practice. HRD can leverage the strengths of grounded theory research to inform practice and the on-going theory-building process. Grounded theory can also be used by positivists and naturalists, making it a trans-disciplinary approach. Being trans-disciplinary, together with its aim to capture tacit knowledge are very important in considering the use of grounded theory approach in HRD.

Challenges and Limitations

There is a lot of controversy about grounded theory research. Benoliel suggested that only a small percentage of the research articles that claim to have used this approach truly have done so. Often these articles did not account for the social structural influences on respondents. Also, the grounded theory approach is often confused with other research methods, such as phenomenology.

An additional criticism is that grounded theory research is underdetermined and not viable, because the raw data used are actually facts taken from within the framework of some other theory or theory-in-use not understood by the researcher.

Chapter 11. Doing Qualitative Analysis

In this chapter, the details of the various activities carried out in qualitative analysis are considered in more depth. These processes are mainly shown through practical examples throughout the chapter.
 

Analysis is a continuous and iterative process, as was described in chapter 8, but two key stages characterise its course. The first requires managing the data, and the second involves making sense of the evidence through descriptive or explanatory accounts. The main sections of this chapter work through these two stages, step by step, although it is difficult to clearly separate them.
 

Making sense of the data relies partly on the method used to order and categorise data. It is, however, mainly dependent on the analyst and his or her way of conceptual thinking.

The name of the analysis method ‘Framework’ comes from the thematic framework, which is the central component of the method applied here. The thematic framework is used to classify and organise data according to key themes, concepts and emergent categories. As such, each study has a distinct thematic framework comprising a series of main themes, subdivided by a succession of related subtopics. These evolve and are refined through familiarisation with the raw data and cross-sectional labelling. Once it is judged to be comprehensive, each main theme is displayed in its own matrix, where every respondent is allocated a row and each column denotes a separate subtopic. Just like we have done with the assignments in the tutorials.
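A theme matrix of this kind (a row per respondent, a column per subtopic) can be sketched as a simple nested mapping. The respondents, subtopics and cell contents below are invented for illustration.

```python
# Sketch of one thematic framework matrix for a single main theme:
# rows are respondents, columns are subtopics. All data are invented.

subtopics = ["first contact", "support received", "barriers"]

matrix = {
    "Respondent 1": {"first contact": "via GP",
                     "support received": "weekly visits",
                     "barriers": "transport"},
    "Respondent 2": {"first contact": "self-referral",
                     "support received": "none reported",
                     "barriers": "cost"},
}

# Reading down a column gives a cross-respondent view of one subtopic,
# which is what makes the matrix display useful for comparison.
barriers_column = [row["barriers"] for row in matrix.values()]
print(barriers_column)   # ['transport', 'cost']
```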
 

Raw qualitative data are likely to be full of detail but unwieldy and intertwined in content. In most analytical approaches, data management initially involves deciding upon the themes or concepts under which the data will be labelled, sorted and compared. In order to construct the thematic framework, the analyst must first gain an overview of the data coverage and become familiar with the data. Familiarisation should not be underestimated, as it is a hugely important step in the process of analysing data. While becoming familiar with the data, keep the research objective in mind and compare the data to this objective.
 

Re-examining the sampling strategy and the profile of the achieved sample is also worthwhile, as it will highlight any potential gaps or overemphasis in the data set, as well as the diversity of participants and circumstances. Try to collect data that are as diverse as possible, so that the sample resembles the population.
 

When reviewing the chosen material, the task is to identify recurring themes or ideas. Once these recurring themes have been noted, the next step is to devise a conceptual framework, drawing both upon the recurrent themes and upon issues introduced into the interviews through the topic guide. Themes are then sorted under categories and placed within the framework.  Once an initial conceptual framework is constructed, it has to be applied to the raw data. The process of applying the conceptual framework to the raw data is called ‘indexing’.
 

With textual data, indexing involves reading each phrase, sentence and paragraph in fine detail and deciding ‘what is it about?’ in order to determine which parts of the index apply. Alternatively, this can be done electronically. An important feature is that two or three index numbers are often interspersed; this is usually a sign of some interconnection between themes or issues that should be noted for the later associative analyses. Another key feature is to always attend to all indexes, as responses to other questions may contain references relevant to a given index. Also, take into account that you may well have to adapt the initial indexes.

The following step is to order the data in some way so that material with similar content or properties is located together. There are different ways of sorting data, but this is described in chapter 8. Always remember that it is important to keep the option open to assign data to multiple locations. There are two reasons for this. First, it may be that a single passage will have relevance to two conceptually different subjects and carving it up would destroy both its meaning and its coherence. Second, the juxtaposition of two apparently unrelated matters may give the very first clues to some later insight or explanation.

The final stage of data management involves summarising or synthesising the original data. This not only serves to reduce the amount of material to a more manageable level but also begins the process of distilling the essence of the evidence for later representation. It also ensures the analyst inspects every word of the original material to consider its meaning and relevance to the subject under enquiry.

Three key requirements are essential if the essence of the original data is to be retained. First, key terms, phrases or expressions should be retained as much as possible from the participant’s own language. Second, interpretations should be kept to a minimum at this stage, so that there is always an opportunity to revisit the original ‘expression’ as the more refined levels of analysis occur. Third, material should not be dismissed as irrelevant just because its inclusion is not immediately clear. It may well be that issues that make little sense at this early stage of analysis become vital clues in the later interpretative stages of analysis. The steps involved in data management may take place in a different order depending on the analytical tool being used.

An initial stage in descriptive analysis refers to unpacking the content and nature of a particular phenomenon or theme. The main task is to display data in a way that is conceptually pure, makes distinctions that are meaningful and provides content that is illuminating. There are three key steps involved:

  • Detection – in which the substantive content and dimensions of a phenomenon are identified
  • Categorisation – in which categories are refined and descriptive data assigned to them
  • Classification – in which groups of categories are assigned to classes usually at a higher level of abstraction

The process of moving from synthesized or original text to descriptive categories is explained and illustrated in boxes 9.7 and 9.8. Check these boxes for an understanding of this process.

Typologies have two important characteristics. First, they are usually, although not inevitably, multidimensional or multifactorial classifications. That is, they combine two or more different dimensions so that a more refined or complex portrayal of a position or characteristic can be identified. Second, they offer a classification in which categories are discrete and independent of each other.
 

There are a number of steps in the detection of a typology. The first task is to identify the relevant dimensions of the typology. For this, it is important that the analyst has a strong familiarity with the data set and that the tasks further down the analytical hierarchy, such as identifying the elements of a phenomenon and refining categories, have been completed. Once this initial construction is developed, the analyst needs to ensure that all the cases can be assigned to each of the dimensions being used in the typology.

Unless the sample fits into each of the dimensions, and fits uniquely, the dimensions will not operate effectively within the typology. Once the dimensions of the typology have been checked in this way, their cross-fertilisation into typological categories can be made. Once this has been done, the whole process of testing needs to start again to ensure that all cases can now be allocated to one, and only one, of the typological categories.
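Crossing two dimensions into typological categories, and then checking that every case fits one and only one category, can be sketched as follows. The dimensions ("engagement" and "outlook") and the cases are invented for illustration.

```python
# Sketch: crossing two invented dimensions into typological categories,
# then checking that each case fits exactly one category.

from itertools import product

dim_engagement = ["active", "passive"]
dim_outlook = ["optimistic", "pessimistic"]

# Each combination of the two dimensions is one typological category.
typology = list(product(dim_engagement, dim_outlook))

cases = {
    "case A": ("active", "optimistic"),
    "case B": ("passive", "pessimistic"),
}

# The test from the text: every case must be allocated to one,
# and only one, typological category.
for name, profile in cases.items():
    matches = [t for t in typology if t == profile]
    assert len(matches) == 1, f"{name} does not fit uniquely"

print(len(typology))   # 4 categories from a 2 x 2 cross
```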

Associative analysis is a lucrative form of qualitative data investigation, as it almost invariably brings a deeper understanding of the subject under review. Such analyses involve finding links or connections between two or more phenomena. It is common in qualitative analysis to find that linkages repeatedly occur between sets of phenomena. We have termed these matched set linkages.

It is often important in a qualitative study to investigate whether there are any patterns occurring in the data within particular subgroups of the study population. Typologies and other group classifications are extremely useful in displaying associations in qualitative data by showing how particular views or experiences may attach to particular groups or sectors of the population.

Having found what appear to be linkages and associations in the data, it is necessary to explore why they exist. This is because the relationship itself – that is, that there is a connection between X and Y – is not verifiable within the small, purposively selected samples used in qualitative research. The means to explore associations are the same for each of the types of associative analysis described above. A first step is to check exactly how the level of matching between the phenomena is distributed across the whole data set. This is one of the few occasions when numerical distributions are used in qualitative research – but as a means, not an end, to gaining understanding. A second step is to interrogate the patterns of association. Unlike large-scale quantitative surveys, where a correlation may be presented as an output in its own right, in qualitative research a pattern of association is used as a pointer toward further stages of analysis.
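The first step described above, checking how the matching between two phenomena is distributed across the data set, can be sketched as a simple count. The cases and the two phenomena ("isolation" and "low_uptake") are invented; the point is that the counts serve as pointers for further interpretive work, not as findings in themselves.

```python
# Sketch: counting how often two phenomena co-occur across cases.
# A rare numerical moment in qualitative analysis, used as a means
# to understanding, not an end. All cases are invented.

from collections import Counter

cases = [
    {"isolation": True,  "low_uptake": True},
    {"isolation": True,  "low_uptake": True},
    {"isolation": False, "low_uptake": True},
    {"isolation": True,  "low_uptake": False},
]

pattern = Counter((c["isolation"], c["low_uptake"]) for c in cases)

print(pattern[(True, True)])    # 2 cases show the linkage
print(pattern[(False, True)])   # 1 negative case to interrogate further
```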
 

The search for explanations is a hard one to describe because it involves a mix of reading through synthesised data, following leads as they are discovered, studying patterns, sometimes re-reading full transcripts, and generally thinking around the data.

Short overview of ‘developing explanations’:

  • Using explicit reasons and accounts
  • Inferring an underlying logic
  • Using common sense to search for explanations
  • Developing explanatory concepts
  • Drawing from other empirical studies
  • Using theoretical frameworks

Chapter 12. Generalization

One issue in qualitative research is generalisation. It concerns whether the findings from a study based on a sample can be said to be of relevance beyond the sample and context of the research itself. There are two types of generalisation, namely ‘empirical’ and ‘theoretical’. Empirical generalisation concerns the application of findings from qualitative research studies to populations or settings beyond the particular sample of the study (also called ‘transferability’ or ‘external validity’). Theoretical generalisation involves the generation of theoretical concepts or propositions which are deemed to be of wider, or even universal, application.

Generalisation further consists of three concepts:

1. Representational generalisation: Can what is found in a research sample be generalised to the parent population from which the sample is drawn?
- Two key issues: first, whether the phenomena found in the research sample would similarly be found in the parent population. Second, whether other additional phenomena would be found in the parent population which are not present in the study sample.

2. Inferential generalisation: Can the findings of a particular study be generalised to other settings or contexts beyond the sampled one?
- The main issue of inferential generalisation is that it requires congruence between the ‘sending’ and the ‘receiving’ contexts, and therefore the researcher has to know both contexts.

3. Theoretical generalisation: draws theoretical propositions, principles, or statements from the findings of a study for more general application.

Approaches to Generalisation

Theoretical generalisation implies ‘nomic’ generalisation, which means that “generalisation must be truly universal, unrestricted as to time and space. It must formulate what is always and everywhere the case.” (Kaplan) A new or refined theory is relevant if it can be used in further empirical inquiry.

Inferential generalisation can also be referred to as ‘naturalistic’ generalisation. This means that knowledge is created by recognizing the similarities of objects and issues in and out of context and by sensing the natural co-variations of happenings.

It also has to be understood that transferability depends on the degree of congruence between the ‘sending context’ within which the research is conducted, and the ‘receiving context’ to which it is to be applied. Therefore, the researcher has to provide a ‘thick description’ of the research context, to enable others to judge if the findings are transferable to their context.

In representational generalisation, it is at the level of categories, concepts, and explanation that generalisation can take place. Two broad issues are the accuracy with which the phenomena have been captured and interpreted in the study sample, and how representative the sample is of the parent population. Representation is a matter of inclusivity: whether the sample provides ‘symbolic representation’ by containing the diversity of dimensions and constituencies that are central to explanation. This leads to the concepts of reliability and validity.

Reliability

The question asked concerning this concept is: in repeated measurement, will I find the same values? Reliability can also be referred to as ‘confirmability’, ‘consistency’, and ‘dependability’. It is important to have some certainty (before the study) that the internal elements, dimensions, and factors would recur outside of the study population. Additionally, it has to be ensured that the constructions placed on the data by the researcher have been consistently and rigorously derived. In this context, a distinction has to be made between internal and external reliability. External reliability concerns the level of replication that can be expected if similar studies are undertaken. Internal reliability (also called inter-rater reliability) relates to the extent to which the same conclusions or judgements are reached by different researchers or judges.
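Internal (inter-rater) reliability, as defined above, can be checked concretely when two researchers independently code the same material. A minimal sketch in plain Python, computing raw agreement and Cohen’s kappa (a standard chance-corrected agreement index; the codings below are invented, and the text itself does not prescribe any particular statistic):

```python
# Hypothetical codings: two researchers independently assign one of
# two codes to the same ten data excerpts.
coder_a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos"]
coder_b = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos", "pos"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement

# Expected agreement by chance, from each coder's marginal proportions.
codes = set(coder_a) | set(coder_b)
expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in codes)

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(f"agreement={observed:.2f} kappa={kappa:.2f}")
```

A kappa near 1 indicates that the two researchers reach the same judgements far more often than chance alone would produce.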
 

Reliability in qualitative research also concerns the replicability of research findings and whether or not they would be repeated in another study. There are two levels on which replicability should be ensured. First, there is the need to ensure that the research is as robust as it can be by carrying out internal checks on the quality of the data and its interpretation. Second, there is the need to reassure the reader by providing information about the research process. Questions to be asked in this context: Was the sample design/selection free of bias? Was the fieldwork carried out consistently? Was the analysis carried out systematically and comprehensively? Is the interpretation well supported by evidence? Did the design allow equal opportunity for all perspectives to be identified?

Validity

Validity refers to the ‘correctness’, ‘credibility’, or ‘precision’ of a research reading. A distinction can be made between internal validity (are you investigating what you claim to be investigating?) and external validity (are the constructs generated applicable to other groups within the population?). There are several types of validity: content, face, construct, predictive, concurrent, and instrument validity (not explained in detail in the book).
 

The main question concerning validity should be: Are we accurately reflecting the phenomena under study as perceived by the study population? To ensure this requirement, the following checks can be conducted: sample coverage (bias in sample frame?), capture of the phenomena (fully explored the topic?), identification or labelling (do the given names reflect the meanings assigned by study participants?), display (findings portrayed in a way that remains true to the original data?). Two concepts that increase validity are validation and the right documentation.

Internal validation – methods

  1. Constant comparative method or checking accuracy of fit: derive hypotheses from one part of the data and test on another part.
  2. Deviant case analysis: do not ignore outliers but use them as a resource for understanding or research development.

External validation – methods 

1. Triangulation: the use of different sources of information will help both to confirm and to improve the clarity, or precision, of a research finding.

  • Methods triangulation: compare data from different methods (e.g. qualitative and quantitative)
  • Triangulation of sources: comparing data from different sources within the same method
  • Triangulation through multiple analysis: use different observers, interviewers, analysts and compare findings
  • Theory triangulation: look at data from different theoretical perspectives

2. Member or respondent validation: take research evidence back to research participants and ask for their opinion
However, we can never know with certainty that an account is true because we have no independent and completely reliable access to ‘reality’ (Hammersley).

Provide transparency or ‘thick description’, in order for the readers to verify for themselves that conclusions reached by the researcher hold ‘validity’ and to allow readers to consider their transferability to other settings.

Four important principles for generalising

1. Full and appropriate use of the evidential base:
- This can be ensured by using the original data (which requires a well-collected data set) and by encompassing diversity (identifying and displaying the range and diversity of the research). This makes the research easier to understand for readers. The ‘nature not number’ principle is also important: the inferences that can be drawn from qualitative data concern the nature of the phenomenon being studied, not its prevalence or statistical distribution.
2. Display of analytical routes and interpretation:
- The level of classification assigned to a phenomenon will affect the extent to which generalisation can occur. Higher levels of aggregation of categories are more likely to be transferable in representational terms than more specific or individualised items. The process of assigning meaning and interpretation influences the generalizability. The more a researcher places his/her own meaning or interpretation on a finding as a basis for generalisation, the more open it will be to questioning and review by others.
3. Research Design and Conduct
- Checks on research design and conduct are important to prevent them from limiting the nature or power of the inference drawn. The display of research methods is important in order to allow others to assess the research methods. It is also important to note limitations, because it will help the reader to understand the boundaries of the research.
4. Validation
- Checks against other evidence and corroboration from other sources are highly desirable.

Chapter 13. Representation

Reporting is not only recording the outcomes of the analysis but also actively constructing the form and nature of the explored phenomena. It provides an opportunity for further thought when assembling the data into a coherent structure. It is a vital way to think about one’s data and to develop analytical ideas.

The aim of the report is to present findings in an accessible form that will satisfy the research objectives and enable the audience to understand them. One major challenge here is portraying the different forms of descriptive and explanatory analyses:

1. Explaining the boundaries of the qualitative research (QR):

Explain what QR can and cannot do to an audience that is unfamiliar with it, and include a discussion of the kinds of inferences that can be drawn.

2. Documentation of the methods

This displays the ‘credibility’ of the evidence. In other words, describe not only how the research was conducted but also why particular methods and approaches were used to meet the research aim.

3. Displaying the integrity of the findings

Explain where the conclusions presented are generated from, and grounded in, the data.

4. Being coherent

The depth and richness of qualitative data present a challenge. Therefore, present data in a way that effectively guides the reader through the key findings.

5. Displaying diversity

Focusing only on the dominant message might be misleading. Mention diversity (a main advantage of QR).

6. Judicious use of verbatim passages

Only use original passages. Note that overciting can make the report hard to read.

Findings are not always reported in report form. Other possible forms are: oral presentations, interim reports or papers, conference papers, journal articles, books, media debates or programmes, or conferences to explore findings. Qualitative data also vary in form: they can be verbal data, but also photographs and videos. Note that the chosen output should be in accordance with these data forms.

Four types of research output:

1. Comprehensive outputs:

To provide a comprehensive review of research findings, research methods and wider implications; typically written reports produced on completion.

2. Summary:

To provide condensed information about key findings, explained in a less comprehensive fashion. An executive summary makes it possible to assess the main findings quickly. Summary outputs are mainly book chapters or journal articles.

3. Developmental:

To provide early indications of emergent findings or to offer theories or ideas for debate. Examples are an oral presentation, emergent findings, interim written report, journal articles or a conference or seminar paper.

4. Selective:

To focus on selected areas of the research findings for a specific audience, e.g. to give funders or committees the opportunity to pursue a particular interest, or to let academic colleagues contribute to the interpretation. This can be done via an oral presentation with a specific focus, a conference or seminar paper for a selected audience, a journal article, or a media article or report.

Four factors influence the form of output. First, the origin, purpose and strategy of the research determine the length. Second, contractual obligations must be met. Third, the target audience identified. Last, the resources available: financial, time and access constraints can limit the research output.

The chapter describes aspects of writing a qualitative report. First, get organised. Researchers will be deeply involved in their studies, and the research story will thus be fuzzy. It is therefore important to take a mental break and note down only the important steps and findings. Moreover, all material needed for writing should be assembled. Finally, time should be set aside in the working schedule for writing.

Secondly, it is recommended to start as early as possible with writing, as writing is a way of thinking.  It is important to consider the story to be told and how the story can be best conveyed in an organised and interesting way. Decisions about the shape and style are influenced by the research objective, the requirements of the commissioning body and the audience(s) to be targeted.

Third, the structure and content. The paper should contain at least the key ingredients: the title page, acknowledgements, abstract, table of contents, executive summary, introduction, literature review, research findings and evidence, conclusion, and the technical appendices.

The main body is made up of findings and research evidence. It is important to consider the order in which the evidence will be presented. Three models:

  1. puzzle: the reader works through the evidence alongside the researcher
  2. summary of main findings and conclusion, followed by evidence supporting them
  3. analytic presentation: organise findings according to areas of existing theory

Telling the story concerns how the evidence will be related. For instance, if the researcher has developed a strong typology, this should be mentioned at the beginning. The coverage of different populations should also be explained. It is, for instance, important to display the constant comparison between groups and the extent to which there are very distinct issues for the different groups.

The longitudinal dimension of the research should be mentioned when it influenced the findings. Finally, there are usually natural building blocks given by the phenomenon under study.

Reporting style and language are determined by the individual style, the requirements of the funders, and the target audience(s). It should be considered whether to report in the ‘realist’ style (what you found out) or the ‘confessional’ style (how you did it). Research jargon and technical terminology should be avoided.

Describing the research content refers to providing background information about the study. It should include the origins of the research, the aims of the study, the theoretical or policy context in which the research is set, the design and conduct of the study, and the nature of the evidence collected, along with some account of the author’s personal perspectives on the subject matter or aims of the enquiry.

It should be just enough to place the research evidence in its appropriate setting and to give the reader a base from which to judge the credibility of the research. The included ‘audit trail’ should allow readers to look into the research process and follow its main stages. It should address the sample design, method of selection, achieved sample composition (e.g. socio-demographic factors), any known limitations, the tools and approaches used to analyse, and the epistemological orientation of the research team. It should also be supported by appended examples, namely the topic guide, recruitment documents and an analytical framework.

The length of the report depends on the number and density of areas to be included. Moreover, integrating qualitative and quantitative findings is an important part of the report. It should be made clear that qualitative and quantitative evidence offer different ways of ‘knowing’ the world, and therefore which kind of evidence will tell the main ‘story’. When quantitative data is the main focus, the full capacity of the qualitative data should nevertheless be used (not only for supporting quotes from the participants). Moreover, the different types of data obtained give different ‘readings’; these should be reconciled by the reporter, not left to the reader. Finally, summaries (executive summaries) should be included to give a short, standalone account of the key findings and main messages derived from the research, together with a brief description of the methods used.

Most important when displaying qualitative evidence is that the subtlety, richness and detail of the original material are displayed.

Descriptive accounts:

  1. Defining elements, categories and classifications

Show nature of all kinds of phenomena, covering attitudes, beliefs, behaviours, factors, features, events, procedures and processes

It is helpful for the reader to show:

  • Examples of the original material on which descriptions and classification is based
  • The range and diversity of different elements, concepts or constructs that have been formed
  • A comprehensive map of all categories
  • The basis of any subsequent classification and how the different elements and categories have been assigned.

How best to display the original material is related to the categories and classes of data found. (See p. 338 following for examples.)

When categories and classes are displayed in lists, charts or text-based descriptions, the appropriate order has to be considered.

  2. Typologies

Provide descriptions of different sectors or segments in the study population or of different manifestations of phenomena. Here again, it is important to display the features that have led to the construction of the typology. For instance, it could be useful to give a case illustration of each of the typology groups. Examples of typology groups could be ‘soft-negotiator’ and ‘non-interventionist’.

Sometimes, a typology might also relate to a sector within the population. In these cases it is useful to describe the distribution of the typology across the study sample. Nevertheless, these distributions hold no statistical significance.

Explanatory accounts:

  1. Associations and Linkages

During data analysis, patterns within the data become apparent through linkages. In the reporting stage it is thus vital to give evidence that allows the reader some understanding of why two or more sets of phenomena may be linked and why some phenomena might be attached to certain subgroups.

First, you should describe the available evidence that supports these linkages. This evidence might be explicitly or implicitly conveyed in the text; it may also be inferred through further analysis or might simply be an explanatory hypothesis. Second, you should describe the circumstances in which the connection may change or become modified. Finally, exceptions should be noted.

  2. Displaying the explanatory base of evidence

The source of explanation can be hard to pin down, depending as it does on fitting several pieces of data together through iterative analysis. There can be several sources of explanation:

  • Explicit reasons and accounts: display all the reasons that have been given by participants for a particular phenomenon
  • Presenting underlying logic or ‘common sense’: display implicit connections within the data. Make clear that it was the researcher, not the participant, who was the architect.
  • Relaying explanatory concepts

Development of an important concept that helps explain the origins of different phenomena or sets of phenomena. Give background about the definition of the concept.

  • Drawing on other theoretical or empirical evidence

These theories and this evidence might help to explain one’s own findings. Give background on how the concepts or theory being used were developed.

  • Wider applications

Inferences drawn by researcher

Displaying and explaining recurrence

Frequent or dominant phenomena usually occur during the research process. The extent to which these should be reported is up to the researcher.

It is vital to remember here that statistical inferences about the wider population are usually biased and not applicable because of the purposive selection of samples. How many people said something might be important, but these numbers are interpreted differently in qualitative analysis than in quantitative analysis. The focus of the analysis should be on the issues that are talked about rather than on the count of people. The ‘array’ of what people talk about can, though, be presented in a more classified form (e.g. there are five main types of features parents find important when deciding about the school their children should attend). It should also be noted which issues differ between groups. If phenomena occur in large numbers, appropriate descriptions are ‘dominant’, ‘recurrent’, ‘consistent’, ‘widespread’, or ‘commonly held’. Numerical distribution of the sample is only used in the description of typologies within a sample.
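The advice above – describing recurrence with qualitative labels such as ‘widespread’ or ‘commonly held’ rather than reporting counts as if they were statistically meaningful – can be illustrated with a small helper function. The thresholds below are invented for illustration and carry no methodological authority:

```python
# A hedged sketch: translating a simple count of participants raising an
# issue into the kind of qualitative descriptor the text recommends.
# The cut-off points are arbitrary illustrative assumptions.
def recurrence_label(count: int, sample_size: int) -> str:
    share = count / sample_size
    if share >= 0.75:
        return "widespread"
    if share >= 0.5:
        return "commonly held"
    if share >= 0.25:
        return "recurrent"
    return "raised by a few participants"

print(recurrence_label(14, 16))  # -> widespread
print(recurrence_label(5, 16))   # -> recurrent
```

The point is not the numbers themselves but that the report speaks in terms of the nature and spread of an issue rather than its statistical prevalence.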

The use of illustrative material

Qualitative research easily invites you to use a variety of verbatim passages (quotations) within your report. These quotations or other types of primary data can effectively be used:

  • To demonstrate the type of language, terms or concepts that people use to discuss a particular subject
  • To illustrate the meanings that people attach to social phenomena
  • To illustrate people’s expressions of their views or thoughts about a particular subject and the different factors influencing it
  • To illustrate different positions in relation to a model, process or typology
  • To demonstrate features of presentation about phenomena, such as strength, ambivalence, hesitance, confusion or even contradictory views
  • To amplify the way in which complex phenomena are described and understood
  • To portray the general richness of individual or group accounts

These quotations, though, should never be used without interpretative commentary. Moreover, verbatim passages should not compromise the confidentiality and anonymity of the participants, and the report should not be overloaded with them.

The use of diagrammatic and visual representation

In general, these make complex processes or relationships more accessible to the reader. They can help to:

  • Display the range and diversity of phenomena or typologies
  • Display relationships between different factors
  • Explain complex processes and different levels and dimensions involved
  • Provide effective means for summarising
  • Break up the text-based format

Content of oral presentation

An oral presentation should represent only the top-line findings; methodological issues cannot be addressed in much detail. Presentations are most effective when tailored to one specific audience and are limited in length by the time the participants are able to listen actively.

The structure of the presentation should be as coherent as that of the report. The presentation style concerns the use of hand-outs, the use of visual material, the language, and the presenting stance or voice.

Chapter 14.  Case Studies

Developing a research design for case study research is difficult, as there is no comprehensive catalogue of research designs available yet (as there is for other methods). One pitfall to avoid is considering case study design to be a subset or variant of the research designs used for other methods: case study research is a separate research method with its own research design.

Definition of research design: Different definitions exist. First, every type of empirical research has an implicit, if not explicit, research design. In the most elementary sense, the design is a logical plan for getting from here to there, where ‘here’ may be defined as the initial set of questions to be answered and ‘there’ is some set of conclusions about those questions. Another textbook argues that a research design is a plan that "guides the investigator in the process of collecting, analysing and interpreting observations. It is a logical model of proof that allows the researcher to draw inferences". However, it is more than just a plan, because its main purpose is to help avoid the situation in which the evidence does not address the initial research question.

Components of research design:

  1. a study's questions: the form of the question provides an important clue regarding which method to use. ‘Why’ and ‘how’ questions are most appropriate for case study research.
  2. its propositions, if any: only with propositions will you move in the right direction, because ‘how’ and ‘why’ questions on their own are too broad. Propositions also indicate where to look for relevant evidence. However, some studies have a legitimate reason for not having propositions, e.g. because they are exploratory in nature.
  3. its unit(s) of analysis: this is a fundamental problem. The case may be an individual, but also an event or entity. However, cases are sometimes not easily defined in terms of beginning or end points. Selection of your unit of analysis will start to occur when you specify your research questions; however, you may also change the case as a result of discoveries during data collection.
  4. the logic linking the data to propositions: This will require you to link the collected data to your initial study proposition.

It can be done through e.g. pattern matching, explanation building, time-series analysis, logic models, and cross-case synthesis. The text refers to a detailed explanation in Chapter 5, which is Chapter 17 in our custom book.

  5. the criteria for interpreting the findings: Because case study research does not rely on statistics, we have to find other ways to interpret our findings. Again, Chapter 17 of our book explains this topic in detail.

The role of theory in design work: Covering the five components of research design will effectively force us to begin constructing a preliminary theory related to our topic. This part is essential, whether the ensuing case study's purpose is to develop or to test a theory. The theory will cover the questions, propositions, units of analysis, the logic connecting data to propositions, and the criteria for interpreting the findings; hence it embodies the five research design components. Theory development takes time and can be difficult, though.

Sometimes we have too complex a theoretical foundation, while in other cases we have only very limited information available on our topic. For theory development, you might want to review the literature concerning your topic. There is a full range of theories available:

  1. Individual theories: e.g. theories about individual development, cognitive behaviour, learning, perception.
  2. Group theories: e.g. theories about family functioning, informal groups, teams.
  3. Organizational theories: e.g. theories about bureaucracies, org. structure and functioning, organizational performance
  4. Societal theories: e.g. theories about urban development, international behaviour

However, there are also theories, like decision-making theory, that cut across these categories, hence involving more than one type. Theory development not only facilitates data collection; it may also facilitate generalization from case study to theory. There are two types of generalization. The first is analytic generalization, in which previously developed theory is used as a template with which to compare the empirical results of the case study. It can be used with single or multiple case studies. One should aim for analytic generalization, because it makes inferences at level two (see Fig. 2.2 on page 373).

The second type is statistical generalization, which concerns making an inference about a population on the basis of empirical data from a sample. It is less relevant for doing case studies, as cases should not be seen as sampling units. This type of generalization makes inferences at level one only (see Fig. 2.2).

Criteria for judging the quality of research design

Four tests exist for measuring the quality of any empirical research, including case studies. Definitions are given below, while tactics for assuring high quality are summarized in Figure 2.3.

Construct validity: identifying correct operational measures for the concepts being studied. This test is especially challenging in case study research because it can be very subjective sometimes.

Internal validity: seeking to establish a causal relationship, whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships (only applicable to explanatory or causal studies, not for exploratory or descriptive studies). Internal validity is mainly a concern for explanatory case studies, when an investigator wants to explain how and why x led to event y. Furthermore, the concern over internal validity, for case study research, extends to a broader problem of making inferences.

External validity: defining the domain to which a study's findings can be generalized.

Reliability: demonstrating that the operations of a study – such as the data collection procedures – can be repeated with the same results: if a later investigator followed the same procedures as described by an earlier investigator, he or she should arrive at the same findings. The emphasis is on doing the same case study over again, not on replicating it with another case.

Case studies can be divided along two major dimensions: single- and multiple-case analyses. After assigning the case study to one of these two categories, it can be further divided in terms of the unit of analysis being either holistic or embedded. This categorization leads to the matrix shown above.

Rationales when to use which case study design:

Single case study:

  1. The case represents the critical case in testing a well-formulated theory
  2. The case represents a unique or extreme case that is worth analysing.
  3. The case is a representative or typical case that is used to show for example a typical project among many different projects
  4. The case is revelatory, which means that it addresses an issue that has not yet been treated by other researchers
  5. The case is longitudinal, which means that the same case is studied at two or more points in time, to show how results change over time.

A weakness of single case studies is that the results may turn out completely different from what was expected beforehand, since the research is mostly exploratory and every case can provide different results.

The unit of analysis also has to be determined by the researcher. If he/she wants to examine subcategories within the case, the embedded design is used (see matrix table). If the case does not analyse subcategories and sticks to a general analysis, the level of analysis is called holistic. A weakness of the holistic approach is the possibility that the case study stays superficial and/or abstract. A weakness of the embedded approach is the possibility that the analysis focuses too much on the subcategories and does not return to the global level of analysis.

Multiple case analysis: Multiple case analysis should follow a replication design. Every case study is methodologically independent: for example, the researcher runs one experiment and obtains a result, then conducts a second and third experiment to verify it; this is replication design. Each case must therefore be selected so that it either predicts similar results (literal replication) or predicts contrasting results that can be explained and anticipated beforehand (theoretical replication). Multiple case analysis should not follow a sampling logic, which aims to make inferences about a whole population through sampling groups, but should always use replication logic. Each individual case in the replication approach consists of a full-scale independent study. The more replications are done (i.e. the more case studies are conducted), the more reliable the gathered results become.

When using multiple case analysis, there is still the categorization into holistic or embedded (see matrix table). Thus, a multiple case study of three cases can be embedded, which means that each of the three cases has subcategories that are examined.

Modest advice in selecting case study design

Single or multiple design? In general, multiple case analysis leads to better and more meaningful results and is generally preferred over single case analysis, as it also reduces skepticism towards the study by making it more reliable. However, it takes more time and resources to conduct.

Closed or flexible design? Sometimes it is necessary to change the design of your research when you discover results that point in a new direction, or when the case study design proves inefficient. In that case you have to evaluate the nature of the changes you make, so as to avoid changes so fundamental that they would influence the results of the study. In general, adaptation of the design is allowed, but make sure it remains consistent with your research objectives.

Mixing case studies with other methods? Mixing case studies and quantitative research can often lead to more meaningful results, but integrating the approaches is complicated, so it depends on the skill of the researcher and the available resources whether a mixed methods approach is worthwhile.

Chapter 15. Case Study Preparation

This chapter deals with the needed preparation for a case study. A good preparation includes desired skills of the investigator, training for a specific case study, developing a protocol, screening candidates and conducting a pilot case study.

Desired skills

The demands on your intellect, ego, and emotions are greater than those of other research methods, because data collection is not routinized.
Commonly required skills are:
- Able to ask good questions and interpret the answers
Case studies require an inquiring mind during data collection, not just before or after it. Because the specific information needed is not always predictable, it may be necessary to search for additional evidence. Research is about questions and not necessarily about answers.
- Be a good listener
Listening is also about observation. A good listener hears the exact words, captures the mood, understands the context, and stays alert to whether there is a message between the lines.
- Be adaptive and flexible, new situations are opportunities, not threats.
It might be necessary to make changes, ranging from the need to pursue an unexpected lead (potentially minor) to the need to identify a new case for study (potentially major). The original purpose has to be kept in mind, but a researcher must be willing to adapt procedures.
- Have a firm grasp of the issues being studied; this reduces the relevant events to be sought to manageable proportions.
Data collection is not only a matter of recording data; you must also be able to interpret the information and make inferences about what actually transpired.
- Be unbiased by preconceived notions
You have to be open to contrary findings.
Training for a specific case study

Human subjects protection

Most case studies are about contemporary human affairs, which require specific care.
This care includes:
- Gaining informed consent from all participants
- Protecting participants from harm, including avoiding use of any deception.
- Protecting privacy and confidentiality
- Taking special precautions that might be needed to protect especially vulnerable groups (for instance, children)
Every institution now has an Institutional Review Board (IRB), which reviews and approves research involving human subjects. An important step before proceeding with your case study is to obtain the IRB's approval.

Case study training as a seminar experience

Every case study investigator must be able to operate as a ‘senior’ investigator. Training begins with the definition of the questions and development of the study design.  A case study often must rely on a case study team for the following reasons:
- Often intensive data collection at the same time is needed, requiring a team of investigators
- Often multiple cases are involved, and different persons are needed to cover each site or to rotate among sites
- A combination of the first two conditions.

The goal of training is to have all team members understand why the study is done, what evidence is being sought, what variations can be anticipated and what would constitute supportive or contrary evidence for any given proposition.

Problems to be addressed

Training also has the purpose of uncovering problems. The most obvious problem is that training may reveal flaws in the case study design, so it might be necessary to make some revisions. The training might also reveal incompatibilities among the investigating team. A way of dealing with this is to agree within the team that contrary evidence will be respected if it is collected and verifiable. The training can also reveal impractical time deadlines or expectations. On the other hand, training may uncover positive features, such as complementary skills.

The case study protocol
A protocol is more than a questionnaire or instrument. It also contains general rules to be followed, and it is essential when doing a multiple-case study, although it is desirable in any case. A protocol increases the reliability of the case study and guides the investigator in carrying out data collection. It also keeps you targeted on the topic of the case and prepares you to anticipate several problems. It is convenient to identify the audience before you conduct the study.

Figure 3.2 (page 414) shows a table of contents from a protocol. The following sections are always included in a case study protocol:

- Overview of the case study project
It should cover the background information, which is a statement of the project you can present to anyone. It should also include the substantive issues being investigated and the relevant reading about the issues.
- Field procedures
Case studies are conducted within their real-life context. You have to learn to integrate real-world events with the needs of the data collection plan. Within a laboratory experiment, by contrast, the respondent can often not deviate from the set agenda, and the behaviour is constrained by the ground rules of the investigator.
The field procedures of the protocol need to emphasize major tasks in collecting data:
- Gaining access to key organizations or interviewees
- Having sufficient resources while in the field (computer, writing instruments, etc.)
- Developing a procedure for calling for assistance when needed
- Making a clear schedule of the data collection expected within specified time
- Providing for unanticipated events, including changes in availability
The protocol should also carefully describe procedures for protecting human subjects.
- Case study questions
The protocol questions give the general orientation of the inquiry. The questions in a protocol are posed to you, the investigator, not to an interviewee. This keeps the investigator on track. Each question should be accompanied by a list of likely sources of evidence.

There are different types of questions, divided in levels.
Level 1: Asked of specific interviewees
Level 2: Asked of the individual case
Level 3: Asked of the pattern of findings across multiple cases
Level 4: Questions asked of an entire study
Level 5: Normative questions about policy recommendations and conclusions

The difference between level 1 and level 2 questions is that the verbal line of inquiry is different from the mental line of inquiry. The level 3 questions cannot be addressed until the data from all single cases are examined. Only multiple-case analysis can cover level 3.
The protocol is for the data collection from a single case (even when it is part of a multiple-case study) and is not intended to serve the entire project.
There can be some differences between the data collection source and the unit of analysis. A matrix of this can be found in Figure 3.5 at page 423.
 

A guide for the case study report
Some planning about format, outline, and audience is needed, although researchers often forget it. Most reports of experiments follow a similar outline: posing research questions and hypotheses; a description of the research design and apparatus; data collection procedures; presentation of the data collected; analysis of the data; and a discussion of findings and conclusions. But keep in mind that case study plans can change, so you still have to be flexible.

Screening candidate 'cases' for your case study

Sometimes selection is straightforward, but sometimes there are many qualified candidates and a choice has to be made. Prior to collecting screening data, define a set of operational criteria. When doing multiple-case studies, select cases that best fit your replication design. When there are too many candidates, a two-stage screening procedure is warranted: first, collect relevant quantitative data about the pool; second, define relevant criteria for either stratifying or reducing the number of candidates. Reduce the number of candidates to 20.

The pilot case study
Pilot cases can be conducted for selecting the final case study design, but a pilot case can also represent the most complicated case compared to the real cases, so that nearly all relevant data collection issues will be encountered in the pilot case.

A pilot test is not a pre-test, because a pilot is more formative and assisting. The pilot case study can be so important that more resources may be devoted to this phase than to the collection of data from an actual case. Therefore, the following subtopics are worth discussing: the selection of pilot cases, the scope of the inquiry for pilot cases, and the nature of the reports from pilot cases.

Selection of pilot cases: 
Convenience, access, and geographic proximity can be the main criteria. It is possible to try different pilot cases for different types of technology. After a pilot case, you have to provide some feedback; to do so, a protocol also has to be developed for the pilot case. A pilot case should never be the occasion for an overly informal or highly personalized inquiry.

Scope of the pilot inquiry:
The scope of the inquiry for a pilot case can be much broader than the ultimate data collection. It can cover both substantive and methodological issues. Pilot data can provide considerable insight into the basic issues being studied. When this runs parallel with the relevant literature, it helps to ensure that the actual study reflects significant theoretical or policy issues.

Reports from the pilot cases
The reports do have great value and need to be written clearly. The difference with the actual study report is that pilot reports should be explicit about the lessons learned for both research design and field procedures.

Chapter 16. Evidence Collection

Case study evidence can come from many sources. This chapter discusses 6 of them: documentation, archival records, interviews, direct observation, participant-observation, and physical artefacts.

At an earlier time guidance on collecting data relevant for case studies was available under three rubrics, which are fieldwork, field research, and social science methods.

Six sources of evidence

The sources of evidence discussed here are the ones most commonly used in doing case studies. There are, however, many more sources. Figure 4.1 provides the list of sources with their strengths and weaknesses. Note: no single source has a complete advantage over the others; the various sources are highly complementary.

1. Documentation. Documentary information is likely to be relevant to every case study topic. This type of information can take many forms and should be the object of explicit data collection plans.

For case studies, the most important use of documents is to corroborate and augment evidence from other sources. First, documents are helpful in verifying the correct spellings and titles or names of organizations that might have been mentioned in interviews. Second, documents can provide other specific details to corroborate information from other sources. Third, you can make inferences from documents.

Documents play an explicit role in any data collection in doing case studies, and systematic searches for relevant documents are important in any data collection plan. However, you have to be critical, since not all documents tell the truth. It is important in reviewing any document to realize that it was written for some specific purpose and some specific audience other than those of the case study being done. Another problem is the abundance of materials available through internet searches: you may get lost reviewing materials and waste time on them.

2. Archival records. Archival records often take the form of computer files and records. This type of data can be used in conjunction with other sources of information in producing a case study. However, the importance of archival records varies a lot. When archival evidence has been deemed relevant, an investigator must be careful to ascertain the conditions under which it was produced as well as its accuracy.

Most archival records are also produced for a specific purpose and a specific audience other than the case study investigation, and these conditions must be fully appreciated in interpreting the usefulness and accuracy of the records.

3. Interviews. Interviews are one of the most important sources of case study information. Interviews will be guided conversations rather than structured queries. Throughout the interview process, you have two jobs: (a) to follow your own line of inquiry, as reflected in your case study protocol, and (b) to ask your actual questions in an unbiased manner that also serves the needs of your line of inquiry.

One type of case study interview is an in-depth interview. You can ask respondents about the facts of a matter as well as their opinions about events. Sometimes you can also ask the interviewee to propose her/his own insight into certain occurrences and may use such propositions as the basis for further inquiry.

For this reason the interview can take place over an extended period of time. And the interviewee can suggest others to interview, as well as other sources of evidence.

The more an interviewee assists, the more he/she becomes an informant. Key informants are often critical to the success of a case study. However, you have to be careful not to become overly dependent on a key informant. Rely on other sources of evidence as well, and search for contrary evidence as carefully as possible.

A second type of case study interview is the focused interview, in which a person is interviewed for a short period of time. In such cases, the interviews may still remain open-ended and assume a conversational manner, but you are more likely to be following a certain set of questions derived from the case study protocol. When performing such interviews, your questions have to be carefully worded, so that you appear genuinely naive about the topic and allow the interviewee to provide fresh commentary about it.

A third type of case study interview entails more structured questions, along the lines of a formal survey. Such a survey could be designed as part of an embedded case study and produce quantitative data as part of the case study evidence.

Overall, interviews are an essential source of case study evidence because most case studies are about human affairs or behavioural events.

Well-informed interviewees can provide important insights into such affairs or events, and can also provide shortcuts to prior history of such situations, helping you to identify other relevant sources of evidence. Interviews should always be considered verbal reports only.

A common question about doing interviews is whether to record them. This is a personal preference. Recording does provide a more accurate rendition. However, a recording device should not be used when (a) an interviewee refuses permission or appears uncomfortable in its presence, (b) there is no specific plan for transcribing or systematically listening to the contents of the electronic records, (c) the investigator is clumsy enough with mechanical devices that the recording creates distractions during the interview itself, or (d) the investigator thinks that the recording device is a substitute for listening closely throughout the course of an interview.

4. Direct Observation. Because a case study should take place in the natural setting of the case, you are creating the opportunity for direct observations. These observations serve as yet another source of evidence in a case study. The observations can range from formal to casual data collection activities. Observational evidence is often useful in providing additional information about the topic being studied. A common procedure to increase the reliability of observational evidence is to have more than a single observer making an observation.

5. Participant-Observation. Participant-observation is a special mode of observation in which you are not merely a passive observer. Instead, you may assume a variety of roles within a case study situation and may actually participate in the events being studied. This type of study has been most frequently used in anthropological studies of different cultural or social groups. The technique can be used in more everyday settings, such as a large organization or informal small groups.

Participant-observation provides certain unusual opportunities for collecting study data, but it also involves major problems. For some topics there may be no way of collecting evidence other than through participant-observation. Another distinctive opportunity is the ability to perceive reality from the viewpoint of someone inside the case study rather than external to it. Finally, other opportunities arise because you may have the ability to manipulate minor events.

The major problems related to participant-observation have to do with the potential biases produced. First, the investigator has less ability to work as an external observer and may, at times, have to assume positions or advocacy roles contrary to the interests of good social science practice. Second, the participant-observer is likely to follow a commonly known phenomenon and become a supporter of the group or organization being studied, if such support did not already exist. Third, the participant role may simply require too much attention relative to the observer role, leaving too little time to take notes. Fourth, if the organization or social group being studied is physically dispersed, the participant-observer may find it difficult to be at the right place at the right time, either to participate in or to observe important events. These trade-offs between the opportunities and the problems have to be considered seriously in undertaking any participant-observation study.

6. Physical artefacts. A final source of evidence is physical or cultural artefact – a technological device, a tool or instrument, a work of art, or some other physical evidence. Such artefacts may be collected or observed as part of a case study and have been used extensively in anthropological research. Physical artefacts have less potential relevance in the most typical kind of case study. However, when relevant, the artefacts can be an important component in the overall case.

Three principles of data collection

The benefits from these six sources of evidence can be maximized if you follow three principles. These principles are extremely important for doing high-quality case studies, are relevant for all six types of sources, and should be followed whenever possible, because they help in dealing with the problems of establishing the construct validity and reliability of the case study evidence.

Principle 1: Use multiple sources of evidence

Triangulation: rationale for using multiple sources of evidence. Relying on a single source of evidence is not recommended for conducting case studies; a major strength of case study data collection is precisely the opportunity to use many different sources.

The use of multiple sources of evidence in case studies allows an investigator to address a broader range of historical and behavioural issues. However, the most important advantage presented by using multiple sources of evidence is the development of converging lines of inquiry, a process of triangulation and corroboration. Any case study finding or conclusion is likely to be more convincing and accurate if it is based on several different sources of information, following a corroboratory mode.

Patton (2002) discusses four types of triangulation in doing evaluations – The triangulation

  1. of data sources (data triangulation)
  2.  among different evaluators (investigator triangulation)
  3. of perspectives to the same data set (theory triangulation), and
  4. of methods (methodological triangulation)

We mainly focus on the first type. Figure 4.2 distinguishes between two conditions: when you have really triangulated the data (upper portion) and when you have multiple sources as part of the same study that nevertheless address different facts (lower portion). When you have triangulated the data, the events or facts of the case study have been supported by more than a single source of evidence. When you have used multiple sources but not actually triangulated the data, you have typically analysed each source of evidence separately and compared the conclusions of the different analyses.

With data triangulation, the potential problem of construct validity also can be addressed, because multiple sources of evidence essentially provide multiple measures of the same phenomenon.

Prerequisites for using multiple sources of evidence. The use of multiple sources of evidence imposes a greater burden. First, the collection of data from multiple sources is more expensive than collecting data from a single source. Second, and more important, each investigator needs to know how to carry out the full variety of data collection techniques.

Principle 2: Create a Case Study Database

This principle has to do with the way of organizing and documenting the data collected for case studies. The documentation commonly consists of two separate conditions:

  1. the data or evidentiary base and
  2. the report of the investigator, whether in article, report, or book form.

A case study database markedly increases the reliability of the entire case study. The lack of a formal database is a major shortcoming and needs to be corrected.

There are numerous ways to accomplish the task, as long as you and other investigators are aware of the need and are willing to commit the additional effort required to build the database. Every report should contain enough data so that the reader of the report can draw independent conclusions about the case study.

The problem of developing the database is described in terms of four components:

  1. Case study notes. These notes must be stored in a way that other people can also retrieve them at some later date, so they need to be organized, categorized, complete, and available for later access.
  2. Case study documents. Main objective is to make the documents readily retrievable for later inspection. This can be on paper or electronically (PDF files).
  3. Tabular materials. The database can consist of tabular materials, either collected from the site being studied or created by the research team. Such material also needs to be organized and stored to allow for later retrieval.
  4. Narratives. Narratives also can be considered as a formal part of the database. The narrative reflects a special practice that should be used more frequently: to have case study investigators compose open-ended answers to the questions in the case study protocol. The main purpose is to document the connection between specific pieces of evidence and various issues in the case study, generously using footnotes and citations.
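The four components above can be pictured as a small, retrievable store in which each piece of evidence is tagged with the protocol question it informs, so that the connection between evidence and issues stays traceable. The sketch below is purely illustrative; the component names, questions, and helper functions are our own, not prescribed by the text:

```python
# Illustrative sketch of a case study database. All names are hypothetical.
# Each evidence item is tagged with the protocol question it addresses,
# so it stays organized, categorized, and retrievable at a later date.

case_db = {
    "notes": [],       # 1. case study notes
    "documents": [],   # 2. case study documents (paper or PDF)
    "tables": [],      # 3. tabular materials
    "narratives": [],  # 4. open-ended answers to protocol questions
}

def add_evidence(db, component, item, question):
    """Store an evidence item under one of the four components."""
    db[component].append({"item": item, "question": question})

def retrieve(db, component, question):
    """Return all items of a component linked to a protocol question."""
    return [e["item"] for e in db[component] if e["question"] == question]

add_evidence(case_db, "notes", "Interview notes, site A", "Q1")
add_evidence(case_db, "notes", "Observation notes, site A", "Q2")
add_evidence(case_db, "documents", "Annual report (PDF)", "Q1")

print(retrieve(case_db, "notes", "Q1"))  # ['Interview notes, site A']
```

Because every item carries the protocol question it informs, another investigator can later retrieve the same evidence and draw independent conclusions, which is what makes a formal database increase reliability.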

Principle 3: Maintain a Chain of Evidence
This principle increases the reliability of the information in a case study. This principle is to allow an external observer (reader of the case study) to follow the derivation of any evidence from initial research questions to ultimate case study conclusions. The external observer should be able to trace the steps in either direction. It is the same process used in forensic investigations. No evidence should have been lost.

Chapter 17. Data Analysis

The analysis of case study evidence is one of the least developed and most difficult aspects of doing case studies. Too many times, investigators start case studies without having the foggiest notion about how the evidence is to be analysed. Investigators keep searching for formulas, recipes, or tools, hoping that familiarity with these devices will produce the needed analytic result. There are, for example, many computer-assisted routines with pre-packaged software that assist with qualitative data analysis. These tools can help you code and categorize large amounts of narrative text. Key to your understanding of the value of these packages are two words: assisted and tools. The software will not do any analysis for you, but it may serve as an able assistant and reliable tool. So software can be really helpful, but keep in mind that the software is the assistant, not you.

All empirical research studies, including case studies, have a 'story' to tell. The story differs from a fictional account because it embraces your data, but it remains a story because it must have a beginning, middle, and end. The needed analytical strategy is your guide to crafting this story, and only rarely will your data do the crafting for you. The strategy will help you to treat the evidence fairly, produce compelling analytic conclusions, and rule out alternative interpretations. The strategy will also help you to use tools and make manipulations more effectively and efficiently. The book describes 4 strategies:

Relying on theoretical propositions: The first and most preferred strategy is to follow the theoretical propositions that led to your case study. The original objectives and design of the case study presumably were based on such propositions, which in turn reflected a set of research questions, reviews of the literature and new hypotheses or propositions. These propositions help you to focus attention on certain data and to ignore other data.

Developing a case description: A second general analytic strategy is to develop a descriptive framework for organizing the case study. For instance, you actually (but undesirably) may have collected a lot of data without having settled on an initial set of research questions. Studies started this way inevitably encounter challenges at their analytic phase.

The ideas for your framework should have come from your initial review of literature, which may have revealed gaps or topics of interest to you, spurring your interest in doing a case study.

Another suggestion is to note the structure of existing case studies, and at least to observe their tables of contents as an implicit clue to different descriptive approaches.

Using both qualitative and quantitative data: This third strategy may be more attractive to advanced students and scholars and can yield appreciable benefits. Certain case studies can include a substantial amount of quantitative data. If these data are subjected to statistical analyses while the qualitative data nevertheless remain central to the entire case study, you will have successfully followed a strong analytic strategy. The quantitative data may have been relevant to your case study for at least two reasons:

  1. The data may cover the behaviour or events that your case study is trying to explain.
  2. The data may be related to an embedded unit of analysis within your broader case study.

Examining rival explanations: A fourth general analytic strategy, trying to define and test rival explanations, generally works with all three of the previous strategies: initial theoretical propositions (the first strategy) might have included rival hypotheses; the contrasting perspectives of participants and stakeholders may produce rival descriptive frameworks (the second strategy); and data from comparison groups may cover rival conditions to be examined as part of using both quantitative and qualitative data (the third strategy).

Figure 5.1 classifies and lists many types of rivals. There are three types of craft rivals, rivals that underlie all social science research, and textbooks have given much attention to them. There are also six real-life rivals, which have received virtually no attention in other textbooks. These real-life rivals are the ones that you should carefully identify prior to your data collection.

1. Pattern Matching

For case study analysis, one of the most desirable techniques is to use a pattern-matching logic. Such logic compares an empirically based pattern with a predicted one. If the patterns coincide, the results can help a case study to strengthen its internal validity (see Chapter 2). If the case study is an explanatory one, the patterns may be related to the dependent or the independent variables of the study. If the case study is a descriptive one, pattern matching is still relevant, as long as the predicted pattern of specific variables is defined prior to data collection.

Non-equivalent dependent variables as a pattern

The dependent-variables pattern may be derived from one of the more potent quasi-experimental research designs, labelled a 'non-equivalent dependent variables design'. According to this design, an experiment or quasi-experiment may have multiple dependent variables, that is, a variety of relevant outcomes. The pattern matching occurs in the following manner: if, for each outcome, the initially predicted values have been found, and at the same time alternative 'patterns' of predicted values have not been found, strong causal inferences can be made.
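This matching procedure can be sketched in a few lines of code, using hypothetical data: the theory predicts a direction for each outcome (+1 for a rise, -1 for a decline), and the match holds only when every observed direction agrees with the predicted pattern and not with the rival pattern. The variables and data below are illustrative only:

```python
# Hypothetical sketch of pattern matching across multiple outcomes.

def direction(before, after):
    """Direction of change: +1 (rise), -1 (decline), or 0 (no change)."""
    return (after > before) - (after < before)

def matches(predicted, observed):
    """True only if every observed direction agrees with the prediction."""
    return all(direction(b, a) == p
               for p, (b, a) in zip(predicted, observed))

predicted = [1, 1, -1]   # theory: outcomes 1-2 rise, outcome 3 declines
rival     = [-1, -1, 1]  # a rival explanation predicts the opposite
observed  = [(10, 14), (5, 9), (20, 12)]  # (before, after) per outcome

print(matches(predicted, observed))  # True: the theory's pattern holds
print(matches(rival, observed))      # False: the rival pattern is rejected
```

Finding the predicted pattern while the rival pattern fails is what licenses the stronger causal inference described above.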

Rival explanations as patterns.

The use of rival explanations provides a good example of pattern matching for independent variables. In such a situation, several cases may be known to have a certain type of outcome, and your investigation has focused on how and why this outcome occurred in each case. This analysis requires the development of rival theoretical propositions.

Simpler patterns
This same logic can be applied to simpler patterns, having a minimal variety of either dependent or independent variables. In the simplest case, where there may be only two different dependent (or independent) variables, pattern matching is possible as long as a different pattern has been stipulated for these two variables.

Precision of pattern matching
At this point in the state of the art, the actual pattern-matching procedure involves no precise comparisons. Whether one is predicting a pattern of non-equivalent dependent variables, a pattern based on rival explanations, or a simple pattern, the fundamental comparison between the predicted and the actual pattern may involve no quantitative or statistical criteria.

2. Explanation Building

A second analytic technique is in fact a special type of pattern matching, but the procedure is more difficult and therefore deserves separate attention. Here, the goal is to analyse the case study data by building an explanation about the case.

Elements of explanations

To 'explain' a phenomenon is to stipulate a presumed set of causal links about it, or 'how' or 'why' something happened.

The causal links may be complex and difficult to measure in any precise manner. In most existing case studies, explanation building has occurred in narrative form. Because such narratives cannot be precise, the better case studies are the ones in which the explanations have reflected some theoretically significant propositions.

Iterative nature of explanation building

The explanation-building process, for explanatory case studies, has not been well documented in operational terms. However, the eventual explanation is likely to be a result of a series of iterations (repetitions):

  • Making an initial theoretical statement or an initial proposition about policy or social behaviour.
  • Comparing the findings of an initial case against such a statement or proposition
  • Revising the statement or proposition
  • Comparing other details of the case against the revision
  • Comparing the revision to the facts of a second, third, or more cases
  • Repeating this process as many times as needed.

In this sense, the final explanation may not have been fully stipulated (determined) at the beginning of a study and therefore differs from the pattern-matching approaches previously described. Rather, the case study evidence is examined, theoretical positions are revised, and the evidence is examined once again from a new perspective in this iterative mode.

Potential problems in explanation building

You should be forewarned that this approach to case study is fraught with dangers. Much analytic insight is demanded of the explanation builder. As the iterative process progresses, for instance, an investigator may slowly begin to drift away from the original topic of interest. Constant reference to the original purpose of the inquiry and the possible alternative explanations may help to reduce this potential problem.

3. Time-Series Analysis

A third analytic technique is to conduct a time-series analysis, directly analogous to the time-series analysis conducted in experiments and quasi-experiments.

Such analysis can follow many intricate patterns, which have been the subject of several major textbooks in experimental and clinical psychology with single subjects; the interested reader is referred to such works for further detailed guidance. The more precise the pattern, the more that the time-series analysis also will lay a firm foundation for the conclusions of the case study.

Simple time series

Compared to the more general pattern-matching analysis, a time-series design can be much simpler in one sense: in time series, there may only be a single dependent or independent variable. In these circumstances, when a large number of data points are relevant and available, statistical tests can even be used to analyse the data. However, the pattern can be more complicated in another sense because the appropriate starting or ending points for this single variable may not be clear.

Complex time series

The time-series designs can be more complex when the trends within a given case are postulated (presumed) to be more complex. One can postulate, for instance, not merely rising or declining trends but some rise followed by some decline within the same case. This type of mixed pattern, across time, would be the beginning of a more complex time series. In general, although a more complex time series creates greater problems for data collection, it also leads to a more elaborate trend that can strengthen an analysis.

Chronologies

The compiling of chronological events is a frequent technique in case studies and may be considered a special form of time-series analysis. The chronological sequence again draws directly on a major strength of case studies cited earlier: that case studies allow you to trace events over time. The procedure can serve an important analytic purpose, namely investigating presumed causal events, because the basic sequence of a cause and its effect cannot be temporally inverted. The analytic goal is to compare the chronology with that predicted by some explanatory theory, in which the theory has specified one or more of the following kinds of conditions:

  • Some events must always occur before other events, with the reverse sequence being impossible.
  • Some events must always be followed by other events, on a contingency basis.
  • Some events can only follow other events after a pre-specified interval of time.
  • Certain time periods in a case study may be marked by classes of events that differ substantially from those of other time periods.
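The first kind of condition, that some events must always occur before others, can be checked mechanically against an observed chronology. The sketch below is a minimal illustration; the event names and constraints are hypothetical.

```python
# Hedged sketch: comparing an observed chronology against a
# theoretically predicted ordering of events.

def respects_order(observed, must_precede):
    """Check every (earlier, later) constraint against the observed sequence."""
    position = {event: i for i, event in enumerate(observed)}
    return all(position[a] < position[b]
               for a, b in must_precede
               if a in position and b in position)

observed = ["policy announced", "funding allocated", "programme launched"]
constraints = [("policy announced", "programme launched"),
               ("funding allocated", "programme launched")]
print(respects_order(observed, constraints))   # True: chronology matches theory
```

A chronology that violates even one such constraint would count against the explanatory theory that predicted it.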

Summary conditions for time-series analysis

Whatever the stipulated nature of the time series, the important case study objective is to examine some relevant 'how' and 'why' questions about the relationship of events over time, not merely to observe the time trends alone. An interruption in a time series will be the occasion for postulating potential causal relationships; similarly, a chronological sequence should contain causal postulates.

4. Logic Models

This fourth technique has become increasingly useful in recent years, especially in doing case study evaluations. A logic model deliberately stipulates a complex chain of events over an extended period of time. The events are staged in repeated cause-effect-cause-effect patterns, whereby a dependent variable (event) at an earlier stage becomes the independent variable (causal event) for the next stage.

This process can help a group define more clearly its vision and goals, as well as how the sequence of programmatic actions will accomplish those goals. As an analytic technique, the use of logic models consists of matching empirically observed events to theoretically predicted events.
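This matching step can be sketched as follows. The stage names are illustrative assumptions loosely modelled on the organizational example discussed below, not content from the original text.

```python
# Hedged sketch of a logic model as a repeated cause-effect chain,
# and of matching observed case events against the predicted stages.

predicted_chain = ["new training programme",     # initial causal event
                   "improved staff skills",      # effect, then cause of next stage
                   "improved manufacturing",
                   "improved business performance"]

observed_events = {"new training programme",
                   "improved staff skills",
                   "improved manufacturing"}

# Each predicted stage either was or was not observed in the case data.
matches = [(stage, stage in observed_events) for stage in predicted_chain]
for stage, seen in matches:
    print(f"{stage}: {'observed' if seen else 'not (yet) observed'}")
```

A break in the chain, a predicted stage with no observed counterpart, signals either incomplete data or a flaw in the stipulated model.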

Individual-level logic model

The first type assumes that your case study is about an individual person (Figure 5.2). The events flow across a series of boxes and arrows reading from left to right in the figure.

Firm or organizational-level logic model

A second type of logic model traces events taking place in an individual organization, such as a manufacturing firm. Figure 5.3 shows how changes in a firm are claimed to lead to improved manufacturing and eventually to improved business performance.

An alternative configuration for an organizational-level logic model

Graphically, nearly all logic models follow a linear sequence. In real life, however, events can be more dynamic and do not necessarily progress linearly. One such set of events might occur in relation to the 'reforming' or 'transformation' of an organization.

For instance, business firms may undergo many significant operational changes, and the business's mission and culture also may change. Figure 5.4 presents an alternatively configured, third type of logic model, reflecting these conditions.

Program-level logic model

Returning to the more conventional linear model, Figure 5.5 contains a fourth and final type of logic model. Here, the model depicts the rationale underlying a major federal program, aimed at reducing the incidence of HIV/AIDS by supporting community planning and prevention initiatives.

5. Cross-Case Synthesis

A fifth technique applies specifically to the analysis of multiple cases. The technique is especially relevant if a case study consists of at least two cases; the analysis is then likely to be easier, and the findings more robust, than with only a single case. Cross-case syntheses can be performed whether the individual case studies have previously been conducted as independent research studies or as a predesigned part of the same study.
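A common device in cross-case synthesis is a word table tabulating each case against a uniform set of findings. The sketch below is a minimal, hypothetical illustration; the case names and findings are invented.

```python
# Hedged sketch of a cross-case word table: rows are cases, columns
# are findings, so patterns can be compared across the cases.

cases = {
    "Case A": {"strong leadership", "external funding"},
    "Case B": {"strong leadership"},
    "Case C": {"external funding", "community support"},
}

# Uniform column set: every finding that appears in any case.
features = sorted(set().union(*cases.values()))

table = {name: {f: (f in findings) for f in features}
         for name, findings in cases.items()}

# One simple synthesis question: which findings recur in every case?
present_in_all = [f for f in features
                  if all(table[name][f] for name in cases)]
print(present_in_all)
```

Here no finding is shared by all three cases, which itself is an analytically interesting cross-case result.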

Pressing for a high quality analysis

No matter which specific analytic strategy or techniques you have chosen, you must do everything possible to ensure that your analysis is of the highest quality. First, your analysis should show that you attended to all the evidence. Second, your analysis should address, if possible, all major rival interpretations. Third, your analysis should address the most significant aspect of your case study. Fourth, you should use your own prior, expert knowledge in your case study.
