This summary is based on the 2012-2013 academic year (4th edition).
The type of theory
The relation of theory and research is mainly characterized by two matters. First, there is the question of what form of theory one is talking about. Second, there is the question of whether data are collected to test or to build theories. The importance of theory lies in the fact that it provides a rationale for the research and a framework within which social phenomena can be understood and research findings can be interpreted.
“Theory” refers to an explanation of observed regularities. However, sociological theories involve a higher level of abstraction; examples include structural-functionalism, symbolic interactionism, critical theory, poststructuralism and structuration theory. From this follows a distinction between theories of the middle range and grand theories, which operate at a more abstract and general level. Grand theories are considered of limited use in connection with social research because they offer few indications to researchers as to how they might guide or influence the collection of empirical evidence.
The level of abstraction of a grand theory is likely to be so great that the researcher would find it difficult to make the necessary links with the real world. Middle-range theories, by contrast, are intermediate between general theories of social systems, which are too remote from particular classes of social behaviour, organization and change to account for what is observed, and detailed descriptions of particulars that are not generalized at all. As a consequence, middle-range theories are much more likely to be the focus of empirical enquiry.
Although middle-range theories operate in a limited domain, they vary in their range of application. They fall between grand theories and empirical findings and represent attempts to understand and explain a limited aspect of social life. However, the distinction between grand and middle-range theories is not adequate to answer the question of what theory is. The term “theory” is frequently used in a manner that means little more than the background literature in an area of social enquiry.
Research that has no obvious connection with theory is sometimes dismissed by other scientists as mere “empiricism”. The term “empiricism” denotes a general approach to the study of reality which holds that only knowledge gained through experience and the senses is acceptable. In this sense, ideas must be subjected to the rigours of testing before they can be considered knowledge. The term is also used to refer to the belief that the accumulation of “facts” is a legitimate goal in its own right; in this meaning it is referred to as “naïve empiricism”.
However, it would be inaccurate to dismiss as naïve empiricism studies whose authors have not been preoccupied with theory. Such research is conditioned by, and directed towards, research questions that arise out of an interrogation of the literature. The data collection and analysis are subsequently geared to the illumination or resolution of the research issue or problem identified at the outset. The literature acts as a proxy for theory, and in many instances theory is latent or implicit in the literature. Thus, research that appears to have the characteristics of a fact-finding exercise should not be dismissed as naïve empiricism.
Indeed, research is usually done in order to answer questions posed by theoretical considerations. An alternative position, however, is to view theory as something that occurs after the collection and analysis of some or all of the data associated with a project. At this point, the significance of the relationship between theory and research depends on whether we are referring to deductive or to inductive theory.
Deductive and inductive theory
In the case of deductive theory, the researcher, on the basis of what is known about a particular domain and of theoretical considerations relating to that domain, deduces a hypothesis (or hypotheses) that must then be subjected to empirical scrutiny. Embedded within the hypothesis will be concepts that need to be translated into researchable entities. The social scientist must therefore deduce a hypothesis and translate it into operational terms, specifying how data can be collected in relation to the concepts that make up the hypothesis.
The process of deduction runs as follows:
1. Theory
2. Hypothesis
3. Data collection
4. Findings
5. Hypothesis confirmed or rejected
6. Revision of theory
On the other hand, in the case of induction, the researcher infers the implications of his or her findings for the theory that prompted the whole exercise. The findings are fed back into the stock of theory and the research findings associated with a certain domain of enquiry.
In practice, however, the deductive approach, which is usually associated with quantitative research, does not always follow the sequence outlined above in its pure form. “Theory” may be little more than the literature on a certain topic, in the form of the accumulated knowledge gleaned from books and articles. And although the deductive process appears very linear, with one step following the other in a clear logical sequence, there are many instances where this is not the case: the researcher’s view of the theory or literature may change as a result of the analysis of the collected data; new theoretical ideas or findings may be published by others before the researcher has generated his or her own findings; or the relevance of a set of data for a theory may become apparent only after the data have been collected.
Rather than developing theories and then testing them, some researchers prefer an approach to the relationship between theory and research that is primarily inductive. With an inductive stance, the process of induction involves drawing generalizable inferences out of observations.
A representative figure showing the difference between inductivism and deductivism referring to the relationship between theory and research would be like:
Deductive Approach: Theory → Observations/ Findings
Inductive Approach: Observations/ Findings → Theory
In the end, it should be noted that, just as deduction entails an element of induction, induction, which represents an alternative strategy for linking theory and research, contains a deductive element too. Moreover, the characterization of the inductive strategy as associated with qualitative research is not entirely straightforward: not only does much qualitative research not generate theory, but theory is often used at the very least as a background to qualitative investigations. Thus, even if it is useful to think of the relationship between theory and research in terms of deductive and inductive strategies, the issues are not as clear-cut as they are sometimes presented. Deductive and inductive strategies are therefore better thought of as tendencies than as a hard-and-fast distinction.
Epistemological considerations
An epistemological issue concerns the question of what is (or should be) regarded as acceptable knowledge in a discipline. A central question is whether the social world can and should be studied according to the same principles, procedures and ethos as the natural sciences.
Positivism: a natural science epistemology
Positivism affirms the importance of imitating the natural sciences. Positivism is an epistemological position that advocates the application of the methods of the natural sciences to the study of social reality and beyond. Moreover, positivism also entails five main principles.
First, according to the principle of phenomenalism, only phenomena, and hence knowledge confirmed by the senses, can genuinely be warranted as knowledge. Second, the principle of deductivism holds that the purpose of theory is to generate hypotheses that can be tested and that will thereby allow explanations of laws to be assessed. Third, according to the principle of inductivism, knowledge is arrived at through the gathering of facts that provide the basis for laws. The fourth principle is that of objectivity: science must be conducted in a way that is value free. Finally, there is a clear distinction between scientific statements and normative statements, and a belief that the former are the true domain of the scientist. This last principle is implied by the first, because the truth or otherwise of normative statements cannot be confirmed by the senses.
As a final point, it should be noted that positivism should not be treated as synonymous with science and the scientific. Philosophers of science differ in their characterization of scientific practice. Thus, when writers comment on positivism, it is not always clear whether they are referring to the philosophical position or to a scientific approach more generally.
Realism, and particularly critical realism, is another philosophical position that purports to provide an account of the nature of scientific practice. Realism shares two features with positivism. The first is the belief that the natural and the social sciences can and should apply the same kinds of approach to the collection of data and to explanation. The second is a commitment to the view that there is an external reality to which scientists direct their attention; in other words, there is a reality that is separate from our descriptions of it. Realism takes two major forms: empirical realism and critical realism.
Empirical realism, also referred to as naïve realism, asserts that, through the use of appropriate methods, reality can be understood. The label “naïve” reflects the fact that such realists often assume a perfect correspondence between reality and the terms used to describe it. Empirical realism thereby fails to recognize that there are enduring structures and generative mechanisms underlying and producing observable phenomena and events, and is in this sense superficial.
Critical realism is a specific form of realism whose manifesto is to recognize the reality of the natural order and the events and discourses of the social world. According to critical realism, the scientist’s conceptualization of reality is simply a way of knowing that reality. Critical realists acknowledge that the categories they employ to understand reality are likely to be provisional; they thus recognize a distinction between the objects that are the focus of their enquiries and the terms they use to describe them. Secondly, critical realists admit into their explanations theoretical terms that are not directly amenable to observation. Generative mechanisms, although not directly observable, are admissible on the grounds that their effects are observable. Critical realism is “critical” because the identification of generative mechanisms offers the prospect of introducing changes that can transform the status quo.
The term interpretivism denotes the epistemology that contrasts with positivism. It describes the views of writers who have been critical of the application of the scientific model to the study of the social world and who have been influenced by different intellectual traditions. Interpretivism is predicated upon the view that a strategy is required that respects the differences between people and the objects of the natural sciences, and therefore requires the social scientist to grasp the subjective meaning of social action. The common premise is that the subject matter of the social sciences, people and their institutions, is fundamentally different from that of the natural sciences. Interpretivism involves Weber’s view, the hermeneutic-phenomenological tradition and symbolic interactionism.
According to Max Weber, sociology is a science which attempts the interpretive understanding of social action in order to arrive at a causal explanation of its course and effects. Weber’s description seems to embrace both explanation and understanding, but the point is that the task of causal explanation refers to the interpretive understanding of social action rather than to external forces that have no meaning for those involved in that social action.
Hermeneutics is a term drawn from theology which, when imported into the social sciences, is concerned with the theory and method of the interpretation of human action. The distinction reflects a division between an emphasis on the explanation of human behaviour and an emphasis on the understanding of human behaviour.
Understanding human behaviour is concerned with the empathic understanding of human action rather than with the forces that are deemed to act on it.
Phenomenology is one of the main anti-positivist traditions. It is a philosophy concerned with the question of how individuals make sense of the world around them and how the philosopher should bracket out preconceptions in his or her grasp of that world.
Symbolic interactionism argues that interaction takes place in such a way that the individual is continually interpreting the symbolic meaning of his or her environment and acts on the basis of this imputed meaning.
Ontological considerations
The central ontological question is whether social entities can and should be considered objective entities that have a reality external to social actors, or whether they can and should be considered social constructions built up from the perceptions and actions of social actors. These positions are known as objectivism and constructionism. Ontological assumptions and commitments feed into the ways in which research questions are formulated and research is carried out.
Objectivism is an ontological position that asserts that social phenomena and their meanings have an existence that is independent of social actors. It implies that social phenomena confront us as external facts that are beyond our reach or influence.
Constructionism (also known as constructivism) is an ontological position which asserts that social phenomena and their meanings are continually being accomplished by social actors. It implies that social phenomena and categories are produced through social interaction and are in a constant state of revision. In more recent times, the term has also come to include the notion that the researcher presents a specific version of social reality, rather than one that can be regarded as definitive.
The distinction between quantitative and qualitative research represents a useful means of classifying different methods of social research, and it is an umbrella for a range of issues concerned with the practice of social research. On the surface, the distinction is that quantitative researchers employ measurement and qualitative researchers do not. However, there are more differences between the two.
Quantitative research emphasizes quantification in the collection and the analysis of data. It entails a deductive approach to the relationship between theory and research, in which the accent is placed on the testing of theories. It has incorporated the practices and norms of the natural scientific model and of positivism. Finally, it embodies a view of social reality as an external, objective reality.
Qualitative research emphasizes words rather than quantification in the collection and analysis of data. It emphasizes an inductive approach to the relationship between theory and research, in which the emphasis is placed on the generation of theories. It has rejected the practices and norms of the natural scientific model and of positivism in preference for an emphasis on the way in which individuals interpret their social world. Finally, it embodies a view of social reality as a constantly shifting emergent property of individuals’ creation.
Mixed methods is a term widely used to refer to research that combines methods associated with both quantitative and qualitative research. When the two approaches are contrasted in this way, they may easily be seen as incompatible; in fact, they can be fruitfully combined within a single project.
As discussed above, influences on social research come from theory, ontology and epistemology. Two additional factors that have an impact are values and practical considerations.
Values reflect either the personal beliefs or the feelings of the researcher. There are numerous points at which bias and the intrusion of values can occur: the choice of research area, the formulation of the research question, the choice of method, the formulation of the research design and data collection techniques, the implementation of data collection, the analysis of data, the interpretation of data, and the conclusions.
Another position in relation to values and bias is to recognize that research cannot be value free, but to ensure that there is no untrammelled incursion of values into the research process and to be self-reflective, and so exhibit reflexivity, about the part played by such factors. A further approach is to argue for consciously value-laden research. This position is common among some feminist writers, who argue that only research on women that is intended for women will be consistent with the wider political needs of women.
Practical considerations in decisions about how social research should be carried out are also of great significance. For example, the choice of research strategy, design or method has to be dovetailed with the specific research question being investigated. The topic and the people being investigated are further practical issues to be considered. Because all social research is a coming-together of the ideal and the feasible, there will be many circumstances in which the nature of the topic or of the subjects of an investigation and the constraints on the researcher loom large in decisions about how best to proceed.
A research design provides a framework for the collection and analysis of data. A choice of research design reflects decisions about the priority being given to a range of dimensions of the research process. These dimensions include the importance attached to: expressing causal connections between variables; generalizing to larger groups of individuals than those actually forming part of the investigation; understanding behaviour and the meaning of that behaviour in its specific social context; and having a temporal (that is, over-time) appreciation of social phenomena and their interconnections.
A research method is simply a technique for gathering data. It may involve a specific instrument, such as a self-completion questionnaire or a structured interview schedule, or participant observation, whereby the researcher listens to and watches others.
Criteria in social research
In social research three of the most prominent criteria are reliability, replication and validity.
Reliability is concerned with whether the results of a study are repeatable. The term usually refers to the question of whether the measures that are devised for concepts in the social sciences are consistent. Examined in greater detail, reliability is chiefly an issue in connection with quantitative research, where the researcher is likely to be concerned with whether a measure is stable or not.
Replication and replicability concern the choice of researchers to reproduce the findings of others. For replication to take place, a study must be capable of replication: it must be replicable. This means the researcher has to spell out his or her procedures in great detail so that replication is possible. An investigation’s capacity to be replicated is highly valued by many social researchers working within a quantitative research tradition.
Validity is concerned with the integrity of the conclusions generated from a piece of research. Thus, it is also considered the most important criterion of research. There are four main types of validity: measurement validity, internal validity, external validity and ecological validity.
Measurement validity applies primarily to quantitative research and to the search for measures of social scientific concepts. Also referred to as construct validity, it concerns the question of whether a measure that is devised of a concept really does reflect the concept that it is supposed to be denoting. The assessment of measurement validity presupposes that a measure is reliable: if a measure of a concept is unstable, in that it fluctuates and hence is unreliable, it cannot provide a valid measure of the concept in question, because it will not reflect the concept it is supposed to be measuring.
A variable is an attribute on which cases vary. Cases may be people, things, organizations, countries, nations and so on. If an attribute does not vary, it is a constant.
The main distinction that social researchers make, though, is between dependent and independent variables; constants are of little interest to them. Independent variables are deemed to have a causal influence on the dependent variable.
Internal validity is mainly related to the issue of causality. It is concerned with the question of whether a conclusion that incorporates a causal relationship between two or more variables holds water. That is, if we suggest that x causes y, can we be sure that it is x that is responsible for variation in y, and not something else that is producing an apparent causal relationship?
External validity is concerned with the question of whether the results of a study can be generalized beyond the specific research context. It is at this point that it becomes crucial the issue of how people are selected to participate in research and why quantitative researchers are keen to generate representative samples.
Ecological validity is concerned with the question of whether social scientific findings are applicable to people’s everyday, natural social settings; in other words, whether social research sometimes produces findings that may be technically valid but have little to do with what happens in people’s everyday lives. If research results are ecologically invalid, they are in a sense artefacts of the social scientist’s arsenal of data collection and analytic tools.
For example, the findings deriving from a study using questionnaires may have measurement validity and a reasonable level of internal validity, and they may be externally valid in the sense that they can be generalized to other samples confronted by the same questionnaire, but the unnaturalness of having to answer a questionnaire may mean that the findings have little ecological validity.
Relationship with research strategy
Reliability and measurement validity are both concerned with the adequacy of measures, which are most obviously a concern in quantitative research. Internal validity is concerned with the soundness of findings that specify a causal connection, an issue that likewise most commonly concerns quantitative researchers. External validity may be relevant to qualitative research, but the question of the representativeness of research subjects with which the issue is concerned has a more obvious application in the realm of quantitative research, with its preoccupation with sampling procedures that maximize the opportunity for generating a representative sample. Ecological validity relates to the naturalness of the research approach and has considerable relevance to both qualitative and quantitative research.
Naturalism is a term with three different meanings. First, naturalism means viewing all objects of study, whether natural or social, as belonging to the same realm, with a consequent commitment to the principles of natural scientific method. This meaning has affinities with positivism and implies that all entities belong to the same order of things, there being no essential difference between the objects of the natural sciences and those of the social sciences.
Secondly, naturalism means being true to the nature of the phenomenon being investigated. This meaning of the term represents a fusion of elements of interpretivist epistemology and constructionist ontology.
Thirdly, naturalism is a style of research that seeks to minimize the intrusion of artificial methods of data collection. This meaning implies that the social world should be as undisturbed as possible when it is being studied.
True experiments are quite unusual in sociology; however, the true experiment is often used as a yardstick against which non-experimental research is assessed, because experimental research engenders considerable confidence in the robustness and trustworthiness of causal findings.
In order to conduct a true experiment, the social researcher must manipulate the independent variable to determine whether it does in fact have an influence on the dependent variable. Experimental subjects are allocated to one of two or more experimental groups, each of which represents a different type or level of the independent variable. It is then possible to establish how far differences between the groups are responsible for variations in the level of the dependent variable. Manipulation, therefore, entails intervening in a situation to determine which of two or more things happens to subjects.
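A minimal sketch of this logic, with invented subjects and an assumed treatment effect, might look as follows; nothing here comes from the text, it only illustrates random allocation and manipulation:

```python
import random

def randomly_allocate(subjects, rng):
    """Shuffle the subjects and split them into two equal groups."""
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
experimental, control = randomly_allocate(range(10), rng)

# Every subject starts with the same (invented) baseline score on the
# dependent variable; only the experimental group receives the treatment.
scores = {s: 50 for s in range(10)}
TREATMENT_EFFECT = 4  # assumed effect of the manipulated independent variable
for s in experimental:
    scores[s] += TREATMENT_EFFECT

mean_exp = sum(scores[s] for s in experimental) / len(experimental)
mean_ctrl = sum(scores[s] for s in control) / len(control)
print(mean_exp - mean_ctrl)  # → 4.0
```

The between-group difference recovers the treatment effect exactly only because the invented baseline is uniform; with real data, random allocation makes the groups comparable only on average.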
Classical experimental design
The classical experimental design is also known as the randomized experiment or randomized controlled trial (RCT). In this design, one or more observations are made in relation to the dependent variable. The experimental treatment represents the independent variable, while the control condition is marked by the absence of an experimental treatment. The timing of the observations is also specified in relation to the dependent variable.
Apart from field experiments, there are also laboratory experiments. The main advantage of laboratory experiments over field experiments is that the researcher has far greater influence over the experimental arrangements. Laboratory experiments are also likely to be more straightforward to replicate, because they are less bound up with a certain milieu that is difficult to reproduce.
The limitations of the laboratory experiment lie in its external validity, which is difficult to establish. There may be an interaction of setting and treatment, since the setting of the laboratory is unlikely to be related to real-world experiences and contexts. There may also be an interaction of selection and treatment, so that it is not known whether the results are applicable beyond the people studied to the real world.
Quasi-experiments are studies that have certain characteristics of experimental designs but do not fulfil all the internal validity requirements. There are many different types of quasi-experiment. For instance, “natural experiments” are experiments in the sense of entailing manipulation of a social setting, but as part of a naturally occurring attempt to alter social arrangements. As a result, it is invariably not possible to assign subjects randomly to experimental and control groups.
Logic of comparison
A central feature of any experiment is that it entails comparison: at the very least, it entails a comparison of the results obtained by an experimental group with those engendered by a control group. The advantage of comparison is that we understand the phenomenon we are interested in better when we compare it with something else that is similar to it.
Cross sectional design
The cross-sectional design is also called survey design. A cross-sectional research design entails the collection of data on more than one case, which are examined to detect patterns of association.
The basic structure of a cross-sectional design involves several elements. First, there is more than one case, since researchers who employ a cross-sectional design are interested in variation, and variation can be established only when more than one case is being examined. Second, data on the variables of interest are collected more or less simultaneously, at a single point in time. Third, in order to establish variation between cases, it is necessary to have a systematic and standardized method for gauging variation, which calls for quantitative or quantifiable data. Fourth, with a cross-sectional design it is possible to examine only relationships between variables and to detect patterns of association.
“Survey” is a term used to denote research that employs a cross-sectional design and in which data are collected by questionnaire or by structured interview.
Reliability, replicability, and validity
Reliability and measurement validity are related to the quality of the measures that are employed to tap the concepts in which the researcher is interested, rather than to research design matters. Replicability is related to the researcher’s procedures for selecting respondents, designing measures of concepts, administering research instruments and analysing data. Typically, internal validity is weak, because it is difficult to establish causal direction from the resulting data. On the other hand, external validity is strong when the sample from which data are collected has been randomly selected. Finally, in cross-sectional designs ecological validity may be jeopardized, because such designs make a great deal of use of research instruments and the very instruments disrupt the “natural habitat”.
In much social research it is not possible to manipulate the variables of interest, which is why most quantitative social research employs a cross-sectional rather than an experimental design. Some independent variables simply cannot be manipulated: to all intents and purposes they are “givens” that are not amenable to the kind of manipulation necessary for a true experimental design.
On the other hand, regarding certain variables as givens provides us with a clue as to how we can make causal inferences in cross-sectional research: many of the variables of interest can be assumed to be temporally prior to other variables.
A cross-sectional design comprises the collection of data on a series of variables (observations made in relation to the variables) at a single point in time.
Mostly, cross-sectional designs are placed in the context of quantitative research. However, qualitative research can also entail a form of cross-sectional design, for instance when the researcher employs unstructured or semi-structured interviewing with a number of people.
Longitudinal designs constitute a distinct form of research design, though they are relatively little used because of the cost and time they involve. In the social sciences, the longitudinal design usually represents an extension of survey research based on a self-completion questionnaire or structured interview within a cross-sectional design. It may allow some insight into the time order of variables and thus allow causal inferences to be made.
There are two types of longitudinal design. In the panel study, a sample is the focus of data collection on at least two occasions; data may be collected from different types of case within a panel study framework. In the cohort study, either an entire cohort of people or a random sample of them is selected as the focus of data collection; the people in a cohort share a certain characteristic.
In both cohort and panel studies, data are collected in at least two waves on the same sample of people. Also, both are concerned with illuminating social change and with improving the understanding of causal influences over time. This means that longitudinal designs are better able to deal with the problem of “ambiguity about the direction of causal influence” that plagues cross-sectional designs. Because potentially independent variables can be identified at one time point (T1), the researcher is in a better position to infer that purported effects identified at a later time point (T2 or later) have occurred after the independent variables.
There are also differences between the two, the panel and the cohort design. A panel study takes place over years and can distinguish between age effects and cohort effects. A cohort study can distinguish only ageing effects, since all members of the sample will have been born at more or less the same time.
Panel and cohort designs also share some common problems. First, there is the problem of sample attrition through subjects choosing to withdraw at later stages of the research. The problem with attrition is largely that those who leave the study may differ in some important respects from those who remain, so that the latter do not form a representative group. There is evidence that the problem of attrition declines with time: those who do not drop out after the first wave or two of data collection tend to stay on the panel. Secondly, there are few guidelines as to when the best juncture is to conduct further waves of data collection. Thirdly, it is suggested that many longitudinal studies are poorly thought out and that they result in the collection of large amounts of data with little apparent planning. And fourth, there is evidence that a panel conditioning effect can occur, whereby continued participation in a longitudinal study affects how respondents behave.
Case study design
The case study basically entails the detailed and intensive analysis of a single case. It is concerned with the complexity and particular nature of the case in question. Commonly, the term “case” associates the case study with a location, such as a community or organization, and the emphasis is upon an intensive examination of the setting. Preferably, the term “case study” should be reserved for those instances where the “case” is the focus of interest in its own right.
Identifying case studies with qualitative research is not appropriate. Even though case study design favours qualitative methods, it is frequently employed by both qualitative and quantitative researchers. With a case study, the case is an object of interest in its own right, and the researcher aims to provide an in-depth elucidation of it; he or she is usually concerned to elucidate the unique features of the case.
In experimental and cross-sectional designs the typical orientation to the relationship between theory and research is deductive; the research design and the data collection are guided by specific research questions that derive from theoretical concerns. But, when a qualitative research strategy is employed within a cross-sectional design, the approach tends to be inductive. That means that whether a cross-sectional design is inductive or deductive tends to be affected by whether a quantitative or qualitative research strategy is employed.
Reliability, replicability and validity
How a case study analysis fares in the context of the research criteria (measurement validity, internal validity, external validity, ecological validity, reliability and replicability) depends on how far the researcher feels that these are appropriate for the evaluation of case study research. A question of main concern regards the external validity and generalizability of case study research: how can a single case be representative, so that it might yield findings that can be applied more generally to other cases? Case study researchers do not delude themselves that it is possible to identify typical cases that can be used to represent a certain class of objects; they do not think that a case is a sample of one. Therefore, one of the standard criticisms of the case study is that findings deriving from it cannot be generalized.
The five different types of case:
the critical case: the researcher has a well-developed theory and a case is chosen on the grounds that it will allow a better understanding of the circumstances in which the hypothesis will and will not hold.
the extreme or unique case: the unique or extreme case is a common focus in clinical studies. In this sense, a case may be extremely prominent at the time of the research and, as such, hold an intrinsic interest that makes it essentially unique.
the representative or typical case: that is, an exemplifying case that aims to capture the circumstances and conditions of an everyday or commonplace situation; it exemplifies a broader category of which it is a member. Selecting an exemplifying case rests on the fact that such cases epitomize a broader category of cases or provide a suitable context for certain research questions to be answered. Also, researchers select exemplifying cases because they allow the researcher to examine key social processes.
the revelatory case: This case study is based on the fact that the investigator has the opportunity to observe and analyze a phenomenon previously inaccessible to specific investigation.
the longitudinal case: many case studies comprise a longitudinal element, because it is likely that a case will be chosen both because it is appropriate to the research questions and because it can be studied over time.
Case study as intensive analysis, longitudinal research and the case study
Case study researchers argue that they aim to generate an intensive examination of a single case, in relation to which they then engage in a theoretical analysis. The crucial question is how well the researcher generates theory out of the findings, so that the central issue of concern is the quality of the theoretical reasoning in which the case study researcher engages. The case study is usually placed in the inductive tradition of the relationship between theory and research. However, case studies can be associated with both theory generation and theory testing.
Case study researchers are often in a position to generalize by drawing on findings by comparable cases investigated by others.
Case study research often includes a longitudinal element. The researcher is often a participant of an organization or member of a community for many months or years. Or, he/she may also conduct interviews with individuals over a lengthy period. Moreover, a longitudinal element may occur when a case that has been studied is returned to at a later stage.
This design entails studying two contrasting cases using more or less identical methods. It embodies the logic of comparison, implying that we can understand social phenomena better when they are compared in relation to two or more meaningfully contrasting cases or situations. The comparative design may be realized in the context of either quantitative or qualitative research. Moreover, the logic of comparison can be applied to a variety of situations; by choosing meaningful contrasts, the significance of different patterns for a variety of experiences of both employees and employers can be portrayed. A problem of comparative designs occurs when the differences observed between the contrasting cases may not be due exclusively to the distinguishing features of the cases. Yet the key to the comparative design is exactly its ability to allow the distinguishing characteristics of two or more cases to act as a springboard for theoretical reflections about contrasting findings. Like a hybrid, the comparative design in quantitative research is frequently an extension of a cross-sectional design, and in qualitative research it is frequently an extension of a case study design.
Know what is expected of you by your institution, and think about your research area while making use of your supervisor.
Commonly, the institution or department will have specific requirements concerning a wide variety of features that your dissertation should comprise, and a range of other matters relating to it. These include: the form of binding, the presentation, whether an abstract is required, how big the page margins should be, the format for referencing, the number of words, the structure, how much advice you can get from your supervisor, whether or not a proposal is required, plagiarism, deadlines, how much assistance you can expect, and so on.
Here the advice is to follow the requirements, the instructions and the information you are given, and please ignore guidelines that differ from your institution’s guidelines.
It is common that you are asked about what you want to do research on, so it is better to begin thinking about whether there are any topics that might provide you with a researchable area.
In most institutions that require a dissertation or thesis, students are allocated to supervisors. Institutions vary in terms of what kinds of and how much assistance supervisors will give to students allocated to them. On the other hand, students vary in how frequently they see their supervisors and in their use of them. The advice here is to use your supervisor to the fullest extent you are allowed and follow the pointers you are given by him or her. If your supervisor is critical of your research questions, your drafts of the thesis, or whatever, try to respond positively.
Managing time and resources
Research is constrained by time and resources; therefore, a timetable is needed. Work out a schedule, in conjunction with your supervisor, detailing the different stages of your research (including the literature review and the writing up). In the timetable you should specify the different stages and the calendar points at which you should start and finish them. Some stages, such as searching the literature, are likely to be ongoing, but this should not prevent you from developing a timetable. Secondly, find out whether there are any resources that may be put at your disposal for carrying out the research: for instance, travel costs, photocopying costs, transcription machines, SPSS, or a qualitative data analysis package. This kind of information will help you to establish how far your research design and methods are financially feasible and practical.
Formulating suitable research questions
No matter how much students want to conduct research into areas that are of personal interest to them, they also have to be able to develop research questions. This applies to both qualitative and quantitative research: what is it about your area of interest that you want to know?
Therefore, you have to specify clear research questions; otherwise there is a great risk that your research will be unfocused and that you will be unsure about what your research is about and what you are collecting data for. The importance of research questions lies in the fact that they guide your literature search, they guide you in deciding what data you need to collect, they guide your analysis of your data, they guide your writing-up of your data, and they stop you from going off in unnecessary directions and tangents. Also, remember that your research questions must have a clear social scientific angle.
Steps in selecting research questions
Research area: concerns about risk
Select aspect of research area: variations in concerns about risk
Research questions: what areas of risk are of greatest concern among people? For instance, does concern about risk vary by age, gender, social class, education and so on?
Select research questions: What areas of risk are of greatest concern among people? At this point, the student has to select and specify his/her research question.
In general, research questions may derive from personal interest/experience, theory, the research literature, puzzles, new developments in society, or social problems.
Tips for evaluating research questions
Research questions have to be clear, in the sense of being intelligible.
Research questions should be researchable- they should allow you to do research in relation to them. Thus, they should not be formulated in terms that are so abstract that they cannot be converted into researchable terms.
Research questions should have some connections with established theory and research. There should be a literature on which you can draw to help illuminate how your research questions should be approached.
Research questions should be linked to each other. Unrelated research questions are unlikely to be acceptable, since you should be developing an argument in your dissertation.
Research questions should at the very least hold out the prospect of being able to make an original contribution to the topic.
Research questions should be neither too broad nor too narrow.
Writing your research proposal
Writing a proposal is useful in getting you started on your research project and encouraging you to set realistic objectives for your research project. There are some main issues that should be taken into account when writing a research proposal.
What is your research topic or what are your research objectives?
Why is your research topic important?
What is your research question or what are your research questions?
What does the literature have to say about your research topic/ objectives and research questions?
How are you going to go about collecting data relevant to your research questions? In other words, what research methods are you intending to use?
Why are the research methods/ sources you have selected the appropriate ones for your research question?
What resources will you need to conduct your research and how will those resources be funded?
What is your timetable for the different stages of the project?
What problems do you anticipate in doing the research?
What are the possible ethical problems associated with your research?
How will you analyze your data?
Preparing for your research
Begin your data collection only after you have identified your research questions reasonably clearly, and develop your data collection instruments with these research questions at the forefront; otherwise there is the risk that your results will not allow you to illuminate the research questions.
Moreover, your research may require you to gain access to an organization; in such a case, you should confirm at the earliest opportunity that you will be able to conduct your work.
For instance, you should consider some sub-questions, such as:
Who do you need to study in order to investigate your research questions?
How easily can you gain access to a sampling frame?
What kind of sampling strategy will you employ?
Can you justify your choice of sampling method?
Reviewing the existing literature is useful because it presents what is already known about the area of interest. In the literature review you demonstrate that you are able to engage in scholarly review based on your reading and understanding of the work of others in the same field. In this section, you use the existing literature to develop an argument about the significance of your research and where it leads.
All in all, when reviewing the existing literature the aim is to demonstrate:
What is already known about this area?
What concepts and theories are relevant to this area?
What research methods and research strategies have been employed in studying this area?
Are there any significant controversies?
Are there any inconsistencies in findings relating to this area?
Are there any unanswered research questions in this area?
Why write a literature review?
you need to know what is already known in connection with your research area because you do not want to be accused of reinventing the wheel
you can learn from other researchers’ mistakes and avoid making the same ones
you can learn about different theoretical and methodological approaches to your research area
it may help you to develop an analytic framework
it may lead you to consider the inclusion of variables in your research that you might not otherwise have thought about
it may suggest further research questions for you
it will help with the interpretation of your findings
it gives you some pegs on which to hang your findings
it is pretty much expected
Tips and skills: conceptualizing a literature review
list: make a list of items representing the literature of the subject
search: identifying relevant information, which may involve going through sources to find it
survey: literature review may be also seen as an investigation of past and present writing and research on the subject
vehicle: the review also has an impact on the researcher, because it is a vehicle for learning that leads to an increase in his or her knowledge and understanding
facilitator: the literature review can be understood as directly related to the research that is about to be or being undertaken
report: the focus is on framing a written discourse about the literature which may be established as a component part of a thesis or other research report
Getting the most from your reading
An important part of your time will be devoted to writing the literature review. Here, the advice is to get the most from your reading. That involves developing your skills in being able to read actively and critically.
Therefore, it would be helpful to do the following, when you read the literature:
take good and detailed notes
develop critical reading skills: apart from summarizing the existing literature, whenever appropriate you should also be critical in your approach
through your literature review you may show why your research questions are important
remember that you will want to return to the literature that you examine in the discussion of your findings and conclusions
your literature review must help develop an argument, so do not try to get everything you read into your literature review
you should continue your search for and reading of relevant literature more or less throughout your research, so that if you have written your review before collecting your data, you should consider it provisional, and in general, you may want to make substantial revisions of your review towards the end of writing up your work
Systematic review is an approach to reviewing the literature that adopts explicit procedures. Because it is suggested that many reviews of the literature tend to lack thoroughness and to reflect the biases of the researcher, the proponents of systematic review suggest that adopting explicit procedures makes such biases less likely to surface.
Also, systematic reviews of the literature are seen as an accompaniment to evidence-based approaches, for instance in the field of medicine. However, systematic review is beginning to diffuse into other areas, such as social policy, so that policy-makers and others can draw on reviews that summarize the balance of the evidence in certain areas of practice.
Systematic review provides a more reliable foundation on which to design research, because it is based on a more comprehensive understanding of what we know about the subject. Yet, a limitation of systematic review stems from situations where research questions are not capable of being defined in terms of the effect of a particular variable, or when the subject boundaries are more fluid and open or subject to change.
The five steps of the systematic review.
The purpose of the review must be defined, that means that decisions about the inclusion and exclusion of studies can be made in a consistent way.
Establish criteria to guide the selection of studies; e.g., the reviewer might want to restrict the review to studies published in a particular time period, or to studies that derive from one region or nation rather than another. The kind of research design is an additional possible criterion.
The reviewer should seek out and incorporate within the review all studies that meet the criteria spelled out in the previous step (step b).
Key features of each study should be identified, such as the date when the research was conducted, the location, the sample size, data collection methods and the main findings.
A synthesis of the results then has to be produced. In quantitative analysis, this phase involves producing summary statistics from the quantitative data supplied with each study, while in qualitative analysis this involves a summary, an overview of the key characteristics of the studies reviewed.
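The quantitative side of this synthesis step can be illustrated with a minimal sketch; the study names, sample sizes and effect estimates below are invented for illustration only, and real syntheses (e.g. meta-analyses) use far more sophisticated weighting:

```python
# Toy sketch of the synthesis step: pooling quantitative results across
# reviewed studies. Study names, sample sizes and effect estimates are
# invented for illustration, not drawn from any real review.
studies = [
    {"study": "Study A", "n": 120, "effect": 0.30},
    {"study": "Study B", "n": 250, "effect": 0.45},
    {"study": "Study C", "n": 80,  "effect": 0.10},
]

# A simple summary statistic: the sample-size-weighted mean effect.
total_n = sum(s["n"] for s in studies)
pooled_effect = sum(s["n"] * s["effect"] for s in studies) / total_n

print(f"Studies reviewed: {len(studies)}")
print(f"Pooled (weighted) effect: {pooled_effect:.3f}")
```

Larger studies pull the pooled figure towards their own estimate, which is the basic intuition behind weighting by sample size.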
Interpretative researchers can have quite different reasons for reviewing the literature on a particular subject, since their purpose is to enrich human discourse by generating understanding rather than by accumulating knowledge. In this sense, the literature review is a means of gaining an initial impression of the topic area that they intend to understand through their research. However, the process of reviewing the literature is an uncertain process of discovery, in that you do not know in advance where it will take you; therefore, narrative reviews tend to be less focused and more wide-ranging in scope than systematic reviews. Also, narrative reviews are less explicit about the criteria for exclusion or inclusion of studies.
Most kinds of review are narrative. For example, the literature may be reviewed as a means of specifying what is already known in connection with a research topic, so that research questions can be identified that the reviewer will then examine. Or, reviews may be ends in their own right as a means of summarizing what is known in an area. All in all, the literature relevant to the researcher’s area of interest is always reviewed as a means of establishing why the researcher conducted the research and what its contribution is likely to be.
Searching the existing literature
Online bibliographical databases accessible on the internet are an important source of journal references. E-journals also provide access to the full text of articles. Here, you need to find out whether your institution can give you a username and password to gain access to these databases.
The most useful source for the social sciences is the Social Sciences Citation Index (SSCI) which provides references and abstracts and full text links for articles from some of the most important social science journals published worldwide.
A feature of SSCI that is useful is the “times cited” link; in case you find an article that is relevant to your dissertation you can click to see which other articles have cited it. Also, Scopus is available at http://www.scopus.com/scopus/home.url and describes itself as the “largest abstract and citation database of research literature and quality web sources”.
In general, most newspapers require subscription to be able to search online databases. But, most academic libraries will have a subscription to some individual newspapers or to a service such as Proquest or Lexis Nexis that allows you to search several newspapers at once and you may need a password to access them.
At this point, the advice is to be critical when you use internet sources for your research. Search engines are very useful for researching, but they do not evaluate the sites they find, so you have to be critical yourself. In sum, you should consider the following points:
Who is the author of the site and what is their motive for publishing?
Where is the site located? The URL can help you here; an academic site (.ac), a government site (.gov), a non-commercial organization (.org), or a commercial site (.com)
How recently was the site updated? Sites either state when they were last updated, or you may be able to infer this from the site’s general appearance
For the online databases, you have to work out keywords that can be entered into the search engines and will allow you to identify suitable references. You may also need to think of synonyms or alternative terms to match your language to that of the source you are searching. Also, it is helpful to use “operators”, such as AND, OR and NOT.
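The effect of these operators can be mimicked in a minimal sketch, assuming a small set of invented article titles; real databases such as the SSCI implement this matching far more elaborately:

```python
# Toy illustration of boolean search operators (AND, OR, NOT) applied to
# a small set of invented article titles.
titles = [
    "Risk perception and social class",
    "Gender differences in risk taking",
    "Social class and educational attainment",
    "Risk, trust and modernity",
]

def contains(title, term):
    # Case-insensitive substring match, standing in for database matching.
    return term.lower() in title.lower()

# "risk AND class": both terms must appear.
risk_and_class = [t for t in titles if contains(t, "risk") and contains(t, "class")]
# "risk OR gender": either term may appear.
risk_or_gender = [t for t in titles if contains(t, "risk") or contains(t, "gender")]
# "risk NOT trust": the first term without the second.
risk_not_trust = [t for t in titles if contains(t, "risk") and not contains(t, "trust")]

print(risk_and_class)
print(risk_or_gender)
print(risk_not_trust)
```

AND narrows a search, OR broadens it, and NOT excludes a subset, which is why combining them carefully matters when a first search returns too many or too few references.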
Referencing your work
Referencing the work of others is very important, because it shows that you are aware of the work of others and that you recognize that your own research builds on it. A reference is also described as a citation, and the act of referencing as citing. Here, the advice is to keep a record of what you have read, including all the bibliographic details about each article or book that will go into your bibliography or references. Two methods are commonly used: the Harvard (or author-date) system and the footnote (or numeric) system. The former is used whenever somebody paraphrases the argument or ideas of an author in their writing, while the latter involves the use of superscript numbers in the text that refer to a note at the foot of the page or the end of the text, where the reference is given in full, together with the page number if it is a direct quotation. In published articles there is usually a list of references at the end, while in books this may be called a bibliography. Here, the advice is to be consistent: select a format for punctuating your references and then stick to it.
It is very important to avoid plagiarism. Plagiarism is defined as “to take and use another person’s thoughts, writings, and inventions as one’s own”, or “the passing off of another’s work as if it were one’s own”, and is considered a serious and punishable academic offence. Plagiarism does not refer to drawing on the literature you read in the course of preparing an essay or report; rather, it is taking material in a wholesale and unattributed way from sources such as essays written by others or from websites, without citing, referencing, or paraphrasing it.
Here, the advice for avoiding plagiarism is: do not copy large parts of a text without making clear that they are in fact quotations; it should be clear that the text in question is not yours and that you are quoting someone. Also, do not pass off other people’s ideas as your own. All in all, try to express your ideas in your own words and acknowledge properly those ideas that are not your own. Moreover, make sure you know what is considered plagiarism at your institution and how it is defined there, so that you do not inadvertently commit it.
Ethics in social research concerns the issue of how we should treat the people on whom we conduct research, and whether there are activities in which we should or should not engage in our relations with them. However, authors have different perspectives on what is and what is not ethically acceptable. Nowadays, these issues have become more prominent than in the past.
Mainly, discussions of ethical principles in social research tend to revolve around some specific issues: whether there is harm to participants, whether there is a lack of informed consent, whether there is an invasion of privacy, and whether deception is involved. More specifically, research that is likely to harm participants is considered by most people unacceptable. Harm may entail physical harm, harm to participants’ development, loss of self-esteem, stress, and so on.
The lack of informed consent is an important issue within social research ethics. This principle means that prospective research participants should be given as much information as might be needed to make an informed decision about whether or not they wish to participate in a study. Even when people know they are being asked to participate in research, they should be fully informed about the research process.
As a consequence, researchers prefer to obtain the informed consent of research participants by getting them to sign informed consent forms. The advantages of such forms are that they give respondents the opportunity to be fully informed of the nature of the research and the implications of their participation at the outset.
An additional issue concerns the degree to which invasions of privacy can be condoned, as covert methods violate the principle of informed consent and may invade the privacy of those being studied. There are certain questions that delve into private realms that respondents do not wish to make public, regardless of the fact that the interview takes place in private. Thus, covert methods are deemed to violate the privacy principle on the grounds that participants are not being given the opportunity to refuse invasions of their privacy.
Politics in social research
As has already been stated, values affect all phases of the research process, from the choice of a research area to the formulation of conclusions. The social researcher is therefore never conducting an investigation in a moral vacuum: the personality of the researcher involves a variety of presuppositions that have implications for the conduct of social research.
Politics plays an important role in social research and becomes important in different contexts and ways. Some are the following points:
Social scientists are sometimes put in a position where they take sides; for instance, feminist researchers focus on women’s disadvantages and on improving their position
The issue of funding research is also related to the above point. Much social research is funded by organizations such as firms and government departments, which often have a vested interest in the outcomes of the research.
An additional political process is gaining access. Access is mediated by gatekeepers, who are concerned with the researcher’s motives and who seek to influence the investigation process. In this sense, gaining access is almost always a matter of negotiation and as such it inevitably turns into a political process.
Getting on in an organization entails a constant process of negotiation over what is permissible and what is not. This means that there may be several layers of gatekeepers in any research project, so that issues of access become an ongoing feature of research.
In cases where research is conducted in teams, politics may loom large; different career and other objectives of team members and their different perceptions and contributions may form a quite separate political arena.
Also, there may be pressure to restrict the publication of findings. And the use made of findings by others can be the focus of further political machinations.
Quantitative research has been described as a research strategy that entails the collection of numerical data, exhibits a deductive view of the relationship between theory and research, and incorporates an objectivist conception of social reality (drawing in particular on positivism). The main steps in quantitative research, as an ideal-typical account of the process, are:
1. Theory
2. Hypothesis
3. Research design
4. Devise measures of concepts
5. Select research sites
6. Select research subjects/ respondents
7. Administer research instruments/ collect data
8. Process data
9. Analyse data
10. Findings/ conclusions
11. Write up findings/ conclusions
Commonly, in quantitative research a hypothesis is deduced from the theory and is tested. However, a great deal of quantitative research does not entail the specification of a hypothesis; instead, theory acts loosely as a set of concerns in relation to which the social researcher collects data.
Next, the research design has to be selected. This has implications for a variety of issues, such as the external validity of the findings and the researcher’s ability to impute causality to the findings. A further step entails devising measures of the concepts in which the researcher is interested, a process often referred to as operationalization.
The next steps entail the selection of a research site and the selection of subjects/respondents. The people on whom researchers conduct their research are called “subjects” by experimental researchers, while social survey researchers call them “respondents”.
The administration of the research instruments in experimental research is likely to entail pre-testing subjects, manipulating the independent variable for the experimental group, and then post-testing subjects. After that, the collected information has to be transformed into data; in quantitative research this is likely to mean that it must be prepared so that it can be quantified. This process may also involve coding the information, that is, transforming it into numbers to facilitate the quantitative analysis of the data, especially if the analysis is going to be carried out by computer. Codes act as tags placed on data about people, allowing the information to be processed by the computer.
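A minimal sketch of what such coding can look like in practice; the answer categories and numeric codes below are invented for illustration:

```python
# Hypothetical coding frame: each answer category is tagged with a number
# so the replies can be processed by computer.
coding_frame = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

answers = ["agree", "strongly agree", "disagree"]  # three respondents' replies
coded = [coding_frame[a] for a in answers]
print(coded)  # [4, 5, 2]
```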
In the next step, the analysis of data, the researcher is concerned with techniques of quantitative data analysis: reducing the amount of data collected, testing the relationships between variables, developing ways of presenting the results of the analysis to others, and so on. The researcher then has to interpret the results of the analysis; this is the stage at which "the findings" of the research emerge. Here the researcher has to consider the connections between the findings that emerge and the preoccupations that acted as the impetus for the research. For instance, if there was a hypothesis, is it confirmed? What are the implications of the findings for the theoretical ideas that formed the background to the research?
After that, the research must be written up. At this stage, the researcher not only writes up the findings and conclusions, but also has to convince readers of the significance and validity of those findings.
Concepts and their measurement
Concepts are the building blocks of theory and represent the points around which social research is conducted. Each concept is a label that we give to elements of the social world that seem to have common features and that strike us as significant.
Concepts employed in quantitative research have to be measured. Once they are measured, concepts can be in the form of independent or dependent variables. Concepts can both provide an explanation of a certain aspect of the social world, or they may stand for things we want to explain.
In the first place, measurement allows us to delineate fine differences between people in terms of the characteristic in question. Measurement also gives us a consistent device or yardstick for making such distinctions: a measurement device provides a consistent instrument for gauging differences, relating both to our ability to be consistent over time and to our ability to be consistent with other researchers. Finally, measurement provides the basis for more precise estimates of the degree of relationship between concepts, for example through correlation analysis.
In order to provide a measure of a concept (an operational definition), it is necessary to have an indicator that will stand for the concept. There is a distinction between an indicator and a measure. The latter refers to things that can be counted relatively unambiguously; measures are quantities.
Indicators are used to tap the concepts that are less directly quantifiable. An indicator is something that is devised or already exists and that is employed as though it were a measure of a concept. It is viewed as an indirect measure of a concept.
Indicator can be devised through self-completion questionnaire, through a structured observation schedule and through content analysis.
Dimensions of concepts
According to Lazarsfeld (1958), a concept may comprise different dimensions. That means that, when the researcher is seeking to develop a measure of a concept, the different aspects or components of that concept should be considered. The specification of the dimensions of a concept is undertaken with reference to theory and research associated with that concept. However, in most quantitative research there is a tendency to rely on a single indicator of a concept. What is important is whether measures are reliable and whether they are valid representations of the concepts they are supposed to be tapping.
Reliability and validity
Reliability is concerned with the consistency of measures. Reliability has three meanings: stability, internal reliability, and inter-observer consistency.
Considering stability entails asking whether a measure is stable over time, so that we can be confident that the results relating to that measure for a sample of respondents do not fluctuate. The test-retest method is appropriate for testing stability of a measure. This means that if we administer a measure to a group and then readminister it, there will be little variation over time in the results obtained.
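The test-retest idea can be illustrated with a correlation between the two administrations; the scores below are invented, and the correlation function is a plain implementation of Pearson's r:

```python
from statistics import mean

# Hypothetical scale scores for five respondents, administered twice.
# A high correlation between the two administrations indicates stability.
t1 = [10, 12, 9, 14, 11]
t2 = [11, 12, 9, 13, 11]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(t1, t2)
print(round(r, 2))  # close to 1: little variation between the two occasions
```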
Internal reliability concerns whether the indicators that make up a scale or index are consistent, that is, whether respondents' scores on any one indicator tend to be related to their scores on the other indicators.
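A commonly used gauge of this kind of consistency among indicators is Cronbach's alpha. A minimal sketch with invented scores for a three-item scale:

```python
from statistics import variance

# Hypothetical data: each row is one respondent's scores on three indicators.
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 1]]

k = len(scores[0])                     # number of items in the scale
items = list(zip(*scores))             # scores regrouped per item
totals = [sum(row) for row in scores]  # each respondent's total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))
print(round(alpha, 2))  # values near 1 indicate internally consistent items
```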
Inter-observer consistency is at issue when a great deal of subjective judgement is involved in activities such as the recording of observations or the translation of data into categories. Where more than one "observer" is involved in such activities, there is the possibility of a lack of consistency in their decisions.
Validity refers to the issue whether an indicator or a set of indicators that is devised to gauge a concept really measures that concept. Measurement validity has to do with whether a measure of a concept really measures that concept. There are different types of validity.
Face validity is that the new measure developed by the researcher reflects the content of the concept in question. Concurrent validity employs a criterion on which cases are known to differ and that is relevant to the concept in question. Predictive validity entails the use, by the researcher, of a future criterion measure rather than a contemporary one. In construct validity the researcher is encouraged to deduce hypotheses from a theory that is relevant to the concept. In convergent validity, the validity of a measure ought to be gauged by comparing it to measures of the same concept developed through other methods.
However, quantitative research can be characterized as exhibiting certain preoccupations, the most central of which are measurement, causality, generalization and replication. Moreover, quantitative research has been subjected to many criticisms by qualitative researchers. These criticisms tend to evolve around the view that a natural science model is inappropriate for studying the social world.
The first step in doing survey research is to identify the general research issues that need to be investigated. These issues are gradually narrowed down so that they become research questions, which may also take the form of hypotheses, but not necessarily. Research issues may be transformed into research questions through the reading of literature related to the issues, the relevant theory and evidence.
After the formulation of the research questions, the planning of the fieldwork starts. The decisions here concern sampling and the research instrument. The survey researcher needs to decide what kind of population is suited to the investigation of the topic, and also needs to formulate a research instrument and decide how it should be administered. "Research instrument" refers to something like a structured interview schedule or a self-completion questionnaire.
Steps in conducting a social survey:
Issues to be researched
Review literature/ theories related to topic/area
Formulate research question(s)
Consider whether a social survey is appropriate (if not, consider an alternative research design)
Consider what kind of population will be appropriate
Consider what kind of sample design will be employed
Decide on sample size
Decide on mode of administration (face-to-face; telephone; postal; email; web)
Develop questions (and devise answer alternatives for closed questions)
Review questions and assess face validity
Finalize questionnaire/ schedule
Sample from the population
Administer questionnaire/ schedule to sample
Follow up non-respondents at least once
Transform completed questionnaires/ schedules into computer readable data (coding)
Enter data into statistical analysis program like SPSS
Consider implications of findings for research questions
The need to sample is almost invariably encountered in quantitative research. There are some basic concepts and terms in sampling.
Population refers basically to the universe of units from which the sample is to be selected. The term "units" refers not only to people being sampled but also to a universe of nations, cities, regions, firms, etc. A sample is the segment of the population that is selected for investigation; it is a subset of the population.
Sampling frame is the listing of all units in the population from which the sample will be selected.
Representative sample is a sample that reflects the population accurately so that it is a microcosm of the population. Sampling bias is a distortion in the representativeness of the sample that arises when some members of the population stand little or no chance of being selected for inclusion in the sample.
Probability sample is a sample that has been selected using random selection, so that each unit in the population has a known chance of being selected. It is assumed that a representative sample is likely to be the outcome when this method of selection from the population is employed. The aim of probability sampling is to keep sampling error to a minimum.
Sampling error is the error in the findings deriving from research due to the difference between a sample and the population from which it is selected. This may occur even though a probability sample has been employed.
Non-probability sample is a sample that has not been selected using a random selection method. This implies that some units in the population are more likely to be selected than others.
Non-sampling error is error in the findings of research that arises from differences between the population and the sample which are not due to sampling as such, but to deficiencies in the sampling approach (such as an inadequate sampling frame or non-response) or to problems such as poor question wording, poor interviewing, or flawed processing of data.
Non-response is a source of non-sampling error that is particularly likely to happen when individuals are being sampled, and occurs whenever some members refuse to cooperate, or cannot be contacted or for some reason cannot supply the required data.
Census is the enumeration of an entire population. So, if data are collected in relation to all units in a population, rather than in relation to a sample of units of that population, the data are treated as census data.
Three sources of bias: why might sampling strategies fail to produce a representative sample?
If a non-probability or non-random sampling method is used, there is a possibility that human judgement will affect the selection process, making some members of the population more likely to be selected than others.
If the sampling frame is inadequate because it is not comprehensive, is inaccurate, or suffers from some similar deficiency, the sample that is derived cannot represent the population, even if a random/probability sampling method is employed.
If some sample members refuse to participate or cannot be contacted (in other words, if there is non-response), the problem arises that those who agree to participate may differ in various ways from those who do not. Some of these differences may be significant for the research question.
What is important to appreciate is that probability sampling does not and cannot eliminate sampling error. Yet probability sampling stands a better chance than non-probability sampling of keeping sampling error in check. Also, probability sampling allows the researcher to employ tests of statistical significance that permit inferences to be made about the population from which the sample was selected.
Probability sampling is such an important procedure in social survey research because it is possible to make inferences from information about a random sample to the population from which it was selected. We can generalize findings derived from a sample to the population. This is not to say that we treat the population data and the sample data as the same.
If we take an infinite number of samples from a population, the sample estimates of the mean of the variable under consideration will vary in relation to the population mean. This variation takes the form of a bell-shaped curve known as the normal distribution. The shape of the distribution implies that there is a clustering of sample means at or around the population mean. Half the sample means will be at or below the population mean; the other half will be at or above it. As we move to the left or to the right, the curve tails off, implying fewer and fewer samples generating means that depart considerably from the population mean. The variation of sample means around the population mean is the sampling error and is measured using a statistic known as the standard error of the mean. This is an estimate of the amount by which a sample mean is likely to differ from the population mean.
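This can be made concrete with a small simulation; the population below is artificial, and the standard error is the usual population standard deviation divided by the square root of the sample size:

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the sketch is reproducible

# Artificial population of 10,000 values with mean ~50 and sd ~10.
population = [random.gauss(50, 10) for _ in range(10_000)]
pop_mean = mean(population)

# Draw 500 samples of size 100 and record each sample's mean.
n = 100
sample_means = [mean(random.sample(population, n)) for _ in range(500)]

# Standard error of the mean: sd / sqrt(n), roughly 10 / 10 = 1 here.
standard_error = stdev(population) / n ** 0.5

# The sample means cluster around the population mean, and their spread
# is approximated by the standard error.
print(round(pop_mean, 1), round(mean(sample_means), 1), round(standard_error, 2))
```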
Types of probability sampling
The simple random sample is the most basic form of probability sample. With random sampling each unit of the population has an equal probability of inclusion in the sample.
The systematic sample is a variation of the simple random sample. With this kind of sample, units are selected directly from the sampling frame, that is, without resorting to a table of random numbers.
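A sketch of systematic selection, assuming a hypothetical frame of 800 units from which a sample of 100 is wanted: pick a random starting point within the sampling interval, then take every eighth unit:

```python
import random

frame = list(range(800))       # hypothetical sampling frame of 800 units
sample_size = 100
interval = len(frame) // sample_size  # sampling interval: 800 / 100 = 8

start = random.randrange(interval)    # random start within the first interval
sample = frame[start::interval]       # every 8th unit from the start
print(len(sample))  # 100
```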
One of the features we might want our sample to exhibit is a proportional representation of the different faculties. Generating a simple random sample or a systematic sample might yield such a representation, but cannot guarantee it. The advantage of stratified sampling is that it ensures that the resulting sample will be distributed in the same way as the population in terms of the stratifying criterion.
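A sketch of proportional stratified sampling, using an invented frame in which faculty is the stratifying criterion; sampling the same fraction within each stratum preserves the population's proportions:

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical frame: (student id, faculty) pairs, 600 humanities, 400 sciences.
frame = [(i, "humanities") for i in range(600)] + \
        [(i, "sciences") for i in range(600, 1000)]

# Group the frame into strata by faculty.
strata = defaultdict(list)
for unit in frame:
    strata[unit[1]].append(unit)

# Randomly sample 10% within each stratum.
sample = []
for faculty, units in strata.items():
    sample += random.sample(units, round(0.10 * len(units)))

print(len(sample))  # 100: 60 humanities + 40 sciences, mirroring the population
```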
A problem that occurs whenever the aim is to interview a sample drawn from a widely dispersed population, such as a national population, a large region, or even a large city, is that it requires a great deal of time and cost. One way to deal with this problem is to employ cluster sampling. With cluster sampling, the primary sampling unit (the first stage of the sampling procedure) is not the units of the population to be sampled but groupings of those units. It is these groupings or aggregations of population units that are known as clusters.
In a sense, cluster sampling is always a multi-stage approach, because one always samples clusters first, and then something else (either further clusters or population units) is sampled. Also, many examples of multi-stage cluster sampling entail stratification. The advantage of multi-stage cluster sampling is that it allows interviewers to be far more geographically concentrated than would be the case if a simple random or stratified sample were selected.
How large a sample should be depends on a number of considerations, and there is no definitive answer. Most of the time, decisions about sample size are affected by considerations of time and cost.
It is the absolute size of a sample that is important, not its relative size. Also, a large sample cannot guarantee precision, so it is probably better to say that increasing the size of a sample increases its likely precision: as sample size increases, sampling error decreases. Consequently, an important component of any decision about sample size should be how much sampling error one is prepared to tolerate. The less sampling error one is prepared to tolerate, the larger the sample will need to be.
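The diminishing returns of larger samples follow from the standard error formula (sd divided by the square root of n); the standard deviation below is a hypothetical value:

```python
sd = 10  # hypothetical population standard deviation

# The standard error shrinks with the square root of the sample size:
# quadrupling the sample only halves the sampling error.
for n in (100, 400, 1600):
    print(n, sd / n ** 0.5)  # 1.0, 0.5, 0.25
```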
Time and cost, Problem of non-response, Heterogeneity of the population
Considerations of sample size may be affected by matters of time and cost at the point where striving for smaller and smaller increments of precision becomes an increasingly uneconomic proposition.
Most sample surveys attract a certain amount of non-response. Thus it is likely that only some members of our sample will agree to participate in the research. The issue of non-response, and particularly of refusal to participate, is very significant, because it has been suggested by some researchers that response rates to social surveys are declining in many countries. There is a growing tendency for people to refuse to participate in social survey research. Also, boosting response rates to interview-based surveys can prove expensive. The response rate is the percentage of a sample that does in fact agree to participate.
One additional issue concerns the homogeneity or heterogeneity of the population from which the sample is to be taken. When a population is very heterogeneous, a sample drawn from it is likely to be highly varied; when it is relatively homogeneous, the amount of variation is less. The implication is that the greater the heterogeneity of a population, the larger a sample will need to be.
Types of non-probability sample
Convenience sampling uses a sample that is available to the researcher by virtue of its accessibility. It is highly desirable to pilot a research instrument before using it in an investigation, and administering it to a group who are not part of the main study may be a legitimate way of carrying out some preliminary analysis of such issues as whether respondents tend to answer a question in identical ways. It may also be at least fairly acceptable to use a convenience sample when it represents too good an opportunity to miss. The data will not allow definitive findings to be generated, because of the problem of generalization, but they could provide a springboard for further research or allow links to be forged with existing findings in an area.
Snowball sampling is a form of convenience sample, but it is worth distinguishing because it has attracted quite a lot of attention over the years. The researcher makes initial contact with a small group of people who are relevant to the research topic and then uses these to establish contacts with others. The problem with snowball sampling is that it is very unlikely that the sample will be representative of the population; indeed, the very notion of the population may be problematic in some circumstances. In general, however, snowball sampling is used not within a quantitative research strategy but within a qualitative one. In qualitative research, the orientation to sampling is more likely to be guided by a preference for theoretical sampling than by statistical sampling of this kind.
Quota sampling is rarely employed in academic social research, but is used intensively in commercial research and political opinion polling. The aim of quota sampling is to produce a sample that reflects a population in terms of the relative proportions of people in different categories, such as gender, ethnicity, socio-economic groups, and region or residence, and a combination of these categories. Once the categories and the number of people to be interviewed within each category (known as quotas) have been decided upon, it is then the job of interviewers to select people who fit these categories.
Limits to generalization and error in survey research
Even when a sample has been selected using probability sampling, any findings can be generalized only to the population from which that sample was taken. Although this is an obvious point, it is easy to think that findings from a study have some kind of broader applicability. A further point is that it is almost impossible to assess whether there is a time limit on the findings that are generated.
Error in survey research includes sampling error: even when probability sampling is employed, it is extremely unlikely that one will end up with a truly representative sample.
Sampling-related error is subsumed under the category of non-sampling error, but arises from activities or events that are related to the sampling process and that are connected with the generalizability or external validity of findings.
Data collection error is connected with the implementation of the research process. This source of error includes such factors as poor question wording in self-completion questionnaires or structured interviews, and flaws in the administration of research instruments.
Data processing error arises from faulty management of data, and in particular from errors in the coding of answers.
Structured interview is commonly employed in survey research. The research interview is a prominent data-collection strategy in both quantitative and qualitative research. The structured interview is one of the two main ways of administering a survey research instrument. Sometimes, a structured interview is called a standardized interview and entails administration of an interview schedule by the interviewer.
The aim is for all the interviewees to be given exactly the same context of questioning. This means that each respondent receives exactly the same interview stimulus as any other. The goal is to ensure that interviewees' replies can be aggregated, and this can be achieved reliably only if those replies are in response to identical cues. Interviewers read out questions exactly as they are printed in the schedule and in the same order. Questions are usually very specific and very often offer the interviewee a fixed range of answers.
The structured interview is preferred because it promotes standardization of both the asking of questions and the recording of answers. This means that, if it is properly executed, variation in people's replies will be due to "true" or "real" variation and not to the interview context. For instance, when we ask a question that is supposed to be an indicator of a concept, we want to keep error to a minimum. We can also think of the answers to a question as constituting the values that a variable takes; these values exhibit variation.
The aim is to keep the error component to a minimum, since error has an adverse effect on the validity of a measure. If the error is quite high, validity will be jeopardised. The significance for error of standardization in the structured interview is that two sources of variation due to error are likely to be less pronounced, since the opportunity for variation in the interviewer behaviour in these two areas is reduced.
Assuming that there is no problem with an interview question due to such things as confusing terms or ambiguity, we want to be able to say as far as possible that the variation that we find is connected with true variation between interviewees and not to variation in the way a question was asked or the answers recorded in the course of the administration of a survey by structured interview.
Variability: two ways
Intra-interviewer variability occurs when an interviewer is not consistent in the ways he/she asks the questions and/ or records the answers
Inter-interviewer variability occurs when there is more than one interviewer and interviewers are not consistent with each other in the ways they ask the questions and /or record answers.
Most structured interviews contain mainly questions that are variously referred to as closed, closed ended, pre-coded, or fixed choice.
With the closed question, the respondent is given a limited choice of possible answers. In other words, the interviewer provides respondents with two or more possible answers and asks them to select which one or ones apply. The interviewer places a tick in a box by the answer(s) selected by a respondent. The advantage of this practice is that the potential for interviewer variability is reduced and that there is no problem of whether the interviewer writes down everything that the respondent says or of misinterpretation of the reply given.
In cases where an open or open-ended question is asked, the interviewer may not write down everything said, may embellish what is said, or may misinterpret what is said.
When an open question is asked, the answers need to be sifted and coded in order for the data to be analysed quantitatively.
Coding introduces another source of error. If the rules for assigning answers to categories, known as coding frame, are flawed, the variation that is observed will not reflect the true variation in interviewee’s reply. Also, there may be variability in the ways in which answers are categorized; intra-coder variability whereby the coder varies over time in the way in which the rules for assigning answers to categories are implemented, and inter-coder variability whereby coders differ from each other in the way in which the rules for assigning answers to categories are implemented.
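The simplest check on inter-coder consistency is the percentage of answers that two coders assign to the same category; the codes below are invented for illustration:

```python
# Hypothetical category codes assigned to eight open answers by two coders.
coder_a = [1, 2, 2, 3, 1, 2, 3, 1]
coder_b = [1, 2, 3, 3, 1, 2, 3, 2]

# Proportion of answers on which the two coders agree.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"{agreement:.0%}")  # 75%
```

A low agreement figure would suggest that the coding frame's rules are being applied inconsistently (inter-coder variability).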
Major types of interview
Semi-structured interview refers to a context in which the interviewer has a series of questions that are in the general form of an interview schedule but is able to vary the sequence of questions.
Unstructured interview has only a list of topics or issues, often called an interview guide or aide-mémoire, which are to be covered, while the style of questioning is usually informal.
Intensive interview is an alternative term to the unstructured interview.
Qualitative interview is a term to denote an unstructured interview but more frequently it embraces interviews of both the semi-structured and unstructured kind.
In-depth interview refers to an unstructured interview but more often refers to both semi-structured and unstructured interviewing.
Focused interview refers to an interview using predominantly open questions to ask interviewees questions about a specific situation or event that is relevant to them and of interest to the researcher.
Focus group is the same as the focused interview but the interviewees discuss the specific issue in groups.
Group interview refers to a situation in which members of a group discuss a variety of matters that may be only partially related.
Oral history interview is an unstructured or semi-structured interview in which the respondent is asked to recall events from his or her past and to reflect on them.
Life history interview is similar to oral history interview but the aim of this type of unstructured interview is to glean information on the entire biography of each respondent.
Advantages of telephone interviews over personal interviews
they are cheaper and quicker
telephone interviews are easier to supervise than personal interviews
there is evidence that in personal interviews respondents' replies are sometimes affected by characteristics of the interviewer; this effect is reduced in telephone interviews
Limitations of telephone interviewing
people who do not own or who are not contactable by telephone obviously cannot be interviewed by telephone
respondents with hearing impairments are much more likely to find telephone interviewing difficult for them than personal interviewing
the length of a telephone interview is unlikely to be sustainable beyond 20-25 minutes
there is a general belief that telephone interviews achieve slightly lower response rates than personal interviews
telephone interviews fare less well for the asking of questions about sensitive issues, such as drug and alcohol use, income, tax returns, and health
there is a general difficulty of gaining access to respondents through conventional landlines
telephone interviewers cannot engage in observation
when specific individuals in households or firms are the targets of an interview, it is harder to ascertain whether the correct person is replying
the telephone interview cannot readily employ visuals aids such as show cards from which respondents might be asked to select their replies or to use diagrams or photographs
the quality of data derived from telephone interviews is inferior to that of comparable face-to-face interviews
Computer-assisted interviewing
There are two main formats for computer-assisted interviewing: computer-assisted personal interviewing (CAPI) and computer-assisted telephone interviewing (CATI).
With computer-assisted interviewing, the questions that comprise an interview schedule appear on the screen. The advantage arises when filter questions are asked: certain questions can be skipped as a result of a person's reply, because the computer can be programmed to jump to the next relevant question.
However, technophobic respondents may be somewhat alarmed by its use; in general, though, the use of computer-assisted interviewing seems to be growing.
Before interviewing anybody, an interviewer should be fully conversant with the schedule and know it inside out. Prospective respondents have to be provided with a credible rationale for the research in which they are being asked to participate. The introductory rationale may be either spoken by the interviewer or written down. Since interviewers represent the interface between the research and the respondent, they have an important role in maximizing the response rate of the survey. Thus, the following points are important to consider:
interviewers should be prepared to keep calling back if interviewees are out or unavailable
reassure people that you are not a salesperson
dress in a way that will be acceptable to a wide spectrum of people
make it clear that you will be happy to find a time to suit the respondent
Also, it is suggested that it is important for the interviewer to achieve rapport with the respondent. This means that very quickly a relationship must be established that encourages the respondent to want to participate in and persist with the interview. Unless an element of rapport can be established, some respondents may initially agree to be interviewed but then decide to terminate their participation because of the length of time the interview is taking or perhaps because of the nature of the questions being asked. However, too much rapport may result in the interview going on too long and the respondent suddenly deciding that too much time is being spent on the activity.
There are two points generally advised to interviewers. First of all, within a survey, question order should not be varied, unless of course, question order is the subject of the study. Secondly, researchers should be sensitive to the possible implications of the effect of early questions on answers to subsequent questions.
The following rules are also proposed:
early questions should be directly related to the topic of the research
as far as possible, questions that are more likely to be salient to respondents should be asked early in the interview schedule, so that their interest and attention is more likely to be secured
potentially embarrassing questions or ones that may be a source of anxiety should be left till later
with a long schedule or questionnaire, questions should be grouped into sections
within each group of questions, general questions should precede specific ones
when a specific question comes before a general one, the aspect of the general question that is covered by the specific one is discounted in the minds of respondents because they feel they have already covered it
questions dealing with opinions and attitudes should precede questions to do with behaviour and knowledge
during the course of an interview, it sometimes happens that a respondent provides an answer to a question that is to be asked later in the interview
Probing: it frequently happens in interviews that respondents need help with their answers, for example when they do not understand the question. Probing may also be needed when the respondent does not provide a sufficiently complete answer and has to be probed for more information. The problem here is that the interviewer's interventions may influence the respondent, and the nature of interviewers' interventions may differ.
There are some general tactics regarding the probes:
if further information is required, standardized probes can be employed, such as “could you please say a little more about that?”
if the problem is that, when presented with a closed question, the respondent replies in a way that does not match one of the pre-designed answers, the interviewer should repeat the fixed-choice alternatives and ask the respondent to choose among them
if the interviewer needs to know about something that requires information but the respondent resists this by answering in general terms, the interviewer needs to persist with securing a specific reply from the respondent
Prompting occurs when the interviewer suggests a possible answer to a question to the respondent. Here, the key prerequisite is that all respondents receive the same prompts. There are three kinds of context in which it might be preferable to employ show cards rather than to read out the entire set of possible answers:
there may be a very long list of possible answers
sometimes, during the course of interviews, respondents are presented with a group of questions to which the same possible answers are attached
some people are not keen to divulge personal details such as their age or their income
Training and supervision
Whenever people other than the lead researcher are involved in interviewing, they will need training and supervision in the following areas:
contacting prospective respondents and providing an introduction to the study
reading out questions as written and following instructions in the interview schedule
using appropriate styles of probing
recording exactly what is said
maintaining an interview style that does not bias respondents’ answers
Supervision related to these issues can be achieved by:
checking individual interviewers’ response rates
tape-recording at least a sample of interviews
examining completed schedules to determine whether any questions are being left out or if they are being completed properly
call-backs on a sample of respondents to determine whether they were interviewed and to ask about the interviewers’ conduct
Advice on structured interviewing
interviewers should refrain from expressing views or opinions on the topics covered by the survey instrument
Interviewers should refrain from presenting any personal information that might provide a basis for inferring what their preferences or values might be that are relevant to the content of the interview
Although a little informal chatting about neutral topics, such as the weather or pets, may help to free up communication, for the most part interviewers should focus on the task
The self-completion questionnaire is sometimes referred to as a self-administered questionnaire. With self-completion questionnaires respondents answer questions by completing the questionnaire by themselves. One form of the self-completion questionnaire is the postal or mail questionnaire, that is, a questionnaire sent through the post to the respondent. “Postal” is used rather than mail to distinguish questionnaires that are sent through the postal system from email questionnaires.
Self-completion questionnaires compared to structured interviews:
have fewer open questions, since closed ones tend to be easier to answer
have easy-to-follow designs to minimize the risk that the respondent will fail to follow filter questions or will inadvertently omit a question
be shorter to reduce the risk of “respondent fatigue”, since it is manifestly easier for a respondent who becomes tired of answering questions in a long questionnaire to consign it to a waste paper bin rather than to terminate an interview
The advantages of the self-completion questionnaires over the structured interviews are that self-completion questionnaires:
are cheaper to administer
are quicker to administer
there is absence of interviewer effects
there is no interviewer variability
they are more convenient for respondents
The disadvantages of the self-completion questionnaires in comparison with the structured interview are that self-completion questionnaires:
cannot prompt, since there is no one present to help the respondents if they have any difficulty answering a question
cannot probe, there is no opportunity to probe respondents to elaborate an answer
cannot ask many questions that are not salient to respondents
there is difficulty of asking other kinds of questions
questionnaire can be read as a whole
we do not know who answers, we can never be sure whether the right person has answered the questionnaire
cannot collect additional data
difficult to ask a lot of questions
not appropriate for some kinds of respondents, such as those whose literacy is limited or whose facility with English is restricted
there is a greater risk of missing data; partially answered questionnaires are more likely since there is lack of prompting or supervision
lower response rates in comparison to the interview-based studies
Response rates are important because the lower a response rate, the more questions are likely to be raised about the representativeness of the achieved sample. This is likely to be an issue only with randomly selected samples. With samples that are not selected on the basis of a probability sampling method, it could be argued that the response rate is less of an issue because the sample would not be representative of a population even if everyone participated. Here, the key point is to recognize and acknowledge the implications of the possible limitations of a low response rate.
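The computation of a response rate is itself simple; the following is a minimal sketch, in which the convention of excluding ineligible or unreachable sample members from the denominator is an assumed (though common) one, and the figures are invented:

```python
def response_rate(usable_returns, total_sample, ineligible=0):
    """Percentage of usable questionnaires returned. Sample members
    who turned out to be ineligible or unreachable are excluded from
    the denominator (a common, but assumed, convention)."""
    eligible = total_sample - ineligible
    return 100 * usable_returns / eligible

# e.g. 320 usable returns from a mailing of 1000, of whom 50
# proved ineligible:
rate = response_rate(320, 1000, ineligible=50)  # about 33.7%
```

Note that the choice of denominator matters: reporting returns against the full mailing rather than the eligible sample understates the rate, so the convention used should always be stated.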
A great deal of thought and research has gone into ways of improving survey response; the following steps are frequently suggested:
write a good covering letter explaining the reasons for the research: why it is important, why the recipient has been selected, and what its sponsorship is, and provide guarantees of confidentiality
postal questionnaires should always be accompanied by a stamped addressed envelope or return postage
follow up individuals who do not reply at first, possibly with two or three further mailings
shorter questionnaires tend to achieve better response rates than longer ones
clear instructions and an attractive layout improve postal questionnaire response rates
do not allow the questionnaire to appear unnecessarily bulky
begin with questions that are more likely to be of interest
personalize covering letters by including the respondents’ name and address
postal questionnaires should comprise as few open questions as possible, since people are often deterred by the prospect of having to write a lot
providing monetary incentives increases the response rate
Designing the self-completion questionnaire
Do not cramp the presentation: make the instrument appear as short as possible in order for it to be less likely to deter prospective respondents
Clear presentation: make sure that it has a layout that is easy on the eye and that facilitates the answering of all questions that are relevant to the respondent
Vertical closed answers: use a vertical format whenever possible because, in cases where either arrangement is feasible, confusion can arise when a horizontal one is employed
Clear instructions about how to respond: be clear about how you want the respondents to indicate their replies when answering closed questions
Keep question and answers together: you should never split a question so that it appears on two separate pages
Diaries as a form of self-completion questionnaire
In quantitative research, the researcher-driven diary functions in a similar way to the self-completion questionnaire. Also, the researcher-driven diary is an alternative method of data collection to observation. The diary can be thought of as the equivalent of structured observation in the context of research questions that are framed in terms of quantitative research.
There are three major ways in which the term “diary” has been employed in the context of social research:
the diary as a method of data collection: the researcher devises a structure for the diary and then asks a sample of diarists to complete the instruments so that they record what they do more or less contemporaneously with their activities
the diary as a document: the diary in this context is written spontaneously by the diarist and not at the behest of the researcher. Diaries in this context are used by historians but have some potential for social researchers working on issues that are of social scientific significance
the diary as a log of the researcher’s activities: researchers sometimes keep record of what they do at different stages as an aide-mémoire, for example the field notes by ethnographers
According to Corti (1993) there are some recommendations on how a researcher should prepare the diary. Specifically, Corti recommends that the diary should:
provide explicit instructions for diarists
be clear about the time periods within which behaviour is to be recorded (e.g. the day, twenty-four hours, the week)
provide a model of a completed section of a diary
provide checklists of “items, events or behaviour” that can jog people’s memory- but the list should not become too daunting in length or complexity
include fixed blocks of time or columns showing when the designated activities start and finish
Advantages and disadvantages of the diary as a method of data collection
Apart from the major advantage of diaries, namely that the resulting data are more accurate than equivalent data based on interviews or questionnaires, there are some additional potential advantages of the use of diaries:
when fairly precise estimates of the frequency or amount of time spent in different forms of behaviour are required, the diary provides more valid and reliable data than questionnaires
when information about the sequencing of different types of behaviour is required, the diary is likely to perform better than questionnaires or interviews
based on the previous two advantages, it could be suggested that structured observation would be just as feasible, but structured observation is probably less appropriate for producing data on behaviour that is personally sensitive
However, diaries may have also the following disadvantages:
they tend to be more expensive than personal interviews
diaries can suffer from a process of attrition, as people decide they have had enough of the task of completing a diary
this last point raises the possibility that diarists become less diligent over time about their record keeping
there is sometimes a failure to record details sufficiently quickly, so that memory recall problems set in
One consideration of researchers is whether to ask a question in an open or closed format. The issue of whether to ask a question in an open or closed format is relevant to the design of both structured interview and self-completion questionnaire research.
Even though researchers generally prefer to use closed questions, open questions also have certain advantages:
respondents can answer in their own terms
open questions allow unusual responses to be derived, replies that the survey researcher may not have contemplated
the questions do not suggest certain kinds of answer to respondents, so that respondents’ levels of knowledge and understanding of issues can be tapped and the salience of issues for respondents can also be explored
open questions are useful for exploring new areas or ones in which the researcher has limited knowledge
open questions are useful for generating fixed-choice format answers
However, open questions present also some problems for the survey researcher, which are:
open questions are time-consuming for interviewers to administer; interviewees are likely to talk for longer than is usually the case with a comparable closed question
answers have to be “coded”, which is time-consuming, as well
open questions require effort from respondents, who are likely to talk for longer than would be the case for a comparable closed question or in the case of a self-completion questionnaire, would need to write for much longer
there is the possibility in research based on structured interviews of variability between interviewers in the recording of answers, due to the difficulty of writing down verbatim what respondents say to interviewers
Coding is a key stage in quantitative research. It entails two stages. First, the unstructured material must be categorized. Second, the researcher must assign numbers to the categories that have been created.
Also, there is an important distinction between pre-coding and post-coding. Many closed questions in survey instruments are pre-coded, which means that respondents are being asked to assign themselves to a category that has already had a number assigned to it. Post-coding occurs when answers to an open question are being coded or when themes in newspaper articles concerned with a certain topic are being counted, as in content analysis.
Three basic principles should be applied in coding process:
the categories that are generated must not overlap, otherwise the numbers assigned to them cannot be applied to distinct categories
the list of categories must be complete and therefore cover all the possibilities, otherwise some material will not be capable of being coded
there should be clear rules about how codes should be applied, so that the person conducting the coding has instructions about the kinds of answers that should be subsumed under a particular code.
“Coding frame” is used to describe the lists of codes that should be applied to unstructured data and the rules for their application. “Coding manual” is a term used in content analysis and structured observation, to describe the lists of codes for each item of information and the rules to be employed.
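The three principles above can be illustrated by sketching a post-coding frame for an open question as a keyword-to-code mapping with a residual code; the question, keywords, and code numbers below are all invented for illustration:

```python
# Hypothetical coding frame for the open question
# "How do you usually travel to work?"
CODING_FRAME = [
    ("car", 1),
    ("bus", 2),
    ("train", 3),
    ("bicycle", 4),
    ("walk", 5),
]
OTHER = 98  # residual code keeps the list of categories complete

def code_answer(answer):
    """Apply the coding frame to one verbatim answer.
    'First match wins' is the explicit rule that keeps the
    categories from overlapping when an answer mentions more
    than one keyword."""
    text = answer.lower()
    for keyword, code in CODING_FRAME:
        if keyword in text:
            return code
    return OTHER

code_answer("I take the bus, mostly")   # -> 2
code_answer("by unicycle")              # -> 98 (residual)
```

The residual code makes the frame exhaustive, and the fixed matching order is one way of writing down the "clear rules about how codes should be applied" that the third principle demands.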
Closed questions: advantages and disadvantages
The advantages of closed questions are:
it is easy to process answers
closed questions enhance the comparability of answers
closed questions may clarify the meaning of a question for respondents
closed questions are easy for interviewers and/ or respondents to complete
in interviews, closed questions reduce the possibility of variability in the recording of answers in structured interviewing
The disadvantages of closed questions are:
there is loss of spontaneity in respondents’ answers
it can be difficult to make forced-choice answers mutually exclusive
it is difficult to make forced-choice answers exhaustive
there may be variation among respondents in the interpretation of forced-choice answers
closed questions may be irritating to respondents when they are not able to find a category that they feel applies to them
in interviews, a large number of closed questions may make it difficult to establish rapport, because the respondent and interviewer are less likely to engage with each other in a conversation
Types of questions
There are several types of questions to ask when employing a structured interview or a self-completion questionnaire. Some prominent types of question are:
Personal factual questions: these are questions that ask the respondent to provide personal information, such as marital status, income, age, education and so on. They also include information on the behaviour of the respondent, such as church attendance or frequency of visiting cinema and so on.
Factual questions about others: these questions ask for personal information about others, sometimes in combination with the respondent
Informant factual questions: people who are interviewed are sometimes placed in the position of informants rather than of respondents about themselves, for example when people are asked about the size of the firm where they work
Questions about attitudes: these questions may involve formats for measuring attitudes
Questions about beliefs: respondents are frequently asked about their beliefs, religious or political beliefs
Questions about normative standards and values: respondents may be asked to indicate what principles or behaviour influence them or they hold dear
Questions about knowledge: questions can sometimes be employed to test respondents’ knowledge in an area
Rules for designing questions
Hereby are presented some general rules of thumb and specific rules when designing questions that the researcher should take into consideration.
General rules of thumb
always bear in mind your research questions
what do you want to know?
How would you answer it?
Specific rules when designing questions
avoid ambiguous terms in questions
avoid long questions
avoid double-barrelled questions
avoid very general questions
avoid leading questions
avoid questions that are actually asking two questions
avoid questions that include negatives
avoid technical terms
does the respondent have the requisite knowledge
make sure that there is a symmetry between a closed question and its answers
make sure that the answers provided for a closed question are balanced
beware of memory problems: do not rely too much on people’s memories, since answers that depend heavily on recall are likely to be inaccurate for many respondents
do not offer a “don’t know” or “no opinion” answer option unless it is absolutely necessary, since it has been found that respondents who are bored or who hold a low level of education may be prone to these answers
The vignette technique essentially comprises presenting respondents with one or more scenarios and then asking them how they would respond when confronted with the circumstances of that scenario. The advantage of a vignette over an attitude question is that it anchors the choice in a situation and as such reduces the possibility of an unreflective reply. For instance, when the subject is a sensitive area, there is a possibility that the questions may be seen as threatening by respondents, and respondents may also feel they are being judged for their replies. A major requirement of the vignette technique is that the scenarios must be believable, so considerable effort needs to go into the construction of credible situations. Yet, it is difficult to estimate how far people’s answers reflect their own normative views, or indeed how they themselves would act when confronted with the kinds of choices revealed in the scenarios.
Piloting and pre-testing questions
The desirability of conducting a pilot study before administering a self-completion questionnaire or structured interview schedule does not only have to do with trying to ensure that survey questions operate well but also with ensuring that the research instrument as a whole functions well.
Some uses of pilot studies are included in the following points:
if the main study is going to employ mainly closed questions, open questions can be asked in the pilot to generate the fixed-choice answers
piloting an interview schedule can provide interviewers with some experience of using it and can infuse them with a greater sense of confidence
if everyone who answers a question replies in the same way, the resulting data are unlikely to be of interest because they do not form a variable
in interview studies it may be possible to identify questions that make respondents feel uncomfortable and to detect any tendency for respondents’ interest to be lost at certain junctures
questions that seem not to be understood or questions that are often not answered should become apparent
pilot studies allow the researcher to determine the adequacy of instructions to interviewers or to respondents completing a self-completion questionnaire
it may be possible to consider how well the questions flow and whether it is necessary to move some of them around to improve this feature
Using existing questions
It is recommended that you should also consider using existing questions, that is, questions that have been employed by other researchers, for at least part of your questionnaire. This allows you, in a sense, to use questions that have already been piloted for you. There are some “question banks” where you can find such questions; one common source is http://qb.soc.surrey.ac.uk/ . Question banks include questions from major surveys, presented in the context of the questionnaire in which they appeared and accompanied by technical details.
Problems with using survey research to investigate behaviour
problem of meaning: people may vary in their interpretations of key terms in a question
problem of omission: when answering the question, respondents may inadvertently omit key terms in the question
problem of memory: they may misremember aspects of the occurrence of certain forms of behaviour
social desirability effect: people may exhibit a tendency towards replying in ways that are meant to be consistent with their perceptions of the desirability of certain kinds of answer
question threat: some questions may appear threatening and result in a failure to provide an honest reply
interview characteristics: aspects of the interviewer may influence the answers provided
gap between stated and actual behaviour: how people say they are likely to behave and how they actually behave may be inconsistent
Major types of observation research
Structured observation is a method that is relatively underused in social research. It is also called systematic observation and is a technique in which the researcher employs explicitly formulated rules for the observation and recording of behaviour. The rules inform the observer about what they should look for and how they should record behaviour. Each person who is part of the research is observed for a predetermined period of time using the same rules. Structured observation entails the direct observation of behaviour and the recording of that behaviour in terms of categories that have been devised prior to the start of data collection.
Apart from the structured observation, there are some extra forms of observational research in social research, such as:
Participant observation: this is one of the best known methods of research in the social sciences and it is associated with qualitative research. It entails the relatively prolonged immersion of the observer in a social setting in which he/she seeks to observe the behaviour of members of that setting and to elicit the meanings they attribute to their environment and behaviour.
Non-participant observation: it is a term used to describe a situation in which the observer observes but does not participate in what is going on the social setting. Structured observers are usually non-participants in that they are in the social setting being observed but rarely participate in what is happening.
Unstructured observation: as the term implies it does not entail the use of an observation schedule for the recording of behaviour. The aim is to record as much detail as possible in the behaviour of participants with the aim of developing a narrative account of that behaviour.
Simple observation and contrived observation: with simple observation the researcher has no influence over the situation being observed, while in the case of contrived observation, the observer actively alters the situation to observe the effects of an intervention. Both of them are forms of non-participant observation and can entail either structured or unstructured observation.
The observation schedule
Devising a schedule for the recording of observation is clearly a crucial step in the structured observation project and the following considerations should be taken into account:
a clear focus is necessary. It should be clear to the observer exactly who or what is to be observed. Also, the observer needs to know precisely which if any aspects of the setting are to be observed and hence recorded
the forms taken by any category of behaviour must be both mutually exclusive and inclusive
the recording system must be easy to operate, complex systems with large numbers of types of behaviour will be undesirable
one possible problem with observation schedules is that they sometimes require a certain amount of interpretation on the part of the observer
Strategies for observing behaviour
There are several ways of conceptualizing how behaviour can be recorded, such as:
we can record in terms of incidents: this means waiting for something to happen and then recording what follows from it
we can observe and record in terms of short periods of time
we can observe and record observations for quite long periods of time, the observer watches and records more or less continuously
a further approach to the observation of behaviour is time sampling
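Time sampling can be operationalized by drawing random observation moments from the observation period; here is a brief sketch in which the period length, number of sampled moments, and seed are arbitrary assumptions:

```python
import random

def time_sample(period_minutes=480, n_moments=20, seed=1):
    """Draw random minutes from an (assumed) 8-hour observation
    period; at each sampled moment the observer would record
    whatever behaviour is occurring on the schedule."""
    rng = random.Random(seed)
    # sample() draws without replacement, so no moment repeats
    return sorted(rng.sample(range(period_minutes), n_moments))

moments = time_sample()  # 20 distinct minutes within 0..479
```

Randomizing the moments, rather than observing at fixed intervals, guards against the sampled times coinciding with some regular rhythm of the setting being observed.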
In general, sampling procedures conform to probability sampling principles, because it is feasible to construct a sampling frame for individuals. However, this is not always possible, for different kinds of reason.
Studies in public areas do not permit random sampling, because we cannot easily construct a sampling frame of the people who walk along a street. Nor is a sampling frame of interactions easily constructed. A general problem of structured observation research is thus the difficulty of specifying a sampling frame, which curtails the researcher’s ability to generate a probability sample.
A further set of distinctions between types of sampling in structured observation come from Martin and Bateson (1986) and are the following:
“ad libitum sampling”: the observer records whatever is happening at the time
“focal sampling”: a specific individual is observed for a set period of time; the observer records all examples of whatever forms of behaviour are of interest in terms of the schedule
“scan sampling”: an entire group of individuals is scanned at regular intervals and the behaviour of all of them is recorded at that time
“behaviour sampling”: an entire group is watched and the observer records who was involved in a particular kind of behaviour
Reliability and Validity
Compared to interviews and questionnaires, structured observation has been concluded to provide:
more reliable information about events
greater precision regarding their timing, duration and frequency
greater accuracy in the time ordering of variables
more accurate and economical reconstructions of large-scale social episodes
Reliability: considerations in relation to reliability:
Inter-observer consistency entails considering the degree to which two or more observers of the same behaviour agree in terms of their coding of that behaviour on the observation schedule.
Intra-observer consistency is the degree of consistency of the application of the observation schedule over time. It is a difficult notion due to the capacity and often necessity for people to behave in different ways on different occasions and in different contexts
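Inter-observer consistency is often summarized with a chance-corrected agreement statistic such as Cohen’s kappa; the following is a from-scratch sketch, in which the behaviour codes and the two observers’ records are invented:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two observers who coded
    the same sequence of behaviour, corrected for the agreement
    expected by chance from each observer's code frequencies."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented codings of six observation intervals by two observers:
obs1 = ["talk", "talk", "idle", "work", "work", "idle"]
obs2 = ["talk", "idle", "idle", "work", "work", "work"]
kappa = cohens_kappa(obs1, obs2)  # 0.5: moderate agreement
```

Kappa of 1.0 means perfect agreement and 0.0 means agreement no better than chance, which is why it is preferred over raw percentage agreement for checking an observation schedule.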
Reliability is difficult to achieve in structured observation. Moreover, since validity presupposes reliability, validity may also be difficult to achieve on occasions because of the effects of such factors as observer fatigue and lapses of attention.
Measurement validity concerns the issue of whether a measure is measuring what it is supposed to measure, and the validity of any measure will be affected by:
whether the measure reflects the concept it has been designed to measure, which means that in the structured observation it is necessary to attend to the same kinds of issues concerning the checking of validity that are encountered in research based on interviews and questionnaires
error that arises from the implementation of the measure in the research process, and is related with two matters; whether the observation schedule is administered as it is supposed to be and whether people do change their behaviour because they know they are being observed (“reactive effect”)
Field stimulation as a form of structured observation
Field stimulation is a study in which the researcher directly intervenes in and/or manipulates a natural setting in order to observe what happens as a consequence of that intervention. Unlike most structured observation, in a field stimulation participants do not know they are being studied.
Criticisms of structured observation
Apart from criticisms concerning reliability and validity issues, as well as generalizability, there are certain other areas of criticism:
there is a risk of imposing a potentially inappropriate or irrelevant framework on the setting being observed
structured observation concentrates upon directly observable behaviour and therefore it is rarely able to get at the intentions behind behaviour
there is a tendency for structured observation to generate lots of fragments of data. The problem can be one of trying to piece them together to produce an overall picture, or one of trying to find general themes that link the fragments of data together
it is often suggested that structured observation neglects the context within which behaviour takes place
However, there are also some points on which structured observation is more applicable and effective:
when overt behaviour is the focus of analysis and perhaps issues of meaning are less salient, structured observation is more accurate and effective than getting people to report on their behaviour through questionnaires
structured observation is a method that works best when it is accompanied by other methods. For example, since it cannot provide reasons for observed patterns of behaviour, if it is accompanied by another method that can probe reasons, it is of greater utility.
in laboratory experiments in fields like social psychology, observation with varying degrees of structure is quite commonplace. That may be because, although interviews and questionnaires are limited in their capacity to tap behaviour accurately, they do offer the opportunity to reveal information about both behaviour and social backgrounds.
they are more flexible and offer the prospect of being able to uncover a variety of correlates of behaviour
they can ask questions about attitudes and investigate explanations that people proffer for their behaviour
Content analysis is an approach to the analysis of documents and texts (which may be printed or visual) that seeks to quantify content in terms of predetermined categories and in a systematic and replicable manner. It is a very flexible method that can be applied to a variety of different media. In a sense it is not a research method, in that it is an approach to the analysis of documents and texts rather than a means of generating data. However, it is usually treated as a research method because of its distinctive approach to analysis. Content analysis is firmly rooted in the quantitative research strategy in that the aim is to produce quantitative accounts of the raw material in terms of the categories specified by the rules.
The most well-known definition of content analysis comes from Berelson (1952):
Content analysis is a technique for the objective, systematic and quantitative description of the manifest content of communication.
Content analysis can be usually contrasted with two other approaches to the analysis of the content of communication:
semiotics, that is the study/ science of signs and is an approach to the analysis of documents and other phenomena that emphasizes the importance of seeking out the deeper meaning of those phenomena. A semiotic approach is concerned to uncover the process of meaning production and how signs are designed to have an effect upon actual and prospective consumers of those signs
ethnographic content analysis, which refers to an approach to documents that emphasizes the role of the investigator in the construction of the meaning of and in texts. It is also referred to as qualitative content analysis, because there is an emphasis on allowing categories to emerge out of the data and on recognizing the significance of the context in which an item is being analyzed for understanding its meaning.
What are the research questions?
It is necessary to specify the research questions which will guide both the selection of the media to be content analyzed and the coding schedule. Otherwise, there is a risk that inappropriate media will be analyzed or that the coding schedule will miss out key dimensions.
Most content analysis is likely to entail several research questions. Such research questions seem to revolve around the questions of:
who (gets reported)
what (gets reported)
where (gets reported)
location (of coverage within the items analyzed)
how much (gets reported)
why (does the issue get reported)
Selecting a sample
There are several phases on selecting a sample for content analysis. This method can be applied to many different kinds of document, but here the case of applying it to mass media is examined:
decision about which mass media: studies of mass media present a research problem as “the representation of X in the mass media”, but which mass media will one choose to focus upon?
decision about dates: sometimes the decision about dates is more or less dictated by the occurrence of a phenomenon. With a research question that entails an ongoing general phenomenon, the matter of dates is more open.
decision on whether the focus will be on an issue that entails keeping track of representation as it happens, in which case the researcher may begin at any time and the key decision becomes when to stop, or whether it is necessary to go backwards in time to select media from one or more time periods in the past
What is to be counted?
Decisions on what should be counted in the course of the content analysis are bound to be affected by the nature of the research questions under consideration. What you would actually want or need to count will be significantly dictated by your research questions. However, the following kinds of units of analysis are frequently encountered and can be used as guides to the kinds of objects that might be the focus of attention (particularly in the context of mass media news reporting).
what kind of person has produced the item, for example general or specialist news reporter
who is or are the main focus of the item, for example politician, expert, government spokesperson, or representative of an organization?
who provides alternative voices, for example politician, expert, government spokesperson, or representative of an organization?
what was the context for the item, for example an interview, the release of a report, or an event such as an outbreak of hostilities or a minister’s visit to a hospital?
So, the significant actors include:
the author of the item, for example the type of correspondent?
the type of item, for example in the case of press, whether the research was reported in a general article, feature article, or some other context?
the details of the researcher who was most prominent in the item, for example personal details, status, and whether he or she was acting as a researcher or pundit in the context of the item?
what prompted the item, for example, launch of a report, new research initiative, or a conference?
the details of the main (if any) commentators on the research?
any other actors
The counting of the frequency with which certain words occur is sometimes undertaken in content analysis. The use of some significant words rather than others can often be of some significance, because it reveals the predilection for sensationalizing certain events.
A variation on the search for the occurrence of certain words is the examination of keywords in the context of other words. The examination of such words can be a springboard for a more thematic analysis.
Subjects and themes
In a content analysis the researcher may want to code text in terms of certain subjects and themes seeking a categorization of the phenomenon or the phenomena of interest.
When the process of coding is thematic, a more interpretative approach needs to be taken. The analyst is searching not just for manifest content but for latent content as well. It then becomes necessary to probe beneath the surface in order to ask deeper questions about what is happening.
A further level of interpretation is likely to be entailed when the researcher seeks to demonstrate a disposition in the texts being analysed. A way in which dispositions may be revealed in content analysis is through the coding of ideologies, beliefs or principles.
Coding is a crucial stage in the process of doing a content analysis. There are two elements to a content analysis coding scheme: designing a coding schedule and designing a coding manual.
The coding schedule is a form onto which all the data relating to an item being coded will be entered. The schedule is a simplification in order to facilitate the discussion of the principles of coding in content analysis and of the construction of a coding schedule.
However, the coding schedule on its own does not provide much information about what is to be done or how; this is where the coding manual is needed. The coding manual is a statement of instructions to coders that also includes all the possible categories for each dimension being coded. It provides a list of all the dimensions, the different categories subsumed under each dimension, the numerical codes that correspond to each category, and guidance on what each dimension is concerned with and any factors that should be taken into account in deciding how to allocate any particular code to each dimension. A coding manual thus includes all the dimensions that will be employed in the coding process, guidance for coders, and the lists of categories created for each dimension. The coding manual is important because it provides coders with complete listings of all categories for each dimension they are coding and guidance about how to interpret the dimensions. Even a lone researcher should spend time writing such instructions for himself/herself, since coding raises the problem of inter-coder reliability and, especially for a single coder, the issue of intra-coder reliability.
Potential pitfalls in devising coding schemes
discrete dimensions: there should be no conceptual or empirical overlap between your dimensions and you should be sure they are entirely separate
mutually exclusive categories: there should be no overlap in the categories supplied for each dimension, so that if the categories are not mutually exclusive, coders will be unsure about how to code each item
exhaustive categories: for each dimension, all possible categories should be available to coders
clear instructions: coders should be clear about how to interpret what each dimension is about and what factors to take into account when assigning codes to each category
clarity about the unit of analysis: for instance, you need to be clear about the distinction between the media item (e.g. newspaper) and the topic being coded; although in practice the researcher is interested in both, he/she needs to keep the distinction in mind
To enhance the quality of a coding scheme, it is highly advisable to pilot early versions of the scheme. Piloting will help to identify difficulties in applying the coding scheme. Also, it will help to identify any evidence that one category of a dimension tends to subsume an extremely large percentage of items.
For the reliability of coding, it must be done in a consistent manner. Coding must be consistent between coders (inter-coder reliability) and each coder must be consistent over time (intra-coder reliability).
Advantages of content analysis
content analysis is a very transparent research method; the coding scheme and the sampling process can be clearly set out so that replications and follow-up studies are feasible
it can allow a certain amount of longitudinal analysis with relative ease; studies may allow the researcher to track changes in frequency over time
content analysis is often referred to favourably as an unobtrusive method, a method that does not entail participants in a study having to take the researcher into account
it is a highly flexible method; it can be applied to a wide variety of different kinds of unstructured information
content analysis can allow information to be generated about social groups to which it is difficult to gain access
Disadvantages of content analysis
a content analysis can be only as good as the documents on which the practitioner works
it is almost impossible to devise coding manuals that do not entail some interpretation on the part of coders
particular problems are likely to arise when the aim is to impute latent rather than manifest content
it is difficult to ascertain the answers to “why?” questions through content analysis
content analytic studies are sometimes accused of being atheoretical, because in content analysis the emphasis on measurement can easily and unwittingly result in an accent being placed on what is measurable rather than on what is theoretically significant or important
Students may not have the time or financial resources required to collect data. Yet there are large amounts of quantitative data already collected by social scientists and others. Also, many organizations, mainly government departments and their representatives, collect data that are presented in statistical form and that may be usable by social scientists. Analysing these data, rather than collecting new data, is referred to as secondary analysis. Secondary analysis thus concerns both data that have been collected by other researchers and data that have been collected by various institutions in the course of their business.
Secondary analysis is the analysis of data by researchers who will probably not have been involved in the collection of those data, for purposes that likely were not envisaged by those responsible for the data collection. Secondary analysis may entail the analysis of either quantitative data or qualitative data. Typically, secondary analysis entails the analysis of data that others have collected, but, this is not necessarily the case.
Advantages of secondary analysis
cost and time: secondary analysis offers the prospect of having access to good quality data for a tiny fraction of the resources involved in carrying out a data collection exercise yourself
high-quality data: many of the data employed for secondary analysis are of high quality
opportunity for longitudinal analysis: secondary analysis can offer the opportunity for longitudinal research, which is rather rare in social research because of the time and cost involved
subgroup analysis: when large samples are the source of data, there is the opportunity to study what can often be quite sizable subgroups
opportunity for cross-cultural analysis: cross-cultural research has considerable appeal at a time when social scientists are more attuned to the processes associated with globalization and to cultural differences. Findings should not be applied to countries other than those in which the research was conducted. However, the secondary analysis of comparable data provides one possible model for conducting cross-cultural research.
more time for data analysis: exactly because data collection is time-consuming, the analysis of data is often squeezed. In secondary analysis the fact that you are freed from having to collect the data means that your approach to the analysis of data can be more considered than it might perhaps otherwise have been
reanalysis may offer new interpretations: data can be analysed in so many different ways that it is very unusual for the range of possible analyses to be exhausted. For instance, a secondary analyst may decide to consider the impact of a certain variable on the relationships between variables of interest. Also, the arrival of new theoretical ideas may suggest analyses that could not have been conceived of by the original researchers. Further, an alternative method of quantitative data analysis may be employed and offer the prospect of a rather different interpretation of the data. And finally, new methods of quantitative data analysis are continuously emerging.
the wider obligations of the social researcher: making data available for secondary analysis enhances the possibility that fuller use will be made of data
Limitations of secondary analysis
lack of familiarity with data: with data collected by others a period of familiarization is necessary; with the range of variables, the ways in which the variables have been coded, and various aspects of the organization of data
complexity of data: some of the best-known data sets that are employed for secondary analysis are very large in the sense of having large numbers of both respondents and variables. Also, there are problems with hierarchical data sets, where data are collected and presented at the level of both the household and the individual, as well as other levels, and the secondary analyst must decide which level of analysis is going to be employed.
no control over data quality: the quality of data should never be taken for granted. Even in cases where it is reasonably assured, data will not necessarily meet all of a prospective secondary analyst’s needs, since they may not have been collected on an aspect of a topic that would have been of considerable interest.
absence of key variables: because secondary data are collected by others, it may be that one or more key variables may not be present.
The Data Archive
The UK Data Archive is likely to become your source of quantitative data for secondary analysis. Access to this catalogue can be achieved online by going to the Data Archive home page at http://www.data-archive.ac.uk/ .
Some large data sets available for secondary analysis include:
British Crime Survey (BCS)
British Household Panel Survey (BHPS)
British Social Attitudes (BSA) Survey
Expenditure and Food Survey (EFS)
Family Expenditure Survey (FES)
General Household Survey (GHS)
Labour Force Survey (LFS)
Millennium Cohort Study
National Child Development Study (NCDS)
National Food Survey (NFS)
ONS Omnibus Survey
Workplace Employee Relations Survey (WERS)
Agencies of the state, in the course of their activities, are required to keep a running record of their areas of activities. When these records are aggregated, they form the official statistics in an area of activity.
Official statistics offer researchers some advantages:
the data have been already collected, so that considerable time and expenses may be saved
the problem of reactivity is less pronounced, since people who are the source of the data are not being asked questions that are part of a research project
there is the prospect of analysing the data both cross-sectionally and longitudinally, tracking change over time
there is the prospect of cross-cultural analysis, since the official statistics for different nation states can be compared for a specific area of activity
The problem of ecological fallacy
The ecological fallacy is the error of assuming that inferences about individuals can be made from findings relating to aggregate data. The fallacy can arise for many reasons; for example, it may not be the members of the minority groups who are responsible for the high levels of offending.
An unobtrusive method is any method that directly removes the observer from the set of interactions or events being studied.
There are four main types:
physical traces: these are the “signs left behind by a group” and include such things as graffiti and trash
archive materials: this category includes statistics collected by governmental and non-governmental organizations, diaries, the mass media, and the historical records
simple observation: this refers to “situations in which the observer has no control over the behaviour or sign in question, and plays an unobserved, passive, and nonintrusive role in the research situation”
contrived observation: this is the same as simple observation, but the observer either actively varies the setting in some way or employs hidden hardware to record observations
Official statistics, therefore, would be subsumed under the second category; as a form of unobtrusive method they enjoy certain advantages, such as the lack of reactivity.
A common error is that people first collect their data and only then decide which method of analysis they are going to employ, because quantitative data analysis looks like a distinct phase that occurs after data have been collected. Indeed, quantitative data analysis typically occurs late in the overall process and is a distinct stage. Yet that does not mean you should postpone considering how you will analyse your data.
Actually, you should be fully aware of what techniques you will apply at a fairly early stage, when you are designing a questionnaire, observation schedule, coding frame, or whatever. This is because:
you cannot apply any technique to any variable. Techniques have to be appropriately matched to the types of variables you have created through your research. This means that you must be fully conversant with the ways in which different types of variable are classified.
The size and nature of your sample are likely to impose limitations on the kinds of techniques you can use
So, you need to know that the decisions you make at an early stage in the research process (the kind of data you collect, the size of your sample) will have implications for the sorts of analysis you will be able to conduct.
A general problem that arises in quantitative data analysis is how to handle “missing data”. Missing data may arise when respondents fail to reply to a question, either by accident or because they do not want to answer the question. When there are missing data, it is necessary to code them with a number that could not also be a true figure. For instance, if missing data are coded as “9999”, it is important to ensure that the computer software is notified of this fact, since it needs to be taken into account during the analysis.
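As a minimal sketch of this convention (the respondent data and the code 9999 are hypothetical), missing values can be flagged and excluded before any statistic is computed:

```python
# Ages reported by respondents; 9999 is the agreed missing-value code,
# chosen because it could never occur as a true age.
raw_ages = [23, 41, 9999, 35, 9999, 52]

MISSING = 9999

# Exclude missing cases before computing any statistic.
valid_ages = [age for age in raw_ages if age != MISSING]
mean_age = sum(valid_ages) / len(valid_ages)

print(len(valid_ages), mean_age)
```

Statistical packages handle this automatically once the missing-value code is declared; the point is that it must be declared, or 9999 would be treated as a real age and distort every result.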
Types of variables
When you look at a questionnaire, the kinds of information you receive vary from question to question. This leads to a classification of the different types of variable that are generated in the course of research.
There are four main types of variables:
interval/ratio variables: these are variables where the distances between the categories are identical across the range of categories. For example, when one person spends 32 minutes on an activity and another person spends 33 minutes on the same activity, the distance between the categories is one minute. This is the highest level of measurement, and a very wide range of techniques of analysis can be applied to interval/ratio variables. Strictly, ratio variables are interval variables with a fixed zero point, but the distinction matters little in social research, since most interval-level variables used there do exhibit a fixed zero point.
ordinal variables: these are variables whose categories can be rank ordered (as in interval/ratio variables), but the distances between the categories are not equal across the range. Also, if you subsequently group an interval/ratio variable which refers to people’s age into categories (e.g. 20 and under; 21-30; 31 and over), you are transforming it into an ordinal variable.
nominal variables: these variables, also known as categorical variables, comprise categories that cannot be rank ordered.
dichotomous variables: these variables contain data that have only two categories (e.g. gender). Their position in relation to the other type is slightly ambiguous, as they have only one interval. They look as though they are nominal variables, but because they have only one interval they are sometimes treated as ordinal variables.
Multiple-indicator (or multiple-item) measures of concepts, strictly speaking, produce ordinal variables. However, it has also been argued that they can be treated as though they produce interval/ ratio variables, because of the relatively large number of categories they generate.
Univariate analysis refers to the analysis of one variable at a time.
A frequency table provides the number of people and the percentage belonging to each of the categories for the variable in question. It can be used in relation to all different types of variable.
If an interval/ratio variable (like people’s age) is to be presented in a frequency table format, it is invariably the case that the categories will need to be grouped. When grouping in such a way, you should ensure that the categories you create do not overlap (e.g. avoid 20-30, 30-40, 40-50, where the boundary values fall into two categories; use 20-29, 30-39, 40-49 instead). If you do not group people in terms of age categories, there will be too many categories to take in. So, for instance, by creating five categories the distribution of ages is easier to comprehend.
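A sketch of such grouping, assuming hypothetical ages and five non-overlapping bands:

```python
from collections import Counter

ages = [19, 22, 27, 27, 34, 35, 41, 48, 53, 67]

def age_band(age):
    """Assign an age to one of five non-overlapping categories."""
    if age <= 20:
        return "20 and under"
    elif age <= 30:
        return "21-30"
    elif age <= 40:
        return "31-40"
    elif age <= 50:
        return "41-50"
    return "51 and over"

freq = Counter(age_band(a) for a in ages)

# Frequency table: count and percentage per category.
for band in ["20 and under", "21-30", "31-40", "41-50", "51 and over"]:
    n = freq[band]
    print(f"{band}: {n} ({100 * n / len(ages):.0f}%)")
```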
Diagrams are a common method to display quantitative data. The chief advantage is that they are relatively easy to interpret and understand. When working with nominal or ordinal variables, the bar chart and the pie chart are the easiest methods to use. If you are displaying an interval/ratio variable, a histogram is likely to be employed. With the histogram there is no space between the bars, whereas there is a space between the bars of a bar chart. Histograms are produced for interval/ratio variables, whereas bar charts are produced for nominal and ordinal variables.
Measures of central tendency
In quantitative data analysis, three different forms of average are recognized:
arithmetic mean: this is the average as we understand it in everyday use, that is, we sum all the values in a distribution and then divide by the number of values. The mean should be employed only in relation to interval/ratio variables, though it is not uncommon to see it being used for ordinal variables as well.
median: this is the mid-point in a distribution of values. The median is derived by arraying all the values in a distribution from the smallest to the largest and then finding the middle point. The median can be employed in relation to both interval/ratio and ordinal variables.
mode: this is the value that occurs most frequently in a distribution. The mode can be employed in relation to all types of variables.
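The three averages can be computed directly with Python's standard `statistics` module; the values here are purely illustrative:

```python
import statistics

values = [2, 4, 4, 5, 7, 9, 11]

mean_v = statistics.mean(values)      # sum of values / number of values
median_v = statistics.median(values)  # mid-point of the ordered values
mode_v = statistics.mode(values)      # most frequently occurring value

print(mean_v, median_v, mode_v)
```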
Measures of dispersion
range: is the most obvious way to measure dispersion. This is simply the difference between the maximum and the minimum value in a distribution of values associated with an interval/ratio variable.
standard deviation: is another measure of dispersion. This is essentially the average amount of variation around the mean. The standard deviation is calculated by taking each value’s difference from the mean, squaring these differences, dividing their total by the number of values, and taking the square root of the result.
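Both measures of dispersion, sketched on the same illustrative values (`pstdev` is the population form of the standard deviation):

```python
import statistics

values = [2, 4, 4, 5, 7, 9, 11]

# Range: difference between the maximum and the minimum value.
value_range = max(values) - min(values)

# Standard deviation: square root of the average squared deviation
# from the mean.
sd = statistics.pstdev(values)

print(value_range, round(sd, 2))
```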
Bivariate analysis is concerned with the analysis of two variables at a time in order to uncover whether the two variables are related. Exploring relationships between variables means examining whether the variation in one variable coincides with the variation in another variable. There is a variety of techniques for doing so, and the choice depends mainly on the nature of the two variables being analysed.
Relationships not causality
When analysing relationships between variables, it is important to bear in mind that it is precisely relationships, and not causality, that are uncovered. This means that you cannot infer that one variable causes another.
However, sometimes we may feel confident that we can infer a causal direction when a relationship between two variables is discerned. Yet, even when such inferences are based on good reasoning, they can only be inferences, and there is the possibility that the real pattern of causal direction is the opposite of that which is anticipated.
A contingency table is like a frequency table, but it allows two variables to be simultaneously analysed so that relationships between the two variables can be examined. It is normal for contingency tables to include percentages, since these make the tables easier to interpret. The percentages are usually column percentages, that is, the number in each cell calculated as a percentage of the total number in that column. Users of contingency tables usually present the presumed independent variable as the column variable and the dependent variable as the row variable.
Contingency tables are the most flexible of all methods of analysing relationships, in that they can be employed in relation to any pair of variables; however, they are not the most efficient method for some pairs, which is why the method is not recommended in all cases.
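A sketch of how column percentages for a contingency table are derived; the counts and category names are hypothetical:

```python
# Rows = dependent variable categories, columns = independent variable
# categories, cells = observed counts.
table = {
    "yes": {"male": 30, "female": 50},
    "no":  {"male": 70, "female": 50},
}

# Total of each column.
col_totals = {}
for row in table.values():
    for col, n in row.items():
        col_totals[col] = col_totals.get(col, 0) + n

# Each cell as a percentage of its column total.
col_pcts = {
    answer: {col: 100 * n / col_totals[col] for col, n in row.items()}
    for answer, row in table.items()
}

print(col_pcts)
```

Reading down a column then shows how the distribution of answers differs between the categories of the independent variable.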
Pearson’s r is a method for examining relationships between interval/ratio variables. The chief features of this method are:
the coefficient lies between 0 (no relationship between the two variables) and ±1 (a perfect relationship); its absolute value indicates the strength of the relationship
the closer the coefficient is to 1, the stronger the relationship; the closer it is to zero, the weaker the relationship
the coefficient will be either positive or negative, and this indicates the direction of a relationship
Scatter diagrams present the relationship between pairs of interval/ratio variables. A perfect positive relationship, with a Pearson’s r of 1, means that as one variable increases the other increases by the same amount, and that no other variable is related to either of them. A correlation below 1 would mean that variable 1 is related to at least one other variable as well as to variable 2.
In a perfect negative relationship, with a Pearson’s r of -1, as one variable increases the other variable decreases, and no other variable is related to either of them.
If there were no, or virtually no, correlation between the variables, there would be no apparent pattern to the markers in the scatter diagram. For instance, when the correlation is virtually zero at -0.041, the variation in each variable is associated with variables other than the ones present in the analysis.
If a relationship is strong, a clear patterning to the variables will be evident. If there is clearly a positive relationship and the Pearson’s r value is 0.88, it means that the variation in the two variables is closely connected, but that there is some influence of other variables on the extent to which they vary.
For Pearson’s r to be used, the relationship between the two variables has to be linear, meaning that when plotted on a scatter diagram the values of the two variables approximate to a straight line and do not curve. Therefore, plotting a scatter diagram before using Pearson’s r is important in order to determine that the nature of the relationship between a pair of variables does not violate the assumptions being made when this method of correlation is employed.
Moreover, if you square Pearson’s r, a further useful statistic is derived, the coefficient of determination, which expresses how much of the variation in one variable is due to the other variable. The coefficient of determination is a useful adjunct to the interpretation of correlation information.
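Pearson's r and the coefficient of determination can be sketched from first principles (the data points are invented):

```python
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Pearson's r: sum of cross-products of deviations from the means,
# divided by the square root of the product of the summed squared
# deviations of each variable.
cross = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
ss_x = sum((a - mean_x) ** 2 for a in x)
ss_y = sum((b - mean_y) ** 2 for b in y)
r = cross / math.sqrt(ss_x * ss_y)

# Coefficient of determination: r squared, the proportion of the
# variation in one variable accounted for by the other.
r_squared = r ** 2

print(round(r, 3), round(r_squared, 3))
```

Here r of about 0.775 gives r² of 0.6, i.e. 60 per cent of the variation in one variable is associated with variation in the other.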
Spearman’s rho, which is often represented by the Greek letter ρ, is designed for use with pairs of ordinal variables. It is also used when one variable is ordinal and the other is interval/ratio. It is exactly like Pearson’s r in terms of the outcome of calculating it, in that the computed value of rho will be either positive or negative and will vary between 0 and 1. In a situation where we want to calculate the correlation between an ordinal and an interval/ratio variable, we cannot use Pearson’s r, because both variables must be at the interval/ratio level of measurement; instead we can use Spearman’s rho.
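A sketch of Spearman's rho using the rank-difference formula (valid only when there are no tied values; the data are invented):

```python
# Convert each variable to ranks, then apply
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
x = [35, 23, 47, 17, 10, 43, 9, 6, 28]
y = [30, 33, 45, 23, 8, 49, 12, 4, 31]

def to_ranks(values):
    """Rank from smallest (1) to largest; assumes no ties."""
    ordered = sorted(values)
    return [ordered.index(v) + 1 for v in values]

rx, ry = to_ranks(x), to_ranks(y)
n = len(x)
d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))

print(round(rho, 3))
```

Because rho works on ranks rather than raw values, it only requires the variables to be orderable, which is why it suits ordinal data.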
Phi and Cramér’s V
Phi and Cramér’s V are two closely related statistics.
The phi coefficient is used for the analysis of the relationship between two dichotomous variables. It results in a computed statistic that varies between 0 and +1 or -1 (that is, it may be positive or negative).
Cramér’s V can take on only a positive value, so that it can give an indication only of the strength of the relationship between two variables, not of the direction. It is usually reported along with a contingency table and a chi-square test, and it is not normally presented on its own.
A multivariate analysis entails the simultaneous analysis of three or more variables. There are three main contexts within which multivariate analysis might be employed:
could the relationship be spurious? In order to establish that a relationship between two variables is real, there must be evidence of the relationship and it must be shown to be non-spurious. A spurious relationship exists when there appears to be a relationship between two variables, but the relationship is not real; it is being produced because each variable is itself related to a third variable.
could there be an intervening variable? An intervening variable allows us to answer questions about the bivariate relationship between variables. It suggests that the relationship between the two variables is not a direct one; it is occurring through another variable, which appears necessary for the relationship.
could a third variable moderate the relationship? The relationship between the two variables is moderated by a third one.
A difficulty when working with data deriving from a sample is whether the findings will be generalizable to the population from which the sample was drawn. There is always the possibility of a sampling error, even when probability sampling procedures have been followed. In such a case, the sample will be unrepresentative of the wider population and therefore any findings will be invalid.
An indication is needed of how confident you can be in your findings; this is where statistical significance and the various tests of statistical significance come in. There is a need to know how confident we can be that our findings can be generalized to the population from which the sample was selected. At this point, a technique is needed that allows us to establish how confident we can be that the finding exists in the population and what risk we are taking in inferring that it does. What has to be noted here is that tests of statistical significance can be employed only in relation to samples that have been drawn using probability sampling.
There are some tests that are available for determining the degree of confidence we can have in our findings when we explore relationships between variables. All the tests have a common structure.
set up a null hypothesis: this stipulates that two variables are not related in the population
establish the level of statistical significance that you find acceptable: this is essentially a measure of the degree of risk that you might reject the null hypothesis (implying that there is a relationship in the population) when you should support it (implying that there is no relationship in the population). Levels of statistical significance are expressed as probability levels, that is, the probability of rejecting the null hypothesis when you should be confirming it. The convention among most social researchers is that the maximum level of statistical significance that is acceptable is p < 0.05, which implies that there are fewer than five chances in 100 that you could have a sample that shows a relationship when there is not one in the population.
determine the statistical significance of your findings, meaning use a statistical test like chi-square
if your findings are statistically significant at the 0.05 level, so that the risk of getting a relationship as strong as the one you have found, when there is no relationship in the population, is no higher than 5 in 100, you would reject the null hypothesis. Therefore, you are implying that the results are unlikely to have occurred by chance.
The chi-square test
The chi-square test is applied to contingency tables and allows us to establish how confident we can be that there is a relationship between the two variables in the population. The test works by calculating for each cell in the table an expected frequency or value, that is, one that would occur on the basis of chance alone. The chi-square value can be meaningful only when it is interpreted in relation to its associated level of statistical significance.
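A sketch of the expected-frequency calculation behind chi-square for a 2x2 contingency table (the observed counts are hypothetical; in practice software also reports the associated significance level):

```python
# Observed counts: rows = dependent variable, columns = independent.
observed = [
    [30, 50],
    [70, 50],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected frequency of a cell = row total * column total / grand total.
# Chi-square sums (observed - expected)^2 / expected over all cells.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(round(chi_square, 2))
```

The larger the gap between observed and expected counts, the larger the chi-square value, and hence the less plausible it is that the pattern arose by chance alone.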
Correlation and statistical significance
When we examine the statistical significance of a computed correlation coefficient, which is based on a randomly selected sample, it provides us with information about the likelihood that the coefficient will be found in the population from which the sample was taken.
Whether a correlation is statistically significant or not will be affected by two factors:
the size of the computed coefficient
the size of the sample: the larger the sample, the more likely it is that a computed correlation coefficient will be found to be statistically significant.
Since whether a correlation coefficient is statistically significant depends on the sample size, you should always examine both the correlation coefficient and the significance level, and never one at the expense of the other. This applies to both Pearson’s r and Spearman’s rho.
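As a hedged sketch (all data invented), both coefficients and their significance levels can be obtained in Python with scipy.stats, and the effect of sample size on significance can be seen directly:

```python
# Invented data: a fairly strong positive association.
from scipy.stats import pearsonr, spearmanr

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 4, 5, 4, 5, 7, 8, 6, 9, 10]

r, p_r = pearsonr(x, y)        # Pearson's r and its p-value
rho, p_rho = spearmanr(x, y)   # Spearman's rho and its p-value

print(round(r, 2), p_r < 0.05)     # 0.93 True
print(round(rho, 2), p_rho < 0.05) # 0.93 True

# With a much smaller sample, a sizeable coefficient can still fail
# to reach significance, which is why neither number should be read
# in isolation.
r_small, p_small = pearsonr(x[:4], y[:4])
print(round(r_small, 2), p_small < 0.05)   # 0.72 False
```

This illustrates the point above: the four-case subsample yields a coefficient of about 0.72, yet it is not statistically significant at the 0.05 level.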
Comparing means and statistical significance
The procedure of testing statistical significance by comparing means entails treating the total amount of variation in the dependent variable as made up of two types:
variation within the subgroups that make up the independent variable (which is called error variance) and
variation between them (which is called explained variance).
A test of statistical significance for the comparison of means entails relating the two types of variance to form what is known as the F statistic. This statistic expresses the amount of explained variance in relation to the amount of error variance.
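The between/within decomposition above can be sketched in Python (the scores are invented; this is a minimal hand computation of the F statistic, not a full ANOVA routine):

```python
# A sketch of the F statistic: explained (between-group) variance
# relative to error (within-group) variance.

def f_statistic(groups):
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in groups]

    # Explained variance: variation between the subgroup means.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Error variance: variation within each subgroup.
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)

    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Three invented subgroups of the independent variable.
groups = [[2, 3, 4], [6, 7, 8], [10, 11, 12]]
print(f_statistic(groups))   # 48.0: explained variance dwarfs error
```

A large F, as here, means the variation between the subgroups far exceeds the variation within them, which is the evidence for a difference in means.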
Usually, you may use computer software to analyse your data. This approach is more common as it is closer to the way in which quantitative data analysis is carried out in real research nowadays and it also helps to equip you with a useful transferable skill.
SPSS for Windows is possibly the most widely used computer software for the analysis of quantitative data for social scientists. SPSS can be used to implement the techniques learned in the previous section (above).
However, learning new software requires perseverance and at times the results obtained may not seem to be worth the learning process. But it is worth it: it would take you far longer to perform the calculations by hand on a sample of around 100 than to learn the software.
It is better to become familiar with SPSS before you begin designing your research instruments, so you are aware of difficulties you might have in presenting your data in SPSS at an early stage.
Basic operations in SPSS
the SPSS Data Editor: this is the sphere of SPSS into which data are entered and subsequently edited and defined, and it is made up of two screens: the Data Viewer and the Variable Viewer.
the Data Viewer: this is the spreadsheet into which your data are entered; when you start up SPSS, the Data Viewer will be facing you
the Variable Viewer: this is another spreadsheet, but this one displays information about each of the variables and allows you to change that information.
the Output: when you perform an analysis or produce a diagram, the results are deposited here
the Output Viewer: superimposes itself over the Data Editor after an analysis has been performed or a chart generated.
a Variable Name: this is the name that you give to a variable and the name must be no more than eight characters
a Variable Label: this is the label that you can give to a variable but which is not restricted to eight characters
a Value Label: this is a label that you can attach to a code that has been used when entering data for all types of variables other than interval/ ratio variables.
Missing Values: when you do not have data for a particular case when entering data for a variable, you must specify how you are denoting missing values for that variable. Missing Values are generated from the Variable Viewer.
Recode: a procedure that allows codes or numbers to be changed and it is especially helpful when you need to combine groups of people.
Compute: a procedure that allows you to combine two or more variables to form a new variable
Analyze: the menu from which you select the method of analysis you want
Graphs: the menu from which you select the chart you want
Chart Editor: from the Editor you can make various changes and enhancements to your graph.
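Outside SPSS, Recode and Compute amount to simple data transformations. A hypothetical sketch in Python (the codes and cut-offs are invented; this is not SPSS syntax):

```python
# What Recode and Compute do to variables, in plain Python.

ages = [19, 34, 52, 67, 23]   # invented ages for five cases

# Recode: change codes or values, e.g. combine ages into age bands
# so that groups of people can be merged.
def recode_age(age):
    if age < 30:
        return 1      # code 1 = "young" (invented band)
    elif age < 60:
        return 2      # code 2 = "middle"
    return 3          # code 3 = "older"

age_band = [recode_age(a) for a in ages]
print(age_band)       # [1, 2, 2, 3, 1]

# Compute: combine two or more variables to form a new variable,
# e.g. a total score from two questionnaire items.
item1 = [3, 4, 2, 5, 1]
item2 = [2, 2, 4, 3, 5]
total = [a + b for a, b in zip(item1, item2)]
print(total)          # [5, 6, 6, 8, 6]
```

In SPSS these same transformations are performed through the Recode and Compute dialog boxes rather than in code.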
Qualitative research differs from quantitative research because it is concerned with words rather than numbers; it is a research strategy that usually emphasizes words rather than quantification in the collection and analysis of data. In addition, there are three further features of qualitative research which are worth noting:
an inductive view of the relationship between theory and research, whereby the former is generated out of the latter
an epistemological position described as interpretivist, meaning that the stress is on understanding the social world through an examination of the interpretation of that world by its participants, in contrast to the natural scientific model adopted in quantitative research
an ontological position described as constructionist, which implies that social properties are outcomes of the interactions between individuals, rather than phenomena “out there” and separate from those involved in their construction.
Four traditions of qualitative research
naturalism: seeks to understand social reality in its own terms, as it really is, it provides rich descriptions of people and interaction in natural settings
ethnomethodology: seeks to understand how social order is created through talk and interaction, it has a naturalistic orientation
emotionalism: exhibits a concern with subjectivity and gaining access to inside experience, it is concerned with the inner reality of humans
postmodernism: emphasizes “method talk”, it is sensitive to the different ways social reality can be constructed
Qualitative research methods
ethnography/ participant observation: in many respects, they both refer to similar approaches to data collection in which the researcher is immersed in a social setting for some time to observe and listen with a view to gaining an appreciation of the culture of a social group
qualitative interviewing: it refers to a wide range of interviewing styles, as well as to the fact that qualitative researchers employing ethnography or participant observation typically engage in qualitative interviewing
focus groups: a form of interview using open questions to ask interviewees about a specific situation or event, in which the interviewees discuss the issue in groups
language- based approaches to the collection of qualitative data, such as discourse and conversation analysis
the collection and qualitative analysis of texts and documents
The main steps in qualitative research
general research questions
selection of relevant site(s) and subjects
collection of relevant data
interpretation of data
conceptual and theoretical work
5a. tighter specification of the research question(s)
5b. collection of further data
writing up findings/ conclusions
Theory and research and the research questions in qualitative research
Most qualitative researchers when writing about their craft emphasize a preference for treating theory as something that emerges out of the collection and analysis of data. On the other hand, there are qualitative researchers who argue that qualitative data can and should have an important role in relation to the testing of theories as well. Much qualitative research entails the testing of theories in the course of the research process. In the end, pre-specified theories can be and sometimes are tested with qualitative data, but the generation of theory tends to be the preferred approach.
Research questions in qualitative research are stated with varying degrees of explicitness. Sometimes the research question is embedded within a general statement of the orientation of the article, while other researchers opt for a more explicit treatment of research questions. Thus, a factor that may affect the degree of explicitness with which research questions are stated is the outlet in which the research is published.
Concepts in qualitative research
Definitive concepts are excessively concerned with what is common to the phenomena that the concept is supposed to subsume rather than with variety. Instead, the concepts that social researchers use are sensitizing concepts in that they provide a general sense of reference and guidance in approaching empirical instances. Thus, concepts should be employed in such a way that they give a very general sense of what to look for and act as a means for uncovering the variety of forms that the phenomena to which they refer can assume.
Sampling in qualitative research
Typically, qualitative researchers engage in purposive sampling, that is, a type of sampling that is essentially to do with the selection of units (which may be people, organizations, documents, departments, etc.) with direct reference to the research questions being asked. The term “purposive sampling” covers a wide variety of approaches. What links them is that the sampling is conducted with reference to the goals of the research, so that units of analysis are selected in terms of criteria that will allow the research questions to be answered.
Also, probability sampling may be used in qualitative research, though it is more likely to occur in interview-based than in ethnographic qualitative research. Where it is important for the qualitative researcher to be able to generalize to a wider population, probability sampling is likely to be a more compelling sampling method. Likewise, if the research questions do not suggest that particular categories of people should be sampled, there may be a case for sampling randomly; in such circumstances probability sampling warrants consideration.
Reliability and validity in qualitative research
external reliability: the degree to which a study can be replicated; a qualitative researcher replicating ethnographic research needs to adopt a similar social role to that adopted by the original researcher, otherwise what a researcher conducting a replication sees and hears will not be comparable to the original research
internal reliability: when there is more than one observer, members of the research team agree about what they see and hear; this is a similar notion to inter-observer consistency
internal validity: whether there is a good match between researchers’ observations and the theoretical ideas they develop, it is a strength of qualitative research because the prolonged participation in the social life of a group over a long period of time allows the researcher to ensure a high level of congruence between concepts and observations
external validity: refers to the degree to which findings can be generalized across social settings, it represents a problem for qualitative researchers because of their tendency to employ case studies and small samples
Alternative criteria for evaluating qualitative research: trustworthiness and authenticity
credibility, which parallels internal validity. Establishing credibility entails both carrying out the research according to the canons of good practice and submitting the research findings to the members of the social world who were studied for confirmation that the investigator has correctly understood that social world (respondent validation). Another recommended technique is triangulation.
transferability, which parallels external validity. Qualitative research typically entails the intensive study of a small group, and thus, qualitative findings tend to be oriented to the contextual uniqueness and significance of the aspect of the social world being studied.
dependability, which parallels reliability. This entails ensuring that complete records are kept of all phases of the research process in an accessible manner. It also includes assessing the degree to which theoretical inferences can be justified.
confirmability, which parallels objectivity. It is concerned with ensuring that, while recognizing that complete objectivity is impossible in social research, the researcher can be shown to have acted in good faith; it should be apparent that he/she has not overtly allowed personal values or theoretical inclinations manifestly to sway the conduct of the research and the findings deriving from it.
Respondent validation is also called “member validation” and it is a process whereby the researcher provides the people on whom he/she has conducted research with an account of his/her findings. The aim is to seek corroboration or otherwise of the account that the researcher has arrived at. It is common among qualitative researchers because they usually want to ensure that there is a good correspondence between their findings and the perspectives and experiences of their research participants. There are several different forms of respondent validation:
the researcher provides each research participant with an account of what he/she said to the researcher in interviews and conversations, or of what the researcher observed by watching that person in the course of an observational study
the researcher feeds back to a group of people or an organization his/her impressions and findings in relation to that group or organization
the researcher feeds back to a group of people or an organization some of his/her writings that are based on a study of that group or organization
The goal is to seek confirmation that the researcher’s findings and impressions are congruent with the views of those on whom the research was conducted and to seek out areas in which there is a lack of correspondence and the reasons for it.
However, there are some difficulties with respondent validation:
respondent validation may occasion defensive reactions from the participants or censorship
it may result in participants developing relationships with the researcher, so that there may be a reluctance to be critical
it is questionable whether research participants can validate a researcher’s analysis, since this entails inferences being made for an audience of social science peers
Triangulation entails using more than one method or source of data in the study of social phenomena. Triangulation can operate within and across research strategies.
Denzin (1970) has used the term to refer to an approach that uses multiple observers, theoretical perspectives, sources of data and methodologies, with the emphasis on methods of investigation and sources of data.
The authenticity criteria are thought-provoking but have not been influential because their emphasis on the wider impact of research is controversial. They have certain points of affinity with action research. The emphasis on practical outcomes differentiates them from most social research. These are the five authenticity criteria:
fairness: does the research fairly represent different viewpoints among members of the social setting?
ontological authenticity: does the research help members to arrive at a better understanding of their social milieu?
educative authenticity: does the research help members to appreciate better the perspectives of other members of their social setting?
catalytic authenticity: has the research acted as an impetus to members to engage in action to change their circumstances?
tactical authenticity: has the research empowered members to take the steps necessary for engaging in action?
Action research can be broadly defined as an approach in which the action researcher and members of a social setting collaborate in the diagnosis of a problem and in the development of a solution based on the diagnosis. It can take a variety of forms. The collection of data is likely to be involved in the formulation of the diagnosis of a problem and in the emergence of a solution. The investigator becomes part of the field of study. Action research can involve the collection of both qualitative and quantitative data. It is more common in some social science areas, such as business and management research and social policy.
It has been dismissed for lacking rigour and for being too partisan in approach, while it has been advocated for its commitment to involving people in the diagnosis of and solutions to problems rather than imposing on them solutions to pre-defined problems.
Action research should not be confused with evaluation research, which denotes the study of the impact of an intervention, such as a new social policy or an innovation in organizations.
Quality criteria for qualitative research
A contested area in recent years is the issue of research quality in relation to qualitative investigations, with several criteria being proposed as possible alternatives to reliability and validity. For instance, Yardley (2000) has proposed the four following criteria:
sensitivity to context: sensitivity not just to the context of the social setting in which the research is conducted but also to potentially relevant theoretical positions and ethical issues
commitment and rigour: substantial engagement with the subject matter, having the necessary skills, and thorough data collection and analysis
transparency and coherence: research methods clearly specified, a clearly articulated argument, and a reflexive stance
impact and importance: importance of having an impact on and significance for theory, the community on which the research is conducted and for practitioners
Also, Spencer et al. (2003) produced a comprehensive list of criteria for qualitative research. The criteria they suggest should be used when appraising the quality of a qualitative research study are the following:
How credible are the findings?
Has knowledge/ understanding been extended by the research?
How well does the evaluation address its original aims and purposes?
Scope for drawing wider inferences - how well is this explained?
How clear is the basis of the evaluative appraisal?
How defensible is the research design?
How well defended is the sample design/target selection of cases/documents?
Sample composition/case inclusion- how well is the eventual coverage described?
How well was the data collection carried out?
How well has the approach to, and formulation of, the analysis been conveyed?
Contexts of data sources-how well are they retained and portrayed?
How well has diversity of perspective and content been explored?
How well has detail, depth and complexity (richness?) of the data been conveyed?
How clear are the links between data, interpretation and conclusions-meaning how well can the route to any conclusions be seen?
How clear and coherent is the reporting?
How clear are the assumptions/ theoretical perspectives/ values that have shaped the form and output of the evaluation?
What evidence is there of attention to ethical issues?
How adequately has the research process been documented?
The main preoccupations of qualitative researchers
Preoccupations of qualitative research, as well as of quantitative research, reflect epistemologically grounded beliefs about what constitutes acceptable knowledge. There are five distinct preoccupations among qualitative researchers:
seeing through the eyes of the people being studied: the subject matter of the social sciences, that is people and their social world, does differ from the subject matter of the natural sciences; the objects of the natural sciences, that is atoms, molecules, chemicals, metals, and so on, cannot attribute meaning to events and their environment, but people do. Therefore, qualitative researchers have suggested that a methodology is required for studying people that reflects these differences between people and the objects of the natural sciences, and so they express a commitment to viewing events and the social world through the eyes of the people that they study. The social world must be interpreted from the perspective of the people being studied, rather than as though those subjects were incapable of their own reflections on the social world. This stance is very much in tune with interpretivism and demonstrates the epistemological links with phenomenology, symbolic interactionism, and Weber’s notion of Verstehen. However, there are practical difficulties, such as the risk of “going native” and losing sight of what you are studying, as well as the possibility that the researcher will be able to see through the eyes of only some of the people who form part of a social scene but not others.
description and the emphasis on context: qualitative researchers are more inclined to provide a great deal of descriptive detail when reporting their findings, but they are also concerned with explanation, and the extent to which qualitative researchers ask “why” questions is frequently understated. Many qualitative researchers provide a detailed account of what goes on in the setting being investigated; these details provide an account of the context within which people’s behaviour takes place. They typically emphasize the importance of the contextual understanding of social behaviour, meaning that behaviour, values, or whatever must be understood in context. Multi-case study (conducting research in more than one setting) can be helpful in identifying the significance of context and the ways in which it influences behaviour and ways of thinking.
emphasis on process: qualitative research tends to view social life in terms of processes. There is often a concern to show how events and patterns unfold over time, and consequently qualitative evidence conveys a strong sense of change and flux. Ethnographic methods emphasize process, and ethnographers are typically immersed in a social setting for a long time, so that they observe the ways in which events develop over time or the ways in which the different elements of a social system interconnect. This may also be achieved through semi-structured and unstructured interviewing, by asking participants to reflect on the processes leading up to or following on from an event. Therefore, the emphasis on process in qualitative research can be seen in the use of quite different approaches to data collection.
flexibility and limited structure: if a structured method of data collection is employed, certain decisions must have been made about what the researcher expects to find and about the nature of the social reality that would be encountered; therefore, the researcher is limited in the degree to which he/she can genuinely adopt the world view of the people being studied. As a result, most qualitative researchers prefer a research orientation that entails as little prior contamination of the social world as possible. Keeping structure to a minimum enhances the opportunity of genuinely revealing the perspectives of the people you are studying. Showing their preference for an unstructured approach to the collection of data, qualitative researchers adopt methods of research that do not require the investigator to develop very specific research questions and devise instruments specifically for those questions to be answered. This allows researchers to submerge themselves in a social setting with a fairly general research focus in mind and gradually to formulate a narrower emphasis by making as many observations of that setting as possible. After that, they can formulate more specific questions out of their data. This offers the prospect of flexibility; the researcher can change direction in the course of his/her investigation more easily than in quantitative research.
concepts and theory grounded in data: for qualitative researchers, concepts and theories are usually inductively arrived at from the data that are collected.
The critique of qualitative research
Critique to qualitative research mainly comes from quantitative researchers. Some of the most common points of critique are the following:
qualitative research is too subjective: qualitative research is criticized for being too impressionistic and subjective; qualitative findings rely too much on the researcher’s often unsystematic views about what is significant and important, and upon the close personal relationships that the researcher frequently strikes up with the people studied.
difficult to replicate: because it is unstructured and often reliant upon the qualitative researcher’s ingenuity, it is almost impossible to conduct a true replication, since there are hardly any standard procedures to be followed. The investigator is the main instrument of data collection, so what is observed and heard and what the researcher decides to concentrate upon is very much a product of his/her predilections.
problems of generalization: the scope of qualitative findings is restricted. In cases of participant observation or unstructured interviews conducted with a small number of individuals in a certain organization or locality, it is impossible to know how the findings can be generalized to other settings. The findings of qualitative research are, instead, to be generalized to theory rather than to other populations. It is the quality of the theoretical inferences made out of qualitative data that is crucial to the assessment of generalization.
lack of transparency: sometimes it is difficult to establish from qualitative research what the researcher actually did and how he/she arrived at the study’s conclusions. Qualitative research reports are sometimes unclear about such matters as how people were chosen for observation or interview. Also, the process of qualitative data analysis is frequently unclear. These areas of a lack of transparency are increasingly being addressed by qualitative researchers.
Contrasts between quantitative and qualitative research
numbers vs. words: quantitative researchers are often portrayed as preoccupied with applying measurement procedures to social life, while qualitative researchers are seen as using words in the presentation of analyses
point of view of researcher vs. point of view of participants: in quantitative research, the investigator is in the driving seat. The set of concerns that he/she brings to an investigation structures the investigation. In qualitative research, the perspective of those being studied provides the point of orientation
researcher is distant vs. researcher is close: in quantitative research, researchers are largely uninvolved with their subjects and in some cases, as in research based on postal questionnaires or on hired interviewers, may have no contact with them at all. On the other hand, the qualitative researcher seeks close involvement with the people being investigated so that he/she can genuinely understand the world through their eyes.
theory and concepts tested in research vs. theory and concepts emergent from data: quantitative researchers typically bring a set of concepts to bear on the research instruments being employed, so that theoretical work precedes the collection of data, whereas in qualitative research concepts and theoretical elaboration emerge out of data collection.
static vs. process: quantitative research is frequently depicted as presenting a static image of social reality with its emphasis on relationships between variables. Qualitative research is often depicted as attuned to the unfolding of events over time and to the interconnections between the actions of participants of social settings.
structured vs. unstructured: quantitative research is typically highly structured, so that the investigator is able to examine the precise concepts and issues that are the focus of the study. In qualitative research the approach is invariably unstructured, so that the possibility of getting at actors’ meanings and of concepts emerging out of data collection is enhanced.
generalization vs. contextual understanding: Quantitative researchers want their findings to be generalizable to the relevant population. Qualitative researcher seeks an understanding of behaviour, values, and so on in terms of the context in which the research is conducted.
hard, reliable data vs. rich, deep data: quantitative data are often depicted as “hard” in the sense of being robust and unambiguous, owing to the precision offered by measurement. Qualitative researchers claim that their contextual approach and their prolonged involvement in a setting engender rich data.
macro vs. micro: quantitative researchers are often depicted as involved in uncovering large-scale social trends and connections between variables, whereas qualitative researchers are seen as being concerned with small-scale aspects of social reality, such as interaction.
behaviour vs. meaning: the quantitative researcher is concerned with people’s behaviour, the qualitative researcher with the meaning of action
artificial settings vs. natural settings: Quantitative researchers conduct research in a contrived context, while qualitative researchers investigate people in natural environments.
Similarities between quantitative and qualitative research
both are concerned with data and data reduction: both quantitative and qualitative researchers collect large amounts of data, which represent a problem for researchers because they have to distil the data, therefore they have to reduce the amount of data so that they make sense. In quantitative research, the process of data reduction takes the form of statistical analysis while in qualitative research, researchers develop concepts out of their often rich data.
both are concerned with answering research questions: although the nature of research questions asked in quantitative and qualitative research are typically different, they are both fundamentally concerned with answering questions about the nature of social reality.
both are concerned with relating data analysis to the research literature: both quantitative and qualitative researchers are typically concerned to relate their findings to points thrown up by the literature relating to the topics on which they work; the researcher’s findings take on significance in large part when they are related to the literature.
both are concerned with variation: both quantitative and qualitative researchers seek to uncover and then to represent the variation that they uncover, though in different ways. Both are keen to explore how people differ and to explore some of the factors connected to that variation, although the form that the variation takes differs.
both treat frequency as a springboard for analysis: in quantitative research, frequency is a core outcome of collecting data. In qualitative research, issues of frequency concern the common occurrence of certain themes.
both seek to ensure that deliberate distortion does not occur: researchers seek to ensure that “wilful bias” or “consciously motivated misrepresentation” does not occur.
both argue for the importance of transparency: both quantitative and qualitative researchers seek to be clear about their research procedures and how their findings were arrived at. This allows others to judge the quality and importance of their work.
both must address the question of error: in quantitative research, error must be reduced as far as possible so that variation that is uncovered is real variation and not product of problems with how questions are asked or how research instruments are administered. In qualitative research, the investigator seeks to reduce error by ensuring that there is a good fit between his/her concepts and the evidence that has been amassed.
research methods should be appropriate to the research questions: both groups of researchers seek to ensure that when they specify research questions, they select research methods and approaches to the analysis of data that are appropriate to those questions.
Feminism and qualitative research
The notion that there is an affinity between feminism and qualitative research has at least two main components to it:
a view that quantitative research is inherently incompatible with feminism
a view that qualitative research provides greater opportunity for a feminist sensitivity to come to the fore
Quantitative research is frequently viewed as incompatible with feminism for the following reasons:
quantitative research suppresses the voices of women either by ignoring them or by submerging them in a torrent of facts and statistics
the criteria of valid knowledge associated with quantitative research are ones that turn women, when they are the focus of research, into objects
the emphasis on controlling variables further exacerbates this last problem, and the very idea of control is viewed as a masculine approach
the use of predetermined categories in quantitative research results in an emphasis on what is already known and consequently in the “silencing of women’s own voice”
the criteria of valid knowledge associated with quantitative research also mean that women are to be researched in a value-neutral way, when in fact the goals of feminist research should be to conduct research specifically for women
it is sometimes suggested that the quest for universal laws is inconsistent with feminism’s emphasis on the situated nature of social reality, which is seen as embedded in the various social identities that are unique to individuals
By contrast, qualitative research has been viewed by many feminists as either compatible with feminism or as more capable of being adapted to its tenets. In contrast to quantitative research, qualitative research allows:
women’s voices to be heard
exploitation to be reduced by giving as well as receiving in the course of fieldwork
women not to be treated as objects to be controlled by the researcher’s technical procedures
the emancipatory goals of feminism to be realized
However, there has been a softening of attitude among some feminist writers towards quantitative research in recent years. Examples of this softening are as follows:
there is a recognition that many of the worst excesses of discrimination against women might not have come to light so clearly were it not for the collection and analysis of statistics revealing discrimination
quantitative research can be enlisted as an aid to implementing social change for feminists
it is difficult to see why feminist research that combines quantitative and qualitative research would be incompatible with the feminist cause
there has also been a recognition of the fact that qualitative research is not ipso facto feminist in orientation and some researchers have preferred to write about feminist research practice rather than about feminist methods
Both the ethnographer and the participant observer immerse themselves in a group for an extended period of time, observing behaviour, listening to what is said in conversations both between others and with the fieldworker, and asking questions. “Ethnography” is often preferred as a term because “participant observation” seems to imply just observation, though in practice participant observers do more than simply observe. Typically, both the ethnographer and the participant observer will also gather data through interviews and the collection of documents. The emphasis on observation in the term “participant observation” has therefore meant that an apparently more inclusive term seems preferable, even though it is generally recognized that the method entails a wide range of methods of data collection and sources. Ethnography is also sometimes taken to refer to a study in which participant observation is the prevalent research method but that also has a specific focus on the culture of the group in which the ethnographer is immersed.
Yet ethnography has an additional meaning, in that it simultaneously refers both to a method of research of the kind outlined above and to the written product of that research: ethnography frequently denotes both a research process and the written outcome of that research.
Ethnography refers to research in which the researcher:
is immersed in a social setting for an extended period of time
makes regular observations of the behaviour of members of that setting
listens to and engages in conversations
interviews informants on issues that are not directly amenable to observation or that the ethnographer is unclear about
collects documents about the group
develops an understanding of the culture of the group and people’s behaviour within the context of that culture
and writes up a detailed account of that setting
Thus, ethnography is taken here to include participant observation and also to encapsulate the notion of ethnography as the written product of ethnographic research.
One of the key and most difficult issues in ethnography is gaining access to a social setting that is relevant to the research problem in which you are interested. The approach to gaining access has several dimensions, one of which is whether the setting is a relatively open one or a relatively closed one.
Overt versus covert ethnography
One way around the access problem is to assume a covert role, that is, not to disclose the fact that you are a researcher. This strategy obviates the need to negotiate access to organizations or to explain why you want to intrude into people’s lives and make them objects of study.
Three points should be made about the open/public versus closed and the overt versus covert distinctions:
the open/public versus closed settings distinction is not a hard-and-fast one: gaining access to groups can have a near-formal quality, and organizations sometimes create contexts that have a public character
the overt versus covert distinction is not without problems either: an ethnographer may seek access through an overt route yet not be known as a researcher to everyone in the setting, and some ethnographers need to move between the two roles
entry is more likely to be in an overt role than a covert one; the preference for an overt role has to do with practical and ethical issues
Access to closed settings
In selecting a particular social setting to act as a case study in which to conduct an ethnographic investigation, the researcher may employ several criteria, determined by the general research area in which he/she is interested. The following tactics can help in securing access:
use friends, academics and contact to help you gain access, provided the organization is relevant to your research question
try to get support from someone within the organization, who will act as your champion (“sponsor”)
usually you will need to get access through top management/senior executives
offer something in return, for instance a report; this helps to create a sense of trustworthiness
provide a clear explanation of your aims and methods and be prepared to deal with concerns
be prepared to negotiate
be reasonably honest about the amount of people’s time you are likely to take up
Access to open/public settings
There are two main ways to gain access to groups, via gatekeepers and via acquaintances who then act as sponsors. Also, hanging around is another strategy, which typically entails either loitering in an area until you are noticed or gradually becoming incorporated into or asking to join a group.
Access does not finish when you enter a group, because you will still have to gain access to people. Securing access is in many ways an ongoing activity, and it is likely to prove a problem in closed contexts like organizations:
people will be suspicious of you, perhaps seeing you as an instrument of top management
people will worry that what they say or do may get back to bosses or colleagues
if people have these worries, they may go along with your research but in fact sabotage it, engaging in deceptions, misinformation and not allowing access to “back regions”
There are three things you can do to smooth the path of ongoing access:
use your past work and experience, your knowledge of the organization, and your understanding of its problems
be non-judgemental when things are said to you about informal activities or about the organization, and make sure information given to you does not get back to others, whether bosses or peers
you may need a role, especially if the research involves a lot of participant observation; the role will be part of your position in the organization, so be consistent and not ambiguous
Similar considerations apply to research in public settings:
make sure you have thought about ways in which people’s suspicion can be allayed
be prepared for tests of either competence or credibility
be prepared for changes in circumstances
Sponsors or gatekeepers who smooth access for the ethnographer to the organization may become key informants. They often develop an appreciation of the research and direct the ethnographer to situations, events, or people likely to be helpful to the progress of the investigation. Key informants can thus be of great help to the ethnographer and frequently provide support that eases the stress of fieldwork. However, the risk is that the ethnographer may develop an undue reliance on the key informant and, rather than seeing the social setting directly, come to see social reality through the eyes of the key informant. The ethnographer will also encounter other people who act as informants; their accounts may be solicited or unsolicited, and some researchers prefer the latter because of its greater spontaneity and naturalism.
Roles of ethnographers
The role the ethnographer adopts in relation to the social setting and its members is also related to the issue of ongoing access. There are several schemes describing the various roles that can be adopted by ethnographers. According to Gold (1958), the four roles are:
complete participant: the complete participant is a fully functioning member of the social setting and his/her true identity is not known to members
participant-as-observer: the role is the same as the complete participant one, but members of the social setting are aware of the researcher’s status as a researcher, the researcher is engaged in regular interaction with people and participates in their daily life
observer-as-participant: the researcher, in this role, is an interviewer. There is some observation but very little of it involves any participation
complete observer: in this role, the researcher does not interact with people, people do not have to take the researcher into account
According to Gans (1968), who has devised a classification of participant observer roles, these roles coexist in any project. The three roles he outlines will be employed at different times in the course of ethnographic research and for different purposes. The three roles are:
total participant: the ethnographer is completely involved in a certain situation and has to resume a researcher stance once the situation has unfolded and then write down notes
researcher-participant: the ethnographer participates in a situation but is only semi-involved, so that he/she can function fully as a researcher in the course of the situation
total researcher: entails observation without involvement in the situation, the researcher does not participate in the flow of events
“Going native” refers to a plight that is supposed sometimes to afflict ethnographers when they lose their sense of being a researcher and become wrapped up in the world view of the people they are studying. Going native is mainly a problem because the ethnographer can lose sight of his/her position as a researcher and therefore find it difficult to develop a social scientific angle on the collection and analysis of data. However, this is not the only risk to do with how participant observers relate to the social situations in which they find themselves.
Active or passive
Another issue about the situation in which the ethnographer participates is the degree to which he/she should be, or can be, an active or a passive participant. Even when the ethnographer is in an observer-as-participant role, there may be contexts in which either participation is unavoidable or a compulsion to join in, in a limited way, may be felt. In many instances the researcher has no choice: ethnographers may feel they have to get involved, because a failure to participate actively might indicate to members of the social setting a lack of commitment and lead to a loss of credibility. This can often lead to dilemmas for researchers when the activities in which they take part are illegal or dangerous.
Ethnographers may be forced to gather information from whatever sources are available to them; often they face opposition, or at least indifference, to their research and are relieved to glean information or views from whoever is prepared to divulge such details. Ethnographers who take a role closer to that of observer-as-participant rely instead on formally asking for the names of others who might be relevant and who might then be contacted.
In either case, the question raised concerns the degree to which the approach can result in a representative sample of informants. Probability sampling is almost never used in ethnographic research: it is rarely feasible, because it can be difficult and often impossible to map the population from which a random sample might be taken, that is, to create a sampling frame.
In qualitative research, purposive sampling considerations often apply to the sampling of the cases in which research will be conducted and then to people within those cases.
Theoretical sampling: in grounded theory, you carry on collecting data until you have achieved theoretical saturation.
The steps in theoretical sampling are:
1. General research question
2. Sample theoretically
3. Collect data
4. Analyse data (concepts, categories)
5. Theoretical saturation
6. Generate hypotheses
The key idea is that you carry on sampling theoretically until a category has been saturated with data: until no new or relevant data seem to be emerging regarding a category, until the category is well developed in terms of its properties and dimensions demonstrating variation, and until the relationships among categories are well established and validated.
Ethnographic sampling is not just about people but also about other things. It may also be important to sample in terms of context: people’s behaviour is influenced by contextual factors, so it is important to ensure that such behaviour is observed in a variety of locations.
Ethnographers have to take notes based on their observations, detailed summaries of events and behaviour and the researcher’s initial reflections on them. There are some general principles:
write down notes, as quickly as possible after seeing or hearing something
write up full notes at the very latest at the end of the day, and include as much detail as possible
many researchers prefer a digital recorder, though there may then be the problem of needing to transcribe a lot of speech
notes must be vivid and clear
you need to take copious notes, so they may be of different types
Types of field notes
mental notes: useful when it is inappropriate to be seen taking notes
jotted notes: very brief notes written down on pieces of paper or in small notebooks to jog one’s memory about events that should be written up later
full field notes: make detailed notes as soon as possible; these will be your main data source. They should be written up at the end of the day at the latest and should record, as promptly as possible, information about events, ideas, impressions and feelings
A feminist ethnography
There can be a distinctively feminist ethnography that both draws on the distinctive strengths of ethnography and is informed by feminist tenets. Feminist ethnography is significant in terms of feminism, because:
it documents women’s lives and activities
it understands women from their perspective
it understands women in context
Visual materials, such as photographs and video, have attracted considerable interest among ethnographers in recent years, not just as adjuncts to data collection but as objects of interest in their own right. There are two stances on the role of visual images in ethnography: the realist one and the reflexive one. The traditional framework is the realist one, in which the photograph or the video recording simply captures an event or setting that then becomes a fact for the ethnographer to interpret along with his/her other data. The reflexive one entails an awareness of and sensitivity to the ways in which the researcher as a person has an impact on what a photograph reveals.
Photographs are the visual medium that has received the greatest attention. There are a number of ways photographs have been employed by qualitative researchers.
as an aide-memoire: the images become components of the ethnographer’s field notes
as sources of data in their own right and not simply as adjuncts to the ethnographer’s field notes
as prompts for discussion by research participants
Differences between the structured interview and the qualitative interview
Ethnography involves a substantial amount of interviewing and this contributes to the widespread use of the interview by qualitative researchers. There are some differences between the structured interview and the qualitative interview. Qualitative interviewing is usually very different from interviewing in quantitative research in a number of ways:
the approach tends to be much less structured in qualitative research. In quantitative research, the approach is structured to maximize the reliability and validity of measurement of key concepts, and the researcher has a clearly specified set of research questions. In qualitative research, by contrast, the emphasis is on the formulation of initial research ideas and on interviewees’ own perspectives
in qualitative interviewing, there is much interest in the interviewee’s point of view; in quantitative research, the interview reflects the researcher’s concerns
in qualitative interviewing, rambling or going off at tangents is often encouraged, because it gives insight into what the interviewee sees as relevant and important; in quantitative research, it is usually regarded as a nuisance and discouraged
in qualitative interviewing, interviewees can depart significantly from any schedule or guide that is being used. In quantitative interviewing, such departures would compromise the standardization of the interview process and hence the reliability and validity of measurement.
Consequently, qualitative interviewing tends to be flexible, responding to the direction in which interviewees take the interview and perhaps adjusting the emphases in the research as a result of significant issues that emerge in the course of interviews. By contrast, quantitative interviews are typically inflexible, because of the need to standardize the way in which each interviewee is dealt with
In qualitative interviewing, the researcher wants rich, detailed answers. In structured interviewing, the interview is supposed to generate answers that can be coded and processed quickly
In qualitative interviewing, the interviewee may be interviewed on more than one and sometimes even several occasions. In structured interviewing, unless the research is longitudinal, the person will be interviewed on one occasion only.
Asking questions in the qualitative interview
Qualitative interviewing varies in the approach taken by the interviewer, in two major types:
the almost totally unstructured interview: the researcher uses an aide-mémoire as a brief set of prompts to him/herself to deal with a certain range of topics. The interviewer asks a question and the interviewee is allowed to respond freely, as in a conversation.
a semi-structured interview: the researcher has a list of questions or fairly specific topics to be covered, referred to as an interview guide, but the interviewee has a great deal of leeway in how to reply. In general, all the questions will be asked, even though they may not follow the order of the guide, and similar wording will be used from interviewee to interviewee.
Most qualitative interviews are closer to one type or the other. The two types are extremes, however, and there is quite a lot of variability between them. In neither case does the interviewer closely follow a schedule, as is done in quantitative research, though in semi-structured interviews the interviewer does follow a script to a certain extent.
There is a variety of factors that affect the choice of whether to veer towards one type rather than the other:
researchers who are concerned that the use of even the most rudimentary interview guide will not allow genuine access to the world views of members of a social setting or of people sharing common attributes are likely to favour an unstructured interview
if the researcher is beginning the investigation with a fairly clear focus, rather than a very general notion of wanting to do research on a topic, it is likely that the interviews will be semi-structured ones, so that the more specific issues can be addressed
if more than one person is to carry out the fieldwork, in order to ensure a modicum of comparability of interviewing style, it is likely that semi-structured interviewing will be preferred
if you are doing multi-case study research, you are likely to find that you will need some structure in order to ensure cross-case comparability
Life history and oral history interview
Life history is associated with the life history method, where it is often combined with various kinds of personal documents, such as diaries, photographs, and letters. This method is often referred to as the biographical method. It invites the subject to look back in detail across his/her entire life course. The life history method is very much associated with the life history interview, a kind of unstructured interview covering the totality of an individual’s life.
An oral history interview is more specific, in that the subject is asked to reflect upon particular events or periods in the past. The emphasis is less upon the individual and his/her life than upon those past events. It too is sometimes combined with other sources, such as documents. The problem with the oral history interview is the possibility of bias introduced by memory lapses and distortions. On the other hand, oral history testimonies have allowed the voices of groups that are typically marginalized in historical research to come through, either because they lack power or because they are regarded as unexceptional.
Preparing an interview guide
The term “interview guide” may be used to refer to the brief list of memory prompts of areas to be covered that is often employed in unstructured interviewing or to the more structured list of issues to be addressed or questions to be asked in semi-structured interviewing.
You should consider an appreciation of what the interviewee sees as significant and important in relation to each of your topic areas. Your questioning must cover the areas that you need, but from the interviewee’s perspective. This means that even though qualitative research is predominately unstructured, it is rarely so unstructured that the researcher cannot at least specify a research focus.
Here are some basic elements for the preparation of your interview guide:
create a certain amount of order in the topic areas, so that your questions flow reasonably well, but be prepared to alter the order of the questions during the interview
formulate interview questions or topics in a way that will help you to answer your research question
try to use a language that is comprehensible and relevant to the people you are interviewing
just as in interviewing in quantitative research, do not ask leading questions
remember to ensure that you ask or record “facesheet” information of a general kind and a specific kind, because such information is useful for contextualizing people’s answers
There are some practical details to attend before the interview:
make sure you are familiar with the setting in which the interviewee works or lives or engages in the behaviour of interest to you
get hold of a good-quality machine and microphone, since qualitative researchers nearly always record and then transcribe their interviews
make sure as far as possible that the interview takes place in a setting that is quiet and private
prepare yourself for the interview by cultivating as many as possible of the criteria of a quality interviewer (listed in the next section)
interviewing is very demanding, and especially for students it is worth conducting some pilot interviews, both to test how well the interview flows and to gain some experience
After the interview, make some notes about:
how the interview went
where the interview took place
any other feelings about the interview
Interviewing for the first time
unexpected interviewee behaviour or environmental problems: bear in mind that interviewees may say surprising things or that there may be distractions, and try to limit their impact on you and on the course of the interview
intrusion of own biases and expectations: some trainees were surprised, when they read their transcripts, at how evident their own biases and expectations were in the ways they asked questions and followed up on replies
maintaining focus in asking questions: ask follow-up questions and seek clarification so that you do not lose sight of the research topic
dealing with sensitive issues: students may ask questions that cause interviewees to become upset and the response may have an adverse impact on the course of the interview
transcription: transcription may be difficult and time- consuming
Criteria of a successful interviewer
knowledgeable: thoroughly familiar with the focus of the interview
structuring: gives purpose for interview
clear: asks simple questions
gentle: lets people finish
sensitive: listens attentively to what is said and how it is said
open: responds to what is important to interviewee
steering: knows what he/she wants to find out
critical: is prepared to challenge what is said
remembering: relates what is said to what has previously been said
interpreting: clarifies and extends meanings of interviewee’s statements, but without imposing meaning on them
Kinds of questions
introducing questions- “please tell me about when your interest in...began”
follow up questions- getting the interviewee to elaborate his/her answer
probing questions- following up what has been said through direct questioning
specifying questions- “What did you do then?”
direct questions- “Do you find it easy to …?”
indirect questions- “What do most people around here think of the …?”
structuring questions- “I would like to move on to a different topic”
silence- allow pauses to signal that you want to give the interviewee the opportunity to reflect and amplify an answer
interpreting questions- “Do you mean that …?”
Also, vignette questions may be used in qualitative interviewing as well as in structured interviewing and represent one way of asking specific questions.
Photographs are a further way in which questioning in qualitative interviews may be grounded:
images help to ground the researcher’s interview questions in the setting being studied
images stimulate interviewees to engage visually with familiar settings and objects
the use of photographs may stimulate the interviewee to remember people, events, or situations that might otherwise have been forgotten
Recording and transcription
There are some approaches, such as conversation analysis or discourse analysis, in which the recording of conversations and interviews is mandatory. More generally, researchers who use qualitative interviews and focus groups usually record and afterwards transcribe them. When transcribing interviews, bear in mind that you should allow sufficient time for transcription and be realistic about how many interviews you will be able to transcribe in the time available. Recording and transcribing interviews has the following advantages:
it helps to correct the natural limitations of our memories and of the intuitive glosses that we might place on what people say in interviews
it allows more thorough examination of what people say
it permits repeated examinations of the interviewees’ answers
it opens up the data to public scrutiny by other researchers, who can evaluate the analysis that is carried out by the original researchers of the data
it allows the data to be reused in ways other than those intended by the original researcher
Conventions when using direct quotations from an interview
use quotation marks to indicate that this is a direct quote, or indicate this by consistently setting quotes out so that they stand apart from the main body of the text; this enables the reader to identify a direct quote and enables you to differentiate between your presentation of the data and your analysis of it
if it is appropriate in relation to ethical considerations, indicate who is speaking in the quote, either by introducing the speaker before the quote or by attributing the quote to the interviewee immediately afterwards
if you wish to quote the first sentence from a section of speech and then a sentence or two further on from the transcript, use the convention of three consecutive dots to indicate the break point
if an interviewee omits a word from a sentence that is a grammatical omission or if the interviewee refers to a subject in a way that does not make its meaning clear and you need to provide the readers with more contextual information so that they can understand the quote, use the convention of brackets or square brackets in which you insert the words you have added
in order to create a smooth flow to the text despite the switches between your voice, as researcher, and the voices of the interviewees, it is important to introduce direct quotes before you present them and then to take a sentence or two of analysis to explain in your own words how you have interpreted them; in this way, you construct a narrative that guides the reader through your data and shows why you have chosen the particular quotes as illustrative of particular themes or concepts
Feminist research and interviewing in qualitative research
It is interviewing of the kind conducted in qualitative research, not the structured interview with which the social survey is associated, that is seen as having potential for a feminist approach. The structured interview is viewed as problematic by feminist researchers because:
it is a one-way process- the interviewer extracts information or views from the interviewee
the interviewer offers nothing in return for the extraction of information
the interviewer-interviewee relationship is a form of hierarchical or power relationship
the element of power is also revealed by the fact that the structured interview seeks out information from the perspective of the researcher
because of these points, the standard survey interview is viewed as inconsistent with feminism when women interview other women- it is viewed as indefensible for women to “use” other women in these ways
Instead of this framework of conducting interviews, feminist researchers advocate one that establishes:
a high level of rapport between interviewer and interviewee
a high degree of reciprocity on the part of the interviewer
the perspective of the women being interviewed
a non-hierarchical relationship
Qualitative interviewing versus participant observation
Qualitative interviewing and participant observation are the two most prominent methods of data collection in qualitative research.
Advantages of participant observation in comparison to qualitative interviewing
seeing through others’ eyes: the participant observer is in much closer contact with people for a longer period of time and he/she participates in many of the same kinds of activity as the members of the social setting being studied
learning the native language: apart from the formal language, the participant observer has to learn the “argot”, the special uses of words and the slang that are important for penetrating the culture
the taken for granted: in participant observation, implicit features of social life are more likely to be revealed as a result of the observer’s continued presence and because of the ability to observe behaviour rather than just rely on what is said
deviant and hidden activities: ethnographers conducting participant observation are more likely to place themselves in situations in which their continued involvement allows them gradually to infiltrate such social worlds and to insinuate themselves into the lives of people who might be sensitive to outsiders
sensitivity to context: the participant observer interacts with people in a variety of different situations and possibly roles, so that the links between behaviour and context can be forged
encountering the unexpected and flexibility: except with the most unstructured forms of interview, the interview process is likely to entail some degree of closure as the interview guide is put together, which may blinker the researcher slightly. Also, participant observation may be more flexible, because interviewers tend to instil an element of comparability in their questioning of different people
naturalistic emphasis: participant observation comes closer to a naturalistic emphasis, because the qualitative researcher confronts members of a social setting in their natural environments
Advantages of qualitative interviewing in comparison to participant observation
issues resistant to observation: there is a range of issues that are not amenable to observation and therefore, asking people about them represents the only viable means of finding out about them within a qualitative research strategy
reconstruction of events: qualitative research entails the reconstruction of events by asking interviewees to think back over how a certain series of events unfolded in relation to a current situation
ethical considerations: there are several areas that could be observed but would raise ethical considerations
reactive effects: it might be anticipated that the presence of a participant observer would result in reactive effects; people knowing they are being observed may make them behave less naturally. Interviews do not have to face this problem, though it could be argued that the unnatural character of the interview encounter can also be regarded as a context within which reactive effects may emerge
less intrusive in people’s lives: interviews in qualitative research can sometimes be very long and re-interviewing is not common, but the impact on people’s time will probably be less than having to take observers into account on a regular basis, though it is likely that this feature will vary from situation to situation.
longitudinal research easier: interviewing can be carried out within a longitudinal research design more easily because repeat interviews may be easier to organize than repeat visits to participant observer’s research settings, though the latter is not impossible
greater breadth of coverage: interviewing can allow access to a wider variety of people and situations, while in participant observation the researcher is invariably constrained in his/her interactions and observations to a restricted range of people
specific focus: because qualitative research sometimes begins with a specific focus, qualitative interviewing would seem to be better suited to such a situation, since the interview can be directed at that focus and its associated research questions
The focus group technique is a method that involves more than one, usually at least four, interviewees. Essentially, it is a group interview. Yet, a distinction can be drawn between the focus group and the group interview techniques, for three reasons:
focus groups typically emphasize a specific theme or topic that is explored in depth, whereas group interviews often span very widely
group interviews are carried out so that the researcher is able to save time and money by carrying out interviews with a number of individuals simultaneously, but focus groups are not carried out for this reason
the focus group practitioner is invariably interested in the ways in which individuals discuss a certain issue as members of a group, rather than simply as individuals
The focus group method is a form of group interview in which:
there are several participants
there is an emphasis in the questioning on a particular fairly tightly defined topic
and the accent is upon interaction within the group and the joint construction of meaning
the focus group contains elements of two methods, the group interview and the focused interview. In the former, several people discuss a number of topics and in the latter interviewees are selected because they are known to have been involved in a particular situation and are asked about this involvement
the focus group method appends to the focused interview the element of interaction within groups as an area of interest and is more focused than the group interview.
Uses of focus groups
the original idea for the focus group (the focused interview) is that people who are known to have had a certain experience could be interviewed in a relatively unstructured way about that experience
the technique allows the researcher to develop an understanding about why people feel the way they do; the focus group approach offers the opportunity of allowing people to probe each other’s reasons for holding a certain view
in focus groups participants are allowed to bring to the fore issues in relation to a topic that they deem to be important and significant
in the context of a focus group, individuals will often argue with each other and challenge each other’s view
the focus group offers the researcher the opportunity to study the ways in which individuals collectively make sense of a phenomenon and construct meanings around it
Conducting focus groups
It is best if the focus group session is recorded and subsequently transcribed, because:
writing down what people say in the context of an interview involving several people would be extremely disruptive
the researcher will be interested in who expresses views within the group, such as whether certain individuals seem to act as opinion leaders or dominate the discussion
conducting focus group research makes it possible to study the processes whereby meaning is collectively constructed within each session. This could not be done easily by taking notes because of the need to keep track of who says what
the focus group practitioner will be interested in not just what people say but how they say it
How many groups, size of groups
A factor that may affect the number of groups is whether the researcher feels that the kinds and range of views are likely to be affected by socio-demographic factors, such as age, gender, class, and so on. One more point to consider is that more groups will increase the complexity of your analysis, and it takes time to transcribe the recordings that are made. What is important is to justify the number of groups you have chosen and why your data are still significant.
It is suggested that the appropriate size of a group is ten members. One problem, though, concerns people who agree to participate but do not turn up on the day; it is impossible to control for “no-shows”.
Level of moderator involvement
The aim is to get at the perspective of those being studied. Consequently, the approach should not be intrusive and structured. Therefore, there is a tendency for researchers to use a fairly small number of very general questions to guide the focus group session. There is a tendency for moderators to allow quite a lot of latitude to participants, so that the discussion can range fairly widely. A moderator may need to be involved in responding to specific points that are of potential interest to the research questions but that are not picked up by the other participants.
However, the moderator has to straddle two positions: allowing the discussion to flow freely and intervening to bring out especially salient issues, particularly when group participants do not do so. The best advice is to err on the side of minimal intervention, other than to start the group on a fresh set of issues, but to intervene when the group is struggling in its discussions or when it has not alighted on something said in the course of the session that appears significant for the research topic.
Anyone for whom the topic is relevant can logically be an appropriate participant. There are certain topics that do not require participants of a particular kind, so there is little or no restriction on who might be appropriate. Yet, this is an uncommon situation, and usually some restriction is required. When a range of people is required, they are organized into separate groups in terms of stratifying criteria, such as age, gender, education, occupation, and so on. Participants for each group can be selected randomly or through some kind of snowball sampling method.
Another issue related to the selection of group participants is whether to select people who are unknown to each other or to use natural groupings (such as friends, co-workers, students, and so on). However, opting for recruiting people entirely from natural groups is not always feasible, because of difficulties of securing participation.
The focus group as a feminist method
The use of focus groups by feminist researchers has grown considerably in recent years. Three aspects of the method stand out in terms of their compatibility with the ethics and politics of feminism:
focus group research is less artificial than many other methods, and people are able to discuss in situations that are quite normal for them. As a result, there is greater opportunity to derive understandings that chime with the “lived experience” of women.
feminist researchers have expressed a preference for methods that avoid decontextualization, that is, that successfully study the individual within a social context
feminist researchers are suspicious of research methods that are exploitative and create a power relationship between the female researcher and the female respondent. As a result, participant’s points of view are much more likely to be revealed than in a traditional interview
Limitations of focus groups
The researcher has less control over proceedings than with the individual interview. Not all writers on focus groups perceive this as a disadvantage; feminist researchers, for instance, see it as an advantage. However, it is not clear to what degree it is appropriate to surrender control over a focus group to its participants, especially when there is a reasonably explicit set of research questions to be answered, as in the case of funded research.
The data are difficult to analyse. Developing a strategy of analysis that incorporates both themes in what people say and patterns of interactions is not easy.
They are difficult to organize. Not only do you have to secure the agreement of people to participate in your study, but you also need to persuade them to turn up at a particular time.
The recordings are probably more time-consuming to transcribe than equivalent recordings of individual interviews, because of variations in voice pitch and the need to take account of who says what.
There are problems with focus groups that are not encountered in individual interviews, most notably the tendency for two or more participants to speak at the same time, which is a problem for the transcription process.
There are possible problems of group effects, such as the obvious problem of dealing with reticent speakers and with those who hog the stage.
In group contexts, participants may be more prone to expressing culturally expected views than in individual interviews.
There are circumstances when focus groups may not be appropriate, because of their potential for causing discomfort among participants, for example, when intimate details of private lives need to be revealed or when participants may not be comfortable with each other’s presence.
Conversation analysis (CA) is the fine-grained analysis of talk as it occurs in interaction in naturally occurring situations. The talk is recorded and transcribed so that detailed analyses can be carried out. These analyses are concerned with uncovering the underlying structures of talk in interaction and, as such, with the achievement of order through interaction.
Assumptions of conversation analysis
Talk is structured: talk comprises invariant patterns, that is, it is structured. Participants are implicitly aware of the rules that underpin these patterns.
Talk is forged contextually: action is revealed in talk and as such talk must be analysed in terms of its context. This means that we must seek to understand what someone says in terms of the talk that has preceded it and that therefore talk is viewed as exhibiting patterned sequences
Analysis is grounded in data: conversation analysts shun prior theoretical schemes and instead argue that the characteristics of talk, and the constitutive nature of social order in each empirical instance, must be induced out of the data.
Transcription and attention to detail
Conversation analysis, as an approach, requires the analyst to produce detailed transcripts of natural conversations. Some of the basic notational symbols employed in CA are:
.hh – h’s preceded by a dot indicate an intake of breath, if no dot is present it means breathing out
We:ll – a colon indicates that the sound that occurs directly before the colon is prolonged, more than one colon means further prolongation (e.g. ::::::::::)
(0.8) – a figure in parentheses indicates the length of a period of silence, usually measured in tenths of one second, this (0.8) signals eight-tenths of a second of silence
You and knowing – an underline indicates an emphasis in the speaker’s talk
(.) – indicates a very slight pause
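The notation above is regular enough to be processed mechanically. As an illustrative sketch (the transcript string and the regular expressions are assumptions for illustration, not a standard CA tool), a few lines of Python can pull timed pauses and prolonged sounds out of a CA-style transcript:

```python
import re

# A hypothetical CA-style transcript line using the symbols listed above:
# .hh = in-breath, :: = prolonged sound, (0.8) = timed pause, (.) = micro-pause
utterance = ".hh We::ll (0.8) I was (.) just wondering"

# Timed pauses: figures in parentheses, measured in tenths of a second
pauses = [float(p) for p in re.findall(r"\((\d+\.\d+)\)", utterance)]

# Prolonged sounds: one or more colons directly after part of a word
prolongations = re.findall(r"\w+:+\w*", utterance)

print(pauses)         # [0.8]
print(prolongations)  # ['We::ll']
```

Such a pass only locates the notation; interpreting what a 0.8-second silence accomplishes in the interaction remains the analyst's job.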
Some basic tools of conversation analysis
Turn-taking: one of the ways in which order is achieved in everyday conversations is through turn-taking. This illustrates that talk depends on shared codes; if these codes did not exist, there would not be smooth transitions in conversation. There must be codes for indicating the ends of utterances.
Adjacency pairs: one way in which turn-taking is revealed is through the examination of adjacency pairs: a question followed by an answer, an invitation followed by a response, or a greeting followed by a returned greeting. The first phase implies that the other part of the adjacency pair will be forthcoming. The second phase indicates that compliance with the putative normative structure of the pairing indicates an appreciation of how one is supposed to respond to the initial phase
Preference organization: some responses are clearly preferential to others; this has implications for the structure of a conversation. The preference structure is discovered by the analyst through the response to an initial statement
Accounts: in CA, the accounts are analysed in context, that is, the form that they assume is handled as being occasioned by what precedes it. Also, in CA, accounts are not unusual phenomena to be deployed when things go wrong but are intrinsic to talk in a variety of situations.
Discourse analysis (DA) is an approach that can be applied to forms of communication other than talk, such as texts (for example, newspaper articles), and is in this respect more flexible than CA. There is not one specific version of discourse analysis. Discourse analysis shares many features with CA, but there are several differences. DA can be applied to a wider variety of phenomena than CA, which is concerned only with naturally occurring talk. Discourse is conceived as a means of conveying meaning. DA practitioners display a greater inclination to relate meaning in talk to contextual factors.
Language is not simply a neutral device for imparting meaning. People seek to accomplish things when they talk or when they write; DA is concerned with the strategies they employ in trying to create different kinds of effect. This is an action-orientated version of DA: discourse as a way of getting things done.
There are two distinctive features at the level of epistemology and ontology that DA exhibits:
it is anti-realist: it denies that there is an external reality awaiting a definitive portrayal by the researcher and it therefore disavows the notion that any researcher can arrive at a privileged account of the aspect of the social world being investigated.
it is constructionist: the emphasis is placed on the versions of reality propounded by members of the social setting being investigated and on the fashioning of that reality through their renditions of it. The constructionist emphasis entails recognition that discourse entails a selection from many viable renditions and that in the process a particular depiction of reality is built up.
Four themes in discourse analysis
discourse is a topic: discourse is a focus of enquiry in itself and not just a means of gaining access to the social reality that lies behind it
language is constructive: discourse is a way of constructing a particular view of social reality. Also, choices are made regarding the most appropriate way of presenting it and these will reflect the disposition of the person responsible for devising it.
discourse is a form of action: language is viewed as a practice in its own right, a way of accomplishing acts. A person’s discourse is affected by the context that he/she is confronting.
discourse is rhetorically organized: DA practitioners recognize that discourse is concerned with establishing one version of the world in the face of competing versions; there is a recognition that we want to persuade others when we present a version of events.
Critical discourse analysis
Critical discourse analysis (CDA) emphasizes the role of language as a power resource that is related to ideology and socio-cultural change. It draws on the approaches and theories of Foucault (1977), who sought to uncover the representational properties of discourse as a vehicle for the exercise of power through the construction of disciplinary practices. The notion of discourse is therefore defined more broadly than in fine-grained approaches: it is defined as an interrelated set of texts, and the practices of their production, dissemination, and reception, that brings an object into being; social reality is produced and made more real through discourses, and social interactions cannot be fully understood without reference to the discourses that give them meaning.
Analysis of a discursive event is usually carried out according to a “three-dimensional” framework, which proceeds as follows:
examination of the actual content, structure, and meaning of the text under scrutiny (the text dimension)
examination of the form of discursive interaction used to communicate meaning and beliefs (the discursive practice dimension)
consideration of the social context in which the discursive event is taking place (the social practice dimension)
Documents are a fairly heterogeneous set of data sources: diaries, letters, autobiographies, newspapers, magazines. The emphasis is placed on documents that have not been produced at the request of a social researcher; instead, the objects that are the focus are simply “out there” waiting to be assembled and analysed. Once documents are collected, considerable interpretative skill is required to ascertain the meaning of the materials that have been uncovered.
Documents are the kind of materials that:
can be read
have not been produced specifically for the purpose of social research
are relevant to the concerns of the social researcher
The quality of documents can be assessed by four criteria:
authenticity: is the evidence genuine and of unquestionable origin?
credibility: is the evidence free from error and distortion?
representativeness: is the evidence typical of its kind, and, if not, is the extent of its untypicality known?
meaning: is the evidence clear and comprehensible?
Qualitative content analysis
Qualitative content analysis is the most prevalent approach to the qualitative analysis of documents and comprises a searching-out of underlying themes in the materials being analysed. The processes through which the themes are extracted are often not specified in detail. The extracted themes are usually illustrated, for example, with brief quotations from a newspaper article or magazine.
“Ethnographic content analysis” (ECA) represents a codification of certain procedures that might be viewed as typical of this kind of qualitative content analysis. With ECA there is much more movement back and forth between conceptualization, data collection, analysis, and interpretation. The required steps for the researcher are to:
generate a research question
become familiar with the context within which the documents were/are generated
become familiar with a small number of documents
generate some categories that will guide the collection of data and draft a schedule for collecting the data in terms of the generated categories
test the schedule by using it for collecting data from a number of documents
revise the schedule and select further cases to sharpen it up
Semiotics is referred to as the “science of signs”. It is an approach to the analysis of symbols in everyday life and as such can be employed in relation not only to documentary sources but also to all kinds of other data, because of its commitment to treating phenomena as texts. Semiotics is concerned to uncover the hidden meanings that reside in texts as broadly defined; even a curriculum vitae (CV), for example, can be analysed as such a text. The main terms employed are:
the sign, that is, something that stands for something else
the sign is made up of: a signifier and the signified
the signifier is the thing that points to an underlying meaning (the term sign vehicle is sometimes used instead of signifier)
the signified is the meaning to which the signifier points
a denotative meaning is the manifest or more obvious meaning of a signifier and as such indicates its function
a sign-function is an object that denotes a certain function
a connotative meaning is a meaning associated with a certain social context that is in addition to its denotative meaning
polysemy refers to a quality of signs-namely, that they are always capable of being interpreted in many ways
the code is the generalized meaning that interested parties may seek to instil in a sign, a code is sometimes also called a sign system
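The relationships among these terms can be made concrete with a small data structure. This is a hedged sketch: the field names and the advertisement example are illustrative assumptions, not part of semiotic theory itself:

```python
from dataclasses import dataclass, field

@dataclass
class Sign:
    """A sign pairs a signifier (the pointing thing) with a signified (its meaning)."""
    signifier: str    # the thing that points to an underlying meaning
    signified: str    # the meaning to which the signifier points
    denotation: str   # manifest, obvious meaning; indicates the sign-function
    connotations: list = field(default_factory=list)  # context-dependent extra meanings

# Hypothetical example: a sports car pictured in an advertisement
car = Sign(
    signifier="image of a sports car",
    signified="the car being advertised",
    denotation="a vehicle for transport",         # sign-function: what it is for
    connotations=["speed", "status", "freedom"],  # polysemy: multiple readings
)

print(len(car.connotations))  # 3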
Hermeneutics has been influential in the general formulation of interpretivism as an epistemology and the idea of hermeneutics is more or less synonymous with Weber’s notion of Verstehen. The central idea in hermeneutics is that analysts seek to bring out the meanings of a text from the perspective of its author. This entails attention to the social and historical context within which the text was produced.
Philips and Brown (1993) have identified an approach to the interpretation of company documents described as a critical hermeneutic approach. It entails the examination of the documents (in their case, advertisements) in terms of three “moments”:
the social-historical moment: this involves an examination of the producer of the text, its intentional recipient, its referent in the world, and the context within which it is produced
the formal moment: involves a formal analysis of the structural and conventional aspects of the text, which means that the texts must be examined in terms of the constituent parts of each text and the writing conventions employed
the interpretation-reinterpretation moment: involves the interpretation of the results of the first two moments
General strategies of qualitative data analysis
Analytic induction is an approach to the analysis of data in which the researcher seeks universal explanations of phenomena by pursuing the collection of data until no cases that are inconsistent with a hypothetical explanation of a phenomenon are found. It begins with a rough definition of a research question, proceeds to a hypothetical explanation of that problem, and then continues on to the collection of data (examination of cases). If a case is inconsistent with the hypothesis, the analyst either redefines the hypothesis so as to exclude the deviant or negative case or reformulates the hypothesis and proceeds with further data collection. In this sense, analytic induction is an extremely rigorous method of analysis.
However, there are two problems with it. First, the final explanations that analytic induction arrives at specify the conditions that are sufficient for the phenomenon occurring but rarely specify the necessary conditions. Second, it does not provide useful guidelines as to how many cases need to be investigated before the absence of negative cases and the validity of the hypothetical explanation can be confirmed.
Grounded theory has been defined as theory that is derived from data, systematically gathered and analysed through the research process. In this method, data collection, analysis, and eventual theory stand in close relationship to one another. Thus, two central features of grounded theory are that it is concerned with the development of theory out of data and that the approach is iterative, or recursive, meaning that data collection and analysis proceed in tandem, repeatedly referring back to each other. Although it has been suggested that grounded theory is a strategy for generating theory out of data, in many cases a grounded theory approach generates concepts rather than theory as such.
Tools of grounded theory
Theoretical sampling: the process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges.
Coding: the key process in grounded theory, whereby data are broken down into component parts, which are given names. It begins soon after the collection of data. In grounded theory, different types or levels of codes are recognized:
Open coding: the process of coding that yields concepts, which are later grouped and turned into categories
Axial coding: a set of procedures whereby data are put back together in new ways after open coding, this is done by linking codes to contexts, to consequences, to patterns of interaction, and to causes
Selective coding: the procedure of selecting the core category, the core category is the central issue or focus around which all other categories are integrated
Constant comparison: it refers to a process of maintaining a close connection between data and conceptualization, so that the correspondence between concepts and categories with their indicators is not lost. The researcher constantly compares phenomena being coded under a certain category so that a theoretical elaboration of that category can begin to emerge. It also entails being sensitive to contrasts between the categories that are emerging.
Outcomes of grounded theory
concepts: refers to labels given to discrete phenomena; concepts are referred to as the “building blocks of theory” and are produced through open coding
category/categories: a concept that has been elaborated so that it is regarded as representing real-world phenomena; as such, categories are at a higher level of abstraction than concepts.
properties: attributes or aspects of a category
hypotheses: initial hunches about relationships between concepts
theory: two types or levels of theory have been identified, substantive and formal theory. Substantive theory relates to theory in a certain empirical instance or substantive area. A formal theory is at a higher level of abstraction and has a wider range of applicability to several substantive areas, suggesting that higher-level processes are at work.
Memos in grounded theory are notes that researchers might write for themselves and for those with whom they work concerning such elements of grounded theory as coding or concepts. They serve as reminders about what is meant by the terms being used and provide the building blocks for a certain amount of reflection; they are therefore helpful to researchers in crystallizing ideas and in not losing track of their thinking on various topics.
Criticism of grounded theory
it has been questioned whether, as grounded theory requires, researchers can suspend their awareness of relevant theories or concepts until quite a late stage of analysis. It is generally agreed that what we “see” when we conduct research is affected by many factors, mainly by what we already know about the social world being studied.
Researchers are required to spell out the possible implications of their planned investigation, something that is frequently disdained in grounded theory
There are practical difficulties with grounded theory, such as the time required to transcribe recordings of interviews
It is doubtful in many instances whether grounded theory really results in theory; it provides a rigorous approach to the generation of concepts, but it is often difficult to see what theory, in the sense of an explanation, is being put forward
Grounded theory is still vague on certain points, like the differences between concepts and categories, because of the many subtle changes in its presentation. For instance, the term “categories” is still used rather than “concepts”, and this inconsistency is not helpful
Competing accounts of the ingredients of grounded theory do not make it easy to characterize it or to establish how to use it; consider, for instance, the suggestion that most grounded theory is objectivist and that an alternative constructivist approach is preferable
Basic operations in qualitative data analysis
Steps and considerations in coding
code as soon as possible: coding as you go along sharpens your understanding of your data, helps with theoretical sampling, and alleviates the feeling of being swamped by your data
read through your initial set of transcripts, field notes, documents, etc.: do this without taking notes or considering any interpretation, noting only the points where you get stuck
do it again: read through your data again, and begin making marginal notes about significant remarks or observations. Labelling your data in this way, using keywords and names, is coding
review your codes: in relation to your transcripts, do not use more than one word or phrase to describe the same phenomenon
consider more general theoretical ideas in relation to codes and data: begin generating some general theoretical ideas about your data and try to outline connections between concepts and categories you are developing
remember that any one item or slice of data can and often should be coded in more than one way.
do not worry about generating what seem to be too many codes: the important thing is to be as inventive as possible, you can worry about tidying things up later
keep coding in perspective: do not equate coding with analysis; it is part of the analysis- a mechanism for thinking about the meaning of your data and for reducing the vast amount of data that you are facing
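The mechanics of the coding steps above can be sketched computationally. This is a minimal illustration under stated assumptions: the transcript fragments and code labels are invented, and assigning codes is of course an interpretive act by the researcher, not an automatic one:

```python
from collections import Counter

# Hypothetical fragments from an interview transcript, each coded by the
# researcher with one or more marginal codes (a fragment can carry several)
coded_fragments = [
    ("I never felt part of the team", ["isolation", "identity"]),
    ("the manager decided everything", ["power", "hierarchy"]),
    ("we just kept to ourselves", ["isolation"]),
]

# Reviewing codes: count how often each code is applied across the transcript
code_counts = Counter(code for _, codes in coded_fragments for code in codes)

# Retrieve all fragments carrying a given code, to compare them side by side
# (a small aid to constant comparison)
def fragments_for(code):
    return [text for text, codes in coded_fragments if code in codes]

print(code_counts["isolation"])   # 2
print(fragments_for("isolation"))
```

Note how one fragment carries two codes, reflecting the advice that any slice of data can and often should be coded in more than one way.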
Narrative analysis is a term that covers a variety of approaches that are concerned with the search for and analysis of the stories that people employ to understand their lives and the world around them. It has become particularly prominent in connection with the life history and biographical approach; however, it is not exclusively concerned with life histories. The term is used to emphasize the examination of the storied nature of human recounting of lives and events, and to refer to the sources themselves, that is, the stories that people tell in recounting their lives. At the very least, narrative analysis entails a sensitivity to the connections in people’s accounts of past, present, and future events and states of affairs, people’s sense of their place within those events and states of affairs, the stories they generate about them, and the significance of context for the unfolding of events and people’s sense of their own role within them.
According to Riessman (2004) there are four models of narrative analysis:
thematic analysis: an emphasis on what is said rather than on how it is said
structural analysis: an emphasis on the way a story is related; issues of content do not disappear but there is an emphasis on the use of narrative mechanisms for increasing the persuasiveness of a story
interactional analysis: an emphasis on the dialogue between the teller of a story and the listener; prominent is the co-construction of meaning by the two parties, though content and form are by no means marginalized
performative analysis: an emphasis on narrative as a performance that explores the use of words and gestures to get across a story; an examination of the response of an audience to the narrative
Narrative research in organizations
Rhodes and Brown (2005) provide a narrative review of the business and management literature and identify five principal areas that narrative analysis has explored, assessing the theoretical value each has added:
sensemaking: focuses on the role of stories as a device through which people make sense of organizational events
communication: explores how narratives are used to create and maintain organizational culture and power structure
learning/ change: analyses how stories help people to learn and subjectively make sense of change
politics and power: considers the role of shared narratives in the control of organizational meaning
identity and identification: focuses on the role of stories in creating and maintaining organizational identity
In thematic analysis the emphasis is on what is said rather than on how it is said. When searching for themes, Ryan and Bernard (2003) recommend looking for:
repetitions: topics that recur again and again
indigenous typologies or categories: local expressions that are either unfamiliar or are used in an unfamiliar way
metaphors and analogies: the ways in which participants represent their thoughts in terms of metaphors or analogies
transitions: the ways in which topics shift in transcripts and other materials
similarities and differences: exploring how interviewees might discuss a topic in different ways or differ from each other in certain ways or exploring whole texts like transcripts and asking how they differ
linguistic connectors: examining the use of words like “because” or “since”, because such terms point to causal connections in the minds of participants
missing data: reflecting on what is not in the data by asking questions about what interviewees, for example, omit from their answers to questions
theory-related material: using social scientific concepts as a springboard for themes
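Two of Ryan and Bernard’s cues, repetitions and linguistic connectors, lend themselves to a first mechanical pass before close reading. The sketch below is purely illustrative (the transcript excerpts and stopword list are invented, and no qualitative-analysis package is assumed): it counts recurring words across transcripts and flags passages containing causal connectors such as “because” or “since”.

```python
from collections import Counter
import re

# Hypothetical interview excerpts (invented for illustration)
transcripts = [
    "I left the job because the hours were long. Since then I feel freer.",
    "The hours were long, and because of that I missed my family.",
]

# repetitions: words that recur across the material
words = Counter(
    w for t in transcripts for w in re.findall(r"[a-z']+", t.lower())
)
stopwords = {"i", "the", "and", "of", "that", "my", "were", "then"}
repeated = {w: n for w, n in words.items() if n > 1 and w not in stopwords}

# linguistic connectors: "because"/"since" point to causal reasoning
connectors = [
    t for t in transcripts if re.search(r"\b(because|since)\b", t.lower())
]

print(repeated)          # {'because': 2, 'hours': 2, 'long': 2}
print(len(connectors))   # 2
```

Such a pass only surfaces candidate themes; deciding which repetitions matter, and what the causal talk means, remains interpretive work for the analyst.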
Secondary analysis of qualitative data
Recently, secondary analysis of qualitative data has become a growing focus of discussion and interest. It allows the researcher to mine data that were not examined by the primary investigators, and new interpretations of those data may be possible. However, there are difficulties with the reuse of qualitative data, such as the difficulty of making settings and people anonymous and the ethical problems associated with promises of confidentiality. The reuse of qualitative data may also be hindered by the secondary analyst’s lack of familiarity with the social context within which the data were produced. Further ethical issues arise, such as that the original researcher may not have obtained the consent of research participants for the analysis of the data by others. Yet, in spite of these practical difficulties, secondary analysis offers rich opportunities, because the tendency of qualitative researchers to generate large and unwieldy data sets means that much of the material remains under-explored.
In recent years, the arrival of computer software that facilitates the analysis of qualitative data has been notable. Computer-assisted qualitative data analysis software (CAQDAS) has been a growth area, in terms of both the proliferation of programs that perform such analysis and the number of researchers using them. Such software allows the analyst to code text while working at the computer and to retrieve the coded text. The analyst still has to interpret the data, devise codes, and decide what to retrieve, but the computer takes over the manual labour involved in the coding process.
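The code-and-retrieve cycle described above can be sketched in a few lines of Python. This is a toy illustration only, not the API of any real CAQDAS package such as NVivo; the class name and the coded segments are invented. It shows the bookkeeping such software automates: attaching analyst-chosen codes to text segments and retrieving every segment tagged with a given code.

```python
from collections import defaultdict

class CodeBook:
    """A toy code-and-retrieve store (hypothetical, for illustration)."""

    def __init__(self):
        self._index = defaultdict(list)  # code -> list of coded segments

    def code(self, segment, *codes):
        """Attach one or more analyst-chosen codes to a text segment."""
        for c in codes:
            self._index[c].append(segment)

    def retrieve(self, code):
        """Return every segment tagged with the given code."""
        return list(self._index[code])

book = CodeBook()
book.code("My manager never listens to suggestions.", "voice", "management")
book.code("We were not consulted about the merger.", "voice")

print(book.retrieve("voice"))
# ['My manager never listens to suggestions.', 'We were not consulted about the merger.']
```

The mechanical part (indexing and retrieval) is trivial to automate; choosing the codes and interpreting the retrieved segments is where the analysis actually happens, which is why the software cannot replace the analyst.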
Is CAQDAS like quantitative data analysis software?
It has been commented that CAQDAS cannot help with decisions about the coding of textual materials or about the interpretation of findings. However, in quantitative research as well, the investigator sets out the crucial concepts and ideas in advance rather than generating them out of the data, and, once the analyses have been performed, it is still necessary to interpret them. Creativity is required with both types of software. Where CAQDAS differs is in the environment within which it operates.
SPSS comes close to being the industry leader in quantitative data analysis: it is not the only software available, but it is dominant, widely known, and widely used. No parallel situation exists with CAQDAS. More and more programs have been coming onto the market; the best known is currently NVivo.
Unlike quantitative data analysis, in which the use of computer software is both widely accepted and to all intents and purposes a necessity, among qualitative data analysts its use is by no means universally embraced.
Finally, if you have a very small data set, it may not be worth the time and trouble of navigating your way around a new software program. With a larger data set, however, CAQDAS can be an invaluable tool.
It has been indicated that there is an important distinction between quantitative and qualitative research, reflecting basically two points. First, there are differences between quantitative and qualitative research in terms of research strategy. Second, this distinction is a useful means of organizing research methods and approaches to data analysis.
Moreover, there are differences between quantitative and qualitative research in terms of their epistemological and ontological commitments. However, the connection between research strategy and epistemological and ontological commitments is not deterministic. There is a tendency for quantitative and qualitative research to be associated with particular epistemological and ontological positions, but the connections are not perfect. Qualitative research sometimes exhibits features normally associated with a natural science model, and quantitative research occasionally aims to engage with an interpretivist stance.
The contrast between quantitative and qualitative research suggests a somewhat hard-and-fast set of distinctions and differences. First, a distinction is sometimes drawn between a focus on behaviour and a focus on meanings. However, quantitative research frequently involves the study of meanings in the form of attitude scales, and qualitative researchers may feel that the tendency for such scales to be preformulated and imposed on research participants means that they do not really gain access to meanings. Both quantitative and qualitative researchers are interested in both what people do and what they think, but they go about the investigation of these areas in different ways. Thus, the degree to which the behaviour-versus-meaning contrast coincides with the quantitative/qualitative distinction should not be overstated.
Secondly, the suggestion that in quantitative research theory and concepts are developed prior to undertaking a study reflects the tendency to characterize such research as driven by a theory-testing approach. Although concepts have to be measured, the nature of their interconnections is frequently not specified in advance; consequently, the analysis of quantitative data from social surveys is often more exploratory than is generally appreciated, and so it offers opportunities for the generation of theories and concepts. The suggestion that, unlike an interpretivist stance, quantitative research is solely concerned with the testing of previously formulated ideas fails to recognize the creative work involved in the analysis of quantitative data and the interpretation of findings. Equally, as already indicated, qualitative research can be used for the testing of theories.
Third, one basic element of the distinction is the numbers-versus-words difference. Yet qualitative researchers sometimes undertake a limited amount of quantification of their data. It has been argued that some quantification of findings from qualitative research can often help to uncover the generality of the phenomena described. More generally, it has been noted that qualitative researchers engage in quasi-quantification through the use of terms such as “many”, “often”, and “some”.
Fourth, there is also an artificial-versus-natural distinction. It is often assumed that, because quantitative research employs research instruments that are applied to the people being studied (questionnaires, structured interview schedules, structured observation schedules, and so on), it provides an artificial account of how the social world operates. However, when qualitative research is based on interviews, the label “natural” is possibly less applicable as well. Thus, there are areas and examples that lead us to question how rigid the quantitative/qualitative contrast really is. This is not to suggest that the contrast is unhelpful, but that we should be wary of assuming that in talking about quantitative and qualitative research we are referring to two absolutely divergent and inconsistent strategies. Indeed, a quantitative research approach can be employed in the analysis of qualitative studies, a qualitative research approach can be employed to examine the rhetoric of quantitative researchers, and some qualitative researchers employ quantification in their own work.
Consequently, mixed methods research has developed. Mixed methods research is research that combines quantitative and qualitative research, crossing the two research strategies. Using both quantitative and qualitative research should involve a genuine mixing of the research methods involved, not just using them in tandem: the quantitative and qualitative data deriving from mixed methods research should be mutually illuminating.
However, mixed methods research has also received criticism. The argument against it tends to rest on two ideas: first, that research methods carry epistemological commitments; and second, that quantitative and qualitative research are separate paradigms. These correspond to two different versions of the nature of quantitative and qualitative research: an epistemological one and a technical one.
The epistemological version sees quantitative and qualitative research as grounded in incompatible epistemological and ontological principles; according to this version of their nature, mixed methods research is not possible. The technical version, the position taken by most researchers, gives greater prominence to the strengths of the data-collection and data-analysis techniques with which quantitative and qualitative research are each associated, and sees these as capable of being fused. There is a recognition that quantitative and qualitative research are each connected with distinctive epistemological and ontological assumptions, but the connection is not seen as fixed: a research method from one strategy is viewed as capable of being pressed into the service of the other. The technical version thus views the two research strategies as compatible, so that mixed methods research becomes both feasible and desirable.
Mixed methods research has become far more common in recent times. It may provide a better understanding of a phenomenon than if just one method had been used. Yet it is subject to the same considerations and constraints as any research method and design.
This summary can be used with the 4th edition.
What is “social research”?
After reading this book you will have a much better understanding of social research methods. This is important for two main reasons: (1) to avoid the many pitfalls that novice researchers run into, and (2) to become familiar with the way research is carried out, so that you can better understand research articles. The term “social research” refers to academic research in which the main questions fall within the various social-scientific fields (such as sociology and criminology). Social research is conducted above all because certain issues at play in contemporary society have remained unresolved.
The context of social research methods
Social research is not hermetically sealed off from the outside world: many factors jointly create the context for your research and can be decisive in all of your decisions. The main influences are theory, epistemological considerations, and ontological considerations. Below are these main influences, along with several additional influences on social research:
Theories: the knowledge within a research area has often already been cast into theories many times over. These theories influence your research because they help you understand the background and shape your study;
Literature: you must be familiar with what has already been investigated in relation to the topic you are interested in;
Your view of the relation between theory and research: you can start with a theory (and base your hypothesis on it), or end with a theory;
Epistemological considerations: assumptions and views about how research should be conducted. What is the best way to measure this social phenomenon?
Ontological considerations: assumptions about the social phenomenon itself. Can the phenomenon be influenced by us at all? Is it a product of our interactions?
The norms and values within society: what is ethically acceptable?
The purpose of the research: does it merely add to the existing stock of knowledge about the social world, or should it have practical use and change the world? Two kinds of research developed more or less exclusively to have implications for everyday life are evaluation research and action research.
The political context: much research is funded by government bodies. Certain topics are therefore more attractive to investigate, receive more attention, and are published more often;
The training and the personal values and preferences of the researcher.
Core elements of social research
According to the author of the book, the following elements are the core elements of social research, and they usually have to be worked through in this order.
Literature review. In this part the researcher must look critically at (1) what is already known about the topic, (2) which concepts and theories have been formed, (3) which research methods have already been used to investigate the topic, (4) which conclusions have been drawn and whether controversies have arisen, and (5) who the leading researchers in the field are.
Theories. As discussed earlier, theories can either form the basis for research or be its outcome. This underlies the difference between the deductive and the inductive approach to the relation between theory and research, respectively. Becoming familiar with the concepts and theories is very important because it (1) helps us interpret the data by providing a framework, and (2) gives an impression of how useful, or how limited, the theories are in unravelling social phenomena.
Research questions. Thinking about research questions forces us to look at what we really want to know and investigate. Poorly (not concretely) formulated research questions result in poor research, because you proceed in a muddled way. Research questions have seven important functions: (1) guiding your research, (2) guiding your decisions about the research design, (3) guiding your decisions about selecting your data (recruitment), (4) guiding your data analysis, (5) guiding the way you report your data, (6) preventing you from going off on unnecessary tangents, and (7) making clear to readers what your research is about.
Sampling of “cases”. The term “cases” is used because samples do not always (though they usually do) consist of people; sometimes the data are drawn from, for example, newspapers. Often, then, “cases” do consist of people, and the aim is to obtain a sample that is as representative as possible and reflects the population well. Sometimes a single “case” is investigated; this is called a “case study”. The intention behind this design is