AECT Handbook of Research

Table of Contents

40: Qualitative Research Issues and Methods: An Introduction for Educational Technologists

  Introduction
40.1 Introduction to Qualitative Research
40.2 Qualitative Research Methods
40.3 Analyzing Qualitative Data
40.4 Writing Qualitative Research Reports
40.5 Ethical Issues in Conducting Qualitative Research
40.6 Criteria for Evaluating Qualitative Studies
40.7 Learning More about Doing Qualitative Research
  References

40.2 QUALITATIVE RESEARCH METHODS

Designing qualitative studies is quite different from designing experimental studies. In fact, designs and methods are continually refined while the researcher conducts a qualitative study. As suggested by Jacobs (1987), the researcher initially chooses methods based on the questions to be addressed; however, the questions, issues, and topics of the study themselves may change as the researcher's conception of the reality of the "world" being studied changes. This may be uncomfortable for those experienced with more quantitative, experimental, or quasi-experimental research. However, most qualitative researchers recommend this process of continual refinement. Goetz and LeCompte (1984), for example, note that methods are "adjusted, expanded, modified, or restricted on the basis of information acquired during the mapping phase of field-work.... Only after final withdrawal from the field can researchers specify the strategies they actually used for a particular study" (p. 108).

Lincoln and Guba (1985) address the contradictory idea of "designing" a naturalistic study completely prior to beginning the study, calling this a "paradox" in that most funding agencies require specificity regarding methods, while methods in a good qualitative study may be expected to change as the study progresses. Erlandson et al. (1993) take the middle road. They say that the answer to whether a naturalistic study should be designed in advance is "Yes-to some extent" (p. 66). They recommend beginning the study by specifying a research problem, selecting a research site, developing working hypotheses, and using interactive processes to refine the research questions. They further suggest that the researcher plan for the stages of conducting the study. These may include negotiating entry to the site, planning for purposive (rather than random) sampling and for data collection, planning for data analysis, determining how quality will be ensured in the study, deciding how the findings of the study will be disseminated, and developing a logistical plan. [For further information regarding the logistical operations of field research, the reader may refer to Fiedler's book (1978), Field Research: A Manual for Logistics and Management of Scientific Studies in Natural Settings.] Erlandson et al. (1993) also recommend reviewing the design of the study regularly.

In determining what the research problem is, Bernard (1988, p. 11) suggests that the researcher ask himself or herself five questions:

  1. Does this topic (village, i.e., setting, school, organization, or institution, and data collection method) really interest me?
  2. Is this a problem that is amenable to scientific inquiry?
  3. Are adequate resources available to investigate this topic? (To study this population? To use this particular method?)
  4. Will my research question, or the methods I want to use, lead to unresolvable ethical problems? (Ethical issues will be addressed later in this chapter.)
  5. Is the topic (community, method) of theoretical interest?

Once a question or issue has been selected, the choice of qualitative methods falls roughly into the categories of observations, interviews, and document and artifact analysis. Qualitative methods, however, form continua on various dimensions, and researchers espouse many views of how methods may be categorized and conceptualized.

Pelto and Pelto (1978) in their frequently cited text on anthropological research methods remind us that the human investigator is the primary research instrument. These authors categorize methods as either verbal or nonverbal techniques. Verbal techniques include participant observation, questionnaires, and various forms of structured and unstructured interviews. Nonverbal techniques include observations and measures of interactions; proxemics, kinesics, and research involving videotaped observations; use of various types of technical equipment for collecting data; content analysis; and analysis of artifacts and records. Pelto and Pelto (1978) add that methods may be described as having an "emic" or insider's view, as in participant observation, versus an "etic" or outsider's view, as in nonparticipant stream-of-behavior analyses.

Other researchers use variations of these taxonomies. Goetz and LeCompte (1984) divide methods into interactive (participant observation and several types of interviews) versus noninteractive methods (forms of nonparticipant observation, as well as artifact collection and analysis). Lincoln and Guba (1985) classify methods as those that collect data from human sources (observations and interviews) as opposed to those that collect data from nonhuman sources (documents and records).

Other authors, however, note that methods can rarely be classified as simple dichotomies, such as interactive or not, in large part because the researcher is a human being and thus involved, and plays a role even in nonparticipant observation (see Atkinson & Hammersley, 1994). Bogdan and Biklen (1992) provide the example of the "participant/observer continuum" (p. 88), describing the ways observers who refrain from being overt participants may still interact to varying degrees with those subjects. Researchers who work using an ethnographic perspective consider all methods "doing fieldwork" (cf. Bogdan & Biklen, 1992). Similarly, Bernard (1982) calls participant observation the "foundation of anthropological research" (p. 148); some would say this deep, involved method of interacting with subjects defines qualitative research.

It is assumed that educational technologists will use methods ethically and with a view to doing quality research, but they may not always be bound by anthropological tradition. Educational technology is a different field, with questions to answer beyond those that interest anthropologists or sociologists. For instance, it is now possible to design instruction using a multitude of techniques and many delivery systems. As noted by McNeil and Nelson (1991) and Reeves (1986), many design factors contribute to the success of instruction using new technologies, such as distance education, interactive multimedia, and Internet-based delivery systems. Educational technologists may successfully use and adapt qualitative methods to investigate new and challenging questions.

In this chapter, we will discuss specific methods that may be called observations, interviews, and document and artifact analysis. As in all qualitative research, it is also assumed that educational technology researchers will use and refine methods with the view that these methods vary in their degree of interactiveness with subjects. Each of these methods, in their various forms, along with several research perspectives, will be examined in detail below (see 23.6 and 41.2 in this handbook for discussion of other aspects of qualitative methods).

40.2.1 Grounded Theory

Grounded theory is considered a type of qualitative methodology. Strauss and Corbin (1994), however, in their overview of grounded theory note that it is "a general methodology for developing theory that is grounded in data systematically gathered and analyzed" (p. 273), adding that it is sometimes called the constant comparative method and that it is applicable as well to quantitative research. In grounded theory, the data may come from observations, interviews, and videotape or document analysis, and, as in other qualitative research, these data may be considered strictly qualitative or may be quantitative. The purpose of the methodology is to develop theory, through an iterative process of data analysis and theoretical analysis, with verification of hypotheses ongoing throughout the study. A grounded theory perspective leads the researcher to begin a study without completely preconceived notions about what the research questions should be, assuming that the theory on which the study is based will be tested and refined as the research is conducted.

The researcher collects extensive data with an open mind. As the study progresses, he or she continually examines the data for patterns, and the patterns lead the researcher to build the theory. Further data collection leads to further refinement of the questions. The researcher continues collecting and examining data until the patterns repeat and relatively few, or no clearly new, patterns emerge. The researcher builds the theory from the phenomena, from the data, and the theory is thus built on, or "grounded" in, the phenomena. As Borg and Gall (1989) note, even quantitative researchers see the value of grounded theory and might use qualitative techniques in a pilot study, without completely a priori notions of theory, to develop a more-grounded theory on which to base later experiments.

A recent example of a grounded-theory approach in an educational technology study is that of Oliver (1992). This research investigated and described the activities used in a university televised distance education system, analyzing the use of camera techniques as they related to interaction in class. Oliver videotaped hours of two-way video instruction and analyzed the amount and kind of classroom interactions that occurred. She also examined and described the various television shots and transitions used. Outside observers also coded the videotapes. Using grounded-theory techniques, Oliver used the transcribed data and the emerging categories to create a theory of televised instruction. The theory involved the use of close-up camera techniques and the "clean-cut" transition to enhance interaction.

40.2.2 Participant Observation

Participant observation is a qualitative method frequently used in social science research. It is based on a long tradition of ethnographic study in anthropology. In participant observation, the observer becomes "part" of the environment, or the cultural context. The method usually involves the researcher's spending considerable time "in the field," as anthropologists do. Anthropologists typically spend a year or more in a cultural setting in order to understand the culture in depth, even when they begin the study with a broad overall research question. The hallmark of participant observation is interaction between the researcher and the participants. The subjects take part in the study to varying degrees, but the researcher interacts with them continually. For instance, the study may involve periodic interviews interspersed with observations so that the researcher can question the subjects and verify perceptions and patterns. These interviews may themselves take many forms, as noted in an upcoming section. For example, a researcher may begin by conducting open-ended unstructured interviews with several teachers to begin to formulate the research questions. This may be followed by a set of structured interviews with a few other teachers, based on results of the first series, forming a sort of oral questionnaire. Results of these interviews may then determine what will initially be recorded during observations. Later, after patterns begin to appear in the observational data, the researcher may conduct interviews asking the teachers about these patterns and why they think they are occurring, or if indeed those are categories of information. Similarly, a researcher might conduct videotaped observations of a set of teachers, analyze the tapes to begin to make taxonomies of behaviors, and then conduct interviews with the teachers, perhaps while they view the tapes together, to determine how the teachers themselves categorize these behaviors.
Thus, the researcher becomes a long-term participant in the research setting.

Educational researchers have come under some criticism, at times legitimately so, for observing in educational settings for very brief periods of time, such as once for a few hours, and then making sweeping generalizations about teachers, schools, and students from these brief "slices of time." Yet educational researchers typically do not have the resources to "live" in the observed settings for such extended periods of time as anthropologists do. There are several exceptions, including, but not limited to, Harry Wolcott's studies of a Kwakiutl village and school (1967) and of one year in the life of a school principal (1973); John Ogbu's (1974) ethnography of urban education; and Hugh Mehan's (1979) collaborative study of social interactions in a classroom, done with Courtney Cazden and her cooperating teacher, LaDonna Coles.

It is reasonable that fine educational technology research can be conducted using participant observation techniques, with somewhat limited research questions. Not every phenomenon can possibly be recorded. Most qualitative observational studies rely on the researcher's writing down what occurs in the form of extensive field notes. The researcher then analyzes these notes soon after observations are carried out, noting patterns of behaviors and events and phenomena to investigate in further observations. Still, the researcher is the instrument in most participant observations and, being human, cannot observe and record everything. Therefore, in most educational research studies, the investigator determines ahead of time what will be observed and recorded, guided but not limited by the research questions.

In an example of a limited participant observation case study, Robinson (1994) observed classes using "Channel One" in a midwestern middle school. While Robinson was not there for more than one semester, she did observe and participate in the class discussions for many hours of classroom instruction, as well as interview about 10% of the students. She did not focus on all school activities, or on all the categories of interaction within the classrooms, but focused her observations and field notes on the use of the televised news show and the reaction to it from students, teachers, administrators, and parents.

It should be noted that novice observers initially think they can avoid the observational limitations by simply videotaping everything that goes on in the setting, such as the classroom. The use of videotape and audiotape in data collection is useful, particularly in nonparticipant observational studies of particular behaviors and phenomena. However, videotaping everything is usually not a way to avoid defining or focusing research questions. For instance, without an exceptionally wide-angle lens, no video camera can record all that goes on in one classroom. If such a lens is used, then the wide view will preclude being able to see enough detail to understand much of what is going on. For example, computer screens will not be clearly visible, nor will specific nonverbal behaviors. In addition, if conversations are of interest in order to understand the types of behaviors students are engaged in, no one camera at the back of the room will be able to record all the conversations. Finally, those who have conducted microanalysis of videotaped classroom observations find it is not unusual to require 10 hours to analyze the behaviors and language recorded in 1 hour of videotape. The decision to videotape dozens of hours of classroom behaviors with one camera in the room might thus result in little useful data being collected, even after hundreds of hours of analysis. Videotape can successfully be used in data collection when the researcher knows what he or she wants to analyze. The preceding note of caution is just a reminder to the qualitative researcher that "shotgun" data collection is no substitute for determining ahead of time what the study is all about.

What can happen with videotape can also happen with written field notes. Trying to glean meaning by sifting through notebook after notebook of descriptions of classroom happenings, especially long after observations were made, is nearly impossible. What is needed is for observations to be at least loosely guided by purposes and questions. Even in studies using a grounded theory approach, observers generally analyze for patterns in observations throughout the entire data collection phase.

Spradley's (1980) book details how to conduct participant observations. He discusses the variety of roles the observer might take, noting that the observer becomes to varying degrees an "insider," in line with what Pelto and Pelto (1978) call the emic view. Spradley suggests that the research site and setting, of course, be selected to best answer the research questions, but with an eye toward simplicity, accessibility, possibility of remaining relatively unobtrusive, permissibleness, assurance that the activities of interest will occur frequently, and degree to which the researcher can truly become a participant.

Spradley (1980) provides specific techniques for conducting observations, for conducting iterative interviews with subjects, and for analyzing behaviors, especially language used by informants in interviews. In particular, he notes that cultural domains, or categories of cultural meaning, can be derived from interviews and observations with participants. Finally, he provides advice regarding how to analyze data and write the ethnography.

The stages of participant observation, from an anthropological perspective, have been delineated by Bernard (1988). He describes the excitement, and sometimes fear, of the initial contact period; the next stage, which is often a type of shock as one gets to know the culture in more detail; a period of intense data collection he identifies with discovering the obvious, followed by the need for a real break; a stage in which the study becomes more focused; followed by exhaustion, a break and frantic activity; and finally, carefully taking leave of the field setting.

Spradley (1980) advises that ethical issues be addressed throughout the study. These issues are common to most types of qualitative research methods. For instance, Spradley advises that the researcher consider the welfare and interests of the informants, that is, the collaborating subjects first. He says informants' rights, interests, and sensibilities must be safeguarded; informants should not be exploited. Subjects should be made aware of the purposes of the research study. Their privacy should be protected. Many of these issues are common to all types of research. However, Spradley adds that reports should be made available to informants, so that they too are participants in the study. In some of the interview techniques described later, in fact, verifying analyses and preliminary reports with subjects is one way to ensure the authenticity of the results and to delve deeper into the research questions. Ethical issues in qualitative research, as well as criteria for evaluating the rigor and quality of such research, will be discussed in further detail later in this chapter.

Borg and Gall (1979) discuss the types of questions one might address using participant observation techniques. These include such questions as who the participants are; what are their typical and atypical patterns of behavior; and where, when, how, and why the phenomena occur. In short, participant observation is often successfully used to describe what is happening in a context and why it happens. These are questions that cannot be answered in the standard experiment.

Another example of participant observation is described by Reilly (1994). His use of videotaping and video production instruction as a project in a California high school involved defining a new type of literacy, combining print, video, and computer technologies. Students produced videotapes that were then transferred to disc and made available for others' use. The research involved many hours of in-school data collection and analysis, and was very action oriented, with a product from the students as well as a written report from the researcher.

The work of Higgins and Rice (1991) is another excellent example of a qualitative study with an educational technology focus. These researchers investigated teachers' perceptions of testing. They used triangulation, by using a variety of methods to collect data; however, a key feature of the study was participant observation. Researchers observed six teachers for a sample of 10 hours each. Trained observers recorded instances of classroom behaviors that could be classified as assessment.

Another exemplary study that used multiple methods to triangulate data but that relied primarily on participant observation is that of Moallem (1994). This researcher investigated an experienced teacher's model of teaching and thinking by conducting a series of observations and interviews over a 7-month period. Using a constant comparative style, she analyzed the data, which allowed categories of the teacher's frames of reference, knowledge and beliefs, planning and teaching techniques, and reflective thinking to emerge. She then built a model of the teacher's conceptions. This study may also be called a form of case study.

The study and the triangulation of data and refinement of patterns using progressively more-structured interviews and multidimensional scaling will be described in more detail later in this chapter.

40.2.3 Nonparticipant Observation

Nonparticipant observation is one of several methods for collecting data considered to be relatively unobtrusive. Many recent authors cite the early work of Webb, Campbell, Schwartz, and Sechrest (1966) as laying the groundwork for use of all types of unobtrusive measures.

Several types of nonparticipant observation have been identified by Goetz and LeCompte (1984). These include stream-of-behavior chronicles, recorded in written narratives or using videotape or audiotape; proxemics and kinesics, that is, the study of uses of social space and movement; and interaction analysis protocols, typically in the form of observations of particular types of behaviors, categorized and coded for analysis of patterns. Bernard (1988) describes two types of nonparticipant observation, which he calls disguised field observation and naturalistic field experiments. In the first case, he cautions that care be taken that subjects are not harmfully deceived. Reflecting recent postmodern and constructivist (as well as deconstructionist) trends, Adler and Adler (1994) extend paradigms of observational research to include dramaturgical constructions of reality and auto-observation, as well as more typical ethnomethodology.

In nonparticipant observation, the observer does not interact to a great degree with those he or she is observing [as opposed to what Bernard (1988) calls direct, reactive observation]. The researcher primarily observes and records, and has no specific role as a participant. Usually, of course, the observer is "in" the scene, and thus affects it in some way; this must be taken into account. For instance, observers often work with teachers or instructors to have them explain to students briefly why the observer is there. Care should be taken once more not to bias the study. It is often desirable to explain the observations in general terms rather than to describe the exact behaviors being observed, so that participants do not naturally increase those behaviors. Some increase may occur; if the researcher suspects this, it would be appropriate to note it in the analyses and report.

As with participant observers, nonparticipant observers may or may not use structured observation forms, though they are more likely to do so. In this type of study, several trained observers often make brief sampled observations over periods of time, and observation forms help to ensure consistency of the data being recorded.

Nonparticipant observation is often used to study focused aspects of a setting, in order to answer specific questions within a study. This method can yield extensive detailed data, over many subjects and settings, if desired, in order to search for patterns, or to test hypotheses developed as a result of using other methods, such as interviews. It can thus be a powerful tool in triangulation. Observational data may be coded into categories, frequencies tabulated, and relationships analyzed, yielding quantitative reports of results.
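Where coded observational data are tabulated and summarized quantitatively, as described above, the bookkeeping can be done with a very simple script. The sketch below is only an illustration of this step; the behavior codes and records are invented, not drawn from any study cited in this chapter:

```python
# Tabulating coded observation records -- a hypothetical sketch.
# The observers, subjects, and behavior codes are invented.
from collections import Counter

# Each record: (observer, subject, coded behavior)
observations = [
    ("obs1", "teacher_A", "recall_question"),
    ("obs1", "teacher_A", "higher_order_question"),
    ("obs2", "teacher_B", "recall_question"),
    ("obs2", "teacher_B", "recall_question"),
]

# Frequency of each coded behavior across all records
frequencies = Counter(code for _, _, code in observations)

# Proportion of coded questions at the recall level
total = sum(frequencies.values())
recall_share = frequencies["recall_question"] / total
print(frequencies, f"{recall_share:.0%} recall-level")
```

From counts such as these, relationships among categories can then be examined with conventional statistical procedures, yielding the kind of quantitative report of qualitative data the text describes.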

Guidelines for conducting nonparticipant observation are provided by Goetz and LeCompte (1984), among others. They recommend that researchers strive to be as unobtrusive and unbiased as possible. They suggest verification of data by using multiple observers. The units of analysis, thus data to be recorded, should be specified before beginning; recording methods should be developed; strategies for selection and sampling of units should be determined; and finally all processes should be tested and refined, before the study is begun in earnest. (See 35.5 and 35.6 for a discussion of research on social interaction in cooperative learning settings.)
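One common way to implement the recommendation of verifying data with multiple observers is to compute an index of agreement on a shared sample of coded intervals. The sketch below is a hypothetical illustration (the codes and ratings are invented, and Cohen's kappa is a standard chance-corrected index, not one specifically prescribed by the authors cited here):

```python
# Interrater agreement for two observers who coded the same ten
# observation intervals -- hypothetical data, for illustration only.
from collections import Counter

rater_a = ["on_task", "on_task", "off_task", "helping", "on_task",
           "off_task", "on_task", "helping", "on_task", "off_task"]
rater_b = ["on_task", "off_task", "off_task", "helping", "on_task",
           "off_task", "on_task", "on_task", "on_task", "off_task"]

n = len(rater_a)

# Simple percent agreement: share of intervals coded identically
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa corrects for agreement expected by chance alone
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"agreement {p_observed:.0%}, kappa {kappa:.2f}")
```

By convention, kappa values above roughly .6 are often read as substantial agreement; low values suggest the coding scheme or observer training needs refinement before the study proceeds in earnest.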

Examples of studies using relatively nonparticipant observation are Savenye and Strand's (1989) initial pilot test, and Savenye's (1989) subsequent larger field test, of a science videodisc and computer-based curriculum. Of most concern during implementation was how teachers used the curriculum. Among other questions, researchers were interested in how closely teachers followed the teachers' guide, the types of questions they asked students when the system paused for class discussion, and what teachers added to or omitted from the curriculum. In the field test (Savenye, 1989), a careful sample of classroom lessons was videotaped and the data coded. For example, teacher questions were coded according to a taxonomy based on Bloom's (1984), and results indicated that teachers typically used the system pauses to ask recall-level rather than higher-level questions.

Analysis of the coded behaviors for what teachers added indicated that most of the teachers in the sample added examples to the lessons which would provide relevance for their own learners, and that almost all of the teachers added reviews of the previous lessons to the beginning of the new lesson. Some teachers seemed to feel they needed to continue to lecture their classes; therefore they duplicated the content presented in the interactive lessons.

Developers used the results of the studies to make changes in the curriculum and in the teacher training that accompanied it. Of interest in this study was a comparison of these varied teacher behaviors with the student achievement results. Borich (1989) found that learning achievement among students who used the interactive videodisc curriculum was significantly higher than among control students. Therefore, teachers had a great degree of freedom in using the curriculum, and the students still learned well.

If how students use interactive lessons is the major concern, researchers might videotape samples of students using an interactive lesson in cooperative groups, and code student statements and behaviors, as did Schmidt (1992). In a study conducted in a museum setting, Hirumi, Allen, and Savenye (1994) used qualitative methods to measure what visitors learned from an interactive videodisc-based natural history exhibit.

Nonparticipant observations may be used in studies that are primarily quantitative experimental studies in order to answer focused research questions about what learners do while participating in studies. For instance, a researcher may be interested in what types of choices learners make while they proceed through a lesson. This use of observations to answer a few research questions within experimental studies is exemplified in a series of studies of cooperative learning and learner control in television or computer-delivered instruction by Klein, Sullivan, Savenye, and their colleagues.

Jones, Crooks, and Klein (1995) describe the development of the observational instrument used in several of these studies. Klein and Pridemore (1994), in a study of cooperative learning in a television lesson, observed four sets of behaviors. These were coded as helping behaviors, on-task group behaviors, on-task individual behaviors, and off-task behaviors. In a subsequent experimental study using a computer-based lesson, Crooks, Klein, Jones, and Dwyer (1995) observed students in cooperative dyads, and recorded, coded, and analyzed helping, discussion, or off-task behaviors.

In another study of cooperative use of computer-based instruction (Wolf, 1994), a single behavior was found to be most related to increased performance: giving elaborated explanations, as defined by Webb (1983, 1991). Instances of this behavior, then, were recorded and analyzed.

An example of using technology to assist in recording and analyzing behaviors is shown in Dalton, Hannafin, and Hooper's (1989) study on the achievement effects of individual and cooperative use of computer-based instruction. These researchers audiotaped the conversations of each set of students as they proceeded through the instruction.

A variation on nonparticipant observation represents a blend with trace-behavior, artifact, or document analysis. This technique, called read-think-aloud protocols, takes the form of asking learners to describe what they do and why they do it, that is, their thoughts about their processes, as they proceed through an activity, such as a lesson. Smith and Wedman (1988) describe using this technique to analyze learner tracking and choices. Researchers may observe and listen as subjects participate, or they can use audiotape or videotape to analyze observations later. In either case, the resulting verbal data must be coded and summarized to address the research questions. Techniques for coding are described by Spradley (1980). Protocol analysis techniques (cf. Ericsson & Simon, 1984) could also be used on the resulting verbal data. These techniques also relate to analysis of documentary data, such as journals, discourse, recalled learning measures, and even forms of stories, such as life or career histories (see 23.6.3).

Many qualitative studies using observational techniques are case studies, and many in educational technology have involved the use of computers in schools. One such study was conducted by Dana (1994), who investigated how the pedagogical beliefs of one first-grade teacher related to her classroom curriculum and teaching practices. The teacher was an experienced and creative computer user who modeled the use of computers for her peers. Many hours of interviews and observations of the classes were made. Classroom videotapes were coded by outside reviewers who were trained to identify examples of the teacher's beliefs, exemplified in classroom practice. Her study provided insights into the methodology and teaching and learning in a computer-rich environment. She suggested changes that schools could make to encourage teachers to become better able to incorporate technology into their classrooms.

Another qualitative case study was conducted by Pitts (1993). She investigated students' organization and activities when they were involved in locating, organizing, and using information in the context of a research project in a biology class. Pitts relied on cognitive theory and information models in developing her theoretical construct. She described how students conducted their research, leading to their preparation and use of video to present the results of their research.

40.2.3.1. Scope. A study using observational techniques may investigate a broad set of research questions, such as how a reorganization has affected an entire institution, or it may be much more narrowly focused. The outcome of the study may take the form of a type of "rich story" that describes an institution or a classroom or another type of cultural setting. A more narrowly focused participant observation study, however, may investigate particular aspects of a setting, such as the use of an educational innovation or its effects on particular classroom behaviors.

While some qualitative researchers might believe that only studies rich in "thick description," as described by Lincoln and Guba (1985) (cf. Geertz, 1973), are legitimate, other researchers might choose to use qualitative techniques to yield quantitative data. This blend of qualitative and quantitative data collection is also being used in anthropological studies. An example of a more narrowly focused relatively nonparticipant observation study is the Savenye and Strand (1989) study described earlier, in which the researchers chose to focus primarily on what types of interactive exchanges occurred between students and teachers while they used an electronic curriculum.

40.2.3.2. Biases. Educational researchers who choose to do observational studies would do well to remember that although they do not spend years observing the particular instructional community, they may quickly become participants in that community. Their presence may influence results. Similarly, their prior experiences or upbringing may bias them initially toward observing or recording certain phenomena, and later in how they "see" the patterns in the data. In subsequent reports, therefore, this subjectivity should be honestly acknowledged, as is recommended in ethnographic research.

40.2.3.3. The Observer's Role. In participant observation studies, the researcher is in some way a legitimate member of the community. For instance, in the videodisc-science curriculum study mentioned above, Strand was the senior instructional designer of the materials, Savenye had been an instructional design consultant on the project, and both researchers were known to the teachers through their roles in periodic teacher-training sessions. Observers have limited roles to play in the setting, but they must be careful not to influence the results of the study, that is, not to make things happen that they want to happen. This may not seem difficult, but it can be; for example, a researcher may find himself or herself drawn to tutoring individuals in a classroom, which may bias the results of the study. Schmidt (1992) describes an example in which she had difficulty not responding to a student in class who turned to her for help in solving a problem; in fact, in that instance, she did assist. More difficult would be a researcher observing illegal behaviors by students who trust the researcher and have asked him or her to keep their activities secret. Potential bias may be handled by simply describing the researcher's role in the research report, but the investigator will want to examine periodically what his or her role is and what type of influence may result from it.

40.2.3.4. What Should Be Recorded. What data are recorded should be based on the research questions. For example, in a study of classroom behaviors, every behavior that instructors and students engage in could potentially be recorded and analyzed, but this can be costly in money and time and is often not possible. A researcher using a completely "grounded-theory" approach would spend considerable time in the field recording as much as possible. However, another researcher might legitimately choose to investigate more narrowly defined research questions and primarily collect data related to those questions. Again, what is excluded may be as important as what is included.

Therefore, even in a more-focused study, the researcher should be observant about other phenomena occurring and be willing to refine data collection procedures to collect emerging important information, or to change the research questions as the data dictate, even if this necessitates added time collecting data.

40.2.3.5. Sampling. In observational research, sampling becomes not random but purposive (Borg & Gall, 1989). For the study to be valid, the reader should be able to believe that a representative sample of the individuals involved was observed. The "multiple realities" of any cultural context should be represented. A researcher studying the impact of an educational innovation, for instance, would never be satisfied with observing only the principals in the schools. Teachers and students using the innovation would obviously need to be observed. What is not so obvious is that it is important in this example to observe novice teachers, more experienced teachers, those who are comfortable with the innovation, those who are not, and those who are downright hostile to it. Parents might also be observed working with their youngsters or interacting with the teachers. How all these individuals use the innovation becomes the "reality of what is," rather than how only the most enthusiastic teachers or experienced technologists use it.

40.2.3.6. Multiple Observers. If several observers are used to collect the data, and their data are compared or aggregated, problems with reliability of data may occur. Remember that human beings are the recording instruments, and they tend to see and subsequently interpret the same phenomena in many different ways. It becomes necessary to train the observers and to ensure that they are recording the same phenomena in the same ways. This is not as easy as it may sound, although it can be accomplished with some effort. These efforts should be briefly described in the final research report, as this description will illustrate why the data may be considered consistent.

One successful example of a method to train observers has been used by Klein and his colleagues in several of the studies described earlier (cf. Klein & Pridemore, 1994; Klein, Erchul & Pridemore, 1994). In the study investigating effects of cooperative learning versus individual learning structures, Crooks et al. (1995) determined to observe instances of cooperative behaviors while students worked together in a computer-based lesson. Several observers were trained using a videotape made of a typical cooperative-learning group, with a good-quality audio track and with close views of the computer screens. Observers were told what types of cooperative behaviors to record, such as instances of asking for help, giving help, and providing explanations. These behaviors were then defined in the context of a computer-based lesson and the observation record form reviewed. Then observers all watched the same videotape and recorded instances of the various cooperative behaviors in the appropriate categories. The trainer and observers next discussed their records, and observers were given feedback regarding any errors. The following segment of videotape was viewed, and the observers again recorded the behaviors. The training was repeated until observers were recording at a reliability of about 95%. Similarly, Wolf (1994) in her study trained observers to record instances of just one behavior, providing elaborated explanations.

It should be noted that in studies in which multiple observers are used and behaviors counted or categorized and tallied, it is desirable to calculate and report interrater reliability. This can easily be done by having a number of observers record data in several of the same classroom sessions or in the same segments of tape, and then computing the degree of their agreement in the data.
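The agreement computation itself is straightforward. The sketch below, a minimal illustration with invented observation codes, computes simple percent agreement between two observers and a chance-corrected index (Cohen's kappa, a common choice, though the chapter does not prescribe a particular statistic):

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of observations both coders assigned the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance, using the standard kappa formula."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: sum over categories of the product of each
    # coder's marginal proportions.
    p_chance = sum(freq_a[c] * freq_b[c]
                   for c in set(coder_a) | set(coder_b)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Two observers' codes for the same ten classroom events
# (invented data for illustration).
obs_1 = ["help", "help", "explain", "offtask", "help", "explain",
         "help", "offtask", "explain", "help"]
obs_2 = ["help", "help", "explain", "help", "help", "explain",
         "help", "offtask", "explain", "help"]

print(f"agreement = {percent_agreement(obs_1, obs_2):.0%}")  # 90%
print(f"kappa     = {cohens_kappa(obs_1, obs_2):.2f}")       # 0.83
```

Kappa is usually preferred over raw percent agreement when some categories occur far more often than others, since frequent categories inflate agreement expected by chance.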

Other references are also available for more information about conducting observational studies in education, for example, Croll's (1986) book on systematic classroom observation (see 41.2.4).

40.2.4 Interviews

In contrast with the relatively noninteractive, nonparticipant observation methods described earlier, interviews represent a classic qualitative research method that is directly interactive (see 41.2.2). Interview techniques, too, vary in how they may be classified, and again, most vary in certain dimensions along continua, rather than being clearly dichotomous. For instance, Bernard (1988) describes interview techniques as being structured or unstructured to various degrees. He describes the most informal type of interviewing, followed by unstructured interviewing that has some focus. Next, Bernard mentions semistructured interviewing and finally structured interviews, typically involving what he calls an interview schedule, which others call interview protocols, that is, sets of questions, or scripts. Fontana and Frey (1994) expand this classification scheme by noting that interviews may be conducted individually or in groups. Again, exemplifying modern trends in qualitative research, these authors add that unstructured interviews now may include oral histories, and creative and postmodern interviewing, the latter of which may include use of visual media and polyphonic interviewing, that is, almost verbatim reporting of respondents' words, as well as gendered interviewing in response to feminist concerns.

Goetz and LeCompte (1984) note that other classification schemes may include scheduled versus nonscheduled or standardized versus nonstandardized. However, their division of interview techniques into key-informant interviews, career histories, and surveys represents a useful introduction to the range of interviewing techniques.

An interview is a form of conversation in which the purpose is for the researcher to gather data that address the study's goals and questions. A researcher, particularly one who will be in the setting for a considerable period of time or one doing participant observations, may choose to conduct a series of relatively unstructured interviews that seem more like conversations with the respondents. Topics will be discussed and explored in a somewhat loose but probing manner. The researcher may return periodically to continue to interview the respondents in more depth, for instance to focus on questions further or to triangulate with other data.

In contrast, structured interviews may be conducted in which the researcher follows a sort of script of questions, asking the same questions, and in the same order, of all respondents. Goetz and LeCompte (1984) consider these to be surveys, while other authors do not make this distinction, and some consider surveys and questionnaires to be instruments respondents complete on their own without an interview.

Interviews or a series of interviews may focus on aspects of a respondent's life and represent a standard technique in anthropology for understanding aspects of culture from an insider's view. Fontana and Frey (1994) call these oral histories. Goetz and LeCompte (1984) note that for educators such interviews, which focus on career histories, may be useful for exploring how and why subjects respond to events, situations, or, of interest to educational technologists, particular innovations.

Guidelines for conducting interviews are relatively straightforward if one considers that both the researcher, as data-gathering instrument, and the respondents are human beings with their various strengths and foibles at communicating. The cornerstone is to be sure that one truly listens to respondents and records what they say, rather than the researcher's perceptions or interpretations of it. This is a good rule of thumb in qualitative research in general. It is best to maintain the integrity of the raw data, using respondents' words and including quotes liberally. Most researchers, as a study progresses, also maintain field notes that contain interpretations of patterns, to be refined and investigated on an ongoing basis. Bogdan and Biklen (1992) summarize these ideas:

Good interviews are those in which the subjects are at ease and talk freely about their points of view.... Good interviews produce rich data filled with words that reveal the respondents' perspectives (p. 97).

Bernard (1988) suggests letting the informant lead the conversation in unstructured interviews, and asking probing questions that serve to focus the interview at natural points in the conversation. While some advocate only taking notes during interviews, Bernard stresses that memory should not be relied on, and tape recorders should be used to record exact words. This may be crucial later in identifying subjects' points of view and later in writing reports.

Ensuring the quality of a study by maintaining detailed field journals is also emphasized by Lincoln and Guba (1985). They suggest keeping a daily log of activities, a personal log, and a methodological log. They add that safeguards should be implemented to avoid distortions that result from the researcher's presence, and bias that arises from the researcher, respondents, or data-gathering techniques. They add that participants should be debriefed after the study.

Stages in conducting an interview are described by Lincoln and Guba (1985). They describe how to decide whom to interview, how to prepare for the interview, what to say to the respondent as one begins the interview [Bogdan & Biklen mention that most interviews begin with small talk (1992)], how to pace the interview and keep it productive, and, finally, how to terminate the interview and gain closure.

One example of the use of interviews is described by Pitlik (1995). As an instructional designer, she used a case study approach to describe the "real world" of instructional design and development. Her primary data source was a series of interviews with individuals involved in instructional design. She conducted group interviews with members of the International Board of Standards for Performance and Instruction and individual interviews with about 15 others. From the data she collected, which included interview transcripts and literature on the profession, she addressed questions about the profession, professional practices, and the meaning of the term instructional designer. She coded her data and found that the themes that emerged described four distinct types of practitioners. Her results led to recommendations for programs that train instructional designers, as well as for practitioners.

Many techniques for structured interviewing, old, adapted, and new, continue to evolve. For example, Goetz and LeCompte (1984) describe confirmation instruments, participant-construct instruments, and projective devices. Confirmation instruments verify the applicability of data gathered from key-informant interviews or observations across segments of the population being studied. (This type of structured interview could also be adapted as a questionnaire or survey for administering to larger subject groups; see 41.2.1.) Participant-construct instruments may be used to measure the degrees of feeling that individuals have about phenomena, or to have them classify events, situations, techniques, or concepts from their perspective. Goetz and LeCompte say that this technique is particularly useful in gathering information about lists of things, which respondents can then be asked to classify.

One example of such a use of interviews is in the Higgins and Rice (1991) study mentioned earlier. At several points during the study, teachers were asked to name all the ways they test their students. In informal interviews, they were asked about the types of assessment observers had recorded in their classrooms. The researchers later composed lists of the types of tests teachers mentioned and asked them to sort the assessment types into those most alike. Subsequently, multidimensional scaling was used to analyze these data, yielding a picture of how these teachers viewed testing.

A third type of structured interview mentioned by Goetz and LeCompte is the interview using projective techniques. Photographs, drawings, or other visuals or objects may be used to elicit individuals' opinions or feelings, and may also help the researcher clarify what is going on in the situation. Pelto and Pelto (1978) describe traditional projective techniques in psychology, such as the Rorschach inkblot test and the Thematic Apperception Test. Spindler (1974), for example, used drawings to elicit parents', teachers', and students' conceptions of the school's role in a German village. McIsaac, Ozkalp, and Harper-Marinick (1992) effectively used projective techniques with subjects viewing photographs.

Types of questions to be asked in interviews are also categorized in a multitude of ways. Goetz and LeCompte (1984) describe these as "experience, opinion, feeling questions, hypothetical questions, and propositional questions" (p. 141). Spradley (1980) provides one of the more extensive discussions of questions, indicating that they may be descriptive, structural, or contrast questions. He further explains ways to conduct analyses of data collected through interviews and observations. In an earlier work (1972), he explicates how cultural knowledge is formed through symbols and rules, and describes how language can be analyzed to begin to form conceptions of such knowledge.

Of particular use to educational technologists may be the forms of structured interviews that Bernard (1988) says are used in the field of cognitive anthropology. Educational technologists and psychological researchers are interested in how learners learn, and how they conceive of the world, including technological innovations. Some of the techniques that Bernard suggests trying out include having respondents do free listing of taxonomies, as was done in the Higgins and Rice (1991) study of teachers' conceptions of testing. The items listed can later be ranked or sorted by respondents in various ways. Another technique is the frame technique or true/false test. After lists of topics, phenomena, or things are developed through free listing, subjects can be asked probing questions, such as, "Is this _ an example of _?" Triad tests are used to ask subjects to sort and categorize things that go together or do not. Similarly, respondents can be asked to do pile sorting, to generate categories of terms and how they relate to each other, forming a type of concept map. Bernard adds that other types of rankings and ratings can also be done.
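Pile-sort data are typically aggregated into a pairwise similarity matrix: the proportion of respondents who placed each pair of items in the same pile. Such a matrix is the usual input to scaling or clustering procedures like the multidimensional scaling mentioned above. The sketch below is a minimal illustration; the item names and sorts are invented.

```python
from itertools import combinations

# Hypothetical pile-sort data: each respondent grouped the same six
# assessment types (names invented for illustration) into piles of
# "things that go together".
pile_sorts = [
    [{"quiz", "exam"}, {"essay", "project"}, {"oral", "observation"}],
    [{"quiz", "exam", "essay"}, {"project", "oral", "observation"}],
    [{"quiz", "exam"}, {"essay", "project", "oral"}, {"observation"}],
]

def similarity(item_x, item_y, sorts):
    """Proportion of respondents who put the two items in the same pile."""
    together = sum(
        any(item_x in pile and item_y in pile for pile in respondent)
        for respondent in sorts
    )
    return together / len(sorts)

# Build the full pairwise similarity matrix over all items.
items = sorted({i for respondent in pile_sorts
                  for pile in respondent for i in pile})
matrix = {
    pair: similarity(pair[0], pair[1], pile_sorts)
    for pair in combinations(items, 2)
}
print(matrix[("exam", "quiz")])  # sorted together by all respondents -> 1.0
```

Pairs with similarity near 1.0 were grouped together by nearly everyone; a scaling or clustering routine applied to this matrix produces the kind of "picture" of respondents' conceptual categories described above.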

To learn further techniques and the skills needed to use them, the reader may refer to Weller and Romney's book, Systematic Data Collection (1988). For a more in-depth perspective on analyzing verbal protocols and interview data for insight into cognitive processes, one may look to several chapters in the Spradley (1972) work mentioned earlier. For instance, Bruner (1972) discusses categories and cognition, and Frake (1972) presents uses of ethnographic methods to study cognitive systems. More recent works include work in semiotics (see 16.4.2.1; cf. Manning & Cullum-Swan, 1994).

The earlier-mentioned study by Moallem (1994) relied heavily on the use of interviews, along with participant observation, to build the model of an experienced teacher's teaching and thinking. Another good study in educational technology that used interview techniques as one of several methods to gather data is that of Reiser and Mory (1991). These researchers investigated the systematic planning techniques of two experienced teachers. The teachers were administered a survey at the beginning of the year and were interviewed early in the year about how they planned and designed lessons. They were subsequently observed once a week while they taught the first science unit of the year.

Before and after each observation, the teachers were interviewed in depth. In addition, copies of their written plans were collected (a form of document analysis, to be discussed later in this chapter). Thus a deep case study approach was used to determine the ways experienced teachers plan their instruction. In this study, the teacher who had received instructional design training appeared to use more systematic planning techniques, while the other teacher's planning focused on instructional activities rather than objectives.

As with observations, interviews may be conducted as part of an experimental, quantitative study in educational technology. For instance, Nielsen (1990) conducted an experimental study to determine the effects of informational feedback and second attempt at practice on learning in a computer-assisted instructional program. He incorporated interviews with a sample of the learners in order to further explain his findings. He found that some of his learners who received no feedback realized their performance depended more on their own hard work, so they took longer to study the material than did those who determined that they would receive detailed informational feedback, including the answers.

Other detailed examples of how interview techniques may be used are illustrated in Erickson and Shultz's work, The Counselor as Gatekeeper (1982).

40.2.5 Document and Artifact Analysis

Beyond nonparticipant observation, many unobtrusive methods exist for collecting information about human behaviors. These fall roughly into the categories of document and artifact analyses but overlap with other methods. For instance, the verbal or nonverbal behavior streams produced during videotaped observations may be subjected to intense microanalysis to answer an almost unlimited number of research questions. Content analysis, as one example, may be done on these narratives. In the Moallem (1994), Higgins and Rice (1991), and Reiser and Mory (1991) studies of teachers' planning, thinking, behaviors, and conceptions of testing, documents developed by the teachers, such as instructional plans and actual tests, were collected and analyzed.

This section will present an overview of unobtrusive measures. [Readers interested in a more detailed discussion of analysis issues may refer to DeWalt and Pelto's (1985) work, Micro and Macro Levels of Analysis in Anthropology, as well as other resources cited in this chapter.]

Goetz and LeCompte (1984) define artifacts of interest to researchers as things that people make and do. The artifacts of interest to educational technologists are often written, but computer trails of behavior are becoming objects of analysis as well. Examples of artifacts that may help to illuminate research questions include textbooks and other instructional materials, such as media materials; memos, letters, and, now, e-mail records, as well as logs of meetings and activities; demographic information, such as enrollment, attendance, and detailed information about subjects; and personal logs kept by subjects. Webb et al. (1966) add that archival data may be running records, such as those in legal records or the media, or they may be episodic and private, such as records of sales and other business activities and written documents.

Physical traces of behaviors may be recorded and analyzed. Webb et al. (1966) describe these as including types of wear and tear that may appear on objects or in settings naturally, as in police tracing of fingerprints or blood remains.

In recent studies in educational technology, researchers are beginning to analyze the patterns of learner pathways and the decisions learners make as they proceed through computer-based lessons. Based on the earlier work of Hicken, Sullivan, and Klein (1992), Dwyer and Leader (1995) describe the development of a HyperCard-based researcher's tool for collecting data from counts of keypresses to analyze categories of choices made within computer-based instruction, such as the mean numbers of practice or example screens chosen. In a recently conducted study, Savenye, Leader, Dwyer, Jones, and Schnackenberg (1996) used this tool to collect information about the types of choices learners made in a fully student-controlled, computer-based learning environment. In a similar use of computers to record data, Shin, Schallert, and Savenye (1994) analyzed the paths that young learners took when using a computer-based lesson to determine the effects of advisement in a free-access, learner-controlled condition.
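The sketch below illustrates the general shape of this kind of path logging: each screen a learner chooses is recorded in order, then tallied by category. The class and method names are invented for illustration and do not represent the interface of the actual HyperCard-based tool described above.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class LessonLog:
    """Minimal sketch of a learner-path log for a computer-based lesson."""
    events: list = field(default_factory=list)

    def record(self, screen_type, screen_id):
        """Store each screen the learner chooses, preserving order."""
        self.events.append((screen_type, screen_id))

    def summary(self):
        """Counts per screen category, e.g., practice vs. example screens."""
        return Counter(screen_type for screen_type, _ in self.events)

# A hypothetical learner's path through a student-controlled lesson.
log = LessonLog()
for choice in [("example", 1), ("practice", 1), ("practice", 2),
               ("example", 2), ("review", 1)]:
    log.record(*choice)

print(log.summary())  # how many screens of each type were chosen
```

Because the full ordered event list is kept alongside the category counts, the same log supports both simple tallies (mean numbers of practice or example screens chosen) and sequence analyses of the pathways themselves.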

As noted earlier, the records made using videotape or audiotape to collect information in nonparticipant observation may be considered documentary data and may be subjected to microanalysis.

Guidelines for artifact collection are provided by Goetz and LeCompte (1984). They identify four activities involved in this type of method: "locating artifacts, identifying the material, analyzing it, and evaluating it" (p. 155). They recommend that the more informed the researcher is about the subjects and setting, the more useful artifacts may be identified and the more easily access may be gained to those artifacts.

Hodder (1984) suggests that from artifacts, a theory of material culture may be built. He describes types of objects and working with respondents to determine how they might be used. (Anyone who has accompanied older friends to an antique store, especially one that includes household tools or farm implements from bygone eras, may have experienced a type of interactive description and analysis of systems and culture of the past based on physical artifacts.) Hodder continues with discussion of the ways that material items in a cultural setting change over time and reflect changes in a culture.

Anthropologists have often based investigations about the past on artifacts such as art pieces, analyzing these alone, or using them in concert with informant and projective interviews. As noted in some of the current debate in anthropology and regarding museum installations that interpret artifacts, the meaning of artifacts is often intensely personal and subjective, so that verification of findings through triangulation is recommended. [The reader intrigued with these ideas may wish to refer to some of the classic anthropological references cited here, or to current issues of anthropology and museum journals. Two interesting examples appear in the January 1995 issue of Smithsonian magazine. Secretary I. Michael Heyman discusses the many points of view represented in the public's perceptions of the initial form of the installation of the Enola Gay exhibit. In a different vein, Haida Indian artist Robert Davidson describes how he used art and dance and song to help elders in his tribe remember the old ways and old tales (Kowinski, 1995).]

Content analysis of prose in any form may also be considered to fall into this artifact-and-document category of qualitative methodology. Pelto and Pelto (1978) refer to analysis of such cultural materials as folktales, myths, and other literature, although educational technologists would more likely analyze, for example, content presented in learning materials. For more information about content analysis see, for instance, Manning and Cullum-Swan (1994).

This concludes our introduction to general methods in conducting qualitative research. We can look forward to other methods being continually added to the repertoire.


Updated August 3, 2001
Copyright © 2001
The Association for Educational Communications and Technology
