23.6 Methodological Issues

Performing studies with REALs raises two important issues in instructional technology research methodology. First, the movement from experimental research methods toward qualitative research methods is changing the way researchers conduct their studies and report their results. Second, the field of instructional technology has had a fixation on research about media rather than research about methods and learning. (Part VII of this text covers these and other issues more thoroughly.)

23.6.1 Experimental versus Qualitative Issues

Much of the current debate in educational research centers on experimental versus qualitative and naturalistic strategies. While we see value in both kinds of research, we argue that professional researchers need to expand their conceptualizations of research methodologies when investigating the effects of REALs. Bissex (1987) holds that

The assumption of external and controllable causation would seem to underlie all studies based on experimental and control groups. Yet observational studies of children's language development and preschool literacy have revealed children to be creators rather than mere recipients of their learning (p. 12).

Bissex's statement mirrors the issues we raised at the beginning of the chapter when discussing conventional versus REAL teaching strategies. A conventional classroom is didactic, based on the assumption that learners receive their learning from a controlling teacher. A REAL classroom is founded on the assumption that learners are the managers and creators of their own learning under the guidance of a teacher. Bissex goes on to argue that experimental research focuses on issues of teaching and control, while observational studies focus on issues of learning and internal processes. It seems logical, then, to expect that new teaching methods demand new research methods.

Another factor in the methodological debate is the issue of generalizability. Bissex again argues that experimental research has limited generalizability because of the artificial nature of the controls placed on treatment interventions. Experimental researchers try to isolate some factors and eliminate others; however, this does not reflect the natural events within classrooms. Qualitative research, including observational and case-study methods, yields more generalizable theories because its results are based on natural classroom and learning events that encompass many of the factors influencing learning and student behavior.

Bissex also argues that researchers must come to know the people they are working with quite closely. This is contrary to experimental studies, in which the researcher attempts to maintain distance from the subjects for fear of contaminating the results of the experiment. In observational techniques, researchers must take time to see their students in many situations and to earn their confidence in order to gain valid and realistic perspectives on how the many factors in instruction affect learners.

Again, we maintain that there is a role for both kinds of methodologies, usually within the same study. However, we hold that Bissex's comments are valid and present good reasons for adopting more qualitative research methodologies. REAL classrooms are complex, with many factors in operation. They are, in essence, uncontrollable. To understand what is happening with students and teachers in the classroom, we cannot isolate or eliminate variables, because doing so would destroy the natural environment. The results would be meaningless.

As an example of some of the above ideas, we can examine the 1992 study by Sigurdson and Olson cited earlier. While it offers some interesting findings in the area of individual differences, it is also fraught with the difficulties and weaknesses inherent in conducting experimental research in a classroom. The purpose of their study was to compare three teaching approaches: one emphasizing meaning, another emphasizing other skills and procedures in mathematics, and a third using a traditional approach. Although their basic problem is worth investigating, we believe that they severely limited the generalizability of their results by using the experimental approach. To achieve consistency among treatment groups and teachers, they varied seatwork but tried to keep homework, class talk, and questioning the same. This artificially constrains the use of the meaning-treatment methods. It is also unlikely that all of those factors were adequately controlled across the several treatment groups and teachers, thereby confounding the results. (In fact, they had to throw out some results because some teachers did not follow the program adequately.)

They used a posttest and a retention test composed of multiple-choice questions covering computation, comprehension, and problem solving. All treatment groups took the same test. Again, this is a necessity in an experimental study but hardly reflective of actual classrooms. REALs use authentic assessment strategies, matching assessment with the objectives and methods. It is unlikely that an actual classroom using a meaning approach would use conventional testing strategies; the experimental nature of the study violates the teaching principle of evaluating students in ways that match the ways they are taught. So, although Sigurdson and Olson found some differences among the groups, the generalizability of those findings is limited by the artificial constraints placed on the teaching and assessment methods.

23.6.2 Media Versus Method

Another major research issue, especially for instructional technologists, is the question of media versus method. Clark (1994) states that instructional methods are confounded with media and that it is instructional methods that influence learning, not the media. He contends that media are simply interchangeable delivery platforms: any necessary teaching method can be designed into a variety of media. Clark states that:

... if learning occurs as a result of exposure to any media, the learning is caused by the instructional method embedded in the media presentation. Method is the inclusion of one of a number of possible representations of a cognitive process or strategy that is necessary for learning but which students cannot or will not provide for themselves (p. 26).

We agree with Clark and believe that instructional technology research needs to focus more on processes and less on media. One need only attend conferences regularly to see that our field runs through fads, or favorite media, in cycles. At the time of this writing, hypermedia and multimedia are popular applications and foci of research. Yet both are simply kinds of media used to support instructional methods.

Researchers and developers sometimes argue that certain kinds of media encourage different kinds of reasoning. For example, researchers using hypermedia applications sometimes claim that hypermedia affects learners' reasoning and processing activities because of the nonlinear construction of hypermedia applications (Nielsen, 1990). Yet the media are the same as those used in other applications. If hypermedia or multimedia encourage a specific kind of reasoning, it is because of the methods designed into the program, such as using hypermedia to apply cognitive flexibility theory (Borsook & Higginbotham-Wheat, 1992; Jacobson & Spiro, 1991; Jonassen, Ambruso & Olesen, 1992). Clark (1994) would contend, then, that cognitive flexibility theory could be applied equally well through other media, including video, text, or illustrations. Studies that show a specific medium or set of media attributes causing learning are at the same time confounded, because they have failed to control for instructional method. Instructional method is always present, while a medium or media attributes are surface features that are replaceable.

This is an important issue for instructional technologists because it delineates what is important in a learning environment. Whether or not a REAL is supported by technology is not what matters. Research that focuses on the use of specific media diverts attention from what is actually important in a REAL: the instructional methods that permit students to take initiative for their own learning within authentic contexts. Clark (1994) describes this as an economic issue:

The question is critical because if different media or attributes yield similar learning gains and facilitate achievement of necessary performance criteria, then in a design science or an instructional technology, we must always choose the less expensive way to achieve a learning goal. We must also form our theories around the underlying structural features of the shared properties of the interchangeable variables and not base theory on the irrelevant surface features (p. 22).

23.6.3 Methodology Strategies

Research strategies, then, need to emphasize learning processes. We therefore need to find and develop strategies that let us examine the cognitive processes of learning rather than just its products. Below, we describe several such strategies, in the hope of enticing you to investigate their applicability to your own research programs.

23.6.3.1. Think-Alouds. One strategy gaining increasing use is the think-aloud protocol. Ericsson and Simon (1984) demonstrated that protocol analysis can produce detailed, quantifiable, and reproducible data on human thought processes. Think-alouds ask learners literally to think aloud while solving a problem, illuminating their understanding of concepts and the presence of any misconceptions. Pellegrino et al. (1991), Stoiber (1991), and Palincsar and Klenk (1992) all used some form of think-aloud in the studies reviewed above.
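To suggest how think-aloud data become quantifiable, the following sketch tallies transcript segments that a human rater has already coded. It is a minimal illustration only: the coding scheme (plan, monitor, misconception) and the transcript excerpts are hypothetical, not data or categories from any of the studies cited here.

```python
# Minimal sketch of quantifying think-aloud data in the spirit of protocol
# analysis. The codes and transcript segments below are hypothetical.
from collections import Counter

# Each transcript segment has already been assigned a code by a human rater:
# "plan" (planning a step), "monitor" (checking progress),
# "misconception" (an incorrect belief voiced aloud).
coded_segments = [
    ("I'll start by finding the area of the base", "plan"),
    ("Wait, that number seems too big", "monitor"),
    ("Multiplying always makes things bigger", "misconception"),
    ("So next I divide by the height", "plan"),
]

# Tally code frequencies to produce the detailed, quantifiable data
# that protocol analysis promises.
code_counts = Counter(code for _, code in coded_segments)
total = sum(code_counts.values())

for code, count in code_counts.most_common():
    print(f"{code}: {count} segment(s), {count / total:.0%} of protocol")
```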

23.6.3.2. Written-Question Generation. Torney-Purta (1990) describes the use of question-generation techniques, asking students to generate four questions dealing with a problem. The questions indicate the depth of understanding and often illuminate cause-and-effect relationships. Students who generate what-if questions usually demonstrate a deeper understanding of the content because they can wonder about variables and the relationships among them.
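A researcher tallying such questions might begin with a crude sort of what-if questions from factual-recall ones, as in the sketch below. The questions and the keyword matching are hypothetical; genuine classification of question depth would require human judgment, not pattern matching.

```python
# Minimal sketch of sorting student-generated questions by surface type.
# The questions and the "what if" keyword heuristic are hypothetical.
questions = [
    "What if the river had flooded earlier in the season?",
    "When did the settlers arrive?",
    "What if the town had been built farther from the water?",
    "Who was the mayor at the time?",
]

# Treat what-if questions as a rough indicator of reasoning about variables
# and their relationships; treat the rest as factual recall.
what_if = [q for q in questions if q.lower().startswith("what if")]
factual = [q for q in questions if not q.lower().startswith("what if")]

print(f"{len(what_if)} of {len(questions)} questions are what-if questions")
print(f"{len(factual)} appear to be factual-recall questions")
```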

23.6.3.3. Ranking and Classification Techniques. Another technique used by Torney-Purta (1990) is a ranking or classification task. She asked students to rank items within a group according to a specific dimension and to give reasons for the rankings (combining this with the think-aloud protocol). Classification indicates depth of understanding of the dimensions used for ranking and can point up misconceptions. Palincsar and Klenk (1992) also used a classification technique to study reciprocal teaching. Students sorted pictures into two piles and talked aloud to explain why one picture belonged with the others. This indicated the kind of dimension (i.e., physical or thematic) that students were using and the depth of understanding related to the text they read.
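One way to quantify a ranking task is to compare a student's ranking against an expert's using Spearman's rank correlation, as sketched below. The items and rankings are hypothetical illustrations, and the original task also asked students to explain their reasoning, which no correlation coefficient captures.

```python
# Minimal sketch comparing a student's ranking of items against an expert
# ranking with Spearman's rank correlation. Items and ranks are hypothetical.

def spearman(rank_a, rank_b):
    """Spearman's rho for two rankings with no ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - (6 * d_squared) / (n * (n**2 - 1))

items = ["drought", "tariffs", "migration", "railroads"]
expert_rank = [1, 3, 2, 4]   # expert's importance ranking of the items
student_rank = [2, 3, 1, 4]  # one student's ranking of the same items

rho = spearman(expert_rank, student_rank)
print(f"Agreement with expert ranking: rho = {rho:.2f}")  # rho = 0.80
```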

23.6.3.4. Concept Maps. Robertson (1990) and Dunlap and Grabinger (1992) have shown that concept maps, generated either by students or by computer programs, can represent the depth and complexity of students' understanding of related concepts. They provide a tool for students to compare and analyze their own thinking. The maps can be combined with think-alouds by having students try to explain their own conceptions.
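A concept map can be treated as a simple graph, which makes crude structural indicators easy to compute. The sketch below counts concepts and links in a hypothetical map; the concepts, the links, and the use of a link-to-node ratio as a complexity proxy are illustrative assumptions, not measures taken from the cited studies.

```python
# Minimal sketch of a student's concept map stored as a directed graph
# (concept -> list of linked concepts). The map content is hypothetical.
concept_map = {
    "photosynthesis": ["sunlight", "chlorophyll", "glucose"],
    "glucose": ["energy"],
    "chlorophyll": ["leaves"],
    "sunlight": [],
    "energy": [],
    "leaves": [],
}

nodes = len(concept_map)
links = sum(len(targets) for targets in concept_map.values())

# The link-to-node ratio is one rough proxy for how interrelated the
# student's concepts are; richer maps tend to show more cross-links.
print(f"{nodes} concepts, {links} links, ratio {links / nodes:.2f}")
```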

23.6.3.5. Analysis of Recordings. Stoiber (1991) used an individual video-stimulated interview technique. She had students reflect on a videotaped performance to examine their ability to use concepts and understandings to critique the tape. The interviews indicated how creative the students were in applying classroom management techniques.

23.6.3.6. Dependent Measures and Assessment Foci. What do the above strategies aim to measure? In general terms, we present four complexities defined by Torney-Purta (1990). These "complexities" (generalized by the author; the original article focused on social studies) define areas in which we hope any REAL is successful. These areas provide assessment goals and emphases for research studies; they suggest dependent measures and assessments for gauging the effectiveness of REALs and the growth of students' problem-solving abilities.

  1. Students with complex structures should be able to visualize or access a variety of different solutions to a problem. A research study could give students opportunities to explore different options and observe how their thinking changes as a result of the exploration and experimentation.
  2. Students with complex knowledge structures can see constraints on the effectiveness of possible solutions. Pellegrino et al. (1991) discovered that students experienced in multiple-step problem solving could identify constraints within the context more easily than students without the same experience. They used think-alouds to monitor student processes.
  3. Students with complex scripts as a structure for understanding a domain can see the relevance of potential actions. They can pose questions and hypotheses that represent the depth of the domain. Students who can do this ask what-if questions and have developed a sense of experimentation and wonderment.
  4. Students with complex structures are able to rank or categorize definable groups along a complex set of dimensions. Learners must indicate depth of understanding in evaluations, and a classification task is a good way to visualize that depth.

Torney-Purta's complexities are indications of the effectiveness of REALs in developing problem-solving skills. Ultimately, we must examine student performance with authentic problems both near to and far removed from the original learning situation.

