AECT Handbook of Research

Table of Contents

12. Research On and Research With Emerging Technologies

12.1 Overview
12.2 Evolution of Computer-Based Instruction: Historical Perspectives
12.3 Effectiveness of Emerging Technologies
12.4 Emerging Constructs and Learning Systems
12.5 Into the 21st Century
12.6 Summary and Conclusions
References

12.3 EFFECTIVENESS OF EMERGING TECHNOLOGIES

Few issues have the broad appeal of, yet remain as elusive as, the basic question: Did it work? Interest in testing the effectiveness, or lack thereof, of computer technologies has been long-standing. Several approaches have been advanced. Some have attempted to identify the unique learning contributions of computer-based learning systems. Others have focused on the overall effectiveness of computer-based versus noncomputer-based learning systems, while still others have emphasized cost effectiveness. Not surprisingly, results have proved highly variable depending on the focus of the effort and the methods employed to address the question.

This section contains three subsections: cost effectiveness, findings from meta-analyses, and design research. The cost effectiveness subsection analyzes the ways in which technology-supported learning has been approached from a cost-benefit perspective. The meta-analysis subsection emphasizes research undertaken to address "big questions" about the effectiveness of computers. Finally, research is organized according to its implications for the design of computer-based learning systems.

12.2.1 Cost Effectiveness

The value of computers in instruction has been of considerable interest for some time [see Niemiec (1989) for a review of several cost-effectiveness studies related to computer-based instruction]. Early in the evolution of computer-assisted instruction, it became clear that developing courseware and acquiring needed hardware would be an expensive proposition. Often, researchers used "cost-added" models, where the marginal gains associated with such systems were evaluated relative to the additional costs incurred in obtaining them [see, for example, the methods described by Levin & Meister (1986) and Niemiec, Blackwell & Walberg (1986) in the Kappan issue on the effects of computer-assisted instruction]. The approaches were often near-term in nature, with capital costs (computer hardware) and software associated with specific learning tasks evaluated over relatively short durations. Comparisons of recurring costs and maintenance were rare. Using cost-added models, the costs associated with computer-aided learning were rarely considered feasible.

Cost replacement approaches evolved to evaluate the relative costs associated with learning via "traditional" approaches (usually teacher-led, textbook-based methods) versus computer-aided methods. The underlying question of these approaches shifted from assessing the marginal gains of "add-on" technologies to one in which the costs and outcomes associated with the overall delivery system were evaluated. This was the essence of Bork's (1986) "grand experiment," in which he advocated that computer-intensive learning systems, designed for optimal impact rather than educational convenience, be developed and tested fairly. Judgments as to the true value of computers versus traditional classroom-based teaching could then be made, appropriate designs and models could be implemented, true costs (immediate, recurring, long term) associated with each could be identified, and the relative effectiveness of each method could be benchmarked without undue confounding.

Needless to say, these ideas have met with considerable resistance over the past 30 years. The political consequences of altering laws, revising policies and procedures related to teacher preparation and certification, and challenging other educational traditions were, and remain to this day, formidable. The resistance is largely the product of concern over one possible model, the total elimination and replacement of teachers, rather than a response to a widely held view. In truth, it is not yet known what roles, if any, teachers might play in truly alternative technology-based learning systems.

While Bork's proposed experiment has yet to be implemented on a significant scale, technology has become the centerpiece of many school reform initiatives. Collectively, educators have become more amenable to, and at the forefront of, technology in school reform. A wide range of innovative efforts has been reported, with roles for technology varying from primary delivery of new information via integrated learning systems, to knowledge-building tools, to management of performance and educational decision making, to comprehensive multimedia in both teaching and learning. The promise is great; however, comparatively little hard data have been generated.

Cost effectiveness is arguably the most critical issue to reconcile from a purely pragmatic perspective. Yet, it may be the most fundamentally flawed in how it has been studied. Increased learning outcomes, even at moderate costs, are unlikely to engender much adoption and diffusion. Consumers seek additional value for their investments. They seek not simply to move marginally further along the same yardstick but to evolve different metrics and ways to measure. They seek to spend less, not more. The effectiveness of computers is unlikely to be assessed simply by comparing their costs with specific teaching methods and traditional learning outcomes. We need to evolve a more inclusive understanding of computer capabilities to support and alter the everyday lives of learners. Our approaches and evaluation models need to emphasize qualitative differences in areas such as the nature of learning via powerful learning technologies, logistical advantages gained by their use, and the potential to access and employ varied resources. We need to evolve approaches that better represent the complexity of the problem posed, not approaches that unnecessarily and artificially simplify it. We need to better represent situational factors that influence what is considered to be cost effective in one setting but not another. In short, we need to better frame the real questions we want to address, consider how the uniquely important factors related to cost effectiveness need to be weighed, and develop models that provide data sensitive to the decisions being made. Until greater value, defined according to the situational needs of adopters and investors, can be garnered from significant technology investments and the costs associated with them, reluctance to commit significant financial resources will likely continue.

12.3.2 Meta-Analyses

Meta-analysis is a statistical process whereby the findings of several studies, focusing on a common problem or topic, are pooled in an effort to draw inferences as to the meaning of a collective body of research. In effect, the goal is to answer "bigger" questions not readily addressed in a single study by aggregating data related to the question at hand across a large number of studies.
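
To make the statistic concrete: the effect sizes reported below are standardized mean differences pooled across studies. The following sketch, with hypothetical study data, illustrates the basic arithmetic under a simple sample-size weighting scheme; published meta-analyses typically use more sophisticated inverse-variance weighting.

    # Illustrative sketch with hypothetical data: pooling standardized effect
    # sizes (Cohen's d) across several studies of CAI vs. conventional teaching.

    def effect_size(mean_treat, mean_ctrl, sd_pooled):
        """Standardized mean difference: treatment gain in pooled-SD units."""
        return (mean_treat - mean_ctrl) / sd_pooled

    def pooled_effect(effects, sample_sizes):
        """Sample-size-weighted mean effect across studies (a simplification;
        inverse-variance weighting is the more common published practice)."""
        total = sum(sample_sizes)
        return sum(d * n for d, n in zip(effects, sample_sizes)) / total

    # Hypothetical studies: (treatment mean, control mean, pooled SD, N)
    studies = [(78, 72, 12, 60), (64, 60, 10, 120), (85, 80, 15, 45)]
    effects = [effect_size(mt, mc, sd) for mt, mc, sd, n in studies]
    sizes = [n for mt, mc, sd, n in studies]
    print([round(d, 2) for d in effects])           # [0.5, 0.4, 0.33]
    print(round(pooled_effect(effects, sizes), 2))  # 0.41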

Early studies focused on "box score" approaches, which were used to determine the proportion of studies in which favorable results were reported. Several reviews indicated that varied forms of computer-assisted instruction (tutorials, simulations, drills) improved the arithmetic achievement of elementary students (Vinsonhaler & Bass, 1972) as well as of students from elementary school through high school (Edwards, Norton, Taylor, Weiss & Dusseldorp, 1975). Subsequent meta-analysis studies, which focused on computer drills and tutorials in arithmetic in elementary and secondary schools, also reported significant improvements in computational arithmetic skills, with effect sizes ranging from .37 to .42 standard deviations [see, for example, Burns (1981) and Hartley (1978)]. These reviews and analyses were based heavily on studies published prior to the microcomputer revolution beginning in the late 1970s.

Beginning in the early 1980s, several meta-analyses related to the effects of computers on learning were published by Kulik and his associates at the University of Michigan [see J. Kulik, C-L. Kulik & Cohen (1980) for an initial analysis, and Kulik & Kulik (1987) for a more recent analysis]. These studies included mainframe CAI studies as well as the emerging literature on microcomputer instruction. Significant, but generally declining, effect sizes have been reported for learning via computer-assisted instruction from elementary school [.47 in J. Kulik, C-L. Kulik & Bangert-Drowns (1985b)], to secondary school [.26 to .36 in Bangert-Drowns, J. Kulik & C-L. Kulik (1985)], to college [.26 in C-L. Kulik & J. Kulik (1986)] and adult learners [.42 in J. Kulik, C-L. Kulik & Schwalb (1986)]. The reported effects have also been quite similar between experimental and quasi-experimental studies, suggesting that, on balance, the findings are not simply artifacts of the specific research methods employed (cf. C-L. Kulik & J. Kulik, 1991).

What does all this tell us about the effectiveness of computer-based instruction? Unfortunately, we are left with more questions than answers. Richard Clark (1985), for example, also employed meta-analysis methods in computer-based instruction research, but in ways quite different from those of Kulik. Clark computed an initial apparent effect, then recalculated effect sizes after systematically eliminating confounding influences such as instructor-designer differences, design integrity, and so forth. Similar to his conclusions concerning the lack of media effects in general (Clark, 1983), he concluded that no evidence existed to support the inherent superiority of computer-assisted instruction over alternative approaches. [See J. Kulik, C-L. Kulik & Bangert-Drowns (1985a) for an interesting rejoinder.]

Other problems have surfaced as well. Often, studies that are quite dated have been utilized, so the actual nature of the delivery system is unclear. While researchers have often attempted to balance the influence of editorial gatekeeping in published research with the use of nonjournal sources (e.g., dissertation studies), this practice has tended to further obscure interpretation. What domains were under study, using what strategies, and in what quantity and quality? Meta-analysis research attempts to determine whether or not computer-assisted instruction is effective, but it fails to improve our understanding of what constitutes effective computer-assisted instruction.

To many, the focus on whole-effect, meta-analysis research is misdirected. The "big question, big answer" goal often obscures potentially more important questions and issues. The issue, it is argued, should not be simply one of "if" computers are effective in promoting learning but of how best to utilize them to redefine, support, or complement teaching and/or learning efforts. The task is not simply one of replacing old media with computers, thereby "increasing the horsepower" of traditional methods, but of unleashing the capabilities of both computer technologies and learners (Hannafin, 1992; Hooper & Hannafin, 1991). In the following section, we report design research, where the goal is to identify such methods and approaches.

12.3.3 Design of Computer-Based Instruction

12.3.3.1. Orienting the Learner. Learners often experience difficulty accessing important lesson content due to poorly integrated knowledge or the complexity of lesson presentations. Some are easily disoriented because of the lesson structure, while others are unable to deal with the cognitive demands associated with increased decision making in hypermedia learning environments (Jonassen, 1989). Furthermore, although computer-based instruction is often rich in opportunities for students to interact and receive feedback, designers often neglect to provide students with the support supplied by effective classroom teachers (Hawk, McLeod & Jonassen, 1985). To support the learner, orienting activities are often provided to establish expectancies for, and perspectives on, forthcoming lesson content.

Verbal-orienting activities perform two cognitive functions. They cue important lesson content and help to link new with existing knowledge. Hannafin and Hughes (1986), in extrapolating research and theory to the design of interactive video, outlined several distinctions between learning that results from using explicit objectives derived from behaviorist traditions and advance organizers based in cognitive traditions. Behavioral objectives (see 2.3.6), precise statements concerning anticipated learner behaviors following instruction, focus attention on specific lesson content. Advance organizers are more abstract and general, and help learners to establish an inclusive anticipatory framework for learning.

The specificity of the learning objective is inversely related to transfer. Specific orienting activities incorporated within interactive video instruction (e.g., preinstructional objectives, prequestions, etc.), for example, promote intended learning selectively by eliciting greater attention to highlighted information (Ho, Savenye & Haas, 1986). Students may use specific behavioral objectives to identify and learn information long enough to pass a related test. However, explicitly stated learning outcomes often limit students' ability to use new information in situations that are dissimilar to those in which initial learning occurred. In contrast, advance organizers tend to stimulate higher-level learning [see, for example, studies by Krahn & Blanchaer (1986) in the teaching of chemistry via computer simulation], but often fail to stimulate factual learning.

Orientation has evolved a different connotation in the study of hypermedia. Orientation is viewed as the individual's awareness of his or her location within a hypermedia system and the individual's capacity to respond meaningfully given these perceptions. The concept of disorientation, or of being "lost in hyperspace" (Edwards & Hardman, 1989), has been used to characterize the aimless state wherein users find themselves unable to determine where they are or what to do. Disorientation, in effect, is the product of insufficient initial orientation to the system and inadequate ongoing guidance in the nature and use of the system.

12.3.3.2. Presenting the Lesson. It is widely assumed that multimodal instruction enhances learning more than information presented from a single source. For example, many believe that presenting information via sound, picture, and text improves understanding. It is not surprising, therefore, that many believe that use of computers and other information technologies will inherently improve learning (see, for example, Clark, 1985). Well-documented research on cognitive resource allocation has established conclusively that more is not necessarily better when presenting stimuli. Individuals possess limited ability to process information presented simultaneously in multiple channels (e.g., aural, visual). Typically, one channel is largely ignored by learners, since competing messages cause undue competition for cognitive resources (Gavora & Hannafin, 1995). Once processing becomes automated, however, individuals' capacity to process messages in multiple channels increases, since the cognitive demands are drastically reduced.

In some cases multimodal presentations (see 29.3) may indeed improve learning. They are particularly likely to be beneficial when information is encoded via multiple coding mechanisms. Individuals possess multiple channels through which information may be encoded. For example, information presented in pictures may be encoded once as a picture and then again by the verbal description given to the picture. In contrast, text is encoded only once, via a verbal channel. Information that is processed more than once often adds to its retrievability. This so-called "dual coding" of information (see 29.2.3) essentially doubles the probability that it will be recalled (Paivio, 1979).
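
The "doubling" claim can be given simple arithmetic form. If, purely for illustration, each code independently supports retrieval with probability p, then recall through either route succeeds with probability 1 - (1 - p)^2 = 2p - p^2, which approaches true doubling only when p is small. The sketch below is our illustration, not Paivio's model:

    # Illustrative arithmetic, assuming two independent retrieval routes
    # (verbal and imaginal); this is a simplification, not Paivio's model.

    def recall_dual_code(p):
        """Probability of recall given two independent retrieval routes."""
        return 1 - (1 - p) ** 2

    for p in (0.1, 0.3, 0.5):
        print(f"single code: {p:.2f}   dual code: {recall_dual_code(p):.2f}")
    # single code: 0.10   dual code: 0.19
    # single code: 0.30   dual code: 0.51
    # single code: 0.50   dual code: 0.75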

12.3.3.2.1. Illustrations. Visual representations can reveal the hierarchical structure of text and illustrate relationships (see, for example, Perkins & Unger, 1994, and see 16.11). They serve three important roles. First, they help to make explicit the structure of to-be-learned information, which reduces uncertainty and clarifies relationships among important lesson concepts. Next, they are readily recalled and, when associated with important content, tend to enhance recall of that information (Dunston, 1992). Finally, as Kenny (1993) concluded based on several studies of pictorial graphic organizers, pictures help learners to integrate lesson content during computer-based instruction (see 26.4).

Kenny also suggested that learners be encouraged to generate their own graphic organizers rather than simply viewing those created by other people. Whereas "given" graphic organizers may encourage learners to adopt passive (reactive) roles during instruction, learner-generated organizers involve searching a body of information to locate key concepts and are thus inherently more proactive in nature.

12.3.3.2.2. Animation. Animation has been the focus of recent attention (see, for example, Park & Gittleman, 1992). Mayer and Anderson (1992) studied the influence of animation either as a support to, or a replacement of, oral presentation methods. They examined how hypermedia features facilitate college students' sense-making of mechanical operations concepts. Consistent with dual-coding models of learning (see 29.2.3), animations paired with oral presentations proved more effective than either animation alone or oral presentations alone.

One system, Space Shuttle Commander (Rieber, 1992), utilizes animation both to illustrate basic physics concepts and to provide visual feedback for responses made by the student. Learners were encouraged to alter a number of attributes and values and to see the influence of their manipulations on objects in space. Animation, in this sense, provided not only coding support but also a feedback mechanism to illustrate the influence of learner choices in a Newtonian microworld.

Rieber (1990) also analyzed several studies on the effects of computer-based animation on the learning of science concepts and derived several conclusions for its use. In general, animation should be used when one or more of its attributes (e.g., attention gaining, demonstration, and reinforcement) can enhance learning. Rieber reported that systematically manipulating the attributes of animation significantly facilitated learning, compared to providing static graphics or no graphics at all, when such attributes were integral to the learning task, that is, when the animated object's attributes were consonant with the learning task.

However, the benefits of animation can easily be neutralized by unnecessary complexity, ineffective design, or poor cue attendance. The prior knowledge and expertise of the user may also influence the effects of animation: Novices (who are often poor cue attenders) are often unable to perceive cues highlighted through animation. In addition, learners may need to participate and actively locate important information rather than simply view animations. Finally, animation may be most effective when students are encouraged to interact with the animations; manipulation appears to have important affective and cognitive outcomes (see 26.4.3).

12.3.3.2.3. Fidelity. The term fidelity is commonly used to describe the extent to which computer-based instruction approximates the form and function of the stimuli it represents. Computerized, three-dimensional cockpit simulators, for example, are often of such high fidelity that users are unable to distinguish between flying the simulator and "real" flying. In contrast, aviation manuals on how to fly often employ line diagrams for features such as aircraft instruments. Students may learn basic principles by reading the manual but would be unlikely to confuse their knowledge with real flying.

Proponents of high-fidelity instruction often laud computer-based instruction for its capacity to reflect accurately "on-the-job" performance. Until recently, few researchers questioned the benefits of high-fidelity design. However, it is now widely believed that low-fidelity stimuli are often superior to high-fidelity stimuli, especially for novices. High-fidelity designs increase the number of stimuli to which students must attend and consequently place greater demands on working memory. These demands may lower performance when students possess little background knowledge.

Alessi (1988) proposed that the fidelity employed during computer-based instruction should increase as student performance improves. In other words, the importance of fidelity is determined largely by the expertise of the student. In the flying example, a novice is unlikely to learn to fly effectively by immediate immersion into a flight simulator, but may benefit considerably from a narrated video. In contrast, experienced pilots benefit more from experiencing the effects of complex physical phenomena in a flight simulator, wherein they may experiment in relative safety.
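
A minimal sketch of such performance-contingent fidelity appears below; the mastery thresholds and media tiers are hypothetical illustrations, not values proposed by Alessi:

    # A sketch of Alessi's (1988) general proposal that fidelity rise with
    # performance. Tiers and thresholds are hypothetical.

    FIDELITY_TIERS = [
        (0.00, "line diagrams and narrated video"),  # novice: low fidelity
        (0.60, "part-task desktop simulation"),      # intermediate
        (0.85, "full-motion cockpit simulator"),     # advanced: high fidelity
    ]

    def select_fidelity(mastery):
        """Return the highest tier whose entry threshold the learner meets."""
        chosen = FIDELITY_TIERS[0][1]
        for threshold, medium in FIDELITY_TIERS:
            if mastery >= threshold:
                chosen = medium
        return chosen

    print(select_fidelity(0.40))  # line diagrams and narrated video
    print(select_fidelity(0.90))  # full-motion cockpit simulator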

12.3.3.2.4. Screen Design and Display. Effective screen design appears to be as much art as science. Although several authors have generated screen design guidelines, empirical evidence supporting this advice has been scarce. Human factors research literature addresses the ergonomics of design and configurations for optimizing text processing, but guidance for improving learning is often based more on heuristics than on proven cause-effect relationships (see 36.3.6).

Recent screen design researchers have examined the effects of varying information placement and access. Aspillaga (1991) compared the effects of three approaches to text display: overlapping important parts of graphic images, consistent placement, and random placement. Consistent and relevant text placement yielded higher achievement than random placement, suggesting that learners were cued to specific content and responses. Grabinger (1993) generated three "rules of thumb" to guide screen design: Divide the screen into consistently placed functional areas; use organizing techniques to illustrate the structure of a screen; and design screens that are interesting but not complex.

Recently, researchers have also examined the effects of information metered via windows (Billingsly, 1988). Window environments, which allow multiple files or applications to be displayed simultaneously, are generally either tiled or overlapping. Tiled windows are always visible and do not overlap, whereas overlapping windows allow foreground files to obscure background files. Bly and Rosenberg (1986) found that tiling systems helped learners to locate information significantly faster than did overlapping systems. Students using the tiling system had fewer mental operations to perform than did students using the overlapping system. Moreover, results indicated that overlapping systems are better for experienced users, but that tiling systems may be easier for novices. Tasks that involve little window manipulation are accomplished more efficiently with tiled windows, and tasks that involve much window manipulation are more efficiently accomplished with overlapping windows.

Benshoof and Hooper (1993) also examined the effects of presenting information in tiled and overlapped windows. High- and low-ability sixth-graders learned novel information and rules governing its use. Students were given the same information via tiled or overlapped window treatments, but those in the overlapping treatment had access to only one information source at a time. High-ability students in the overlapping-window treatment demonstrated higher achievement than did all other students. Apparently, overlapping induced the more able students to invest greater effort than did the tiled treatment.

Gender appears to affect preference for some screen design elements. For example, Jakobsdóttir, Krey, and Sales (1994) examined the effects of gender on boys' and girls' graphics preferences. Students in grades 2, 4, and 6 were asked to rate three groups of computer graphics: high female interest, high male interest, and equal interest (created by mixing elements of interest together). A significant interaction between picture type and gender was found, indicating that girls rated female-interest pictures highest, male-interest pictures lowest, and equal-interest pictures in between; the reverse was true for boys. Similarly, in a study examining the use of computer graphics among boys and girls, Freedman (1989) observed that fifth-grade girls were more concerned than were boys with using color, color combinations, and relationships among object shapes. Boys were more interested in movement, and sometimes conceptualized shapes as objects capable of violence.

Despite the intuitive appeal of using colorful displays in instructional materials, research has consistently failed to support the role of color as a primary instructional variable. Instead, the effect of color on learning appears to have, at best, secondary influences. Color is most effective when it supports other instructional strategies, such as organization of information and providing contrast between screen objects (Dwyer & Lamberski, 1982-1983). Related objects can be color coded and text structure can be highlighted to aid organization of important lesson content. However, care must be taken to avoid using color carelessly or unnecessarily, since the distraction potential of color can be quite high (Hannafin & Peck, 1988; Rieber, 1994).

Much of the past research on illustrations has focused on presenting information to learners. Emerging technologies offer unique capabilities for learners to do more than "watch"; learners can create, organize, highlight, colorize, and compare their own illustrations with those of peers, or even against those of experts. Science Vision, for example, allows students to "construct" their own computer representations of a roller coaster, and to test their designs against their own goals, those of an expert, or advanced specifications that require fine-tuned understanding (Tobin & Dawson, 1992). The Cognition and Technology Group at Vanderbilt's Jasper series utilizes video images not only to capture and maintain attention but also to establish contexts within which key problem-solving information is embedded. In still other instances, the presentation features evolve dynamically as learners make decisions and progress, providing stimuli that are uniquely sensitive to the individual user's actions. This ability to personalize, share, and revise individual interpretations is a powerful asset for expressing conceptual knowledge and its relationships.

12.3.3.3. Encoding Support. The impact of encoding support on student performance may be best understood within a meaningful-learning conceptual model. Mayer (1993) outlined three phases through which learners must progress in order for learning to become meaningful. First, learners must select relevant information from that presented to them. Second, they must organize the information into a coherent outline. Finally, they must relate the outline to a structure or event with which they are familiar. When the first phase is not met, no learning occurs. When only the first phase is met, rote learning occurs. When the first two phases are met, nonmeaningful and inflexible learning occurs. Meaningful learning occurs only when the third phase has been reached. Mayer's framework is consistent with Wittrock's generative learning model (Wittrock, 1990), which emphasizes improving learning by stimulating deeper processing (see 3.1.1.1). Generative learning stresses forming connections among information to be learned and linking these associations with each learner's knowledge and experiences (see 31.2).

Encoding support will be effective to the extent that it helps learners to select, organize, and integrate learning experiences within mental models that have been clearly formulated in memory (Bliss, 1994). To some degree, the specific activity must, by definition, be idiosyncratic because everyone differs in what is clearly understood. However, to a larger extent, the nature of the activity that brings about cognitive transformation may vary little from person to person. In this section, we address two key approaches for supporting encoding: personalizing instruction and designing interaction strategies.

12.3.3.3.1. Personalizing Instruction. Personalizing instruction involves integrating personally relevant information to help each learner associate unfamiliar lesson content with his or her own background and interests. Several benefits have been associated with personalized instruction. Miller and Kulhavy (1991) hypothesized that personalization improves memory by increasing the strength of association between lesson content and the personalized content. Additionally, Lopez and Sullivan (1991) suggested that the self-referencing associated with personalization reduces the cognitive demands of processing ongoing instruction.

Ross and his associates personalized computer-based instructional materials by adapting specific examples and questions to personal experience. Ross and Morrison (1988) suggested that difficulties with math story problems are not necessarily due to deficient computation skills. Rather, students are often unable to understand story problems that are situated in unfamiliar contexts. In contrast, they noted that students have little difficulty solving similar problems in which the story is closely associated with their everyday experiences. To test this hypothesis, Ross and Anand (1987; Anand & Ross, 1987) examined the effects of adapting computer-based mathematics learning materials to elementary school students' interests. Students demonstrated higher achievement and better attitudes after receiving mathematics problems that included personal information. However, Ross and Morrison (1988) cautioned against overusing personalized materials. They warned that the effects of personalization may be due, in part, to a novelty effect that might diminish over time.

Personalizing efforts have evolved during the past decade from early notions of integrating personally relevant information into lessons, to providing contexts focused on relevant learner experiences and interests. Research in situated learning and everyday cognition, for example, has suggested that people think very differently in formal (e.g., classrooms) versus everyday contexts (e.g., supermarkets) (Choi & Hannafin, 1995; Collins, 1993). As these areas continue to evolve, alternative design strategies will need to be developed and refined.

12.3.3.3.2. Interaction Methods. Though emerging technologies are lauded by many for their ability to handle user inputs, there is little consensus with regard to the design of human-computer interactions. Indeed, disagreement even exists about the meaning of the term interactive as applied to emerging technologies. Researchers have described fundamentally different perspectives on the roles of interactions, ranging from facilitating lesson navigation to supporting encoding of specific lesson content. Floyd (1982), for example, emphasized production of an overt response that subsequently differentiated lesson branching in his application to interactive video. Others emphasize response frequency in terms of either duration or lesson density [see, for example, Bork (1985)].

Gavora and Hannafin (1995) described a conceptual model for the design of human-computer interactions. In their definition, cognitive restructuring is integral to interaction, not the mere presence of overt responses. Indeed, the authors argued that it is common for physical responses to be made during computer-based instruction that fail to reflect, or initiate, learner thoughtfulness and intent. Successful interactions have cognitive as well as physical requirements and are mediated by the quantity and quality of effort. Interactions cause the differential allocation of cognitive resources in accordance with the learner's familiarity with the domain under study. As critical aspects of the domain become familiar, the associated cognitive processes become automated, requiring few or no cognitive resources. Conversely, complex, unfamiliar domains require significant resource allocations for even small details. The lack of familiarity and perceived complexity cause competition for limited cognitive resources, a problem that must be managed through interaction. In unfamiliar domains, complexity is managed by initially inducing simple, discrete interactions. These relatively modest responses eventually become automated. Once facility is attained, the amount and complexity of the interactions can be increased.
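
The following sketch illustrates this principle of metering interaction complexity; the response types, the ten-response window, and the automation criterion are our illustrative assumptions rather than an algorithm specified by Gavora and Hannafin:

    # Illustrative sketch: increase interaction complexity only after the
    # current response type appears to have automated (high recent success).

    INTERACTION_LEVELS = [
        "single-choice recognition",       # discrete, low-demand responses
        "short constructed response",
        "multi-step manipulation",
        "open-ended problem construction",
    ]

    class InteractionManager:
        def __init__(self, automation_threshold=0.9):
            self.level = 0
            self.threshold = automation_threshold
            self.recent = []  # rolling record of success (1) / failure (0)

        def record(self, success):
            self.recent = (self.recent + [int(success)])[-10:]  # last 10
            # Treat a high recent success rate as evidence that the current
            # response type has automated, freeing cognitive resources.
            if len(self.recent) == 10 and sum(self.recent) / 10 >= self.threshold:
                self.level = min(self.level + 1, len(INTERACTION_LEVELS) - 1)
                self.recent = []

        def current(self):
            return INTERACTION_LEVELS[self.level]

    mgr = InteractionManager()
    for _ in range(10):
        mgr.record(True)
    print(mgr.current())  # short constructed response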

Research on interaction methods may be among the most critical. For example, new and largely unexplored possibilities have emerged with developments in virtual reality. Input methods have changed radically, from almost exclusively typing-via-keyboard to mice, touchscreens, joysticks, voice recognition, and optical scanning. Increased interest in open-ended, user-centered learning systems, construction and manipulation, and authentic learning all hold significant potential, and pose significant problems, for interaction design. The domain of possibilities has broadened substantially, yet little research has been advanced that might guide their design.

12.3.3.4. Detecting, Correcting Errors. One activity that appears to promote effective learning is error correction. Error identification helps learners recognize inadequacies in their mental models and stimulates deeper understanding. Allen, Lipson, and Fisher (1989) developed the EPOSODE model, which involves embedding errors in CBIV lessons. Successful error detection is followed by feedback, after which students are often required to classify the errors, to describe the consequences of an error, or to explain how the errors could be corrected. Finally, students observe a corrected video segment before proceeding. The computer provides advantages that are difficult, or impossible, to achieve with other media. Errors that remain undetected can be repeated by replaying the video segment, using slow-motion or freeze-frame capabilities, or by playing a video segment in reverse from the point of an error. Moreover, error isolation can be enhanced through the computer's graphic overlay capabilities. [See, also, GUIDON, an intelligent tutoring system designed to stimulate learning through error detection (Clancey, 1986), and TORUS, a diagnostic system designed to address mathematics misconceptions (Woodward & Howard, 1994).]

Recently, Bangert-Drowns, Kulik, Kulik, and Morgan (1991) noted that, counterintuitively, feedback often failed to benefit, and sometimes even lowered, performance. However, in their meta-analysis, they noted that mindfulness and the nature of feedback were critical. Feedback must be used mindfully to be effective. Feedback is unlikely to benefit, and may even diminish, learning when students simply reproduce correct answers. The nature of feedback is strongly related to its impact on learning (see 32.5.4): Simply providing a statement concerning the accuracy of a response is less effective than providing some degree of elaboration (see 32.5).

Kulhavy and Stock (1989) outlined two feedback components necessary to improve learning in computer-based instruction: verification and elaboration (see 32.4). Verification indicates the accuracy of a response; elaboration refers to additional information made available to the student. Not surprisingly, several studies have indicated that elaborative feedback is more effective than simple knowledge of correct results. For example, a study of college undergraduates using CBI found that students demonstrated higher posttest achievement after receiving information related to the accuracy of a response, the correct response, and a brief explanation of the correct answer than after simply receiving a statement as to the accuracy of a response (Pridemore & Klein, 1991). However, increases in achievement were accompanied by increases in learning time, suggesting that elaboration may also have increased the quantity of instruction. The effect of feedback on retrieval increases as error rates increase, since more opportunities for receiving elaborations are provided.
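
The distinction can be illustrated programmatically. In the sketch below, the item and explanatory text are hypothetical; the design simply contrasts verification alone with verification plus elaboration:

    # A minimal sketch of Kulhavy and Stock's (1989) two feedback components:
    # verification (was the response correct?) plus elaboration (additional
    # information). The item and explanation text are hypothetical.

    def give_feedback(response, answer, explanation, elaborate=True):
        correct = response.strip().lower() == answer.strip().lower()
        verification = "Correct." if correct else f"Incorrect. The answer is {answer}."
        if not elaborate:
            return verification  # knowledge-of-results only
        # Elaboration: explain the correct answer, which the research above
        # suggests aids learning more than verification alone.
        return f"{verification} {explanation}"

    print(give_feedback(
        response="mitochondria",
        answer="ribosome",
        explanation="Ribosomes assemble proteins by translating messenger RNA.",
    ))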

Litchfield (1993) noted that response certitude is essential in understanding the value of feedback. Individuals with little or no response confidence or certainty are unlikely to benefit significantly from feedback during interactive video instruction, or any instruction for that matter (see 32.4.1). Others have described the varied ways in which feedback can be applied with emerging technologies (Hannafin, Hannafin & Dalton, 1993). Feedback can be used to clarify key elements of the response-learning task itself (present consequences of responses, demonstrate impact in context, reinforce specific lesson content, approximate sensory aspects). Feedback can also be employed to provide strategic information (diagnosis, prescription, performance to date, management, learning processes) as well as affective information. It is clear that the nature of feedback has evolved considerably since the initial text-based emphasis, broadening substantially its implications for emerging technologies.

Research on error detection and correction has mostly focused on feedback. In general, research indicates that feedback is a valuable tool for correcting errors when it is timely, meaningful, and relevant. Emerging technologies offer a wide range of possibilities for delivering feedback to learners. Future research is likely to focus on determining how, when, and what types of feedback best promote desired ends.

12.3.3.5. Lesson Sequencing. While control of lesson sequencing is central to effective CBI, beliefs concerning who should control sequencing decisions vary according to several factors. To some, the computer should adapt dynamically to modify lesson content to match students' individual needs. Proponents of this approach use CBI to present carefully crafted, individually relevant lesson content to learners. The computer can control the presentation order of a lesson, the amount and complexity of information presented, the nature of feedback, and all related decisions. In other words, the computer allows the creation of environments in which students complete optimal instructional sequences, according to external judgments, to achieve predetermined educational goals.

To others, however, the computer presents the opportunity for student-centered learning. Learners can control a wide variety of instructional variables including access to lesson content, the context in which instruction is situated, the presentation stimuli, the option to select additional content, the amount of practice to complete, the difficulty of content and related questions, and the amount of advice to be provided during instruction (Schwier, 1992). In effect, the computer encourages learners to build relationships among lesson concepts through exploration, experimentation, and manipulation.

12.3.3.5.1. Learner vs. Designer Control. Perhaps no single topic has received as much attention from researchers as that of locus of instructional control. The question of where lesson execution control should reside, with the program or with the learner, remains a source of controversy and debate. The most externally metered program-control option is completely linear. It involves presenting identical content to all students, in the same order, and with the same strategies, depth, and complexity. In practice, however, such an approach often proves inefficient and is usually a poor candidate for computer-based lessons. Within any group of students, individual needs vary widely. The instructional support needed by the most able students is likely to differ greatly from that needed by the weakest students.

The benefits and liabilities of learner control are well documented (see 33.5; see also reviews by Steinberg, 1977, 1989). Learner control has been found to stimulate achievement and improve attitudes and motivation (Kinzie, 1990; Kinzie & Berdel, 1990; Lepper, 1985; Pollock & Sullivan, 1990). Kohn (1993) noted that learner control improved self-attribution, achievement, and behavior. On the other hand, learners have also proved poor judges of their learning needs, often seeking information that is not needed or terminating lessons prematurely (Hannafin, 1984).

Linear lessons, and all program-controlled instruction for that matter, are inherently structured. The nature of the structure either limits or manages individual variability. Linear structures provide no opportunity for students to engage selectively in activities deemed uniquely appropriate, emphasizing instead activities thought to be of greatest value to all. Generally, complete program control has been effective for domain novices and for tasks with explicit performance requirements (Chung & Reigeluth, 1992). Highly structured environments are likely to be especially limiting for high-ability and high prior-knowledge students.

Differences in learner preferences, knowledge, and styles are often accommodated by varying learner control. In cases of optimal learner control, students individually identify what they will study and seek and revisit lesson segments as they evolve new representations. Doing so involves exploring learning environments many times and from many different perspectives (Spiro, Feltovich, Jacobson & Coulson, 1991).

The issue of learner control appears to be particularly germane in the design of hypermedia learning environments (see 21.3). Hypermedia presents two critical problems for designers. First, many students may have difficulty navigating in hypermedia environments: They tend to become easily disoriented (Park & Hannafin, 1993). Second, when given unaided access to information, students may experience difficulties locating and linking information to build meaningful cognitive structures.

Despite the wealth of learner control research, Reeves (1993) suggested that much of it is little more than "pseudoscience." He argued that researchers often fail to employ consistent working definitions across experiments. Most of the research on learner control has been conducted on artificially controlled drill and practice and tutorial programs with the imposition of measured lesson control. In practice, however, diverse classes of computer software are associated with different levels of control. He suggested that research methodologies often lack internal and external validity. Instructional treatments are often too brief for students to learn how to exercise effective learner control. The materials used are often unrelated to the regular curriculum and irrelevant to students, so the amount of effort invested in a task is often not representative of normal behavior. Reeves also noted that experimental analyses are often suspect. Small sample sizes, for example, suggest that the underlying assumptions of some statistical analyses have been violated; researchers often ignore the causes of high experimental attrition. Finally, few studies are grounded in psychological theory; few researchers have established a solid theoretical base for their studies.

Although locus of instructional control has been researched extensively, continued efforts are important due to the changing nature of "control" now afforded to both designers and learners. With the advent of hypermedia and hypertext, structured or linear approaches to design are no longer the only options. Radical reconceptualizations have occurred related to how we think about, and design for, software control. Design strategies that maximize the learning potentials of open-ended environments, as well as of learners, will likely redefine learner vs. designer control issues into the 21st century.

12.3.3.5.2. Adapting Instruction. Adaptive control provides an alternative to either rigid program control or unassisted learner control. Adaptive control adjusts instruction to meet individual needs based on user traits (Boyd & Mitchell, 1992) and ongoing performance (Tennyson, 1984). It is tacitly assumed that the designer can better determine domain needs than the individual. This assumption is problematic for at least two reasons. First, in many cases designers cannot effectively diagnose and prescribe, in advance, instructional remedies. Second, prescriptive models may do little to develop learners who are capable of making effective, independent instructional decisions (see 22.3, 22.4).

One approach to adapting instruction involves using mathematical equations to adjust the amount of instruction, the number of examples, display time, and other relevant factors. Ross and Morrison's (1988) model involves three steps: identifying variables that affect lesson performance, collecting data to generate equations that can be used to estimate achievement, and generating criteria to link students' performances with anticipated outcomes. The method produces equations that, when used in concert with ongoing lesson data, adapt instruction according to individual performance and need.
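
A minimal sketch of this regression-based approach appears below; the predictor variables, regression coefficients, criterion score, and example schedule are hypothetical stand-ins rather than values from Ross and Morrison:

    # Sketch of regression-based adaptation: predict achievement from learner
    # variables, then prescribe more support when the prediction falls short
    # of criterion. All numbers and variables are hypothetical.

    def predict_achievement(pretest, prior_gpa):
        """Hypothetical regression equation estimating posttest score (0-100)."""
        return 12.0 + 0.55 * pretest + 6.0 * prior_gpa

    def prescribe_examples(predicted, criterion=80.0, base=2, per_ten_points=2):
        """More worked examples the further predicted achievement falls below
        criterion; learners predicted to meet criterion get the base amount."""
        shortfall = max(0.0, criterion - predicted)
        return base + per_ten_points * int(shortfall // 10)

    for pretest, gpa in [(85, 3.8), (60, 2.5)]:
        pred = predict_achievement(pretest, gpa)
        print(f"predicted {pred:.0f} -> {prescribe_examples(pred)} examples")
    # predicted 82 -> 2 examples
    # predicted 60 -> 6 examples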

Tennyson and Christensen's (1988) method differs in its monitoring of student needs. Their approach involves greater computer "judgment" of learner needs and needed branching. The prescriptions generated during a lesson are similar to those employed by classroom teachers: They determine appropriate lesson content density, monitor and adapt instructional strategies, and manage the instructional process. Using multivariate analyses, the Minnesota Adaptive Instructional System (MAIS) diagnoses individual needs to prescribe instruction, adapts prescriptions to reflect ongoing lesson performance, and continuously updates its decision-making system. The system generates data on several variables that are used to modify instruction (see 19.2.3).

The adaptive abilities of the technology (the capacity to change lesson sequences, difficulty level, and pace according to unique learner needs) are among its longest-standing assets. From its early days, the computer has been touted as possessing extraordinary potential to accommodate vastly different learning styles, rates, and background knowledge. Yet, as the focus of teaching and learning continues to shift, contemporary approaches emphasize learner empowerment over "smart" computers. Many now believe that it is more important to create systems that help learners recognize metacognitively when they fail to comprehend, and seek support accordingly, than to create lessons that supplant these capabilities [see, for example, Salomon, Perkins & Globerson (1991)]. A very different kind of adaptation will be required from those that empower only the computer with the capacity to adapt.

12.3.3.5.3. Advising Learners. One of the underlying assumptions of many hypermedia systems is that students will actively explore their learning environments. In practice, however, students are often unable to benefit from the freedom associated with hypermedia (Santiago & Okey, 1992). Students may be unable, or unwilling, to explore unfamiliar computer-based environments. However, they may benefit from expert guidance delivered via the computer, often referred to as advisement.

Advisement performs two important functions in lesson execution. First, it can either augment or supplant metacognitive processing. As an augmenting resource, it can be used by capable learners as a kind of "second opinion" to reference against individual beliefs; as a supplanting resource, it can free the learner from the cognitive burdens associated with self-regulated learning in order to focus more completely on lesson content (Hannafin, Hall, Land & Hill, 1994). By providing information concerning ongoing lesson performance, the amount of instruction needed to optimize learning, and guidance for effective navigation, advisement can help students guide their on-task behavior and model effective study strategies.

Advisement may be particularly useful for reluctant, or passive, learners. Lee and Lehman (1993), working with college undergraduates on hypermedia and videodisc technologies, found that active learners consistently selected more information, spent longer on task, and demonstrated higher achievement on a posttest than did passive learners. However, when cues were provided in the form of suggestions to elicit elaborations, passive students selected more elaborated lesson content and spent longer on task. The investments of effort and time appear to stimulate higher achievement scores, while advisement cues appear to supplement deficient metacognitive strategies.

In a more general sense, emerging technologies must continue to expand their capacity to facilitate the learner's efforts rather than to advise on how to attain what the lesson's designer has deemed important. Facilitation, in this sense, addresses a more learner-centered question: How can the unique intents of each learner be supported? This poses a very perplexing problem for researchers and designers: How can systems be created capable of supporting needs and intents that cannot be fully known in advance? This is a most intriguing problem, one for which research is sorely needed.

12.3.3.5.4. Hypertext/Hypermedia Linking. One key to successful sequencing in computer-based environments is to connect to information that can be readily assimilated by the learner (see 21.4). Many systems, especially hypermedia systems, permit students to access parts of a lesson that may be only tangentially related. For example, students may have access to information that is related literally rather than semantically or conceptually. Students may link to information related in literal structure but not in contextual meaning (Gall & Hannafin, 1994; Jonassen, 1989).

Following literal rather than semantic links is more likely to occur among students with limited related prior knowledge. Given control over lesson sequence, students with high domain knowledge readily connect conceptually related ideas. In contrast, students with little domain knowledge tend to connect literal definitions and examples rather than make conceptually advanced associations (Nelson & Palumbo, 1992).
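
One way to make this distinction concrete is to represent a hypermedia document as a graph with typed links, so that literal (surface) connections can be distinguished or filtered from semantic (conceptual) ones. The sketch below is our illustration, not an implementation from the studies cited:

    # Illustrative data structure: hypermedia nodes joined by typed links,
    # so literal and semantic connections can be told apart or filtered.

    from collections import defaultdict
    from typing import Optional

    class Hyperdocument:
        def __init__(self):
            self.links = defaultdict(list)  # node -> [(target, link_type)]

        def link(self, source: str, target: str, link_type: str):
            self.links[source].append((target, link_type))

        def neighbors(self, node: str, link_type: Optional[str] = None):
            """Targets reachable from node, optionally limited to one link type."""
            return [t for t, kind in self.links[node]
                    if link_type is None or kind == link_type]

    doc = Hyperdocument()
    doc.link("photosynthesis", "glossary: chlorophyll", "literal")   # shared term
    doc.link("photosynthesis", "cellular respiration", "semantic")   # related concept
    print(doc.neighbors("photosynthesis", "semantic"))  # ['cellular respiration']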

Linking in hypertext/hypermedia is among the most significant capabilities that have affected design to date. These capabilities make it possible for individuals to access information in tightly controlled or open-ended ways, enabling lessons of virtually unlimited variation. Designers can now provide tools that aid thinking, personalizing, connecting, and interacting with vast amounts of information (see, for example, Bliss & Ogborn, 1989; Horwitz & Feurzeig, 1994; Reader & Hammond, 1994). The challenge for researchers is to study how, when, and if tools and resources are managed successfully by learners, and how the decision making of users can be supported.

12.3.3.6. Motivation. Kinzie (1990) suggested that two motivational constructs, intrinsic and continuing motivation, are important for maintaining the participation necessary to flourish in CBI environments. Intrinsic motivation describes the state that exists when individuals participate in an activity for the gratification generated by the activity itself. Continuing motivation is evident when students choose to return to a lesson without the presence of external motivators (see, for example, Seymour, Sullivan, Story & Mosley, 1987). Some computer-based learning environments, such as simulations and games, seem to be inherently motivating, especially for children (Malone, 1981; Rieber, 1992).

Keller and Suzuki (1988) adapted and elaborated a motivational model for CBI design. ARCS is an acronym that represents four categories: attention, relevance, confidence, and satisfaction. Attention gaining is often the easiest phase of the motivation process and addresses strategies to arouse and sustain attention. Relevance addresses how instruction helps students achieve their personal goals. Lessons that relate to prior or anticipated experiences are likely to increase motivation. Confidence refers to the degree to which students believe that they will succeed at a given task. Students who believe that personal success is achievable and a function of their effort are more likely to succeed than those who lack confidence or who attribute success to luck or other uncontrollable factors. Satisfaction refers to students' perceptions about the outcomes of instruction. In general, lessons are more motivating when students feel appropriately rewarded for their efforts.

Keller and Suzuki identified three key factors: motivational objectives, learner characteristics, and learner expectations. The setting of motivational objectives is important in designing and evaluating CBI. Motivational objectives are too frequently assessed simplistically, using students' responses to questionnaires and task performance. That is, students are often assumed to be motivated if they respond favorably to surveys or if they perform well on achievement tests. Alternative measures, such as task persistence, indicators of continuing motivation, and increased confidence and relevance, may be better indicators of student motivation.

A careful analysis of learner characteristics can help designers assess the motivational strategies needed. For example, students may require little additional motivation when they already perceive the content as important and relevant. However, students who lack confidence or perceive little relevance require more focus on motivational strategies. In their attempts to stimulate student interest, CBI designers must ensure that lessons do not inadvertently lower student motivation. Lessons that include irritating routines or that fail to meet minimum visual standards may reduce motivation, regardless of the quality of the instructional content.

Malone and Lepper (1987) developed a taxonomy of intrinsic motivation based on a survey of computer-game preferences among elementary school children. They classified motivation into four categories: challenge, control, curiosity, and fantasy. Challenge, the level of difficulty, is optimized when a balance exists between activities that are too easy and those that are too hard. Challenging activities establish attainable goals (which may be determined by the student), maintain a level of uncertainty, and generate clear feedback that is frequent, informative, and encouraging.

Control of one's environment is fundamental to intrinsic motivation. The amount of control that learners exercise depends on the degree to which students' actions affect the range of outcomes. High levels of control empower students. Malone and Lepper (1987) suggest that control in CBI is promoted by strategies such as varying lesson presentations and feedback according to individual needs, and allowing students to choose lesson sequence and instructional difficulty (or at least to appear to have chosen them). However, they warn that although autonomy increases motivation, too little lesson structure may result in learners becoming frustrated and unmotivated.

Few would argue with the proposition that ensuring initial motivation, maintaining interest during instruction, and encouraging continuing interest in the subject under study (and the tools with which it is studied) are as critical to the success of computer-based instruction as to any form of instruction. Although occasional evidence is found to the contrary (for example, Kinzie & Sullivan, 1989), it seems unlikely that simply receiving instruction via computer will prove inherently motivating to learners, especially over time (see Clark, 1983, 1985). Interestingly, researchers and designers have garnered considerable input as to what motivates individuals in noneducation settings. The explosion of arcade-like home computer games, for example, underscored how challenge and competition help to initiate and sustain engagement. There remains much to be learned, however. As systems become increasingly connected, and the learner's electronic "window to the world" is opened dramatically, we face a different challenge. The diverse resources available through high-powered information retrieval systems, such as Netscape™ and Mosaic™, lack unifying methods, approaches, and structures. One resource may prove captivating; others may prove boring. Since no single publisher accounts for the myriad of resources, and their nature and quality may prove very uneven in any given instance, we need to find ways to simplify the learner's task while eliciting, maintaining, and continuing interest in both the topic and the system (cf. Lepper & Gurtner, 1989).

12.2.3.7. Applying Knowledge and Skills. Retrieving and using knowledge and skill are inextricably tied to encoding: One cannot retrieve what has not yet been learned. The ability to retrieve is a function of the quality and nature of initial encoding, the presence of cues (internally generated or externally supplied) that trigger appropriate processes, and the application of strategies to identify and restructure information. The ability to apply such knowledge and skill, however, is mediated by the context in which they were learned and by their perceived utility.

12.2.3.7.1. Problem Solving. Perhaps no process better illustrates the role and value of retrieval than problem solving (see 23.5.2). Lambrecht (1993) outlined three approaches to teaching problem solving for, and with, computer-based learning: general, integrated, and immersed. General problem solving is sometimes referred to as cognitive strategy training. The effectiveness of general problem-solving skills appears to improve when students learn self-regulatory metacognitive skills. Delclos and Harrington (1991), for example, compared three treatments used to complete a computer problem-solving task: general problem solving, general problem solving with embedded self-monitoring activities, and a control activity. The monitored problem-solving group solved more complex problems in less time than did the other groups. Self-regulation training helps students transfer learning to more difficult problems within the same domain.
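
The monitored condition can be imagined as a simple wrapper around each problem-solving step. The sketch below is a loose illustration of that idea, not a reconstruction of Delclos and Harrington's materials; the prompts and the toy problem are hypothetical.

    # Hypothetical self-monitoring prompts interleaved with a problem task.
    PROMPTS = [
        "What is the problem asking you to find?",
        "What strategy will you try, and why?",
        "Is your strategy working? If not, what will you change?",
        "Does your answer make sense? How do you know?",
    ]

    def monitored_solve(problem, record_step):
        """Interleave self-monitoring prompts with the learner's solution steps."""
        print("Problem:", problem)
        for prompt in PROMPTS:
            print("  [self-check]", prompt)
            record_step(prompt)   # a real system would collect the learner's response
        return "solution submitted"

    steps = []
    monitored_solve("Arrange four tiles so every row sums to 10", steps.append)
    print(len(steps), "monitored steps recorded")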

Similarly, King (1991) found that embedded strategic questions helped to guide fifth-graders through computer-based problem-solving activities. Students in the guided group were given strategic questions to ask during problem solving; students in an unguided treatment were directed simply to interact with their partners; and those in a control group were given no instructions on how to interact. After completing four computer programs on general problem solving, students in the guided-questioning group performed best on tests of general problem solving. Strategic prompting apparently helped to focus intragroup interaction, which in turn promoted successful problem solving.

The arguments supporting general approaches to problem solving are intuitively appealing: If generalizable strategies exist, they should be taught directly to make students more effective problem solvers. However, the power of general problem-solving approaches has not been established conclusively. Indeed, the generalizability of problem-solving skills is often believed to be inversely related to their effectiveness: The more specific the strategy, the less it generalizes to different problems; the more general the technique, the less powerful it is for specific problems (Perkins & Salomon, 1989).

Integrated approaches attempt to teach problem-solving skills within realistic learning contexts. Kozma (1987) suggested that computer-based cognitive tools (see 24.2) can teach problem-solving strategies by modeling cognitive processes. Some writing programs, for example, model such critical writing processes as identifying topics, structuring text, and editing and revising, helping students internalize and apply the basic steps involved in writing (Kozma, 1991a).

In contrast, some have argued that domain-specific knowledge is prerequisite to effective problem solving. According to Pea and Kurland (1987), one needs to acquire strong background and proficiency within a given domain to become a proficient problem solver. If so, then differences between experts and novices may be attributable to the magnitude of their domain knowledge rather than to knowledge of problem-solving strategies (Perkins & Salomon, 1989).

Another problem-solving approach, immersion, focuses more on the nature of the learning experience than the processes involved in problem solving. Advocates propose that students learn to become effective problem solvers when they are immersed in solving real-life problems. Students acquire problem-solving skills in the "culture" in which problems exist. When concepts are used (or learned for the first time), they acquire new meanings; that is, the learning context becomes a part of the concept. Utility, therefore, is a function of the diversity of contexts in which concepts are used (Prawat, 1991).

Technology's potential to stimulate problem solving lies principally in its ability to manage processing resources and to encourage effective problem representation and transfer. Problem solving often involves managing large quantities of data that must be maintained and manipulated. The cognitive load associated with these tasks can divert attention from the problem-solving task. Computers can augment the learner's problem-solving capabilities by managing available data and performing requisite transformations and calculations (Pea, 1992).
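
A small illustration of this division of labor appears below: the machine handles the bookkeeping (unit conversion and aggregation) so the learner's attention stays on interpretation. The data and transformations are invented for the example.

    # The computer shoulders routine data management, in the spirit of Pea (1992).
    raw_trials_cm = [182.0, 175.5, 190.2, 168.8, 185.1]   # projectile distances, cm

    trials_m = [d / 100 for d in raw_trials_cm]           # convert cm to m
    mean_m = sum(trials_m) / len(trials_m)
    spread_m = max(trials_m) - min(trials_m)

    # The learner reasons about the result rather than the arithmetic:
    # "Is the spread small enough to trust the launcher's consistency?"
    print("mean distance: %.2f m, spread: %.2f m" % (mean_m, spread_m))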

The transformational capabilities of the computer facilitate problem representation by helping to clarify relationships between symbolic representations and physical events. Kozma (1991b) described two studies in which the computer helped students form more accurate mental representations of physical phenomena. In one study (Mokros & Tinker, 1987), seventh- and eighth-grade students who used computer-based sensors for 3 months became better able to interpret graphs. Apparently, transforming information from electronic sensors into graphical representations helped to clarify the relationships between symbols and the real world. Similarly, Brasell (1987) reported that the instantaneous feedback generated by motion sensors in a microcomputer-based laboratory helped twelfth-grade students associate physical motion with appropriate graphs.
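
The sketch below suggests, in miniature, the kind of transformation these studies describe: simulated motion-sensor readings are plotted as a distance-time graph that the learner can compare against the motion just performed. It assumes the matplotlib plotting library; the data are simulated, not drawn from either study.

    # Simulated sonar readings: a student walks away, pauses, walks back.
    import matplotlib.pyplot as plt

    times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
    distances_m = [0.5, 1.0, 1.5, 2.0, 2.0, 2.0, 1.4, 0.9, 0.5]

    plt.plot(times, distances_m, marker="o")
    plt.xlabel("time (s)")
    plt.ylabel("distance from sensor (m)")
    plt.title("Walk away, pause, walk back")
    plt.show()   # immediate feedback: the graph mirrors the motion itself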

Emerging technologies increase the ways (and combinations of ways) designers can present and display problems for learners. Atkins and Blissett (1992), for example, applied the realistic images available through interactive video to the development of problem-solving skills. Technologies also offer a variety of ways learners can directly interact and experiment with problems and their variables, encouraging learners to take a more active role in problem solving. Through direct manipulation and experimentation, for example, learners often discover the underlying processes, principles, and logic that give rise to problems or that alter them in important ways. Such design strategies allow learners to do more than model simple procedures or gain additional practice; they help learners develop insight into both the nature and the causes of problems. Future research should continue to explore design strategies for problem solving and help to clarify how virtual worlds and realities can be used to enhance these thinking skills.

12.2.3.7.2. Transfer. Much has been written regarding the capacity to apply knowledge and skill to similar versus dissimilar circumstances. Clark and Voogel (1985) described this as a "near" versus "far" transfer continuum. Near transfer involves the application of previously acquired knowledge to problems of similar, or nearly identical, contexts; far transfer involves application to circumstances dissimilar to, or wholly unlike, the contexts in which knowledge and skill were initially acquired. According to Clark and Voogel, antagonism exists between initial acquisition and transfer contexts: Learning under explicitly structured approaches generally tends to facilitate near transfer, while more conceptually oriented learning tends to promote far transfer.

Similarly, Salomon and Perkins (1989) described how the nature of an instructional strategy is likely to impact transfer. Low-road (near) transfer occurs when students can immediately apply learned information or skills in situations that differ slightly from those in which initial learning occurred. High-road (far) transfer occurs when students apply learning in diverse contexts. Although technology can stimulate learning of both, low- and high-road transfer appear to require different instructional conditions. High-road transfer of problem-solving skills, for example, is stimulated by practicing in diverse contexts until automaticity has been achieved in enabling skills and processes.

Kozma (1991b) described technology's potential to facilitate the transition from novice to expert. Computers can help students incorporate complex phenomena, not usually represented, into their mental models. White (1984) used computer microworld environments to represent abstract physical phenomena in diverse ways. High school science students who used a microworld environment for less than 1 hour demonstrated improved conceptual understanding of important lesson principles and significant improvement on transfer tests.

White (1992) extended this research in a 2-month study. Students interacted regularly with a computer microworld incorporating several features of Newtonian physics; they not only explored the microworld but also tested several potential principles governing the system. Following the study, the experimental group outperformed the control group on a transfer test. Interestingly, the activities students engaged in during the study appeared to stimulate mindful abstraction, a condition required for high-road transfer (Salomon & Perkins, 1989).

The growth of interest in constructs such as situated cognition, everyday thinking, and anchored instruction reflects considerable interest in the utilization or application of knowledge and skill in contexts in which they have meaning (Choi & Hannafin, 1995). Transfer, especially far transfer, involves the ability to access and utilize knowledge acquired in one context in another. It is fundamental to virtually all conceptions of learning and performance, yet research on transfer has often proved discouraging. How much of our present "school knowledge" and "academic skill" can be situated effectively, and will situated learning contexts improve the productive value of what is learned? Will the cost and effort required to create authentic learning environments be justified by improved transfer? Will high-cost environments yield high gains in transfer? These are significant issues that must be studied if substantial commitments to alternative systems are to become a reality.

12.2.3.8. Contextual Factors. Context has become the cornerstone of contemporary research in areas such as situated cognition, cognitive apprenticeships, authentic learning, and anchored instruction (see 23.5.1.1; Brown, Collins & Duguid, 1989; Chiou, 1992; Choi & Hannafin, 1995; Cognition and Technology Group at Vanderbilt, 1992a, 1992b, 1993, 1994). Cognitive processes and context are viewed as inextricably related, suggesting that knowledge is rooted fundamentally in the context in which it is acquired (see 7.3.3). This view has led many researchers to disdain the decontextualized teaching and instructional methods that dominate traditional approaches.

Technology has played a significant role in establishing contexts for learning. Perhaps the best-known applications have been simulations, where to-be-learned content and processes are represented in ways deemed to capture important contextual information and processes (see, for example, the physics simulation described by Lewis, Stern & Linn, 1993). Many have focused on high-fidelity images that capture the substance, processes, and affective elements of real-life events. Harless (1986), for example, developed a voice-activated interactive videodisc simulation of emergency room (ER) intake and treatment procedures. Video vignettes orient users, in this case medical interns, to the circumstances surrounding potential ER patients. The circumstances range from "real" symptoms and related medical histories to contrived symptoms and histories offered for personal rather than medical reasons. The interns must decide whether or not to admit the patients and what course of treatment should be followed. A range of paths is possible, depending upon the diagnoses and treatment plans prescribed. The context, in effect, is that of a typical emergency room, complete with potentially life-threatening illnesses, unanticipated complications, and accumulating costs.
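
The branching logic underlying such simulations can be sketched quite simply, as below; the scenario nodes and choices are hypothetical stand-ins for Harless's far richer video vignettes.

    # Hypothetical branching scenario: each decision selects the next vignette,
    # so consequences (and costs) accumulate along the chosen path.
    SCENARIO = {
        "intake":       {"text": "Patient reports chest pain.",
                         "choices": {"admit": "workup", "discharge": "callback"}},
        "workup":       {"text": "ECG ordered; costs accumulate.",
                         "choices": {"treat": "recovery", "observe": "complication"}},
        "callback":     {"text": "Patient returns in worse condition.", "choices": {}},
        "recovery":     {"text": "Patient stabilizes.", "choices": {}},
        "complication": {"text": "Condition deteriorates; intervention needed.",
                         "choices": {}},
    }

    def run(path):
        """Replay a sequence of decisions through the branching scenario."""
        node = "intake"
        for decision in path:
            print(SCENARIO[node]["text"], "->", decision)
            node = SCENARIO[node]["choices"][decision]
        print(SCENARIO[node]["text"])

    run(["admit", "treat"])   # one of several possible paths through the case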

Context has been studied in other ways as well. Dalton and Hannafin (1987) compared the effects of interactive video lessons designed to emphasize specific knowledge (i.e., isolation of key terms, definitions, and explicit cueing) or contextual elements (i.e., important concepts retained, and presented, within authentic circumstances) on junior high school students' declarative knowledge and problem solving. Contextual approaches yielded comparable recall of declarative knowledge but significantly greater application to problems than did specific knowledge-cueing strategies. Breuer and Kummer (1990) applied process-learning approaches in vocational education, embedding lesson content within computer simulations. In both studies, content was successfully embedded within, rather than disembodied from, the contexts that gave it meaning.

12.2.3.8.1. Grouping. Recent studies suggest that students often complete CBI as effectively, and in some cases more effectively, with a partner than alone (Repman, 1993; Repman, Rooze & Weller, 1991). Mevarech, Silber, and Fine (1991), for example, examined the effects of paired versus individual use of an integrated learning system (ILS) designed to teach basic numeric skills. The system individualized instruction by diagnosing individual difficulties and prescribing possible solutions. After completing two 20-minute sessions over approximately 2 months, sixth-grade students who worked alone showed no advantage over those who worked with a partner; indeed, grouping resulted in higher achievement across learners and lower mathematics anxiety for the least able students.
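
The diagnose-and-prescribe cycle at the heart of such systems can be rendered in miniature, as in the sketch below; the items, error taxonomy, and threshold are assumptions made for the example, not features of the ILS studied.

    # Hypothetical diagnose-and-prescribe cycle for basic numeric skills.
    responses = [
        {"item": "3/4 + 1/4", "answer": "4/8", "correct": "1",  "skill": "fraction-add"},
        {"item": "2/3 + 1/3", "answer": "3/6", "correct": "1",  "skill": "fraction-add"},
        {"item": "7 x 8",     "answer": "56",  "correct": "56", "skill": "multiplication"},
    ]

    def diagnose(responses):
        """Compute an error rate per skill: the system's diagnosis of difficulty."""
        totals = {}
        for r in responses:
            wrong = int(r["answer"] != r["correct"])
            totals.setdefault(r["skill"], []).append(wrong)
        return {skill: sum(errs) / len(errs) for skill, errs in totals.items()}

    def prescribe(diagnosis, threshold=0.5):
        """Prescribe remedial practice for any skill above the error threshold."""
        return ["remedial unit: " + skill for skill, rate in diagnosis.items()
                if rate >= threshold]

    d = diagnose(responses)
    print(d)              # {'fraction-add': 1.0, 'multiplication': 0.0}
    print(prescribe(d))   # ['remedial unit: fraction-add']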

Researchers have also compared the performance of students who completed computer-based tutorials alone versus in cooperative learning groups (see 35.9). Students often learn more effectively and enjoy instruction more when collaborating than when studying alone at the computer (Hooper, Temiyakarn & Williams, 1993; Hooper, 1992b; Johnson, Johnson & Stanne, 1985, 1986). Johnson and Johnson (1989) identified several important learning and social benefits associated with cooperative learning. From a cognitive perspective, cooperative learning produces higher achievement and productivity than do competitive or individualistic environments, and the effects are strongest for complex rather than cognitively low-level learning. One reason for the increased productivity may be that cooperative learning maintains higher levels of student engagement than do other approaches. Increased engagement apparently reflects the improved attitudes and perceptions of belonging often associated with cooperative learning. After collaborating with peers in cooperative groups, students often feel better liked, are more concerned for one another's well-being, and enjoy academic work more than after working alone. Consequently, students are willing to remain on task for longer periods of time.

Many cognitive benefits can be gained by working alongside a partner; cooperative learning appears to be particularly effective for improving student achievement (see 35.5). Cooperative learning is designed to deepen understanding of complex lesson content through student interaction and modeling. Students working in groups are made interdependent by controlling individual and group rewards, encouraging group development, stimulating appropriate intragroup interaction, and maintaining high personal accountability for individual and group performance (Hooper, 1992a).

Contextual factors have demonstrated their potency for enhancing both the encoding and the retrieval of information. As new technologies continue to evolve, richer capabilities for designing contexts will develop with them. During the past decade, research on cooperative learning has suggested many important grouping distinctions, and design strategies have been developed that support cooperative learning as well as other kinds of grouping. Future researchers need to investigate new ways for learners to model cooperative strategies and to track accountability in highly personalized ways.

12.2.4 Evolution in Perspective

Thus far, we have examined research on cost effectiveness, learning effectiveness, and design. Although cost needs to be examined using new metrics, we concede that, redefined or not, it should always be a concern. Cost is an especially contentious issue as resources for education become scarce and increasingly politicized.

Three other findings seem especially significant. First, how technological capabilities are utilized is more critical than the capabilities themselves. Simply put, more is not necessarily better. Designers must be aware of the cognitive demands their systems place on learners and thoughtfully apply techniques that support, rather than interfere with, learner effort. Next, design must be rooted in research on teaching and learning; the tools and resources provided must support the learner's efforts (Salomon, 1993). Finally, emerging technologies have made it easier than ever to create learning systems while simultaneously making it more complex to design them effectively. Although technologies have increased the designer's ability to create and present rich contexts for learning, designing contexts that are authentic, reasonably situated, appropriately anchored, well guided and supported, and motivating is no easy task. It is far easier to create something with great cosmetic appeal than to build an integrated learning system consistent with available research and theory (see, for example, Keen, 1985). So much is possible, yet we know so little about many fundamental design and development issues.

Emerging technologies have also altered the design process itself (see, for example, Collins, 1993). Emerging design methods, often adapted from other disciplines (e.g., engineering), no longer limit designers to traditional linear approaches. Rapid prototyping, for example, allows designers to quickly produce functional models of far more elaborate designs. These models not only help developers envision full-scale systems and lesson functionality but also promote greater en-route design experimentation and creativity. Again, the dilemma is balancing what should be produced against what can be produced.

Finally, research on design most clearly indicates that traditional design strategies have focused on how and what to teach rather than on empowering students to learn. These differing views ultimately affect how designs are operationalized, evaluated, and researched. While a great deal of useful research has been conducted on traditional design methods, research on designs that empower learners remains relatively rare. Developing, testing, and researching alternative design methods may be among the most daunting tasks facing future researchers.

Although past instructional technology research has contributed to a better understanding of the effects of particular technologies on learning, it has done little to help us relate such findings to critically important contexts such as schools and classrooms (Kozma, 1991b). We have a better understanding of the parts, but are comparatively naive about the whole. As we approach the 21st century, our research needs to reflect a more integrated approach to teaching, learning, and technology.

