AECT Handbook of Research

Table of Contents

17. Educational games and simulations: Technology in search of a research paradigm

17.1 Introduction
17.2 A Definitive Framework
17.3 Academic Games
17.4 Experiential Simulations
17.5 Symbolic Simulations
17.6 Instructional Design Implications Derived from Research
17.7 Recommendations for Future Research
  References

17.6 INSTRUCTIONAL DESIGN IMPLICATIONS DERIVED FROM RESEARCH

Many classroom games and simulations are developed for a particular class, and the key design variables often are not explicitly identified. Further, much of the research has investigated "variables of convenience," i.e., attitudes and content-related achievement (Wentworth & Lewis, 1972). Nevertheless, a few studies have investigated other effects of games and simulations that have implications for design.

17.6.1 Academic Games

One of the stated requirements for academic games is that advancement in the exercise and winning should be based on academic skills. A study conducted by Schild (1966) tested the premise that students learn those skills and strategies that are reinforced by the structure of the game, i.e., the skills essential for winning. He recorded the decisions made by four groups of players of the Parent-Child game, in which pairs composed of one parent and one child must negotiate appropriate child behaviors on five issues. The "child" can damage the "parent's" score through consistent delinquency, and the "parent" can damage the "child's" score by excessive control and punishment. By round 4 of the game, however, most players had learned the optimal strategy for maximizing both the parent and child scores for their team; in other words, the teams had learned how to win. The implication for game design is that the game structure should be carefully constructed so that winning depends on strategies acceptable in the classroom and on knowledge of the subject area.

Several studies on the classroom game Teams-Games-Tournaments (TGT) have implications for game design. A unique feature of TGT is that it alters both the task and reinforcement structure of the classroom. Most classrooms are highly competitive, with individual students competing for scarce reinforcements (DeVries & Edwards, 1973; DeVries & Slavin, 1978). In contrast, TGT introduces a cooperative task structure within teams and greatly increases the availability of reinforcement.

TGT organizes the class into teams of comparable achievement (e.g., one high achiever, two average achievers, and one low achiever), but each student competes at a three-person tournament table with students at the same ability level. Each student's score contributes to the overall team score. (Scores earned at the tournament are 6 points, high scorer; 4 points, middle scorer; and 2 points, low scorer.) Practice sessions also are scheduled a few days prior to the weekly or biweekly tournament.
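The scoring arithmetic of TGT can be sketched briefly. The following is an illustrative model only, assuming the fixed 6/4/2-point scheme per tournament table described above; the function and placement names are ours, not part of the TGT materials.

```python
# Illustrative sketch of TGT team scoring (names are hypothetical):
# each student places at a three-person tournament table, placements
# map to fixed point values (6 = high, 4 = middle, 2 = low), and the
# team score is the sum of its members' tournament points.
TOURNAMENT_POINTS = {"high": 6, "middle": 4, "low": 2}

def team_score(placements):
    """Sum the tournament points earned by each member of one team."""
    return sum(TOURNAMENT_POINTS[p] for p in placements)

# Example: a four-member team whose members placed high, middle,
# middle, and low at their respective tables.
print(team_score(["high", "middle", "middle", "low"]))  # prints 16
```

Because every member's placement contributes to the team total, no single player's loss eliminates the team, which is consistent with the peer-tutoring incentives discussed below.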

Because the team score is dependent on the performance of all the team members, the game structure reinforces peer tutoring and cooperative learning during the practice sessions. In one study, the games/teams combination increased the amount of peer tutoring beyond that in either games/individual reward or quizzes/team reward classes (DeVries & Edwards, 1973). Classes that participated in TGT (team reward) also perceived a decrease in both classroom competitiveness and course difficulty (measured by the Learning Environment Inventory, LEI). The researchers suggest that these perceptions are the result of the task interdependence of the game and the increased opportunities for reinforcement.

A review of 10 studies in which TGT was implemented in mathematics, language arts, and social-studies classes indicated consistent effects on achievement (measured by standardized tests) and mutual concern (measured by questionnaire scales adapted from the Learning Environment Inventory). Some of the studies compared TGT to regular classroom instruction, and others compared TGT to the traditional classroom and a modification of TGT in which higher- or lower-scoring students' scores were weighted more heavily. However, the modifications did not produce a greater effect on achievement than the original TGT.

Of importance for game design in general is the relationship between competition and cooperation. Competition is the essence of any game. However, the mutual dependence of students on each other reinforces cooperation, an important characteristic of a positive classroom environment.

17.6.2 Computer Games

A key issue in manual games is the influence of a game on classroom dynamics. In contrast, key issues in computer-delivered games are the mechanics of play and the observance of accepted instructional design principles. Many computer-delivered games, however, have not been developed by instructional designers. Instead, like the programmed instruction movement of the 1960s, various other groups have developed many of the products. Often, the computer software has not undergone formative or summative evaluation. Although reviews of software are available, few reviewers implement the materials with students. Moreover, evaluation checklists do not require the reviewer to conduct observations of student use (Vargas, 1986).

Observations of students using computer software indicate some problems with computer games in both game mechanics and principles of instructional design (Vargas, 1986; Gredler, 1992a). Briefly summarized, the game-mechanics problems include inappropriate vocabulary for young students, inadequate directions, lengthy texts, multistep directions with no opportunity for student practice, and inappropriate use of graphics (Vargas, 1986; Gredler, 1992a). In addition, computer games often do not provide options for students to bypass tasks that are too complex or items they are unable to answer. Since the only way for the player to continue in the game is to strike a key or type in a word, players are forced to enter random answers, which, of course, are evaluated by the computer as wrong (Gredler, 1992a).

In addition to the mechanics of play, frequent observations of students using computer software indicate two instructional design problems. They are (1) inadequate stimulus control and (2) defective reinforcement contingencies. For example, the use of a question with several possible answers in which only one answer is accepted by the computer penalizes the student who selects a correct answer that is not included in the program. The task stimulus in such situations is inappropriate.

Two types of defective reinforcement contingencies have been observed during student use of computer software. First, the game or other exercise is often delayed because the keyboard is locked while stars twinkle, trains puff across the screen, or smiley faces wink or nod (Vargas, 1986, p. 75). A more serious problem occurs when the consequences that follow wrong answers are more interesting than the feedback for correct answers. In one computer exercise, for example, a little man jumps up and down and waves his arms after a wrong answer. Students, instead of solving the problems for the correct answers, randomly enter any answer in order to see the little man jump up and down (Gredler, 1992a).

Potential users of classroom computer games, therefore, should carefully review the exercises for several likely flaws: inappropriate vocabulary, overly lengthy text, inadequate directions and branching, inadequate stimulus control, and defective reinforcement contingencies.

17.6.3 The Mixed-Metaphor Problem

Games are competitive exercises in which the objective is to win, and experiential simulations are interactive exercises in which participants take on roles with serious decision-making responsibilities. However, some developers have attempted to mix the two perspectives by assigning participants serious roles, placing them in direct competition with each other, and identifying the participants as winners or losers according to the individual's or team's performance. These exercises are sometimes referred to as simulation games and gaming simulations.

Games and experiential simulations, however, are different psychological realities, and mixing the two techniques is a contradiction in terms. Such exercises send conflicting messages to participants (Jones, 1984, 1987). They also can lead to bad feelings between participants who address their roles in a professional manner and those who treat the exercise as "only a game" (Jones, 1987).

Many exercises that otherwise would be classified as data management simulations are mixed-metaphor exercises. That is, student teams that each manage a "company" are placed in direct competition with each other with profitability as the criterion for winning. For example, in the Business Policy Game, the winning firm is the one with the highest return on investment. Further, in many classes, from 10% to 50% of the student's course grade depends on the team's performance.

Several problems recently have been identified with these exercises. Lundy (1985) observed that sometimes a team that is not doing well in later rounds attempts to "crash the system." Seeing no way to win, team members behave like game players and act in such a way as to prevent others from winning. Other desperation plays include charging an astronomical price for a product in hopes of selling a few, and end-of-activity plays, such as eliminating all research and development or ordering no raw materials (Teach, 1990). Some teams, however, view the exercise as simply a situation in which to show their prowess. Golden and Smith (1991) describe these teams as "dogfighters" because their behavior resembles that found in the classic World War II aviation dogfight.

The major problem with such exercises, however, is that competition in the business world does not routinely result in one company's being declared a winner while others enter bankruptcy (Teach, 1990). Instead, companies strive for market share and alter their strategies based on feedback about market conditions and the success of their earlier efforts. Thus, the focus on being "the winner" distorts the simulation experience.

Some researchers have investigated the factors that contribute to team success in these exercises. Gentry (1980) investigated the relationships between team size (three to five members) and various attitudinal and performance variables in three undergraduate business classes. Of the variables entered into the stepwise regression equation to predict team standing, he found that group performance was predicted better by the ability of the best student in the group than by a composite of the group's abilities. Thus, group performance was more a function of a group leader than of the knowledge and ability of all group members.

In addition, Remus (1977) and Remus and Jenner (1981) found a significant correlation between students' enjoyment of the exercise and the final standing of their teams. Several students in one study also disagreed with statements that the exercise was a valuable experience and represented real-world decision making (Remus & Jenner, 1981). In summary, the observations of Teach (1990), Golden and Smith (1991), and Lundy (1985), and the findings of Remus and Jenner (1981), lend support to the findings of Schild (1966). Specifically, students tend to enact those behaviors that are reinforced by winning. In simulation games, these actions may be counterproductive to the expected learning.

17.6.4 Experiential Simulations

The student in an experiential simulation takes on a serious role in an evolving scenario and experiences the privileges and responsibilities of that role in attempting to solve a complex problem or realize a goal. Four major types of experiential simulations are data management, crisis management, and diagnostic and social-process exercises. Of these four types, crisis management simulations are developed to meet preestablished criteria regarding the nature of the crisis and expected student reactions. Data related to the development of these exercises typically are not reported for public consumption. Moreover, data management and social-process simulations often (1) are not standardized exercises and/or (2) do not provide student data other than posttest achievement on some instructor-developed instrument.

Many diagnostic simulations, in contrast, are standardized situations in which optimal sequential decisions in relation to the evolving problem have been identified. Further, research conducted on the analyses of problem-solving decisions in diagnostic simulations can serve as a model for analyzing students' cognitive strategies in other types of simulations. The first step is the evaluation of the range of possible decisions at each decision point by a group of experts. Each decision is classified in one of five categories that range from "clearly contraindicated" to "clearly indicated and important," and a positive or negative weight (e.g., -1 to -3 or +1 to +3) is assigned to each decision (McGuire & Babbott, 1967, p. 5). Then, the combination of choices that represents a skilled decision-making strategy is summed for a total numerical score. When the simulation is administered, the numerical score of each student decision is recorded. The extent of congruence between the student's total and the expert decisions is referred to as a proficiency score.
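The weighting-and-summing scheme described above can be sketched as a small computation. The decision names and weights below are hypothetical examples for illustration, not drawn from McGuire and Babbott's actual instrument.

```python
# A minimal sketch of proficiency scoring in a diagnostic simulation:
# experts assign each possible decision a weight from -3 (clearly
# contraindicated) to +3 (clearly indicated and important); a student's
# proficiency score is the sum of the weights of the decisions chosen.
# All decision names and weights here are invented for illustration.
EXPERT_WEIGHTS = {
    "order_history": +3,   # clearly indicated and important
    "order_xray": +1,      # indicated
    "prescribe_rest": 0,   # neutral
    "order_surgery": -3,   # clearly contraindicated
}

def proficiency_score(student_choices):
    """Sum expert weights over the decisions the student selected."""
    return sum(EXPERT_WEIGHTS[c] for c in student_choices)

print(proficiency_score(["order_history", "order_xray", "order_surgery"]))  # prints 1
```

A score near the expert maximum indicates high congruence with the skilled strategy, while unwarranted choices (negative weights) pull the total down, which is the basis for the "errors of commission" distinction in the study described next.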

A study of problem-solving skills in simulations with 186 fourth-year medical students analyzed students' proficiency scores and revealed four different problem-solving styles (McGuire & Bashook, 1967). The high scorers included two groups identified as (1) thorough and discriminating and (2) the "shotgun" group. Although both groups earned total problem-solving scores between 32 and 60, the "shotgun" group made many choices that were not warranted (high errors of commission). Similarly, two problem-solving patterns were identified in the low-scoring group (scores below 30). One, the constricted group, chose few desirable or undesirable actions. In contrast, the other group, the random problem solvers, chose few desirable actions, but they also chose many actions that were not warranted.

This method of analyzing the specific characteristics of student performance in diagnostic simulations is applicable to other types of simulations that address problem-solving strategies. First, optimal strategies through the complex task or problem are identified. Other plausible steps are then weighted according to the extent to which they are neutral (do not advance the problem solution) or are debilitating. Finally, the number of debilitating decisions made by both high and low scorers is tabulated to identify the problem-solving pattern.

Given the recent emphasis on students' constructing knowledge during learning, this model or a similar one can provide information to teachers about specific student difficulties. Also, the computer can tabulate both total and component scores for students as they work through the exercise.

17.6.5 Symbolic Simulations

A symbolic simulation is a dynamic representation of a universe, system, process, or phenomenon by another system. The behavior that is simulated involves the interaction of at least two variables over time. Unlike the experiential simulation, the student interacts with a symbolic simulation from the outside. The types of symbolic simulations are data universe, system, process, and laboratory-research simulations.

17.6.5.1. Data Universe Simulations. At present, few data universe simulations have been developed for instructional purposes. One example, however, is Jungck and Calley's (1985) Genetics Construction Kit (GCK). The software consists of two parts. One is a data universe that includes the complex behavior of 10 phenomena in classical Mendelian genetics. Operations that may be performed on this universe include crosses, comparisons of parental and filial generations, Chi-square analyses, and building up the number of progeny through successive experiments (Jungck & Calley, 1985). The second part develops "populations of organisms" for study that include combinations of the phenomena in the data universe (Stewart et al., 1992).

One study implemented GCK in the first 5 weeks of a 9-week high school genetics course. Students first completed an activity in which they built models to explain a "black-box" situation and then discussed the adequacy of their models for explaining the data. They worked in groups to research problems generated by GCK by building and testing models that appeared to explain the data (Stewart et al., 1992).

After 3 weeks, the researchers selected six students who had a good understanding of simple dominance and meiosis models. These students were presented individually with subsequent problems the others were studying in groups for 6 class days. Detailed analyses of their computer records and audio recordings of their "think-aloud" strategies indicated several findings. First, students revised their original explanatory models in most of the problems they encountered. Second, all but three of the final models were compatible with the data. Of these, half represented accepted scientific theory, and half represented an alternative perspective.

Third, and of primary importance to instructional design, the researchers documented a detailed and involved model-building process used by the students (see 12.3.1.1, 24.3.1). Among the actions initiated by the students were conducting experiments within the given field population, using an existing model to explain features of the data, using limitations of the proposed model to identify other possible causal factors, and bracketing cases of interest as a first step in revising the proposed model. Identification of the steps used by students in conducting research with other data universe simulations is a first step in developing instructional strategies for these simulations.

The variety of strategies that may be implemented by students when faced with a complex problem is illustrated by two different implementations of the genetics simulation CATLAB (Kinnear, 1982). The exercise permits students to construct a population of nonpedigree cats (up to 91 cats) and then to breed any two of the cats for any number of litters. In constructing the experimental population, the student chooses gender, tail or no tail, and color for each cat. If nonwhite is first chosen, then several options follow for both color and pattern. After the students complete their selections, the Apple II program provides rather stilted color images of the student's choices and the resulting litters.

One biology teacher reported that her students tended to breed cats without a plan (Vargas, 1986). The range of mixed characteristics in the resulting litters made it impossible for students to observe the relevant genetic principles. In addition, one student set about producing as many different-looking cats as possible. Vargas (1986, p. 742) concluded that the simulation by itself was no better than leaving a student unsupervised in a science laboratory to proceed by trial and error.

In contrast, Simmons and Lunetta (1993) implemented a three-part instructional strategy with three expert and eight novice problem solvers using CATLAB. The subjects were first directed to explore various traits with cats. In phase 2, the researcher had a brief discussion with each subject about his or her actions and rationale. Phase 3 required the subjects to investigate and determine the inheritance pattern of the orange and tabby striping trait.

The original intent of the study was to identify differences between expert and novice problem solvers in their interactions with the simulation. However, this dichotomy was too restrictive to explain the patterns of problem solving found in the data (Simmons & Lunetta, 1993). Instead, three levels of problem-solving performance were found. The highest level, successful problem solvers, consisted of two experts and two novices. This group (1) used systematic procedures, (2) developed valid reasons for their results, and (3) generated correct answers. They also had the highest percentage of correct responses (75%-100%). The second level, the transitional or less-successful problem solvers, consisted of one expert and three novices. Of their responses, 60% to 70% were correct. This group also used less-systematic procedures and generated some invalid explanations. In addition, they did not rule out alternative explanations that could account for their conclusions. The third level consisted of the unsuccessful problem solvers (five novices). These students exhibited the most random approaches to the problem and did not use valid scientific explanations. They typically used circular definitions to justify their actions (Simmons & Lunetta, 1993). From 35% to 45% of their responses were correct.

Analysis of the videotapes of the subjects indicated that successful problem solvers applied integrated knowledge during the process. Unsuccessful subjects, however, were unable to use domain-specific knowledge to describe their observations and were unable to detect the features of genetics concepts and principles in the data (Simmons & Lunetta, 1993). They also exhibited misunderstandings about the nature of probability. The findings of the study, which indicate three levels of problem solving, suggest that successful performance requires more than an advanced knowledge of the subject matter (Simmons & Lunetta, 1993). That is, both novices and experts exhibited a variety of strategies that ranged from least to most successful.

Data universe simulations lend themselves to several types of cognitive tasks. However, they are likely to be unsuccessful unless students have developed a systematic strategy for approaching the task and also are able to apply an integrated knowledge base of concepts and principles.

17.6.5.2. System Simulations. Developers of complex equipment simulations typically establish performance standards for students and refine the simulation until those standards are met. Essential terms and definitions are typically taught prior to student engagement with the simulation.

Developing the skills of analysis and prediction in other system simulations with several variables presents a different instructional design problem. Students' prediction skills in relation to one system, water pollution, were investigated by Lavoie and Good (1988). In the system, five variables (temperature, waste type, dumping rate, type of treatment, and type of body water) affected oxygen and waste concentration of the water.

After a short period to explore the simulation, the 14 students read background material on water pollution that described some of the effects among the variables. They next worked through several exercises with the computer simulation, which involved choosing preselected parameters and observing the effects on a given dependent variable. The students then were given three prediction problems to solve.

Problem-solving ability on the prediction problems was related to three factors: high or moderate initial knowledge, high academic achievement, and cognitive performance at the Piagetian stage of formal operational thinking. Unsuccessful students tended to have both low initial knowledge and low academic ability and to be at the Piagetian stage of concrete operational thinking.

One of the key differences between the Piagetian stages of concrete and formal reasoning is that concrete thinkers typically are able to manipulate systematically only one or two variables at a time. Given a more complex situation, they change several independent variables at a time and, therefore, cannot observe the effects of any one variable (Piaget, 1972; Gredler, 1992b). In contrast, formal operational thinkers are capable of developing hypotheses that systematically test the influence of several variables on an outcome. Analysis of the videotapes of the students confirmed that they executed strategies consistent with their level of Piagetian reasoning.

In addition, the unsuccessful students expressed dissatisfaction and lack of interest at various times during the learning sequence (Lavoie & Good, 1988, p. 342). They also conducted, on average, 50% fewer simulation runs, took fewer notes than successful students, and spent less time reviewing and evaluating their predictions than the successful subjects. Further, a postexercise interview revealed that the unsuccessful students had more misconceptions about solubility and the relationships among oxygen, bacteria, and waste than the successful students.

The researchers also identified 21 behavioral tendencies that differed between successful and unsuccessful problem solvers. Others, in addition to those already mentioned, are that successful problem solvers made fewer errors in reading graphs, relied on information learned during the lesson to make predictions, and understood the directions and information in the lesson. The implications for instructional design are clear. Systems in which several independent variables influence the values of two or more dependent variables are complex situations for students. Simulations of such systems should include preassessments of both students' level of Piagetian thinking and their knowledge of the subject. Students at the concrete level of thinking and/or with low subject-matter knowledge should be directed to other materials prior to interacting with the simulation. Like the data universe simulations, a requisite skill is the capability of applying an integrated knowledge base to an unfamiliar situation.

17.6.5.3. Process Simulations. Often, naturally occurring phenomena are either unobservable or are not easily subject to experimentation. Examples include Newton's laws of motion, photosynthesis, and complex atomic reactions. Process simulations that can use symbols to represent the interactions of unobservable variables and which are subject to student manipulation can be useful instructional devices.

White (1984) designed and tested a series of exercises using symbols to confront students' misconceptions of Newton's laws of motion and conservation of momentum. A series of 10 exercises was designed that required students to conduct progressively more difficult operations on a spaceship in outer space (frictionless environment). Among the misconceptions addressed by the exercises is the intuitive belief that objects always go in the direction they are kicked (White, 1984).

Thirty-two students who had studied PSSC physics participated in the study. The 18 students in the experimental group and the 14 students in the control group did not differ significantly on the pretest problems. From 1/3 to 1/2 of the students demonstrated misconceptions about the effects in some of the basic questions. Posttest data indicated that the group that interacted with the simulation significantly improved their performance.

However, on the exercise involving prediction of the effects of an orthogonal impulse, the simulation exercise led to as many students changing from right pretest answers to wrong posttest answers as changed from wrong to right. Further, many of the exercises could be solved by simple heuristics, such as "if one impulse (force or thrust) is not enough, try two" (White, 1984). The use of such strategies supports Kozma's (1992) concern that abstract symbols may not have a referent in another domain for the students (p. 206). That is, students may learn to operate directly on the objects without developing an understanding of the underlying principles.

Among the subsequent improvements to the exercise (White, 1994) are (1) additional structure, including "laws" to be tested in the simulation, (2) the addition of real-world transfer problems, and (3) the inclusion of other symbols to focus on important concepts. An example of an additional symbol is the use of a "wake" behind the spaceship to illustrate a change in velocity. Kozma (1992) reports that White (1994) found significant improvement on the transfer problems and significantly higher performance of students in a 2-month curriculum using the simulation than students in two regular physics classes.

A long-standing problem in education is the issue of overcoming students' misconceptions that are based on their limited everyday experience and intuitions. That is, students may be able to verbalize a phenomenon accurately, but when faced with a real-world problem, they revert to out-of-school knowledge as a basis for conceptualizing the situation (Alexander, 1992). Process simulations can be a powerful instructional tool to provide the repeated experiences that Piaget (1970) first identified as essential in overcoming these problems. However, careful attention to both symbol selection and links to laws and principles is required.

17.6.5.4. Laboratory Research Simulations. Laboratory research simulations consist of a series of discrete qualitative and quantitative experiments that students may direct in a specific subject area. Several studies have compared computer-based simulations with "wet labs" in chemistry and biology courses. The results, however, are confounded by one group or the other receiving extra materials, such as summary sheets, or written problems to solve following instructions.

One development, however, is a series of experiments for introductory college chemistry courses. The experiments were revised based on student comments during formative evaluation and then placed into a comparison group pilot study. The laboratory simulations use a single-screen system that permits computer text and graphics to be superimposed on video images. Components of the system are a personal computer, a video interface card, a videodisc player, and a television monitor. This system is more expensive than other configurations; however, an advantage is that students can respond to text questions while the images remain on the screen.

In the pilot study, 103 students were randomly assigned to a lab-only group, videodisc-only group, or videodisc-plus-lab group. Six interactive videodisc workstations were available on a self-scheduled basis for the students using the computer software. On a brief seven-point posttest, the difference between the means of the videodisc group and the laboratory group was 1.03 standard deviation units. Significant differences were also found between the means of the anonymously graded laboratory reports (videodisc-plus-lab = 31.04 and lab-only = 26.44). Also, students in the laboratory group were more likely to rely on the rote citation of examples in the lab manual even when these examples did not fit the data (Smith, Jones & Waugh, 1986).

17.6.6 Discussion

Research on games and simulations indicates three major areas that are essential for effective design: (1) the task-reinforcement structure, (2) the role of prior knowledge, and (3) the complexity of problem solving.

17.6.6.1. Task Reinforcement Structure. Both games and simulations alter the reinforcement structure of the classroom because they expand the opportunities for students to earn reinforcement. Because winning is a powerful reinforcer, games must be carefully designed so that inappropriate strategies are not learned. Although teams-games-tournaments reinforces cooperation and peer tutoring, other games reinforce guessing and the selection of wrong answers.

The major task for game players is to win, whereas the task for simulation participants is to execute serious responsibilities identified by the nature of the simulation (experiential) or by the accompanying instruction (symbolic simulation). To mix games and simulations establishes conflicting tasks, i.e., defeating other participants or executing a role with identified responsibilities.

In contrast, experiential simulations establish particular tasks or goals for participants and provide contingencies in the form of changes in the complex problem or the actions of other participants. Designers of symbolic simulations, however, face particular problems. That is, simply providing a data universe, a system, or interacting processes is a necessary but not sufficient condition for a successful or meaningful problem-solving experience. For example, if the student's decisions result in a colorful screen display, the exercise reinforces random search strategies as well as thoughtful student choices.

Moreover, in the absence of prior instruction on conducting research in multivariate open-ended situations, some students will be unsuccessful. As indicated in one study, the unsuccessful students became frustrated and lost interest. Instead of a reinforcing exercise, the simulation becomes a form of punishment for the student's effort. One solution is to teach model-building strategies so that students become proficient in using them to solve broad open-ended problems. Another is to program the exercise so that random selection of variables initiates a message that suspends the simulation and routes the student to sources of assistance, such as the teacher or designated instructional materials.
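The second solution above, suspending the exercise when variable selection looks random, can be sketched as a simple monitor. The class name, the detection heuristic (a run of trials that each change a different variable with no follow-up test), and the threshold are all illustrative assumptions, not features described in the chapter:

```python
class SimulationMonitor:
    """Hypothetical monitor for a symbolic simulation. It watches which
    variable a student changes on each trial and suspends the exercise
    when the pattern suggests random search rather than systematic inquiry.
    The heuristic and its threshold are illustrative assumptions."""

    def __init__(self, window_size=5):
        # If this many consecutive trials each change a different variable,
        # with no variable revisited to confirm its effect, treat the
        # pattern as random search.
        self.window_size = window_size
        self.history = []

    def record_trial(self, variable_changed):
        """Log one trial; return an advisory message if the simulation
        should be suspended, otherwise None."""
        self.history.append(variable_changed)
        window = self.history[-self.window_size:]
        # Systematic work revisits a variable to confirm its effect;
        # a full window of all-distinct choices suggests guessing.
        if len(window) == self.window_size and len(set(window)) == self.window_size:
            return self.suspend()
        return None

    def suspend(self):
        # In a real exercise this would pause the simulation and route the
        # student to the teacher or to designated instructional materials.
        return ("Simulation paused: your recent choices look unsystematic. "
                "Please review the unit on controlling variables before continuing.")


monitor = SimulationMonitor()
choices = ["pH", "temperature", "flow", "algae", "oxygen"]
messages = [monitor.record_trial(v) for v in choices]
# Only the fifth, all-distinct choice triggers the advisory message.
```

A student who instead retested the same variable within the window would never trigger the message, which is the point: the monitor punishes only unsystematic search, not legitimate exploration.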

17.6.6.2. The Role of Prior Knowledge. Although the research on data universe, system, and process simulations is limited, the studies indicate the importance of prior knowledge for successful performance. Prior achievement level typically serves as an indicator of prior knowledge; however, this variable alone is insufficient to predict problem-solving performance. The research identifies two types of prior knowledge that appear to be essential in some simulations.

One is domain-specific knowledge that must be integrated into a coherent whole. (Fragmented or partial knowledge is insufficient.) In one study, for example, unsuccessful students held several misconceptions about key topics.

A second type of knowledge essential for success is a systematic strategy for addressing a multifaceted situation. Students who had been taught to use models to explain data and to revise their models to account for new data were successful in conducting genetics research in a data universe simulation.

The capability of developing hypothetical models of a complex situation was found to be important in another study. In a system simulation involving the interactions of several variables, formal Piagetian reasoning (as opposed to concrete reasoning) also was found to be essential. Of interest is that formal operational thinkers are capable of developing hypothetical models that are then tested systematically. In contrast, concrete operational thinkers can successfully manipulate only one or two variables at a time.

The implication for instructional design is that the identification of essential prerequisites, long an important design principle, involves at least two areas for some simulations. First, major concepts in the subject domain that are essential to manipulating variables or conducting research using the simulation should be identified. Level of academic achievement or a description of completed courses is insufficient to indicate essential prerequisite knowledge. In other words, variables identified in artificial-intelligence approaches to computer-based learning (i.e., problem-solving skill, aptitude, and ability; Tennyson & Park, 1987) must be specified for the different types of simulations.

Second, the level of the task in terms of the number and nature of the variables to be manipulated also should be identified. Simulations that illustrate the interactive effects of several variables are more complex in terms of the reasoning strategies required for student success. Prior instruction in systematically manipulating variables may be required.

17.6.6.3. The Complexity of Problem Solving. Understanding problem solving in a variety of contexts is a major focus in cognitive theories of learning (Gredler, 1992b). Research on simulations suggests implications for these perspectives. One theoretical perspective, Gagné's conditions of learning, identifies five distinct categories of learning that differ in instructional requirements for successful learning. One of these domains is cognitive strategies, which consists of the skills essential to the student's management of his or her thinking and learning (Gagné, 1977, 1985). Cognitive strategies, however, may vary widely among students. Analyses of students' decisions in a diagnostic simulation, for example, indicated that successful students ranged from thorough and discriminating to the "shotgun" group, which chose many unwarranted actions. Analyses of students' strategies in a data universe simulation indicated 21 behavioral tendencies that differed between successful and unsuccessful problem solvers.

Another concern in terms of strategies acquired by the learner is that situational heuristics rather than generalizable principles may be learned. Thus, simulation design must incorporate links to the relevant theoretical framework.

Information-processing theories, another cognitive perspective, focus on the differences between expert and novice problem solvers. Research in several subject areas has identified general characteristics of both types of problem solvers (see Glaser & Chi, 1988). The expert/novice dichotomy, however, may oversimplify differences among individuals. One study, for example, found a continuum of capabilities from least to most successful that varied along the dimensions of (1) extent of integrated knowledge and (2) level of strategic reasoning.

A third cognitive development is that of constructivism (see Chapter 7). At present, no single constructivist theory of instruction has been developed (Driscoll, 1994). A basic tenet of constructivism, however, is that knowledge is a construction by the learner. That is, the learner interprets information in terms of his or her particular experience and constructs a knowledge base from these interpretations.

Beyond this common tenet, constructivism is interpreted by different proponents in somewhat different ways. Two of these views are particularly relevant for simulations. One view is based in part on Piaget's (1970) theory of cognitive development, which states that logical thinking develops from (1) the learner's confrontation with his or her misconceptions about the world and (2) the resulting reorganization of thinking on a more logical level. Thus, instruction should place learners in situations in which they must face the inconsistencies in their naive models of thinking. The process simulation discussed earlier that incorporates principles of Newtonian mechanics is an example.

Another perspective in constructivism is the view that authentic tasks with real-world relevance and utility should replace isolated decontextualized demands (Brown et al., 1989; Jonassen, 1991; Driscoll, 1994). Such tasks are particularly important in ill-structured domains such as medicine, history, and literature interpretation in which problems or cases require the flexible assembly of knowledge from a variety of sources (Spiro et al., 1991). Examples are diagnostic simulations in which students face complex, authentic, and evolving problems that they must attempt to understand and manage to a successful conclusion. Some data universe and system simulations, if accompanied by appropriate instruction, can also address the requirements of this constructivist view.

One concern that has been raised in relation to placing students in complex situations requiring many steps is that such tasks may overwhelm the less-capable learner (Dick, 1991, p. 42). In other words, the gap may be too great between the learner's capabilities and the tools and information provided in the exercise. The system simulation on water pollution is an example. The unsuccessful students lacked both basic knowledge related to the situation and systematic strategies for addressing multifactor problems, and expressed dissatisfaction and lack of interest several times during the activity.

In summary, the research on simulations indicates the variety of cognitive strategies that students enact in complex situations, documents the range of differences between expert and novice problem solvers, and offers a mechanism for empirically validating major concepts in constructivism.


Updated August 3, 2001
Copyright © 2001
The Association for Educational Communications and Technology
