AECT Handbook of Research

Table of Contents

19. Intelligent tutoring systems: past, present, and future

19.1 Introduction
19.2 Precursors of ITS
19.3 Intelligent Tutoring Systems Defined
19.4 The 20-year History of ITS
19.5 ITS Evaluations
19.6 Future ITS Research and Development
19.7 Conclusion
  References

19.3 INTELLIGENT TUTORING SYSTEMS DEFINED

While many researchers in the field view ICAI and ITS as interchangeable designations, we make a subtle distinction between the two: ITS represent a more specific type of ICAI, due to the attributes discussed below.

19.3.1 Early Specifications of ITS

An early outline of ITS requirements was presented by Hartley and Sleeman (1973). They argued that an ITS must possess: (a) knowledge of the domain (expert model), (b) knowledge of the learner (student model), and (c) knowledge of teaching strategies (tutor). It is interesting to note that this simple list has not changed in more than 20 years (see Lajoie & Derry, 1993; Polson & Richardson, 1988; Psotka, Massey & Mutter, 1988; Regian & Shute, 1992; Sleeman & Brown, 1982).

All of this computer-resident knowledge marks a radical shift from earlier "knowledge-free" CAI routines. Furthermore, the ability to diagnose errors and tailor remediation based on the diagnosis represents a key difference between ICAI and CAI. Figure 19-2 illustrates these knowledge components and their relations within a generic ITS. Each of these ITS components will be discussed in turn.

 

[Figure 19-2. Intelligent tutoring system: knowledge components and their relations within a generic ITS.]
19.3.2 ITS Components and Relationships

A student learns from an ITS primarily by solving problems -- ones that are appropriately selected or tailor-made -- that serve as good learning experiences for that student. The system begins by assessing what the student already knows: the student model. It must concurrently consider what the student needs to know: the curriculum (also known as the domain expert). Finally, the system must decide which curriculum element (unit of instruction) ought to be taught next, and how it should be presented: the tutor (or inherent teaching strategy). From all of these considerations, the system selects or generates a problem, then either works out a solution to the problem (via the domain expert) or retrieves a prepared solution. The ITS then compares its solution, in real time, to the one the student has prepared and performs a diagnosis based on differences between the two.

The ITS offers feedback based on considerations such as how long it has been since feedback was last provided, whether the student has already received a particular piece of advice, and so on. After the feedback loop, the program updates the student skills model (a record of what the student knows and doesn't know) and increments learning-progress indicators. These updating activities modify the student model, and the entire cycle repeats, beginning with the selection or generation of a new problem (see 32.3).
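The problem-test-feedback cycle just described can be sketched in code. The following Python sketch is purely illustrative: all names (SimpleITS, select_problem, the mastery scores) are hypothetical simplifications standing in for the far richer components of a real system.

```python
# Illustrative sketch of the generic ITS cycle: select a problem (tutor),
# solve it (domain expert), diagnose the student's answer, give feedback,
# and update the student model. All names here are hypothetical.

class SimpleITS:
    def __init__(self, curriculum):
        # Curriculum / domain expert: maps each unit to a (problem, solution) pair.
        self.curriculum = curriculum
        # Student model: a record of what the student knows and doesn't know,
        # here reduced to a mastery estimate per unit in [0, 1].
        self.student_model = {unit: 0.0 for unit in curriculum}

    def select_problem(self):
        # Tutor strategy (trivially simple here): pick the least-mastered unit.
        unit = min(self.student_model, key=self.student_model.get)
        problem, expert_solution = self.curriculum[unit]
        return unit, problem, expert_solution

    def diagnose(self, expert_solution, student_solution):
        # Diagnosis: compare the expert's solution with the student's.
        return expert_solution == student_solution

    def update(self, unit, correct):
        # Feedback loop: update the student model, then the cycle repeats.
        delta = 0.2 if correct else -0.1
        new = self.student_model[unit] + delta
        self.student_model[unit] = min(1.0, max(0.0, new))
        return "correct -- advancing" if correct else "incorrect -- remediating"


its = SimpleITS({"fractions": ("1/2 + 1/4 = ?", "3/4")})
unit, problem, solution = its.select_problem()
feedback = its.update(unit, its.diagnose(solution, "3/4"))
```

In a real ITS, each of these one-line stand-ins is itself a substantial subsystem; the sketch only shows how the components feed one another in the cycle.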

Not all ITS include these components, and the problem-test-feedback cycle does not adequately characterize all systems. However, this generic depiction does describe many current ITS. Alternative implementations exist, representing conceptual as well as practical differences in design. For example, the standard approach to building a student model involves representing emerging learner knowledge and skills: the computer responds to updated observations with minute adjustments to the curriculum, so instruction depends heavily on individual response histories. An alternative approach involves assessing incoming knowledge and skills, either instead of, or in addition to, emerging knowledge and skills. This alternative enables the curriculum to adapt to persistent and/or momentary performance information, as well as to their interaction (see Shute, 1993a, 1993b). In fact, many have argued that incoming knowledge is the single most important determinant of subsequent learning (e.g., Alexander & Judy, 1988; Dochy, 1992; Glaser, 1984).
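The alternative just described -- blending persistent (incoming) and momentary (emerging) information -- can be illustrated with a toy estimator. This Python function is a hypothetical sketch, not any published student-modeling algorithm: it simply weights a pretest estimate against running session evidence, with the pretest counting as `prior_weight` observations.

```python
# Hypothetical sketch: a mastery estimate that combines incoming knowledge
# (a pretest score) with emerging knowledge (session responses).

def estimate_mastery(pretest_score, responses, prior_weight=4):
    """Blend persistent and momentary performance information.

    pretest_score: incoming-knowledge estimate in [0, 1]
    responses:     list of booleans from the current session
    prior_weight:  how many observations the pretest counts for
    """
    correct = sum(responses)
    total = len(responses)
    # Weighted average: the pretest dominates early on; session evidence
    # dominates as observations accumulate.
    return (prior_weight * pretest_score + correct) / (prior_weight + total)
```

Under this weighting, a student entering with a strong pretest retains a high estimate through a few early errors, while with more responses the estimate increasingly reflects session performance -- one simple way a curriculum could adapt to both kinds of information and their interaction.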

Other kinds of systems may not even have a tutor/coach present. For example, the strength of microworlds (exploratory environments) resides in the underlying simulation and explicit interfaces in which students can freely conduct experiments and obtain results quickly and safely (see 12.3). This is a particularly attractive feature for domains that are hazardous or that do not occur frequently in the real world. Furthermore, these systems can be intrinsically motivating, generating interesting complexities that keep students exploring while giving them sufficient success to prevent frustration.

19.3.3 The "I" in ITS

Our working definition of computer-tutor intelligence is that the system must behave intelligently, not actually be intelligent like a human. More specifically, we believe that an intelligent system must be able to (a) accurately diagnose students' knowledge structures, skills, and/or styles using principles, rather than pre-programmed responses, to decide what to do next, and then (b) adapt instruction accordingly (e.g., Clancey, 1986; Shute, 1992; Sleeman & Brown, 1982). Moreover, the traditional intelligent tutoring system "... takes a longitudinal, rather than cross-sectional, perspective, focusing on the fluctuating cognitive needs of a single learner over time, rather than on stable inter-individual differences" (Ohlsson, 1986, pp. 293-294).

In order to obtain a rough idea of the degree of consensus among researchers in the ITS community, twenty experts were asked to summarize, in a couple of sentences, their ideas on what the "I" in ITS meant. Following are the different responses received (in alphabetical order, and slightly edited, for readability).

Ton de Jong (Dec. 10, 1993): Intelligent in ITS stands for the ability to use (in a connected way) different levels of abstraction in the representation of the learner, the domain, and the instruction. The higher the range of abstraction, the higher the intelligence. The phrase "in a connected way" implies that one should be able to go from specific (e.g., log files) to abstract (e.g., learner characteristics), as well as the other way around (e.g., from general instructional strategies to a specific instructional transaction).

Sharon Derry (Oct. 15, 1993): An intelligent instructional system can observe what the student is doing during problem solving and/or has done over a series of problem-solving sessions, and from this information draw inferences about the student's knowledge, beliefs, and attitudes in terms of some theory of cognition. A system can be intelligent whether or not it makes instructional decisions based on this information, but if it doesn't use such information in instructional decision-making, then I don't think of it as a tutoring system, but rather a tool that has some diagnostic capabilities.

Wayne Gray (Nov. 15, 1993): I concede a wide latitude on the application of the term "ITS" in regard to instructional systems. However, at some level and to some degree, there should be some sort of "cognitive modeling" technology involved. The modeling can be of an ideal student, instructor, or grader, or of a less-than-ideal problem solver as in the "student models" that are often built up in ITS. To be intelligent, a system has to incorporate and use a model for making decisions about what to do at any given point during learning.

Lee Gugerty (Oct. 20, 1993): Intelligent tutoring involves: (a) explicit modeling of expert representations and cognitive processes; (b) detection of student errors; (c) diagnosis of students' knowledge (correct, incorrect, and missing); (d) instruction adapted to students' knowledge state (via problem selection, hints, feedback, and explicit didactic instruction); and (e) doing all of the above in a timely fashion as the student solves problems (not post hoc).

Pat Kyllonen (Oct. 14, 1993): An "intelligent" tutoring system is one that uses AI programming techniques or principles. However, what is considered AI (as opposed to standard) programming changes over time (e.g., expert systems used to be archetypal AI systems, but are now found in $100 PC software packages). For me, two features separate ITS software from conventional CAI. One is the existence of a student model. What the student knows cannot be recorded directly, but must be inferred by the system, based on a pattern of successes or failures by the student and an "understanding" of what knowledge problems in the curriculum call upon. Another feature is the existence of "coaches," "demons" or "bug libraries" that can observe a student's behavior and either diagnose the behavior in terms of the student's current knowledge structure, or suggest corrections to that behavior.

Susanne Lajoie (Oct. 18, 1993): The "I" in ITS means that the computer can provide adaptive forms of feedback to the learner based on a dynamic assessment of the student's "model" of performance. Intelligent feedback means that the assessment of the learner is ongoing, the feedback is appropriate to that particular learner in the context of where an impasse has been encountered, and it is not canned but generated on the spot, based on student needs.

Alan Lesgold (Oct. 21, 1993): "Intelligent" means that the system uses inference mechanisms to provide coaching, explanation, or other information to the student performing a task. Further, it implies that this information is tuned to the context of the student's ongoing work and/or a model of the student's evolving knowledge.

Matt Lewis (Oct. 28, 1993): An "intelligent" tutoring system contains, at a minimum, a reasonably general simulation of human problem solving in direct service of communicating knowledge and, like a good human tutor, separates domain knowledge from pedagogical knowledge. The simulation might solve domain-specific problems in the target instructional domain (e.g., a human-like approach and solution to the problem of writing a fugue) or solve pedagogical problems (e.g., error diagnosis and attribution, or selection of appropriate response).

Wes Regian (Oct. 14, 1993): An ITS differs from CAI in that: (a) instructional interactions are individually tuned at run-time to be as efficient as possible, (b) instruction is based on cognitive principles, and (c) at least some of the feedback is generated at run-time, rather than being all canned. It is not particularly important to me what language the system is written in, whether or not the system is in any sense arguably aware of anything, and whether its decisions are rendered in a manner that is the same as a human decision.

Frank Ritter (Oct. 15, 1993): The "I" in ITS usually indicates that a single knowledge-based component has been added that helps a tutoring system perform one aspect of its performance in a better way. This can be in lesson scheduling, providing examples of domain knowledge in action, or providing domain knowledge for comparison with a student's behavior. What it should mean is that it does the whole job intelligently. The systems are usually not systems in the full sense of the word, they tend to be prototypes, with whole parts missing.

Derek Sleeman (Nov. 22, 1993): "Intelligent" tutoring systems need to have motivating learning environments, to communicate effectively, and to render dynamic decisions about appropriate control strategies. Since the 1960s, we've seen that the same material delivered on various systems invokes motivation differentially; thus we need to identify the factors that affect a learner's motivation. Next, communication can only occur when there's a shared world-view. In conventional dialogs, humans dynamically tailor their language to the person to whom they are speaking, but computers are not yet so adaptable. Finally, control implies which of the partners in the dialog will take the initiative, and it's often necessary to change control during an interaction, depending on the social setting, the student's motivation, and the level of incoming knowledge.

Elliot Soloway (Oct. 28, 1993): The intent of the "I" in ITS was to explicitly recognize that a tutoring system needs to be exceedingly flexible in order to respond to the immense variety of learner responses. CAI, as the forerunner of ITS, didn't have the range of interactivity needed for learning. In fact, the movement from ICAI to ITS was to further distance the new type of learning environments from the rigidity of CAI.

Sig Tobias (Oct. 15, 1993): Intelligent, in an ITS context, means that the program is flexible in the method and sequence with which instructional materials are presented to the student. Furthermore, the system is capable of adapting instructional parameters to student characteristics by using data collected prior to, or during, instruction for such decisions. Finally, it suggests that the instructional system can advise the student regarding options most likely to be successful for the student.

Kurt VanLehn (Oct. 18, 1993): "Intelligent" means that at least one of the three classic modules is included in the tutoring system. That is, the machine has either a subject-matter expert, a diagnostician/student modeler, or an expert teacher. Just as in any AI system, an expert system with only 10 production rules is intelligent only in that it holds the possibilities for expansion; a 100-rule system is moderately intelligent; and 1000+ rules means you're really getting there.

Beverly Woolf (Oct. 25, 1993): My view of tutor intelligence includes the following elements: (a) mechanisms that model the thinking processes of domain experts, tutors, and students; (b) environments that supply world-class laboratories within which students can build and test their own reality; and (c) a computer partner that facilitates the ah-ha experience, recognizes the student's intention, and aids and advises the student. An intelligent environment would also support complex discoveries.

As seen in this non-random sample of responses about what constitutes intelligence in an ITS, just about everyone agrees that the most critical element is real-time cognitive diagnosis (or student modeling). The next most frequently cited feature is adaptive remediation (see 22.5). And while some maintain that remediation actually comprises the "T" in intelligent tutoring systems, our position is that the two components (diagnosis and remediation), working in concert, make up the intelligence in an ITS (see our working definition, above). Consider the case where a system diagnoses a student's skill level but makes no effort to rectify any faulty behaviors. Can that system really be classified as intelligent? Theoretically, perhaps, but practically, no. Other characteristics of intelligence appear less frequently in these responses (e.g., canned vs. generated problems and feedback, degree of learner control in the environment, presence of awareness).

The degree of agreement among respondents was actually surprising given the diversity of their research interests and backgrounds (computer scientists, psychologists, educators). But this degree of consensus was not always there. Until fairly recently, the field was not only esoteric but quite fractionated; no two people could agree on what "intelligence" in a computer tutor actually referred to. To understand the current congruence, we need to jump briefly back in time and trace the evolution of intelligent tutoring systems, from the late 1960s to the present (mid-1990s).


Updated August 3, 2001
Copyright © 2001
The Association for Educational Communications and Technology
