AECT Handbook of Research

Table of Contents

15. Virtual Realities

15.1 Introduction
15.2 Historical Background
15.3 Different Kinds of Virtual Reality
15.4 Introduction to Virtual Reality Applications in Education and Training
15.5 Establishing a Research Agenda for Virtual Realities in Education and Training
15.6 Theoretical Perspectives on Virtual Realities
15.7 Design Models and Metaphors
15.8 Virtual Realities Research and Development
15.9 Implications
  References

15.2 HISTORICAL BACKGROUND

As Woolley (1992) explains, "Trying to trace the origins of the idea of virtual reality is like trying to trace the source of a river. It is produced by the accumulated flow of many streams of ideas, fed by many springs of inspiration." One forum in which the potential of virtual reality has been explored is science fiction (Bradbury, 1951; Harrison, 1972; W. Gibson, 1986; Stephenson, 1992; Sterling, 1994), together with the related area of scenario-building (Kellogg, Carroll, & Richards, 1991).

The technology that led up to virtual reality (computer graphics, simulation, human-computer interfaces, and so on) has been developing and coalescing for more than three decades. In the 1960s, Ivan Sutherland created one of the pioneering virtual reality systems, which incorporated a head-mounted display (Sutherland, 1965; Sutherland, 1968). Sutherland's head-mounted display was nicknamed 'The Sword of Damocles' because of its strange appearance. Sutherland did not continue with this work because the computer graphics systems available to him at the time were very primitive; instead, he shifted his attention to inventing many of the fundamental algorithms, hardware, and software of computer graphics (McGreevy, 1993). Sutherland's work provided a foundation for the emergence of virtual reality in the 1980s, and it inspired others, such as Frederick P. Brooks, Jr., of the University of North Carolina, who began experimenting with ways to accurately simulate and display the structure of molecules. Brooks' work developed into a major virtual reality research initiative at the University of North Carolina (Hamit, 1993; Rheingold, 1991; Robinett, 1991).

In 1961, Morton Heilig, a filmmaker, patented the Sensorama, a wholly mechanical virtual reality device (a one-person theater) that combined three-dimensional, full-color film with sound, smells, the feeling of motion, and the sensation of wind on the viewer's face. In the Sensorama, the user could experience several scenarios, including a motorcycle ride through New York, a bicycle ride, or a helicopter ride over Century City. The Sensorama was not a commercial success, but it reflected a tremendous vision, one that has now returned in computer-based rather than mechanical virtual reality systems (Hamit, 1993; Rheingold, 1991).

During the 1960s and 1970s, the Air Force established a laboratory at Wright-Patterson Air Force Base in Ohio to develop flight simulators and head-mounted displays that could facilitate learning and performance in sophisticated, high-workload, high-speed military aircraft. This initiative resulted in the SuperCockpit, which allows pilots to fly ultra-high-speed aircraft using only head, eye, and hand movements. The director of the SuperCockpit project, Tom Furness, is now the director of the Human Interface Technology Lab at the University of Washington, a leading VR research and development center, and VR research continues at Wright-Patterson Air Force Base (Amburn, 1993; Stytz, 1993; Stytz, 1994). Flight simulators have been used extensively and effectively for pilot training since the 1920s (Lauber & Fouchee, 1981; Woolley, 1992; Bricken & Byrne, 1993).

In the 1960s, GE developed a simulator that was adapted for lunar mission simulations. It was primarily useful for practicing rendezvous and especially docking between the lunar excursion module (LEM) and the command module (CM). This simulator was also adapted as a city planning tool in a project at UCLA --- the first time a simulator had been used to explore a digital model of a city (McGreevy, 1993).

In the 1970s, researchers at MIT developed a spatial data management system using videodisc technology. This work resulted in the Aspen Movie Map (MIT, 1981; Mohl, 1982), a recreation of part of the town of Aspen, Colorado, stored on an optical disk, which gave users the simulated experience of driving through the town, interactively choosing to turn left or right to pursue any destination (within the confines of the model). Twenty miles of Aspen streets were photographed from all directions at ten-foot intervals, as was every possible turn, and aerial views were also included. This photo-based approach proved too complicated (i.e., it was not user-friendly), so it was not used to replicate larger cities, which entail a higher degree of complexity (Hamit, 1993).

Also in the 1970s, Myron Krueger began experimenting with human-computer interaction as a graduate student at the University of Wisconsin-Madison. Krueger designed responsive but non-immersive environments that combined video and computer technology, an approach he called 'Artificial Reality.' As Krueger (1993, p. 149) explains,

you are perceived by a video camera and the image of your body is displayed in a graphic world. The juxtaposition of your image with graphic objects on the screen suggests that perhaps you could affect the graphic objects. This expectation is innate. It does not need to be explained. To take advantage of it, the computer continually analyzes your image with respect to the graphic world. When your image touches a graphic object, the computer can respond in many ways. For example, the object can move as if pushed. It can explode, stick to your finger, or cause your image to disappear. You can play music with your finger. The graphic world need not be realistic. Your image can be moved, scaled, and rotated like a graphic object in response to your actions or simulated forces. You can even fly your image around the screen.

The technologies underlying virtual reality came together at the NASA Ames Lab in California during the mid-1980s with the development of a system that combined a stereoscopic head-mounted display (using screens scavenged from two miniature televisions) with a fiber-optic wired-glove interface device. This breakthrough project at NASA built on a long tradition of developing ways to simulate the environments and procedures that astronauts would encounter during space flight, a tradition that included the GE simulator developed in the 1960s (McGreevy, 1993).


