AECT Handbook of Research

Table of Contents

15. Virtual Realities

15.1 Introduction
15.2 Historical Background
15.3 Different Kinds of Virtual Reality
15.4 Introduction to Virtual Reality Applications in Education and Training
15.5 Establishing a Research Agenda for Virtual Realities in Education and Training
15.6 Theoretical Perspectives on Virtual Realities
15.7 Design Models and Metaphors
15.8 Virtual Realities Research and Development
15.9 Implications
  References

15.3 DIFFERENT KINDS OF VIRTUAL REALITY

There is more than one type of virtual reality, and there is more than one scheme for classifying the various types. Jacobson (1993a) suggests that there are four types of virtual reality: (1) immersive virtual reality; (2) desktop virtual reality (i.e., low-cost "homebrew" virtual reality); (3) projection virtual reality; and (4) simulation virtual reality.

Thurman and Mattoon (1994) present a model for differentiating among types of VR along several "dimensions." They identify a "verity dimension" that distinguishes virtual realities according to how closely an application corresponds to physical reality, and they propose a scale illustrating this dimension (see Fig. 15-1). According to Thurman and Mattoon (1994, p. 57),

The two end points of this dimension - physical and abstract - describe the degree that a VR and entities within the virtual environment have the characteristics of reality. On the left end of the scale, VRs simulate or mimic real-world counterparts which correspond to natural laws. On the right side of the scale, VRs represent abstract ideas that are completely novel and may not even resemble the real world.

Thurman and Mattoon (1994) also identify an "integration dimension" that focuses on how humans are integrated into the computer system. This dimension includes a scale featuring three categories: batch processing, shared control, and total inclusion. These categories are based on three broad eras of human-computer integration, culminating with VR --- total inclusion. A third dimension of this model is interface, on a scale ranging between natural and artificial. These three dimensions are combined to form a three-dimensional classification scheme for virtual realities. This model provides a valuable tool for understanding and comparing different virtual realities.

Figure 15-1. Thurman and Mattoon's verity scale for virtual reality. (Adapted from Thurman & Mattoon, 1994.)
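
To make the three-dimensional classification scheme concrete, the following minimal sketch records where a single VR application might fall on the verity, integration, and interface dimensions; the field names and numeric scale values are illustrative assumptions rather than part of Thurman and Mattoon's model.

    from dataclasses import dataclass

    @dataclass
    class VRClassification:
        """Position of one VR application in Thurman and Mattoon's scheme."""
        verity: float       # 0.0 = physical (mimics real-world counterparts), 1.0 = abstract
        integration: str    # "batch processing", "shared control", or "total inclusion"
        interface: float    # 0.0 = natural, 1.0 = artificial

    # Example: an immersive flight simulator sits near the physical end of the
    # verity scale, totally includes the user, and uses a fairly natural interface.
    flight_simulator = VRClassification(verity=0.1,
                                        integration="total inclusion",
                                        interface=0.2)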

Another classification scheme, discussed in detail here, has been delineated by Brill (1993; 1994b). Brill's model features seven types of virtual reality: (1) Immersive first-person; (2) Through the window; (3) Mirror world; (4) Waldo World; (5) Chamber world; (6) Cab simulator environment; and (7) Cyberspace. Some of Brill's categories are physically immersive and some are not. The key feature of all virtual reality systems is that they provide an environment created by the computer or other media in which the user feels present, that is, immersed physically, perceptually, and psychologically. Virtual reality systems enable users to become participants in artificial spaces created by the computer. It is important to note that not all virtual worlds are three-dimensional; three-dimensionality is not necessary for an enriching experience. Nor does the user have to be completely immersed in a virtual world to explore it: first-person (direct) interaction, as well as second-person and third-person interaction, are all possible (Laurel, 1991; Norman, 1993), as the following discussion indicates.

 

15.3.1 Immersive First-Person

Usually when we think of virtual reality, we think of immersive systems involving computer interface devices such as a head-mounted display (HMD), fiber-optic wired gloves, position-tracking devices, and audio systems providing 3-D (binaural) sound. Immersive virtual reality provides an immediate, first-person experience. Some applications add a treadmill interface to simulate the experience of walking through virtual space. Others replace the head-mounted display with the BOOM viewer from Fake Space Labs, which hangs suspended in front of the viewer's face rather than resting on it, so it is less heavy and tiring to wear than an HMD. In immersive VR, the user is placed inside the image; the generated image is assigned properties that make it look and act real in terms of visual perception and, in some cases, aural and tactile perception (Brooks, 1988; Trubitt, 1990; Begault, 1991; Markoff, 1991; Minsky, 1991; Gehring, 1992). There is even research on creating virtual smells; researchers at the Southwest Research Institute have applied for a patent on such a product (Varner, 1993).

Children are already familiar with some of this technology from video games. Mattel's Power Glove™, used as an interface with Nintendo games, is a low-cost design based on the DataGlove™ from VPL Research, Inc. The Power Glove™ failed as a toy, but it has achieved some success as an interface device in low-cost virtual reality systems, particularly in what are known as "homebrew" or "garage" virtual reality systems (Jacobson, 1994). Inexpensive software and computer cards are available that make it possible to use the Power Glove™ as an input device with Amiga, Macintosh, or IBM computers (Eberhart, 1993; Stampe, Roehl, & Eagan, 1993; Jacobson, 1994; Hollands, 1995).

15.3.2 Augmented Reality

A variation of immersive virtual reality is augmented reality, in which a see-through layer of computer graphics is superimposed on the real world to highlight certain features and enhance understanding. One application of augmented reality is in aviation, where certain controls, such as those needed to land an airplane, can be highlighted. Many medical applications are also under development (Taubes, 1994b). Recently, for the first time, a surgeon performed surgery to remove a brain tumor using an augmented reality system; a video image superimposed with 3-D graphics helped the doctor see the site of the operation more effectively (Satava, 1993).

15.3.3 Through the Window

With this kind of system, also known as "desktop VR," the user sees the 3-D world through the "window" of the computer screen and navigates through the space with a control device such as a mouse. Like immersive virtual reality, this provides a first-person experience. One low-cost example of a "through the window" virtual reality system is the 3-D architectural design and planning tool Virtus WalkThrough, which makes it possible to explore virtual reality on a Macintosh or IBM computer. Developed as a computer visualization tool to help plan the complex high-tech filmmaking for the movie The Abyss, Virtus WalkThrough is now used as a set design and planning tool for many Hollywood movies and advertisements, as well as for architectural planning and educational applications. A similar, less expensive, and less sophisticated program that is starting to find use in elementary and secondary schools is Virtus VR (Law, 1994; Pantelidis, n.d.).

Another example of 'Through the window' virtual reality comes from the field of dance, where a computer program called LifeForms lets choreographers create sophisticated human motion animations. LifeForms permits the user to access "shape" libraries of figures in sitting, standing, jumping, sports poses, dance poses, and other positions. LifeForms supports the compositional process of dance and animation so that choreographers can create, fine-tune, and plan dances "virtually" on the computer. The great modern dancer and choreographer Merce Cunningham has begun using LifeForms to choreograph new dances (Schiphorst, 1992). Using LifeForms, it is possible to learn a great deal about the design process without actually rehearsing and mounting a performance.

The field of forensic animation is merging with "through the window" VR (Baird, 1992; Hamilton, 1993). Here, dynamic computer animations are used to re-create the scene of a crime and the sequence of events, as reconstructed through analysis of the evidence (for example, bullet speed and trajectory can be modeled). These dynamic visualizations are used in crime investigations and as evidence in trials. The London Metropolitan Police use VR to document witnesses' descriptions of crime scenes. Similarly, the FBI uses Virtus WalkThrough as a training tool at the FBI Academy and as a site-visualization tool in hostage crisis situations.

15.3.4 Mirror World

In contrast to the first-person systems described above, mirror worlds (projected realities) provide a second-person experience in which the viewer stands outside the imaginary world but communicates with characters or objects inside it. Mirror-world systems use a video camera as an input device. Users see their images superimposed on or merged with a virtual world presented on a large video monitor or video-projected image. Using a digitizer, the computer processes the users' images to extract features such as their positions, movements, or the number of fingers raised. These systems are usually less expensive than total-immersion systems, and users are unencumbered by headgear, wired gloves, or other interfaces (Lantz, 1992). Four examples of mirror-world virtual reality systems are: (1) Myron Krueger's "artificial reality" systems, such as VIDEOPLACE; (2) the Mandala system from the Vivid Group, created by a group of performance artists in Toronto; (3) the InView system, which has provided the basis for developing entertainment applications for children, including a TV game show; and (4) Meta Media's wall-sized screen applications, such as shooting basketball hoops and experiencing what happens when you try to throw a ball under zero-gravity conditions (Brill, 1995; O'Donnell, 1994; Wagner, 1994).

In Krueger's system, users see colorful silhouettes of their hands or their entire bodies. As users move, their silhouette mirror images move correspondingly, interacting with other silhouette objects generated by the computer. Scale can be adjusted so that one person's mirror silhouette appears very small by comparison with other people and objects present in the VIDEOPLACE artificial world. Krueger suggests:

In artificial realities, the body can be employed as a teaching aid, rather than suppressed by the need to keep order. The theme is not "learning by doing" in the Dewey sense, but instead "doing is learning," a completely different emphasis. (Krueger, 1993, p. 152)

The Mandala and InView systems feature a video camera above the computer screen that captures an image of the user and places this image within the scene portrayed on the screen using computer graphics. There are actually three components: (1) the scene portrayed (usually stored on videodisc); (2) the digitized image of the user; and (3) computer-generated graphics objects that appear to fit within the scene and that are programmed to be interactive, responding to the "touch" of the user's image. The user interacts with the objects on the screen, for example, to play a drum or to hit a ball. (Tactile feedback is not possible with this technique.) This type of system is becoming popular as an interactive museum exhibit. For example, at the National Hockey Museum, a Mandala system shows you on the screen in front of the goalie net, trying to keep the "virtual" puck out of the net. Recently, a Mandala installation simulating the Holodeck from Star Trek: The Next Generation was completed for Paramount Pictures and the Oregon Museum of Science and Industry.

Users step into an actual set of the transporter room in the real world and view themselves in the "Star Trek virtual world" on a large screen in front of them. They control where they wish to be transported and can interact with the scene when they arrive. For example, users could transport themselves to the surface of a planet, move around the location, and manipulate the objects there. Actual video footage from the television show is used for backgrounds and is controlled via videodisc. (Wyshynski & Vincent, 1993, p. 130)
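
Sketched minimally in code, the three-component architecture described above (a stored background scene, the digitized image of the user, and interactive graphics objects) might look something like the following; the array layout, object fields, and function names are illustrative assumptions, not part of the Mandala or InView software.

    import numpy as np

    def composite_frame(scene, user_silhouette, objects):
        """Layer the stored scene, the user's digitized image, and the
        interactive graphics objects into one output frame (grayscale arrays)."""
        frame = scene.copy()
        user_mask = user_silhouette > 0
        frame[user_mask] = user_silhouette[user_mask]      # draw the user over the scene
        for obj in objects:
            if np.any(user_mask & (obj["mask"] > 0)):      # the user's image "touches" the object
                obj["on_touch"]()                          # e.g., sound a drum or deflect a puck
            frame[obj["mask"] > 0] = obj["pixels"][obj["mask"] > 0]
        return frame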

Another application is an experimental teleconferencing project for children, "Virtual Cities," developed by the Vivid Group in collaboration with the Marshall McLuhan Foundation (Mandala VR News, 1993). In this application, students in different cities around the world are brought together in a networked common virtual environment using videophones.

The Meta Media VR system is similar to the Mandala and InView systems, but the image is presented on a wall-sized screen appropriate for a large audience. Applications of this system, such as Virtual Hoops, are finding widespread use in entertainment and in museums (Brill, 1995). One fascinating aspect of this type of VR mirror world is that it promotes a powerful social dimension: people waiting in the bleachers for a turn at Virtual Hoops cheer the player who makes a basket, so the experience is interactive even for spectators. Preliminary evidence suggests that learners get more caught up in physics lessons presented with this technology, even when they are only sitting in the audience (Wisne, 1994).

15.3.5 Waldo World

This type of virtual reality application is a form of digital puppetry involving real-time computer animation. The name "Waldo" is drawn from a science fiction story by Robert Heinlein (1965). Wearing an electronic mask or body armor equipped with sensors that detect motion, a puppeteer controls, in real time, a computer-animated figure on a screen or a robot.

One example of a Waldo World VR application is the Virtual Actors™ developed by SimGraphics Engineering (Tice & Jacobson, 1992). These are computer-generated animated characters controlled by human actors in real time. To perform a Virtual Actor (VA), an actor wears a "Waldo" that tracks the actor's eyebrow, cheek, head, chin, and lip movements, allowing the actor to control the corresponding features of the computer-generated character with his or her own movements. For example, when the actor smiles, the animated character smiles correspondingly. The feed from a hidden video camera aimed at the audience is sent to a video monitor backstage so that the actor can see the audience and "speak" to individual audience members through the lip-synced, computer-animated image of the character on the display screen. This digital-puppetry application is like the Wizard of Oz interacting with Dorothy and her companions: "Pay no attention to that man behind the curtain!"
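
The real-time mapping from performer to character might be sketched roughly as follows; the feature names, the character object, and the smoothing factor are assumptions for illustration, not details of the SimGraphics system.

    def drive_virtual_actor(sensor_readings, character, smoothing=0.3):
        """Map each tracked facial feature of the performer onto the matching
        feature of the animated character, smoothing a little so sensor noise
        does not make the character's face jitter."""
        for feature in ("eyebrows", "cheeks", "head", "chin", "lips"):
            current = character.get_feature(feature)
            target = sensor_readings[feature]
            character.set_feature(feature, current + smoothing * (target - current))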

The Virtual Actor characters include Mario in Real Time (MIRT), based on the hero of the Super Mario Nintendo games, as well as a Virtual Mark Twain. MIRT and the Virtual Mark Twain are used as an interactive entertainment and promotional medium at trade shows (Tice & Jacobson, 1992). Another Virtual Actor is Eggwardo, an animation character developed for use with children at the Loma Linda Medical Center (Warner & Jacobson, 1992; Warner, 1993). Neuroscientist Dave Warner (1993) explains:

We brought Eggwardo into the hospital where he interacted with children who were terminally ill. Some kids couldn't even leave their beds so Eggwardo's image was sent to the TV monitors above their beds, while they talked to the actor over the phone and watched and listened as Eggwardo joked with them and asked how they were feeling and if they'd taken their medicine. The idea is to use Eggwardo, and others like him, to help communicate with therapy patients and mitigate the fears of children who face surgery and other daunting medical procedures.

Another type of Waldo World has been developed by Ascension, using its Flock of Birds™ positioning system (Scully, 1994). This is a full-body waldo system that is used not in real time but as a foundation for creating animated films and advertisements.

15.3.6 Chamber World

A Chamber World is a small virtual reality projection theater controlled by several computers; it gives users a sense of freer movement within a virtual world than head-mounted immersive VR systems do, and thus a feeling of greater immersion. Images projected on all of the walls can be viewed in 3-D with a head-mounted display, creating a seamless virtual environment. The first of these systems was the CAVE, developed at the Electronic Visualization Laboratory at the University of Illinois (Cruz-Neira, 1993; DeFanti, Sandin, & Cruz-Neira, 1993; Wilson, 1994). Another Chamber World system, EVE (Extended Virtual Environment), was developed at the Kernforschungszentrum (Nuclear Research Center) Karlsruhe in collaboration with the Institut für Angewandte Informatik (Institute of Applied Informatics) in Germany (Shaw, 1994; Shaw & May, 1994). The recently opened Sony Omnimax 3-D theaters, where all members of the audience wear a head-mounted display in order to see 3-D graphics and hear 3-D audio, are another, albeit much larger, example of this type of virtual reality (Grimes, 1994).

The CAVE is a 3-D rear-projection theater made up of three walls and a floor, projected in stereo and viewed with "stereo glasses" that are less heavy and cumbersome than many other head-mounted displays used for immersive VR (Cruz-Neira, 1993; Wilson, 1994). The CAVE provides a first-person experience. As a CAVE viewer moves within the display boundaries (wearing a location sensor and 3-D glasses), the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. Four Silicon Graphics computers control the operation of the CAVE, which has been used for scientific visualization applications in fields such as astronomy.
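
The head-tracked update described above might be sketched roughly as follows; the tracker, wall list, and rendering routine are hypothetical placeholders, and head orientation is ignored for brevity, so this illustrates the idea rather than the CAVE's actual software.

    EYE_SEPARATION = 0.065  # metres; approximate interocular distance

    def update_cave(tracker, walls, render_wall):
        """Re-render each projection surface for both eyes every frame, using
        the tracked head position so the stereo perspective stays correct as
        the viewer walks around inside the chamber."""
        x, y, z = tracker.read_position()            # location sensor worn with the glasses
        left_eye = (x - EYE_SEPARATION / 2, y, z)
        right_eye = (x + EYE_SEPARATION / 2, y, z)
        for wall in walls:                           # three walls and the floor
            # Each wall needs an off-axis projection computed from the eye
            # position relative to that wall's corners.
            render_wall(wall, left_eye, eye="left")
            render_wall(wall, right_eye, eye="right")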

15.3.7 Cab Simulator Environment

This is another type of "first-person" virtual reality technology that is essentially an extension of the traditional simulator (see 17.4). Hamit (1993) defines the cab simulator environment as:

Usually an entertainment or experience simulation form of virtual reality, which can be used by a small group or by a single individual. The illusion of presence in the virtual environment is created by the use of visual elements greater than the field of view, three-dimensional sound inputs, computer-controlled motion bases and more than a bit of theatre (p. 428).

Cab simulators are finding many applications in training and entertainment. For example, AGC Simulation Products has developed a cab simulator training system that allows police officers to practice driving under high-speed and dangerous conditions (Flack, 1993). SIMNET is a networked system of cab simulators used in military training (Hamit, 1993; Sterling, 1993). Virtual Worlds Entertainment has developed BattleTech, a location-based entertainment system in which players in six cabs are linked together to play simulation games (Jacobson, 1993b). An entertainment center in Irvine, California, called Fighter Town features actual flight simulators as "virtual environments." Patrons pay for a training session in which they learn how to operate the simulator and then go through a flight scenario.

15.3.8 Cyberspace

The term "cyberspace" was coined by William Gibson in the science fiction novel Neuromancer (1986), which describes a future dominated by vast computer networks and databases. Cyberspace is a global artificial reality that can be visited simultaneously by many people via networked computers. Cyberspace is where you are when you're hooked up to a computer network or electronic database --- or talking on the telephone. However, there are more specialized applications of cyberspace where users hook up to a virtual world that exists only electronically; these applications include text-based MUDs (Multi-User Dungeons or Multi-User Domains) and MUSEs (Multi-User Simulated Environments). One MUSE, Cyberion City has been established specifically to support education within a constructivist learning context (Rheingold, 1993). Groupware, also known as computer-supported cooperative work (CSCW), is another type of cyberspace technology (Schrage, 1991; Miley, 1992; Baecker, 1993; Bruckman & Resnick, 1993; Coleman, 1993; Wexelblat, 1993).

Habitat, designed by Chip Morningstar and F. Randall Farmer (1991; 1993) at Lucasfilm, was one of the first attempts to create a large-scale, commercial, many-user, graphical virtual environment. Habitat is built on top of an ordinary commercial on-line service and uses low-cost Commodore 64 home computers to support user interaction in a virtual world. The system can support thousands of users in a single shared cyberspace. Habitat presents its users with a real-time animated view into an on-line graphic virtual world. Users can communicate, play games, and go on adventures in Habitat. There are two versions of Habitat in operation, one in the United States and another in Japan.

In a similar vein, researchers at the University of Central Florida have developed ExploreNet, a low-cost 2-D networked virtual environment intended for public education (Moshell & Dunn-Roberts, 1993; Moshell & Hughes, 1993; Moshell & Hughes, 1994a; Moshell & Hughes, 1994b). The system is built on a network of 386 and 486 IBM PCs. ExploreNet is a role-playing game in which students must use teamwork to solve the mathematical problems that arise while pursuing a "quest." Each participant has an animated figure on the screen, located in a shared world. When one student moves her animated figure or takes an action, all the players see the results on the networked computers, which may be located in different rooms, schools, or even cities. ExploreNet is the basis for a major research initiative.
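
The shared-world behavior described above, in which one student's move becomes visible on every networked machine, can be sketched in a few lines; the message format, peer list, and function names below are illustrative assumptions, not part of ExploreNet itself.

    import json
    import socket

    def broadcast_move(peers, player_id, x, y):
        """Send one player's new avatar position to every other machine
        so all participants see the same shared world."""
        message = json.dumps({"player": player_id, "x": x, "y": y}).encode("utf-8")
        for host, port in peers:
            with socket.create_connection((host, port), timeout=1.0) as conn:
                conn.sendall(message)

    def apply_move(world_state, message_bytes):
        """Update this machine's copy of the shared world from a received message."""
        update = json.loads(message_bytes.decode("utf-8"))
        world_state[update["player"]] = (update["x"], update["y"])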

CyberCity, an interactive graphical world, is currently being added as a section of CompuServe (Van Nedervelde, 1994). This is only one example of an increasing trend toward graphic interfaces in cyberspace, most clearly exemplified by the graphical browser MOSAIC. However, systems like CyberCity and Habitat are interactive virtual worlds rather than hypertextual graphical user interface (GUI) systems like MOSAIC (see 21.4).

Another example is an electronically networked coffee house, the Electronic Cafe International (Galloway & Rabinowitz, 1992). Headquartered in Santa Monica, California, it links people at about 60 sites around the globe via video and computer for talk, music, and performance art conducted jointly by people at the various sites.

Another example of cyberspace is the Army's SIMNET system. Tank simulators (a type of cab simulator) are networked together electronically, often at different sites, and wargames are played on a battlefield modeled in cyberspace. Participants may be at different locations, but they "fight" each other at the same location in cyberspace via SIMNET (Hamit, 1993; Sterling, 1993). Not only is the virtual battlefield portrayed electronically, but participants' actions in the virtual tanks are monitored, revised, and coordinated. There is also virtual radio traffic, which is recorded for later analysis by trainers. Several battlefield training sites, such as the Mojave Desert in California and 73 Easting in Iraq (the site of a major battle in the 1991 war), are digitally replicated within the computer so that all the soldiers see the same terrain and the same simulated enemy and friendly tanks. Battle conditions can be changed for different wargame scenarios (Hamit, 1993; Sterling, 1993).

15.3.9 Telepresence/Teleoperation

The concept of cyberspace is linked to the notion of telepresence, the feeling of being in a location other than where you actually are. Related to this, teleoperation means controlling a robot or another device at a distance. In the Jason Project, children at different sites across the U.S. have the opportunity to teleoperate the unmanned submarine Jason, the namesake of this innovative science education project directed by Robert Ballard, a scientist at the Woods Hole Oceanographic Institution (EDS, 1991; Ulman, 1993; McLellan, 1995). An extensive set of curriculum materials is developed by the National Science Teachers Association to support each Jason expedition. A new site is chosen each year. In past voyages, the Jason Project has gone to the Mediterranean Sea, the Great Lakes, the Gulf of Mexico, the Galapagos Islands, and Belize. The 1995 expedition will go to Hawaii.

Similarly, NASA has implemented an educational program in conjunction with the Telepresence-controlled Remotely Operated underwater Vehicle (TROV) that has been deployed to Antarctica (Stoker, 1994). By means of a distributed computer-control architecture developed at NASA, schoolchildren in classrooms across the U.S. can take turns driving the TROV in Antarctica.
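
Turn-taking remote control of this kind might be organized roughly as in the toy sketch below; the class, queue discipline, and command strings are assumptions for illustration and do not describe NASA's actual control architecture.

    from collections import deque

    class TurnBasedTeleoperation:
        """Toy model of shared teleoperation: one classroom drives at a time,
        and the others wait their turn in a queue."""

        def __init__(self, send_to_vehicle):
            self.queue = deque()
            self.send_to_vehicle = send_to_vehicle   # e.g., a network link to the vehicle

        def join(self, classroom):
            self.queue.append(classroom)

        def drive(self, classroom, command):
            # Only the classroom at the head of the queue may send commands.
            if self.queue and self.queue[0] == classroom:
                self.send_to_vehicle(command)        # "forward", "turn left", "descend", ...

        def next_turn(self):
            if self.queue:
                self.queue.rotate(-1)                # pass control to the next classroom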

Surgeon Richard Satava is pioneering telepresence surgery for gall bladder removal without any direct contact from the surgeon after an initial small incision is made --- a robot does the rest, following the movements of the surgeon's hands at another location (Satava, 1992; Taubes, 1994b). Satava believes that telepresence surgery can someday be carried out in space, on the battlefield, or in the Third World, without actually sending the doctor.

