AECT Handbook of Research

Table of Contents

15. Virtual Realities

15.1 Introduction
15.2 Historical Background
15.3 Different Kinds of Virtual Reality
15.4 Introduction to Virtual Reality Applications in Education Training
15.5 Establishing a Research Agenda for Virtual Realities in Education and Training
15.6 Theoretical Perspectives on Virtual Realities
15.7 Design Models and Metaphors
15.8 Virtual Realities Research and Development
15.9 Implications
  References

15.8 VIRTUAL REALITIES RESEARCH AND DEVELOPMENT

15.8.1 Research on VR and Training Effectiveness

Regian, Shebilske, and Monk (1992) report on empirical research that explored the instructional potential of immersive virtual reality as an interface for simulation-based training. According to these researchers, virtual reality may hold promise for simulation-based training because the interface preserves (a) visual-spatial characteristics of the simulated world and (b) the linkage between motor actions of the student and resulting effects in the simulated world. This research featured two studies: in one, learners learned to operate a virtual control console; in the other, they learned to navigate a virtual maze.

In studying spatial cognition, it is useful to distinguish between small-scale and large-scale space (Siegal, 1981). Small-scale space can be viewed from a single vantage point at a single point in time. Large-scale space extends beyond the immediate vantage point of the viewer, and must be experienced across time. Subjects can construct functional representations of large-scale space from sequential, isolated views of small-scale space presented in two-dimensional media such as film (Hochberg, 1986) or computer graphics (Regian, 1986). Virtual reality, however, offers the possibility of presenting both small-scale and large-scale spatial information in a three-dimensional format that eliminates the need for students to translate the representation from 2-D to 3-D. The resulting reduction in cognitive load may benefit training. Regian et al. (1992) investigated the use of immersive virtual reality to teach procedural tasks requiring performance of motor sequences within small-scale space (the virtual console) and to teach navigational tasks requiring configurational knowledge of large-scale space (the virtual maze).

In these studies, 31 subjects learned spatial-procedural skills and spatial-navigational skills in immersive virtual worlds accessed with a head-mounted display and DataGlove™. Two VR worlds were created for this research: a virtual console and a virtual maze. Both were designed to support analogs of distinctly different tasks: a procedural console-operations task and a three-dimensional maze-navigation task. Each task involved a training phase and a testing phase. The console data show that subjects not only learned the procedure but continued to acquire skill while being tested on it, as the tests provided continued practice in executing the procedure. The maze data show that subjects learned three-dimensional, configurational knowledge of the virtual maze and were able to use that knowledge to navigate accurately within the virtual reality.

15.8.2 Research on Learners' Cognitive Visualization in 2-D and 3-D Environments

Merickel (1990, 1991) carried out a study designed to determine whether a relationship exists between the perceived realism of computer graphic images and the ability of children to solve spatially related problems (see Chapter 8). This project was designed to give children an opportunity to develop and amplify certain cognitive abilities: imagery, spatial relations, displacement and transformation, creativity, and spatially related problem solving. One way to enhance these cognitive abilities is to have students develop, displace, transform, and interact with 2-D and 3-D computer-graphics models. The goal of this study was to determine if specially designed 2-D and 3-D computer-graphic training would enhance any or all of these cognitive abilities.

Merickel reports that experiments were performed using 23 subjects between the ages of 8 and 11 who were enrolled in an elementary summer school program in Novato, California. Two different computer systems were used: microcomputer workstations and an immersive virtual reality system developed by Autodesk, Inc. The students were divided into two groups. The first used workstations equipped with AutoSketch and AutoCAD software; the other worked with the virtual reality system. The workstation treatment incorporated three booklets to instruct the subjects on how to solve five different spatial-relationship problems.

The Autodesk virtual reality system used in the virtual reality treatment included an 80386-based MS-DOS microcomputer, a head-mounted display, a VPL DataGlove™, a Polhemus 6D Isotrak position- and head-tracking device, Matrox SM 1281 real-time graphics boards, and software developed at Autodesk.

The cyberspace part of the project began with classroom training in the various techniques and physical gestures required for moving within and interacting with cyberspace modes. Each child was shown how the DataGlove™ and the head-mounted display would feel by first trying them on without being connected to the computer.

Merickel reports that after the practice runs, 14 children were given the opportunity to don the cyberspace apparatus and interact with two different computer-generated, 3-D virtual realities. After the DataGlove™ was calibrated, students looked around the virtual world of an office and, using hand-gesture commands, practiced moving toward objects and "picking up" objects in the virtual world. Students also practiced "flying," which was activated by pointing the index finger of the hand in the DataGlove™.

The second cyberspace voyage was designed to have students travel in a large "outdoor" space and find various objects, including a sphere, a book, a chair, a racquet, and two cube models --- not unlike a treasure hunt. But this treasure hunt had a few variations. One was that the two cube models were designed to test whether the students could differentiate between a target model and its transformed (mirrored) image. The students' task was to identify which of the two models matched the untransformed target model. Students were instructed to fly to the models and study them; they were also instructed to fly around the models to see them from different viewpoints before making a choice. Most students were able to correctly identify the target model.

Merickel reports that during this second time in cyberspace, most students were flying with little or no difficulty. Their gestures were more fluid and, therefore, so was their traveling in cyberspace. They began to relax and walk around more, even though walking movement was restricted by the cables that attached the DataGlove™ and head-mounted display to the tracking devices. Students began to turn or walk around in order to track and find various items. They appeared to have no preconceived notions or reservations about "traveling inside a computer." In sum, these children had become quite proficient with this cutting-edge technology in a very short time.

Merickel reports that four cognitive ability tests were administered to the subjects from both treatment groups. The dependent variable (i.e., spatially related problem solving) was measured with the Differential Aptitude Test. The three other measures (the Minnesota Paper Form Board Test, the Mental Rotation Test, and the Torrance Test of Creative Thinking) were used to partial out any effects that visualization ability, the ability to mentally manipulate two-dimensional figures, the ability to displace and transform mental images, and creative thinking might have had on spatially related problem solving.

Merickel concluded that the results of this study were inconclusive regarding the relationship between perceived realism and spatially related problem solving, but that the question is worthy of further study. Furthermore, Merickel points out that the abilities to visualize and mentally manipulate two-dimensional objects are predictors of spatially related problem-solving ability. In sum, Merickel concluded that virtual reality is highly promising and deserves extensive development as an instructional tool.

15.8.3 Research on Children Designing and Exploring Virtual Worlds

Winn (1993) presented an overview of the educational initiatives that are either underway or planned at the Human Interface Technology (HIT) Lab at the University of Washington. One goal is to establish a learning center to serve as a point of focus for research projects and instructional development initiatives, as well as a resource for researchers in kinesthesiology who are looking for experimental collaborators. A second goal is to conduct outreach, including plans to bring virtual reality to schools as well as pre- and inservice teacher training. Research objectives include the development of a theoretical framework, knowledge construction, and data gathering about the effectiveness of virtual reality for learning in different content areas and for different learners. Specific research questions include: (1) Can children build virtual reality worlds? (2) Can children learn content by building worlds? (3) Can children learn content by being in worlds built for them?

Byrne (1992) and Bricken and Byrne (1993) report on a study that examined this first research issue --- whether children can build VR worlds (see 7.4). This study featured an experimental program of week-long summer workshops at the Pacific Science Center where groups of children designed and then explored their own immersive virtual worlds. The primary focus was to evaluate VR's usefulness and appeal to students ages 10 to 15 years, documenting their behavior and soliciting their opinions as they used VR to construct and explore their own virtual worlds. Concurrently, the researchers used this opportunity to collect usability data that might point out system design issues particular to tailoring VR technology for learning applications.

Bricken and Byrne (1993) report that the student groups were limited to approximately 10 new students each week for seven weeks. Participants were ages 10 years and older. A total of 59 students from ages 10 to 15 self-selected to participate over the seven-week period. The average age of students was 13 years, and the gender distribution was predominantly male (72%). The students were of relatively homogeneous ethnic origin; the majority were Caucasians, along with a few Asian Americans and African Americans. The group demonstrated familiarity with Macintosh computers, but none of the students had worked with 3-D graphics, or had heard of VR before coming to the VR workshops. The Macintosh modeling software package Swivel 3-D™ was used for creating the virtual worlds.

Each student research group had access to five computers for eight hours per day. The students worked in groups of two or three to a computer and used a codiscovery strategy in learning to use the modeling tools. Teachers answered the questions they could; however, the software was new to them as well, so they could not readily answer all student questions. On the last day of each session, students were able to get inside their worlds using VR interface technology at the HIT Lab (the desktop Macintosh worlds designed by the children with Swivel 3-D™ were converted for use on more powerful computer workstations).

Bricken and Byrne (1993) report that they wanted to see what these students were motivated to do with VR when given access to the technology in an open-ended context. The researchers predicted that the participants would gain a basic understanding of VR technology. In addition, the researchers expected that in using the modeling software, this group might learn to color, cluster, scale, and link graphic primitives (cubes, spheres), to assemble simple geometric 3-D environments, and to specify basic interactions such as "grab a ball, fly it to the box, drop it in."

The participants' experience was designed to be a hands-on student-driven collaborative process in which they could learn about VR technology by using it and learn about virtual worlds by designing and constructing them. Their only constraints in this task were time and the inherent limitations of the technology.

At the end of the week, students explored their worlds one at a time, while other group members watched what the participant was seeing on a large TV monitor. Although this was not a networked VR, it was a shared experience in that the kids "outside" the virtual world conversed with participants, often acting as guides. Bricken and Byrne (1993) report that the virtual worlds constructed by the students are the most visible demonstrations of the success of the world-building activity.

In collecting information on both student response and system usability, Bricken and Byrne (1993) reported that they used three different information-gathering techniques. Their goal was to attain both cross-verification across techniques and technique-specific insights. They videotaped student activities, elicited student opinions with surveys, and collected informal observations from teachers and researchers. Each data source revealed different facets of the whole process.

Bricken and Byrne (1993, p.204) reported that the students who participated in these workshops

...were fascinated by the experience of creating and entering virtual worlds. Across the seven sessions, they consistently made the effort to submit a thoughtfully planned, carefully modeled, well-documented virtual world. All of these students were motivated to achieve functional competence in the skills required to design and model objects, demonstrated a willingness to focus significant effort toward a finished product, and expressed strong satisfaction with their accomplishment. Their virtual worlds are distinctive and imaginative in both conceptualization and implementation. Collaboration between students was highly cooperative, and every student contributed elements to their group's virtual world. The degree to which student-centered methodology influenced the results of the study may be another fruitful area for further research.

Bricken and Byrne (1993, p.205) report that students demonstrated rapid comprehension of complex concepts and skills:

They learned computer graphics concepts (real-time versus batch rendering, Cartesian coordinate space, object attributes), 3-D modeling techniques, and world design approaches. They learned about VR concepts ("what you do is what you get," presence) and enabling technology (head-mounted display, position and orientation sensing, 6-D interface devices). They also learned about data organization: Students were required by the modeling software to link graphical elements hierarchically, with explicit constraints; students printed out this data tree each week as part of the documentation process.

According to these researchers, this project revealed which of the present virtual reality system components were usable, which were distracting, and which were dysfunctional for this age group. The researchers' conclusion is that improvement in the display device is mandatory; the resolution was inadequate for object and location recognition, and hopeless for perception of detail. Another concern is with interactivity tools. This study showed that manipulating objects with the DataGlove™ is awkward and unnatural. Bricken and Byrne (1993) also report that the head-mounted display has since been replaced with a boom-mounted display for lighter weight and a less intrusive cable arrangement.

In sum, students, teachers, and researchers agreed that this exploration of VR tools and technology was a successful experience for everyone involved (Byrne, 1992; Bricken & Byrne, 1993). Most important was the demonstration of students' desire and ability to use virtual reality constructively to build expressions of their knowledge and imagination. The researchers suggest that virtual reality is a highly compelling environment in which to teach and learn. Students could learn by creating virtual worlds that reflected the evolution of their skills and the pattern of their conceptual growth. For teachers, evaluating comprehension and competence would become experiential as well as analytical, as they explored the worlds of thought constructed by their students.

15.8.4 Research on Learners in Experiential Learning Environments

Recently, an exciting experiential learning environment was developed at the Boston Computer Museum, using immersive virtual reality technology (Gay, 1993; Gay, 1994b; Greschler, 1994). The Cell Biology Project was funded by the National Science Foundation. David Greschler, of the Boston Computer Museum, explains that in this case, the NSF was interested in testing how VR can impact informal education (that is, self-directed, unstructured learning experiences). So an application was developed in two formats (immersive VR and flat panel screen desktop VR) to study virtual reality as an informal learning tool. A key issue was: what do learners do once they're in the virtual world? In this application, participants had the opportunity to build 'virtual' human cells and learn about cell biology. As Greschler explains, they looked at

the basics of the cell. First of all the cell is made up of things called organelles. Now these organelles, they perform different functions. Human cells: if you open most textbooks on human cells they show you one picture of one human cell and they show you organelles. But what we found out very quickly, in fact, is that there are different kinds of human cells. Like there's a neuron, and there's an intestinal cell, and there's a muscle cell. And all those cells are not the same at the basic level. They're different. They have different proportions of organelles, based on the kinds of needs that they have. For instance, a muscle cell needs more power, because it needs to be doing more work. And so as a result, it needs more mitochondrias, which is really the powerhouse. So we wanted to try to get across these basic principles.

In the Cell Biology Virtual World, the user would start by coming up to this girl within the virtual world who would say, "Please help me, I need neuron cells to think with, muscle cells to move with, and stomach cells to eat with." So you would either touch the stomach or the leg or the head and "you'd end up into the world where there was the neuron cell or the muscle cell or the intestinal cell and you would have all the pieces of that cell around you and marked and you would actually go around and build." You would go over, pick up the mitochondria, and move it into the cell. As Greschler (1994) explains, "there's a real sense of accomplishment, a real sense of building. And then, in addition to that, you would build this person." Greschler reports that before trying to compare the different media versions of the cell biology world,

[the designers] sort of said, we have to make sure our virtual world is good and people like it. It's one thing to just go for the educational point of view but you've got to get a good experience or else big deal. So the first thing we did, we decided to build a really good world. And be less concerned about the educational components so much as a great experience.

That way, people would want to experience the virtual world, so that learning would occur.

A pilot virtual world was built and tested and improvements were made. Greschler reports:

We found that it needed more information. There needs to be some sort of introduction to how to navigate in the virtual world. A lot of people didn't know how to move their hand tracker and so on. So what we did is we felt like, having revised the world, we'd come up with a world that was...I suppose you could say "Good." It was compelling to people and that people liked it. To us that was very important.

They defined virtual reality in terms of immersion, natural interaction (via hand trackers), and interactivity --- the user could control the world and move through it at will by walking around in the head mount (within a 10 x 10-foot area).

Testing with visitors at the Boston Computer Museum indicated that the nonimmersive desktop group consistently retained more information about the cells and the organelles, at least in the short term. In terms of level of engagement, however, the immersive VR group was much stronger: on average, they underestimated the amount of time they spent in the virtual world by more than 5 minutes, far more than the other group.

In terms of conclusions, Greschler (1994) suggests that immersive virtual reality

...probably isn't good for getting across factual information. What it might be good for is more general experiences; getting a sense for how one might do things like travel. I mean the whole idea [of the Cell Biology Project] is traveling into a cell. It's more getting a sense of what a cell is, rather than the facts behind it. So it's more perhaps like a visualization tool or something just to get a feel for certain ideas rather than getting across fact a, b, or c.

Furthermore,

I think the whole point of this is it's all new...We're still trying to figure out the right grammar for it, the right uses for it. I mean video is great to get across a lot of stuff. Sometimes it just isn't the right thing to use. Books are great for a lot of things, but sometimes they're just not quite right. I think what we're still trying to figure out is what is that 'quite right' thing for VR. There's clearly something there --- there's an incredible level of engagement. And concentration. That's I think probably the most important thing.

Greschler (1994) thinks that virtual reality will be a good tool for informal learning. "And my hope in fact, is that it will bring more informal learning into formal learning environments because I think that there needs to be more of that. More open-endedness, more exploration, more exploratory versus explanatory."

15.8.5 Research on Attitudes Toward Virtual Reality

Heeter (1992, 1994) has studied people's attitudinal responses to virtual reality. In one study, she examined how players responded to BattleTech, one of the earliest virtual reality location-based entertainment systems. Related to this, Heeter has examined differences in responses based on gender, since a much higher proportion of BattleTech players are males (just as with videogames). Heeter conducted a study of BattleTech players at the Virtual Worlds Entertainment Center in Chicago.

In the BattleTech study, players were given questionnaires when they purchased playing time, to be turned in after the game (Heeter, 1992). A total of 312 completed questionnaires were collected, for a completion rate of 34 percent. (One questionnaire was collected per person; at least 45 percent of the 1,644 games sold during the sample days represented repeat plays within the sample period.) Different questionnaires were administered for each of three classes of players: novices, who had played one to ten BattleTech games (n=223); veterans, who had played eleven to fifty games (n=42); and masters, who had played more than fifty games (n=47).

According to Heeter (1992), the results of this study indicate that BattleTech fits the criteria of Csikszentmihalyi's (1990) model of "flow" or optimal experience:

  1. Require learning of skills
  2. Have concrete goals
  3. Provide feedback
  4. Let person feel in control
  5. Facilitate concentration and involvement
  6. Be distinct from the everyday world ("paramount reality")

Heeter (1992, p.67) explains:

BattleTech fits these criteria very well. Playing BattleTech is hard. It's confusing and intimidating at first. Feedback is extensive and varied. There are sensors; six selectable viewscreens with different information which show the location of other players (nearby and broader viewpoint), condition of your 'Mech, heat sensors, feedback on which 'Mechs are in weapon range (if any), and more. After the game, there is additional feedback in the form of individual scores on a video display and also a complete printout summarizing every shot fired by any of the six concurrent players and what happened as a result of the shot. In fact, there is far more feedback than new players can attend to.

According to Heeter (1992, p.67),

"BattleTech may be a little too challenging for novices, scaring away potential players. There is a tension between designing for novices and designing for long term play. One third of novices feel there are too many buttons and controls.Novices who pay to play BattleTech may feel intimidated by the complexity of BattleTech controls and some potential novices may even be so intimidated by that complexity that they are scared away completely, that complexity is most likely scaring other potential novices away. But among veterans and masters, only 14 percent feel there are too many buttons and controls, while almost 40 percent say it's just right.).

Heeter (1992) reports that if participants have their way, virtual reality will be a very social technology. The BattleTech data identify consistently strong desires for interacting with real humans in addition to virtual beings and environments in virtual reality. Just 2 percent of respondents would prefer to play against computers only; 58 percent wanted to play against humans only, and 40 percent wanted to play against a combination of computers and humans. Respondents preferred playing on teams (71 percent) rather than everyone against everyone (29 percent). Learning to cooperate with others in team play was considered the most challenging BattleTech skill by masters, who estimated on average that it takes fifty-six games to learn how to cooperate effectively. Six players at a time was not considered enough: veterans rated "more players at once" 7.1 on a ten-point scale of importance of factors to improve the game; more players was even more important to masters (8.1). In sum, Heeter concludes that "Both the commercial success of BattleTech and the findings of the survey say that BattleTech is definitely doing some things right and offers some lessons to designers of future virtual worlds" (p. 67).

Heeter (1992) reports that BattleTech players are mostly male: masters are 98 percent male, veterans are 95 percent male, and novices are 91 percent male. BattleTech is not a child's game. Significant gender differences were found in reactions to BattleTech. Because such a small percentage of veterans and masters were female, gender comparisons for BattleTech were conducted only among novices. (Significant differences, using one-way ANOVA for continuous data and crosstabs for categorical data, are identified in the text by a single asterisk for p < .05 and a double asterisk for the stronger probability level of p < .01.) This small group of females who chose to play BattleTech might be expected to be more similar to the males who play BattleTech than would females in general. Even so, gender differences in BattleTech responses were numerous and followed a distinct, predictable, stereotypical pattern. For example, on a scale from 0 to 10, female novices found BattleTech to be LESS RELAXING (1.1 versus 2.9) and MORE EMBARRASSING (4.1 versus 2.0) than did male novices. Males were more aware than females of where their opponents were (63 percent versus 33 percent) and of when they hit an opponent (66 percent versus 39 percent). Female BattleTech players enjoyed blowing people up less than males did, although both sexes enjoyed blowing people up a great deal (2.4 versus 1.5 out of 7, where 1 is VERY MUCH). Females reported that they did not understand how to drive the robot as well (4.6 compared to 3.1 for males, where 7 is NOT AT ALL). Fifty-seven percent of female novices said they would prefer that BattleTech cockpits have fewer than its 100+ buttons and controls, compared to 28 percent of male novices who wanted fewer controls.
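
To illustrate the kind of comparison described above (a one-way ANOVA for continuous ratings and a chi-square test on a gender-by-response crosstab), the following Python sketch runs both tests on invented data. The data frame, column names, and values are hypothetical and do not reproduce Heeter's data or analysis scripts.

    # Illustrative sketch only: one-way ANOVA for a continuous rating and a
    # chi-square test on a crosstab for a categorical response. The data are
    # invented for demonstration.
    import pandas as pd
    from scipy import stats

    # Hypothetical novice responses: gender, a 0-10 "relaxing" rating, and a
    # yes/no answer to "did you know where your opponents were?"
    df = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
        "relaxing": [1, 2, 0, 3, 4, 2, 3, 2],
        "aware":    ["no", "no", "yes", "yes", "yes", "no", "yes", "yes"],
    })

    # Continuous measure: one-way ANOVA comparing female and male ratings.
    groups = [g["relaxing"].values for _, g in df.groupby("gender")]
    f_stat, p_anova = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

    # Categorical measure: crosstab of gender by awareness, then chi-square.
    table = pd.crosstab(df["gender"], df["aware"])
    chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
    print(table)
    print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")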

Heeter (1994) concludes, "Today's consumer VR experiences appear to hold little appeal for the female half of the population. Demographics collected at the BattleTech Center in Chicago in 1991 indicated that 93 percent of the players were male." At FighterTown the proportion was 97 percent. Women also do not play today's video games. Although it is clear that women are not attracted to the current battle-oriented VR experiences, what women DO want from VR has received little attention. Whether from a moral imperative to enable VR to enrich the lives of both sexes, or from a financial incentive of capturing another 50 percent of the potential marketplace, or from a personal curiosity about the differences between females and males, insights into this question should be of considerable interest.

In another study, Heeter (1993) explored what types of virtual reality applications might appeal to people, both men and women. Heeter conducted a survey of students in a large-enrollment "Information Society" Telecommunications course at Michigan State University, where the students were willing to answer a 20-minute questionnaire, followed by a guest lecture about consumer VR games. The full study was conducted with 203 students. Sixty-one percent of the 203 respondents were male. Average age was 20, ranging from 17 to 32. To summarize findings from this exploratory study, here is what women DO want from VR experiences. They are strongly attracted to the idea of virtual travel. They would also be very interested in some form of virtual comedy, adventure, MTV, or drama. Virtual presence at live events is consistently rated positively, although not top on the list. The females in this study want very much to interact with other live humans in virtual environments, be it virtual travel, virtual fitness, or other experiences. If they play a game, they want it to be based most on exploration and creativity. Physical sensations and emotional experiences are important. They want the virtual reality experience to have meaningful parallels to real life.

Heeter (1993) reported that another line of virtual reality research in the Michigan State University Comm Tech Lab involves the development of prototype virtual reality experiences demonstrating different design concepts. Data are collected from attendees at various conferences who try the prototypes.

15.8.6 Research on Special Education Applications of VR

Virtual reality appears to offer considerable potential as a tool that can enhance capabilities for the disabled in the areas of communication, perception, mobility, and access to tools (Pausch, Vogtle, & Conway, 1991; Pausch & Williams, 1991; Warner & Jacobson, 1992; Marcus, 1993; Middleton, 1993; Treviranus, 1993; Murphy, 1994). Virtual reality can extend, enhance, and supplement the remaining capabilities of people who must contend with a disability such as deafness or blindness. And virtual reality offers potential as a rehabilitation tool. Delaney (1993) predicts that virtual reality will be instrumental in providing physical capabilities for persons with disabilities in the following areas:

  1. Individuals with movement restricting disabilities could be in one location while their "virtual being" is in a totally different location --- this opens up possibilities for participating in work, study, or leisure activities anywhere in the world, from home, or even a hospital bed
  2. Individuals with physical disabilities could interact with the real world through robotic devices they control from within a virtual world
  3. Blind persons could navigate through or among buildings represented in a virtual world made up of 3-dimensional sound images --- this will be helpful to rehearse travel to unfamiliar places such as hotels or conference centers
  4. Learning disabled, cognitively impaired, and brain injured individuals could control work processes that would otherwise be too complicated by transforming the tasks into a simpler form in a VR environment
  5. Designers and others involved in the design of prosthetic and assistive devices may be able to experience the reality of a person with a disability --- they could take on the disability in virtual reality, and thus experience problems firsthand, and their potential solutions.

At a conference on "Virtual Reality and Persons with Disabilities" that has been held annually in San Francisco since 1992 (sponsored by the Center on Disabilities at California State University Northridge), researchers and developers report on their work. This conference was established partly in response to the national policy embedded in two separate pieces of legislation: Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act (ADA). Within these laws is the overriding mandate for persons with disabilities to have equal access to electronic equipment and information. The recently enacted Americans with Disabilities Act offers potential as a catalyst for the development of virtual reality technologies. Harry Murphy (1994), the Director of the Center on Disabilities at California State University Northridge, explains that "Virtual reality is not a cure for disability. It is a helpful tool, and like all other helpful tools, television and computers, for example, we need to consider access" (p. 59). Murphy (1994, p. 57) argues:

Virtuality and virtual reality hold benefits for everyone. The same benefits that anyone might realize have some special implications for people with disabilities, to be sure. However, our thinking should be for the general good of society, as well as the special benefits that might come to people with disabilities.

Many virtual reality applications for persons with disabilities are under development, showing great promise, but few have been rigorously tested. One award-winning application is Wheelchair VR from Prairie Virtual Systems of Chicago (Trimble, 1993). With this application, wheelchair-bound individuals "roll through" a virtual model of a building, such as a hospital, that is under design by an architect, to test whether the design supports wheelchair access. Related to this, Dean Inman, an orthopedic research scientist at the Oregon Research Institute, is using virtual reality to teach children the skills of driving wheelchairs (Buckert-Donelson, 1995).

Virtual Technologies of Palo Alto, California, has developed a "talking glove" application that makes it possible for deaf individuals to "speak" sign language while wearing a wired glove and have their hand gestures translated into English and printed on a computer screen, so that they can communicate more easily with those who do not speak sign language. Similar to this, Eberhart (1993) has developed a much less powerful, noncommercial system that uses the Power Glove™ toy as an interface, together with an Echo Speech Synthesizer. Eberhart is exploring neural networks in conjunction with the design of VR applications for the disabled; in this system, a neural network was trained to recognize the glove movements.
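
As a rough illustration of the general approach described above (not Eberhart's actual system), the following sketch trains a small neural network to map glove sensor readings to gesture labels. The sensor layout, gesture names, and training values are hypothetical, and the scikit-learn classifier merely stands in for whatever network was used.

    # Illustrative sketch only: mapping glove flex-sensor readings to gesture
    # labels with a small neural network. Feature layout and values are invented.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical training data: each row is [thumb, index, middle, ring] bend
    # readings (0 = straight, 1 = fully bent).
    X_train = np.array([
        [0.9, 0.1, 0.1, 0.1],   # "point" gesture
        [0.1, 0.1, 0.1, 0.1],   # open hand
        [0.9, 0.9, 0.9, 0.9],   # fist
        [0.8, 0.2, 0.1, 0.2],   # another "point" sample
        [0.9, 0.8, 0.9, 0.8],   # another fist sample
    ])
    y_train = ["point", "open", "fist", "point", "fist"]

    # A small multilayer perceptron stands in for the trained neural network.
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)

    # Classify a new glove reading.
    print(clf.predict([[0.85, 0.15, 0.1, 0.1]]))  # expected: ['point']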

Newby (1993) described a much more sophisticated gesture-recognition system than the one demonstrated by Eberhart. In this application, a DataGlove™ and Polhemus tracker are employed to measure hand location and finger position in order to "train" the system on a number of different hand gestures. Native users of American Sign Language (ASL) helped in the development of this application by providing templates of the letters of the manual alphabet, then giving feedback on how accurately the program was able to recognize gestures within various tolerance calibrations. A least-squares algorithm was used to measure the difference between a given gesture and the set of known gestures that the system had been trained to recognize.
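
The following sketch illustrates least-squares template matching of the kind Newby describes: a measured hand shape is compared against stored gesture templates, and the closest match within a tolerance is returned. The feature layout, template values, and tolerance are hypothetical, not those of Newby's system.

    # Illustrative sketch only: least-squares matching of a measured hand shape
    # against stored gesture templates. Template values are invented.
    import numpy as np

    # Hypothetical templates: finger-bend readings for three manual-alphabet letters.
    templates = {
        "A": np.array([0.9, 0.9, 0.9, 0.9, 0.2]),
        "B": np.array([0.1, 0.1, 0.1, 0.1, 0.8]),
        "L": np.array([0.1, 0.9, 0.9, 0.9, 0.1]),
    }

    def recognize(sample, templates, tolerance=0.5):
        """Return the template with the smallest sum of squared differences,
        or None if even the best match exceeds the tolerance."""
        best_label, best_error = None, float("inf")
        for label, template in templates.items():
            error = float(np.sum((sample - template) ** 2))
            if error < best_error:
                best_label, best_error = label, error
        return best_label if best_error <= tolerance else None

    reading = np.array([0.85, 0.9, 0.95, 0.9, 0.25])
    print(recognize(reading, templates))  # expected: 'A'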

Greenleaf (1993) described the GloveTalker, a computer-based gesture-to-speech communication device for the vocally impaired that uses a modified DataGlove™. The wearer of the GloveTalker speaks by signaling the computer with his or her personalized set of gestures. The DataGlove™ transmits the gesture signals through its fiber optic sensors to the Voice Synthesis System, which speaks for the DataGlove™ wearer. This system allows individuals who are temporarily or permanently impaired vocally to communicate verbally with the hearing world through hand gestures. Unlike the use of sign language, the GloveTalker does not require either the speaker or the listener to know American Sign Language (ASL). The GloveTalker itself functions as a gesture interpreter: the computer automatically translates hand movements and gestures into spoken output. The wearer of the GloveTalker creates a library of personalized gestures on the computer that can be accessed to rapidly communicate spoken phrases. The voice output can be sent over a computer network or over a telephone system, thus enabling vocally impaired individuals to communicate verbally over a distance. The GloveTalker system can also be used for a wide array of other applications involving data gathering and data visualization. For example, an instrumented glove is used to measure the progress of arm and hand tremors in patients with Parkinson's disease.
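
A minimal sketch of the GloveTalker's general architecture as described above: recognized gestures index into a personalized library of phrases, which are then handed to a voice synthesizer. The gesture labels, phrases, and speech stub below are invented placeholders; the actual recognition and voice components of the system are not shown.

    # Illustrative sketch only: mapping recognized gestures to a personalized
    # phrase library and passing the phrase to a speech synthesizer, as in the
    # GloveTalker concept described above. All names and phrases are placeholders.

    # Personalized library: each gesture label maps to a spoken phrase.
    phrase_library = {
        "wave":      "Hello, how are you?",
        "thumbs_up": "Yes, that works for me.",
        "flat_hand": "Please wait a moment.",
    }

    def speak(text: str) -> None:
        """Placeholder for the voice synthesis back end (any TTS engine)."""
        print(f"[synthesized speech] {text}")

    def handle_gesture(label: str) -> None:
        """Look up the recognized gesture and speak the associated phrase."""
        phrase = phrase_library.get(label)
        if phrase is None:
            speak("Unrecognized gesture.")
        else:
            speak(phrase)

    handle_gesture("thumbs_up")   # -> "Yes, that works for me."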

The Shepherd School, the largest special school in the United Kingdom, is working with a virtual reality research team at Nottingham University (Lowe, 1994). The school is exploring the benefits of virtual reality as a way of teaching children with complex problems to communicate and gain control over their environment.

Researchers at the Hugh Macmillan Center in Toronto, Canada, are exploring virtual reality applications involving Mandala and the Very Nervous System, a responsive musical environment developed by artist David Rokeby that is activated by movement so that it "plays" interactive musical compositions based on the position and quality of the movement in front of the sensor; the faster the motions, the higher the tones (Treviranus, 1993). Rokeby has developed several interactive compositions for this system (Cooper, 1995).

Salcedo and Salcedo (1993) of the Blind Children Learning Center in Santa Ana, California, report that they are using the Amiga computer, Mandala software, and a videocamera to increase the quantity and quality of movement in young children with visual impairments. With this system, children receive increased feedback from their movements through the musical sounds their movements generate. Related to this is the VIDI MICE, a low-cost program available from Tensor Productions that interfaces with the Amiga computer (Jacobs, 1991).

Massof (1993) reports that a project is underway (involving collaboration by Johns Hopkins University, NASA, and the Veterans Administration) where the goal is to develop a head-mounted video display system for the visually impaired that incorporates custom-prescribed, real-time image processing designed to enhance the vision of the user. A prototype of this technology has been developed and is being tested.

Nemire, Burke, and Jacoby (1993) of Interface Technologies in Capitola, California, report that they have developed an immersive, interactive, and intuitive virtual learning environment for physics instruction for disabled students.

Important efforts at theory building concerning virtual reality and persons with disabilities have been initiated. For example, Mendenhall and Vanderheiden (1993) have conceptualized two classification schemes (virtual reality versus virtual altered reality) for better understanding the opportunities and barriers presented by virtual reality systems to persons with disabilities. And Marsh, Meisel, and Meisel (1993) have examined virtual reality in relation to human evolution. These researchers suggested that virtual reality can be considered a conscious reentering of the process of evolution. Within this reconceptualization of the context of "survival of the fittest," disability becomes far less arbitrary. In practical terms, virtual reality can bring new meaning to the emerging concepts of universal design, rehabilitation engineering, and adaptive technology.

Related to this, Lasko-Harvill (1993) commented:

In Virtual Reality the distinction between people with and without disabilities disappears. The difference between Virtual Reality and other forms of computer simulation lies in the ability of the participant to interact with the computer generated environment as though he or she was actually inside of it, and no one can do that without what are called in one context "assistive" devices and another "user interface" devices.

This is an important comparison to make, pointing out that user interfaces can be conceived as 'assistive technologies' for the fully abled as well as the disabled. Lasko-Harvill explains that virtual reality can have a leveling effect between abled and differently abled individuals. This is similar to what the Lakeland Group found in their training program for team-building at Virtual Worlds Entertainment Centers (McGrath, 1994; McLellan, 1994a).

