16.4 VISUAL VOCABULARY

Although Levie's summary of the research on pictures covers much of the research relevant to visual literacy, Baca's study reminds us that the use of "visuals" touches other areas as well, including thinking, learning, and the construction of meaning. To construct meaning from visuals implies that the constructed meaning can in some way be "read" by those who view it. The notion that images can be "read" implies the existence of at least a rudimentary visual language made up of vocabulary components.

16.4.1 Reading Pictures

Some authors have addressed the encoding of pictorial information directly. Stewig (1989) even titled his article "Reading Pictures." In Stewig's study, 28 fifth-grade students listened to the reading of three different versions of The Three Little Pigs, each version with a different set of illustrations. A rather complex 3-day procedure was adopted that involved letting students ask questions, having students make comparisons, and encouraging other interactions with the pictures and the stories. On the fourth day, students wrote what they liked best and why. Only 15 of the 83 comments related to the story: content, plot, and book design. The other 68 referred to the pictures: color, style, detail, brightness, medium, and size. One student wrote: "The pictures told the whole story themselves because they were so clear" (p. 79).

Other research has been done to examine how children interpret pictures (interpretation in this sense is a measure of how the students read the picture). Leslie Higgins has done three notable studies in this area (1978, 1979, 1980). The studies were based on her own model, which posits that ". . . picture interpretation consists of two related and interdependent forms of behavior: observation and inference drawing" (Higgins, 1978, p. 216). She goes on to explain that "Inferring in the picture interpretation context carries understanding beyond an awareness of what is seen. . ." (p. 216). In her first experiment with 95 fifth- and sixth-graders, Higgins (1978) found that picture interpretation ability correlated highly with only one factor: operational facility, a characteristic that reflects Piaget's operational stages. In her second study, she set out to determine whether children can be taught to draw inferences from pictures (Higgins, 1979). Students who were given "thinking guides" prior to the picture interpretation tests did significantly better at making inferences. However, the guides did not help the students to better evaluate their inferences. In four experiments to assess literalism in the interpretation of pictures by children, Higgins (1980) found that many children in the 4- to 7-year-old range gather information that the pictures were not intended to convey. For example, in a picture in which only half a dog is shown (the other half being out of the frame), the child may conclude that the character (dog) has no head and only two legs. This phenomenon was found to be a maturational attribute that does not occur in older children. No evidence was given to indicate whether children grow out of this mistake-prone behavior simply by aging or whether learning is the change factor.

Ramsey (1989a, 1989b) suggests that "Artistic style may also be a powerful pictorial variable which primary-age children utilize as a yardstick in predicting the reality or fantasy of accompanying text content." She reports two of her own studies, done in 1982 and 1989, that demonstrate the wide range of interpretation skills already possessed by children that age.

Pettersson (1984, 1993) has approached the matter of reading pictures from the standpoint of picture readability. He first created a picture readability index called BLIX (a Swedish acronym) based on 19 picture variables that were ultimately collapsed into five rating (indexing) factors. Then he validated the index experimentally:

Experiments with ranking and rating of test-pictures showed that pictures with high BLIX-values were ranked and rated better than those with lower values by children as well as adults. Experiments with the actual making of pictures showed that despite detailed instructions on the execution of the visuals, there was still plenty of scope for individual creativity. It was also shown that informative pictures drawn so that their BLIX-ratings were high (more than 4.5 on a scale of 5.0) were to a large extent rated as aesthetically pleasing, rated as "suitable" or "very suitable" for teaching, and did not take more time to make than pictures with lower BLIX-ratings (1993, p. 158).
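Pettersson does not report the BLIX formula itself in the passage quoted above, but the logic of an index built from a handful of factor ratings on a 5-point scale can be illustrated with a minimal sketch. In the Python sketch below, the equal-weighted averaging, the function name, and the sample ratings are assumptions made for illustration; only the five-factor structure and the 4.5-of-5.0 cutoff for "high" values come from the text.

    # Illustrative sketch only: a BLIX-like picture readability index computed
    # as the mean of five factor ratings on a 1-5 scale. The factor weighting
    # and sample ratings are hypothetical; Pettersson's actual variables and
    # weights are not given in the text.

    def readability_index(factor_ratings):
        """Average five 1-to-5 factor ratings into a single readability score."""
        if len(factor_ratings) != 5:
            raise ValueError("expected ratings for exactly five indexing factors")
        return sum(factor_ratings) / len(factor_ratings)

    # Hypothetical ratings for one picture, one per indexing factor.
    ratings = [4.8, 4.6, 4.9, 4.2, 4.7]
    score = readability_index(ratings)
    print(f"Index: {score:.2f}",
          "high readability" if score > 4.5 else "lower readability")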

16.4.2 Visual Representation

The study of visual representation has generally fallen into five distinct areas of inquiry: (1) semiotics and film/video conventions; (2) signs, symbols, and icons; (3) images and illustration (including the survey by Levie discussed above); (4) multi-image; and (5) graphic representation. Each of those areas has its own growing research literature.

16.4.2.1. Semiotics and Film/Video Conventions. The literature of film is voluminous. Arnheim (1957) theorized about the nature of images in film and about film structure. He made popular the idea of film as art. Metz (1974), on the other hand, was concerned with the linguistic attributes of film and the sign language (semiotics) used by filmmakers. He identified and categorized the visual building blocks of film imagery into a notational scheme that has since become the basis for reader theories. In that sense, semiotics has become the basis for analysis of "the language of film."

The educational media field was introduced to the concept of analyzing film by Pryluck and Snow (1975). Theirs was the linguistic approach. Corcoran (1981) was one of the first to deal with semiotics and film/video conventions in a way that is related to visual literacy. He pointed out that there are problems in the use of linguistic models or reader theories as they apply to reading the images of screen media. Others who have focused on the relationship of semiotics to visual literacy are Muffoletto (1982), Metallinos (1982, 1995a), and Gavriel Salomon (1979b, 1982, 1983, 1984). In his earlier work, Salomon (1979b) indicated that different symbol systems are employed by different media. Jack Solomon (1988) extended Salomon's logic by interpreting hidden signs within the environment at large.

The vast majority of the scholarly work in semiotics is theoretical and analytical-interpretive rather than experimental. Thus, Salomon (1979b) speaks of the differences between notational symbol systems and nonnotational ones, but research on labeled and nonlabeled illustrations is done by people outside the semiotics area, like Mayer (1989).

Monaco (1981) popularized the idea of interpreting film symbols as "reading" film. The later research by Salomon (1983, 1984) has focused on demonstrating experimentally that it is much easier, in terms of mental effort, for an individual to view television than it is to read text. He characterized television as easy and print as tough. This conclusion was consistent with his earlier lament (Salomon, 1979b), based on his interpretation of Samuels' (1970) study of children reading illustrated text: "Specifically, the employment of charts, graphs, or pictures could save mental effort and make the acquisition of knowledge more effective, but it will impede [reading] skill development" (p. 83). The implication that the availability of visuals and film may make learners lazy is obvious.

The primary scholarly interest in film semiotics has been directed toward film criticism. While many approaches to film criticism have evolved, each with its advocates, it was not until 1994 that a visual literacy approach to film criticism was introduced (Metallinos, 1995). Whether the underlying theory of that approach will lead to supportive research is not yet clear.

Hefzalla (1987) has taken the position that the principle of visual primacy in film has led to the use of the visual element of film as a tool for communicating more than is actually shown. Thus film has an implying power [his term] more powerful than simple, literal linguistics. Related research has focused disproportionately on cinematic production variables: lighting, focus, shot selection, camera angle, image placement, camera movement, and such (e.g., McCain, Chilberg & Wakshlag, 1977; Metallinos & Tiemens, 1977; Kaha, 1993; Kipper, 1986). While many of these results are of interest to the visual literacy and instructional technology fields, the research is the primary province of those doing encoding research in the fields of mass communication and film studies.

Goodman (1968, 1977, 1978) contributed basic theory about symbol systems that had an influence on semiotics (Salomon, 1979a) and also added a cognitive (as contrasted to an aesthetic) dimension to the interpretation of art. In his "theory of symbols," he created a taxonomy of the major symbol systems used by human beings, including gestural and visual graphics systems. The theoretical literature on semiotics and symbol systems is rich. The research to validate the theories is, as yet, not so rich.

16.4.2.2. Signs, Symbols, and Icons. Scholarship concerning signs, symbols, and icons is interwoven with that of semiotics, but it also stands apart. The concepts are easily confused, and frequently the terms are used and misused interchangeably. Thus, to define symbol, Salomon (1979b) indicated that "most objects, marks, events, models, or pictures that serve as bearers of extractable knowledge are symbols" and that "Symbols serve as characters or coding elements . . ." (p. 29). By Salomon's definition, symbols subsume both signs and icons, which are at opposite ends of a continuum of abstractness and resemblance.

Historically there has been much disagreement over the nature, meaning, and definition of symbols (Sewell, 1994). Semiotic theorists have considered sign to be the more inclusive term, subsuming symbols, signals, indexes, and icons. Sewell (1994) helped bring focus to the functional nature of symbols this way:

Symbols are classified into a three-tiered hierarchy along a concrete to abstract continuum as pictorial symbols that include 3-D models, photographs, and illustration drawings, graphic symbols that include image-related graphics, concept-related graphics, and arbitrary graphics, and verbal symbols that are the most abstract, since they have no graphic resemblance with the object to which they refer (p. 137).

Eisner (1970), too, was concerned with the element of resemblance of symbols to their referents. However, his taxonomy classified symbols into four classes that reflect more than the simple polarity of the continuum. His classes of symbols are: conventional symbols (abstract signs/symbols with finite referents), representational symbols (iconic symbols that faithfully depict their referents), connotative symbols (those that distort the image of the referent), and qualitative symbols (those that are neither signs nor icons per se, but rather are images that establish an atmosphere or evoke feelings). Theorists have quibbled over how and why symbols represent referents and over such points as the functional imperatives of symbols as either descriptive or depictive (Goodman, 1968; Salomon, 1979b). However, these issues have not been an object of research, and the categorization schemes serve research only as obvious variables to be used in all manner of visual representation studies.

Salomon has linked his interest in codes and symbol systems with his concern for cognition and learning (1979a). In generalizing the results of four coding experiments, he reported "that at least three kinds of covert skills-singling out details, visualization, and changing points of view-can be affected by filmic coding elements" (Salomon, 1979b, p. 155).

Almost anything can be a sign, depending on how one defines the term. However, the common lay uses of the term have driven the research. Dewar and Ellis (1977) studied the perception and understanding of traffic signs. Makett-Stout and Dewar (1981) evaluated the effectiveness of public information signs. They concluded that such signs communicate as individual symbols but that they do not constitute a visual language. They also found that the proliferation of safety signs and symbols has resulted in the creation of several signs with identical meanings.

In a related analysis, Yeaman (1987) investigated the functions and content of signs in libraries. His taxonomy was austere, establishing only three basic categories of library signage: directional, locational, and informational. While his analysis included legibility concerns, font selection, and location considerations, it did not consider the use of symbolic, nonverbal signs.

Three recent studies have investigated how well people are able to read and interpret graphic signs, symbols, and icons (Griffin & Gibbs, 1993; Griffin, 1994; Griffin, Pettersson, Semali & Takakuwa, 1995). In the first of these studies (Griffin & Gibbs, 1993), subjects from the United States and Jamaica were asked to identify 48 widely used symbols. Subjects from both countries found some symbols to be confusing. Some symbols, though widely used, were found to be not widely understood. And there were significant differences in the recognition patterns between subjects of the two cultures. The study verified that signs are culture related. In a study that drew symbols from street signs, computer notation, and clip art, Griffin (1994) found that symbols used in business presentations were often misinterpreted or not understood. He found that image perception and understanding are related to context (verbal context can make visual symbols easier to interpret correctly). He also found that subjects make rapid judgments about the meaning of symbols because:

They often do not look at the visual in great detail. Rather, they take a superficial look at the symbol and then make a determination of the meaning. Visual experts should not rely on symbols to convey in-depth meaning or ideas which are critical to an outcome. Symbols do not convey accurate meanings (p. 44, emphasis added).

The third study (Griffin, Pettersson, Semali & Takakuwa, 1995) was performed on subjects in four different countries, each country with its own distinct culture. The methods and results paralleled those of the 1993 Griffin and Gibbs study. The influence of American culture was evident in the responses in other countries, particularly in Japan with regard to computer-related symbols, but the overwhelming evidence was in favor of cultural differences being the predominant variable when symbol understanding was measured. An international symbol system based on intuitive interpretation of symbol meanings may not be possible until the world shares a common culture.

16.4.2.3. Images and Illustration. In the area of images and illustration, including pictorial research, we find important contributions by Alesandrini (1981, 1984), Duchastel (1978, 1980), Duchastel and Waller (1979), Knowlton (1966), Levie (1978, 1987), Levie and Lentz (1982), Pettersson (1989, 1993), and the text The Psychology of Illustration, Volume 1, Basic Research, by Willows and Houghton (1987).

First Knowlton (1966), then Alesandrini (1984), offered classification schemes for illustrations. Their categories are conceptually quite similar. Knowlton proposed three categories: realistic, analogical, and logical. Alesandrini also offered three categories: representational, analogical, and arbitrary. These categories have been useful tools for the field. Realistic or representational illustrations share a physical resemblance to the referent object or concept. Analogical illustrations show something other than the referent object and imply a similarity. Logical or arbitrary illustrations bear no resemblance to the referent object or concept but rather offer some organizational or layout feature that highlights a conceptual or logical relationship of the illustration's components to each other.

While the results of research relevant to images and illustration are spread throughout this chapter, special notice should be made of two chapters in the Willows and Houghton book that deal with the use of illustrations in children's learning. In one chapter, Pressley and Miller (1987) have reviewed the research on "Effects of Illustrations on Children's Listening Comprehension and Oral Prose Memory." In the other, Peek (1987) has brought together the research on "The Role of Illustrations in Processing and Remembering Illustrated Text." The information being too voluminous to be included here in toto, these chapters can only be touched on briefly, with closer examination commended to readers.

Pressley and Miller (1987) have explicitly written their review of the effects of illustration on children's memory as a reflection of Paivio's dual-coding theory. That theoretical bias reflects Pressley's (1977) own conclusion that enough research evidence had been gathered regarding illustrated text, so that "No more experiments are required to substantiate the positive effect of pictures on children's learning" (p. 613). In a summary and analysis of the issues related to research on illustrations in text, Duchastel (1980) agreed with that general conclusion.

Most of the research on illustrations has involved text-illustration relationships and comparisons or other linkages of visual to verbal material. Pressley and Miller (1987) point out that verbal cues have a greater effect on memory than visual cues, but:

There can be little doubt from the available data, however, that if the picture and verbal cues [both] can be activated, they promote children's learning of stories relative to verbal codes alone (p. 89).

Peek (1987) has reviewed the research on illustrated text regarding the role of illustrations in mental processing and remembering. She cites numerous authors who have made claims about the beneficial affective-motivational roles and functions of illustrations in texts. Pictures have been said to arouse interest, set mood, arouse curiosity, make reading more enjoyable, and to create positive attitudes toward subject content and toward reading itself. While acknowledging that a few studies support such claims, Peek's overall judgment is that:

Although the proposed roles sound quite plausible, educational research has not come up with much evidence in support of these claims-perhaps because researchers consider the interest and enjoyment effects too obvious for serious investigation (p. 117).

Regarding cognitive effects of illustrations in text, Peek has concentrated on studies of retention effects. Her general conclusions are that "retention of depicted text information is facilitated, whereas unillustrated text is not," and "with growing delay, subjects tend to base their retention-test responses on what they have seen in the pictorial supplements" (p. 128). In other words, when pictures and text are used together, retention is facilitated, and pictures help delayed recall more than immediate recall.

Levie and Lentz (1982) have provided an outstanding review of the research on effects of illustrated text on learning. They summarized the results of 155 experimental comparisons of learning from illustrated versus nonillustrated text. Forty-six of those studies compared learning from illustrated text material versus from text alone:

In all but 1 of these 46 cases, the group mean for those reading illustrated text was superior to that of the group reading text alone.... In 39 of the 46 comparisons, the difference was statistically significant . . . and the average group score for the illustrated-text groups was 36% better than for text-alone groups (p. 198).

The Levie and Lentz review also analyzed as a group those studies that dealt with learning a combination of illustrated and nonillustrated text information from illustrated text versus text alone. The studies included the earlier works prior to 1970 that had led many to conclude that pictures do not facilitate comprehension of text (see Samuels, 1970). However, when all of the studies were considered together, Levie and Lentz concluded:

In summary, the diverse group of studies ... indicates that when the test of learning is something other than a test of only illustrated text information or only nonillustrated text information, the addition of pictures should not be expected to hinder learning; nor should pictures always be expected to facilitate learning. Even so, learning is better with pictures in most cases (p. 206).

A third aspect of the Levie and Lentz review is its coverage of "some closely related research areas" (p. 214). Accordingly, theirs is the most comprehensive review available of the research on learning from illustrated text in all of its aspects. For that reason their nine conclusions are particularly noteworthy:

  1. In normal instructional situations, the addition of pictorial embellishments will not enhance the learning of information in the text [emphasis added].
  2. When illustrations provide text-redundant information, learning information in the text that is also shown in pictures will be facilitated.
  3. The presence of text-redundant illustrations will neither help nor hinder the learning of information in the text that is not illustrated.
  4. Illustrations can help learners understand what they read, can help learners remember what they read, and can perform a variety of other instructional functions.
  5. Illustrations can sometimes be used as effective/efficient substitutes for words or as providers of extra linguistic information.
  6. Learners may fail to make effective use of complex illustrations unless they are prompted to do so.
  7. Illustrations usually enhance learner enjoyment, and they can be used to evoke affective reactions.
  8. Illustrations may be somewhat more helpful to poor readers than to good readers.
  9. Learner-generated imaginal adjuncts are generally less helpful than provided illustrations (pp. 225-26).

16.4.2.4. Multi-Image. Multimedia is an area whose current popularity has spurred both articles in the popular press and research interest. Romiszowski (1994) lauds "the motivation-enhancement role of multimedia that is a result of providing appropriate information through impactful presentations" (p. 12). However, multimedia has not yet become a research interest of visual literacists.

Before there was multimedia, there was multi-image, which was inadequately researched, although there was enough published over 30 years for it to be reviewed as a body of research by Burke and Leps (1989). Jonassen (1979) noted that "Research on multi-imagery generally has focused on the linear versus simultaneous presentation issue" (p. 291). The presentation of two or more images simultaneously (multi-image) has obvious implications for perception, encoding, and many other issues that remain areas of popular visual literacy concern.

Perrin (1969) provided a theory of multiple-image communication that posited that more information would be assimilated by viewers when multiple images were presented simultaneously on multiple screens. Perrin's theory was based on the assumption that viewers would mentally combine the images and consequently be able to make more and better comparisons. Perrin did not address the issue of information overload, which other scholars have considered to be either positive or negative, depending on the purpose of the multi-image presentation: positive when affective responses are sought, negative when cognitive learning is the purpose (Goldstein, 1975).

Goldstein (1975) reconsidered the relevant research on perception and applied those findings to the perception of multiple images. He cited Haber (1970) as concluding that "recognition memory for pictures is essentially perfect" and linked that to the findings of fixation studies:

We know that once a picture receives only a few fixations, that the picture will be recognized later. The presentation should be slow enough to allow the necessary fixations, but, since our memory for pictures is excellent, overly long exposures are not necessary (p. 59).

Whiteside (1987) reviewed the four multi-image dissertations of Didcoct, Ehlinger, Tierney, and Toler. The studies covered single-image versus multi-image comparisons, picture recognition following a multi-image presentation, physiological and intellectual effects on subjects viewing multi-image, perception questions, and, of course, the effects of simultaneous versus sequential presentation on learning. Whiteside generalized that the majority of the studies (three of four) had not revealed significant differences. The other study (Toler, n.d.) found that simultaneous presentation produced significantly higher scores on visual discrimination tasks than sequential presentation. She also confirmed that "visuals" (visual learners) would outperform haptics, but that haptics would benefit most from multi-image presentation.

As with film, much of the scholarly writing on multiple imagery has been devoted to aesthetics and criticism. Burke (1977) offered a scheme for multi-image criticism that considered the unique characteristics of the medium. In contrast, Seigler (1980) drew directly from film theory when he theorized about the montage effects of multiple images.

Although it would seem natural that multi-image would be the stage on which the competing cue summation and dual-coding theories would be compared, that has not happened. Jonassen's (1979) study comes close. In an experiment with seventh-grade students, one-screen, three-screen, and four-screen presentations were made of the same content. All treatment groups achieved substantial improvement on the criterion task, confirming the instructional effectiveness of the slide-tape medium. The one-screen presentation was a basic linear slide-tape. The three-screen presentation was a duplication of the one-screen presentation, except that additional examples of the concepts were shown concurrently alongside the basic set of slides. The four-screen presentation proved to be significantly more effective than the other treatments. That treatment used only the basic set of images, but kept previously shown slides in view as new images were introduced. The four screen treatment was the only one that provided concurrent projection of both examples and nonexamples for the students to compare visually.

The paucity of multi-image research may be a function of a lack of a unifying theory of multi-image (Burke, 1991a). In two recent articles, Burke (1991a, 1991b) has provided a new theoretical framework for the study of multi-image that is consistent with classical film theory. His theory accommodates the differing spatial emphasis of painting, photography, cinema, video, and multi-image. With theory in place, perhaps elaborating research will follow.

16.4.2.5. Graphic Representation. Spread across several disciplines are many papers on graphic representation, such as those of Jonassen, Beissner and Yacci (1993), Bertoline, Burton, and Wiley (1992), Braden (1983), Whiteside and Whiteside (1988), Griffin (1989), Macdonald-Ross (1977a, 1977b, 1979), Moxley (1983), Pruisner (1992), Winn (1980, 1981, 1982, 1983, 1986, 1987), and Winn and Holliday (1982).

16.4.2.5.1. Graphics. Saunders (1994) identifies nine categories of graphics: "symbols (pictographic or abstract), maps, graphs, diagrams, illustrations or rendered pictures (realistic to abstract), photos (still or moving), three-dimensional models, graphic devices and elements (may also be considered as symbols), and composite graphics made up of two or more of the other types" (p. 184). At first glance, two of those categories seem questionable, those of photos and 3-D models. However, Saunders explains that she refers to photos that have been digitized and are capable of artistic manipulation and to models constructed through computer graphics and animation. These then all fit within her definition: "Graphics may be simply defined as a prepared form of visual message or a visual form of communication" (p. 184). As noted previously, Alesandrini (1984) classified graphics much differently under three rubrics: representational, analogical, and arbitrary. Her classification has been widely used in the research literature.

Graphics have also been categorized according to instructional applications. Rieber (1994) classified graphic applications as cosmetic, motivation, attention gaining, presentation, and practice. He further noted that cosmetic and motivation applications served affective functions, and that the other three applications served cognitive functions.

Research on graphics is heavily interwoven with research on pictures, as reviewed by Levie (1987). Accordingly, most of Levie's "picture" conclusions are applicable to graphics. Thus, we might read his conclusions as follows:

Overall, research on interpreting pictorial cues and features [graphic cues and features] demonstrates that although some fundamental skills such as object recognition are essentially innate, young children and adults without ample picture-viewing [graphics-viewing] experience have trouble decoding pictorial [graphic] information that is abstract, complex, or represented in culture-bound conventions-especially when the objects and concepts shown are unfamiliar (Levie, 1987, pp. 7, 8; brackets added).

However, this expansion of Levie's conclusions must be taken with caution. When the purpose of the visuals is exclusively to support reading to learn, pictures and representational illustrations are more effective than graphics (Levin, Anglin & Carney, 1987).

16.4.2.5.2. Charts, Graphs, and Diagrams. For a more extensive review and discussion of research on charts, diagrams, and graphs than is provided here, readers are referred to the chapter by Bill Winn in Houghton and Willows' The Psychology of Illustration: Basic Research (1987). In that review of research on charts, graphs, and diagrams, Winn (1987) summarized that usually, but not always, graphics have done more to improve the performance of students with low ability than of those with high ability. Studies that supported that conclusion included the study of the effects of complex flow diagrams on 10th-grade science learning by Holliday, Bruner, and Donais (1977) and the study of elementary and secondary school students solving illustrated math story problems by Moyer, Sowder, Threadgill-Sowder, and Moyer (1984). Both of those studies are "diagram" studies.

16.4.2.6. Diagrams. According to Saunders (1994), diagrams "include those visuals drawn to represent and identify parts of a whole, a process, a general scheme, and/or the flow of results of an action or process" (p. 185). Winn (1987) points out that:

There is a disproportionate amount of research on what we have defined as "diagrams." There is very little on the instructional effectiveness of graphs, and not much more on charts (p. 168).

Diagrams have proved particularly effective in science instruction when used to show processes (Winn, 1987).

In a study that examined the effectiveness of two mathemagenic activities, study questions and diagrams, Buttolph and Branch (1993) found a weak effect in favor of diagrams, but no significant difference between the treatment groups. However, their post hoc comparison led them to "suggest that diagrams have the potential to be more effective mathemagenic aids to learning than study questions . . ." (p. 25). Since the subjects in this study created their own diagrams with prompts, the results should be compared to those of Alesandrini's (1981) study that involved student creation of mathemagenic material. In that study, college students were given a science chapter to read and were assigned one of two study strategies. Students who wrote paraphrases were compared to students who drew pictures of the material. Alesandrini, too, found only weak (not significant) effects in favor of the drawing strategy.

16.4.2.7. Charts. As a graphic form, charts are characterized by the organization of information on the page in groupings that are set apart from each other by columns and rows (Winn, 1987). Thus defined, charts will be used with increasing frequency in the future due to their ease of creation with the spreadsheet and table functions of modern computer software. The term chart, however, is used incorrectly to refer to many other graphic forms such as posters, pie graphs, bar graphs, and line graphs, as in wall charts, pie charts, and so forth.

Winn (1987), in reviewing the studies of Decker and Wheatley (1982) and of Rabinowitz and Mandler (1983), concluded that students improve free recall if they take advantage of spatial grouping, and that this free recall is further facilitated if the cognitive elements are grouped according to a conceptual structure. Considering an even wider body of research, Winn (1987) said, "We can conclude from this research that even the simplest spatial organization of elements into meaningful clusters has the potential for improving learning" (p. 177).

16.4.2.8. Graphs. Graphs are used to show quantitative relationships. Different types of graphs show different functions; e.g., line graphs show sequence and trends, pie graphs show portions of a whole, and bar graphs show quantitative comparisons (Fry, 1983; Macdonald-Ross, 1977b; Pettersson, 1993; Winn, 1987). A form of bar graph that is of particular interest to visual literacists is the isotype graph, wherein quantities are represented not by bars but by series of small representational drawings (Winn, 1987). The isotype is thus visually a combination of the graph and the illustration.
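To make the isotype idea concrete, the following minimal Python sketch, with invented data, prints an isotype-style graph in which each quantity appears as a row of repeated symbols rather than as a solid bar. The labels, counts, and symbol are hypothetical and are not drawn from Winn or from any study cited here.

    # Illustrative sketch: an isotype-style graph rendered in text. Each unit
    # of a (hypothetical) quantity is drawn as a repeated symbol, so the row
    # works both as a bar and as a series of small pictures.

    def isotype_row(label, count, symbol="*", per_symbol=1):
        """Return one row of an isotype-style graph: a label plus repeated symbols."""
        icons = symbol * round(count / per_symbol)
        return f"{label:<10} {icons} ({count})"

    # Hypothetical data for three groups.
    data = [("Group A", 12), ("Group B", 7), ("Group C", 3)]
    for label, count in data:
        print(isotype_row(label, count))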

More than any other of the graphic forms, graphs are governed by design conventions, and organizations such as the American Statistical Association have gone so far as to prepare style sheets to codify those conventions (ASA, 1976; Tufte, 1983). Research to substantiate the unique teaching values of graphs, even the highly pictorial isotype graphs, is lacking.

16.4.2.8.1 Graphic Organizers. Closely related to the effects of graphic materials are the effects that occur when they are used in special ways. Early research on graphic organizers showed "little or no effect" (Levie & Lentz, 1982, p. 215). Therefore, it will not be discussed here. Bellanca (1990, 1992) has proposed two dozen graphic organizers as tools for teaching thinking. The organizers all have two things in common. First, each has a visual (or graphic) component. Second, each graphic component is meant to be the structural background for the display of words, phrases, or other verbal information. Unfortunately (for our purposes), the Bellanca works are trade books, written for elementary and secondary school teachers. No research evidence is presented to validate the effectiveness of these graphic patterns, although each is recommended for use in a particular type of learning situation. Black and Black (1990) also have written a trade book that suggests that thinking can be organized with graphic organizers. Again, there is no substantiating research evidence. Does this mean that scholars have no interest in this issue? Not at all. If we accept what Bellanca classifies as a graphic organizer, then we can say that some, not all, of the graphic organizers have received limited research attention.

In early studies that measured the effect of graphic organizers when used as teacher-directed, prereading activities, graphic organizers failed to significantly facilitate learning of content material (Smith, 1978). Moore and Readence, in the first of two meta-analyses (1980), reported more positive results, indicating that there was, in fact, a small overall effect of graphic organizers on learning from text. They also noted a strong effect when graphic organizers were used as student-constructed postorganizers rather than preorganizers. The second Moore and Readence meta-analysis (1984) confirmed the results of the 1980 study and found that adults benefit more from graphic organizers than do children.

Moore and Readence (1984) also analyzed the affective effect on teachers. Teachers reported themselves to be more confident, better organized, and more in control when they had prepared themselves to use graphic organizer strategies. "In essence, teachers believed that graphic organizers prepared them to help students cope with particular pieces of content" (p. 15).

Johnson, Toms-Bronowski, and Pittelman (1982) found semantic maps to be superior to traditional teaching methods when used to teach vocabulary. Sinatra, Berg, and Dunn (1985) used a graphical outline with learning-disabled students, with positive effects on learning comprehension.

Sinatra, Stahl-Gemake, and Berg (1984) presented vocabulary concepts through graphic networking to 27 disabled readers. Results were compared against a verbal-oriented readiness approach, yielding significantly higher comprehension scores in favor of the mapping approach. Sinatra (1984) introduced four different types of network outlines in an attempt to increase reading and writing proficiency. Using these semantic mapping techniques had no significant effect on quality of writing but did have significant effects on improving reading comprehension.

In a fourth study, Sinatra demonstrated that these same semantic mapping techniques could render significant improvement in writing over a short period of time when the graphic outline is used to help the student organize thoughts prior to writing.

Jonassen, Beissner, and Yacci (1993) take a narrower view of what constitutes a graphic organizer, limiting the category to structural overviews. As such, graphic organizers are visual aids that function as "organizers" in the sense of what Ausubel (1978) termed an advance organizer. So defined, the graphic usually takes the form of labeled nodes connected by unlabeled lines. Jonassen, Beissner, and Yacci recommend using graphic organizers with good or more mature students. They cite examples of graphic organizer research with significant increases in learning effects for science learning (Amerine, 1986) and for recall of social science passages (Boothy & Alvermann, 1984).
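The structural overview that Jonassen, Beissner, and Yacci describe, labeled nodes connected by unlabeled lines, can be modeled with a very small data structure. The Python sketch below is illustrative only; the class name, the example topic, and the node labels are invented and do not come from their work.

    # Illustrative sketch: a graphic organizer as labeled nodes joined by
    # unlabeled, undirected links, per the structural-overview description above.
    # The example topic and node labels are hypothetical.

    from collections import defaultdict

    class GraphicOrganizer:
        def __init__(self):
            self.links = defaultdict(set)  # node label -> set of connected labels

        def connect(self, a, b):
            """Join two labeled nodes with an unlabeled (undirected) line."""
            self.links[a].add(b)
            self.links[b].add(a)

        def neighbors(self, label):
            return sorted(self.links[label])

    organizer = GraphicOrganizer()
    organizer.connect("Photosynthesis", "Light energy")
    organizer.connect("Photosynthesis", "Chlorophyll")
    organizer.connect("Chlorophyll", "Leaves")
    print(organizer.neighbors("Photosynthesis"))  # ['Chlorophyll', 'Light energy']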

Eggen, Kauchuk, and Kirby (1978) examined the effects of graphic organizers on comprehending and producing hierarchies. They found a significant increase in comprehension for fourth-, fifth-, and sixth-grade students asked (and able) to draw a hierarchy of the information in a text. The ability to create hierarchical drawings was not a metacognitive skill possessed by all students at that developmental stage.

16.4.3 Color

Although color could be a subtopic of any or all of the five categories of visual representation above, the topic generates so much interest that it will be covered here separately. Dwyer and Lamberski (1983) reviewed the research literature on the use of color in teaching. Their reference list includes 185 items, most being reports of research. Their general conclusion was that "The instructional value of color appears highly dependent upon the complexity of the task in the materials and perceived response requirements by the learners" (p. 316). More specific conclusions were that:

  • Color was found to be of value in nonmeaningful tasks, especially if other perceptual cues lacked physical form differences or were low in associative value. The application of color to meaningful tasks appeared related to the interaction between learner and materials.
  • In externally paced materials (passive), color appeared to be secondary to other salient features.
  • If the task in passive materials became confusing, especially in simultaneous audio and visual materials, the learner selectively attended to a preferred mode as the functional stimulus. However, in most adult learners, this preferred mode is verbal, though in some instances an integrated verbal and visual strategy may be used.
  • If color was central to the concept being presented and if students focused their attention on it, color facilitated learning.
  • In unstructured situations, older learners ... disregard . . . the potential contribution that could have been made by the relevant visual code.
  • In structured situations ... older learners appeared to have the encoding and rehearsal strategies necessary to use an integrated code system like color.
  • Younger learners generally have been found to benefit from color cues in passive materials, due more to their motivational characteristics than to their identified cognitive functions.
  • Color codes have been found to be ineffective in passive materials, apparently due to insufficient learner-material interactive time.
  • Color codes have had more success in facilitating verbal performance in self-paced (active) materials.
  • The value of color in retrieval tasks appears highly task related.
  • Color cues appeared to facilitate recall of low-perceptual tasks which are highly visual [but not of more verbal tasks].
  • Color codes have been found to facilitate achievement in complex self-paced tasks, particularly with criterion tasks that are visual in nature (p. 317).

There is continued interest in color. See, for example, the recent work of Pruisner (1992, 1993, 1994). Pruisner (1994) concluded that students prefer color-cued text, and that the use of color in graphics enhances learning. Pett (1994) reviewed the research on the use of white letters on colored backgrounds. Preferred background colors for both slides and CRTs were found to be blue and cyan. Medium-density backgrounds were found to provide greater legibility than either high or low density. Green and cyan provided high legibility with slides, but not on a CRT.

None of the research cited above has resulted in major new theory or in revelations of such a magnitude as to cause a paradigm shift. Rather, the studies have resulted in the revelation of principles for image design and for instructional applications.

Four extraordinary books have been published which support research on illustration and graphic representation: the two books by Houghton and Willows/Willows and Houghton (1987) and the two books by Tufte (1983, 1990). While the latter are not research compendia, per se, Tufte's The Visual Display of Quantitative Information (1983) is scholarly, filled with principles drawn from the research, and is a definitive work on the subject. In a like manner, Tufte's Envisioning Information (1990) is a comprehensive, scholarly work that is a definitive book on how to use illustrations in support of concepts.

