27: Text Design
27.3 Text Difficulty

A separate area of research relevant to text design concerns itself with assessing how difficult a text might be for its intended readers and, indeed, whether or not difficulty per se is a bad thing. The title of a book by Chall and Conard (1991) puts the question succinctly: Should Textbooks Challenge Students? The Case for Easier or Harder Books. The area of text difficulty has been examined from numerous points of view (e.g., see Davison & Green, 1987; Schriver, 1989). Here I want simply to report on some of the issues and findings. Again, if we start with an historical perspective, it is probably true to say that the instructional materials of today are not only more spaciously arrayed but also contain shorter paragraphs, shorter sentences, and shorter words than did texts published some 50 years ago. What can research tell us about these features of text difficulty?

27.3.1 Paragraph Lengths

Few researchers have commented on the effects of long chapters and long paragraphs on readability. It would seem, other things being equal, that short chapters, and short paragraphs within them, will make a text easier to read. In addition, the ways in which new paragraphs are denoted may be important. One problem is knowing how best to format paragraphs without unduly breaking the readers' flow. In an early study, Hartley, Burnhill, and Davies (1978) suggested that different methods of paragraph denotation can affect the speed and accuracy of location and access, as well as the recall of information.

27.3.2 Sentence Length

It is generally considered that long sentences (such as this present one) are difficult to understand because they often contain a number of subordinate clauses that, because of their parenthetical nature, make it difficult for readers to bear all of their points in mind and, in addition, because there are often so many of them, make it harder for readers to remember the first part of the sentence when they are reading the last part.
I once wrote: "Long sentences overload the memory system; short sentences do not."
Perceptive readers will notice that many of my sentences in this chapter contain more than 30 words, but at least they have been scrutinized! Furthermore, the sentence above ignores the advice given by many other commentators (e.g., see Berger, 1993) that sentences (and paragraphs) should vary in length if they are to entertain the reader.

27.3.3 Word Length

Long words, like long sentences, also cause difficulty. It is easier to understand short, familiar words than technical terms that mean the same thing. If, for example, you wanted to sell thixotropic paint, you would probably do better to call it nondrip! One author on style quoted a letter writer in The Times who asked a government department how to obtain a book. He was "authorized to acquire the work in question by purchasing it through the ordinary trade channels"; in other words, "to buy it." Concrete words and phrases are shorter and clearer than abstract ones.

27.3.4 Difficult Short Sentences

It does not necessarily follow, of course, that passages written in short sentences and short words will always be better understood. Alphonse Chapanis (1965, 1988) provides many examples of short pieces of text that are difficult to understand. The one I like best is the notice that reads:
PLEASE WALK UP ONE FLOOR
WALK DOWN TWO FLOORS
FOR IMPROVED ELEVATOR SERVICE
People interpret the notice as meaning "to get on the elevator I must either walk up one floor, or go down two floors," or even "to get on the elevator I must first walk up one floor and then down two floors." When they have done this, they find the same notice confronting them! What this notice means, in effect, is "Please don't use the elevator if you are only going a short distance." Chapanis's articles are well worth studying. They are abundantly illustrated with short sentences that are hard to understand and (in some cases) potentially lethal.

27.3.5 Ambiguities

Many short (and indeed many long) sentences can turn out to be ambiguous. Consider "Then roll up the three additional blankets and place them inside the first blanket in the canister." Does this sentence mean that each blanket should be rolled inside the other, or that three rolled blankets should be placed side by side and a fourth one wrapped around them? (An illustration would clarify this ambiguity.) Ambiguities, or at least difficulties, often result from the use of abbreviations or acronyms (strings of capital letters that form words, e.g., PLATO). I once counted over 20 such acronyms in a two-page text distributed by my university computer center. Chapanis (1988) provides additional examples, also from the field of computing. The meanings of acronyms may be familiar to the writer, but they need to be explained to the reader. Furthermore, readers easily forget what an author's abbreviations stand for when they are not familiar with the material.

27.3.6 Clarifying Text

Generally speaking, text is usually easier to understand when:
- paragraphs and sentences are short (but varied in length);
- words are short, familiar, and concrete rather than abstract;
- ambiguous phrasing is avoided; and
- abbreviations and acronyms are explained.
27.3.7 Measuring Text Difficulty

There are many readability formulas available that attempt to predict the age at which the reader, on average, will have the necessary reading skills and abilities to understand a piece of text. Most readability formulas are not in fact as accurate at predicting this as one might wish, but the figures that they provide do give a rough guide. Typical readability formulas combine two main measures (with a constant) to predict reading age. These are: (1) the average number of words per sentence, and (2) the average length of the words in these sentences (usually measured in syllables). Thus, the longer the sentences and the more complex the vocabulary, the more difficult the text is rated. Many readability formulas can be calculated by hand. One of the simplest, the Gunning Fog Index, is as follows:
Fog Index = 0.4 x (average number of words per sentence + percentage of words of three or more syllables)
The result is the (American) reading grade level. A better-known formula, but one that is harder to calculate by hand, is the Flesch Reading Ease (RE) formula. This is: RE = 206.835 - 0.846w - 1.015s, where w = number of syllables per 100 words, and s = average number of words per sentence. In this case, the higher the RE score, the easier the text. The relationship between RE, difficulty, and suggested reading ages is as follows:
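Both formulas lend themselves to automation. The sketch below is my own illustration, not taken from the chapter: it implements the Fog and Flesch calculations exactly as given in the text, but the vowel-group syllable counter is an assumed, crude stand-in for a proper syllable dictionary.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic (an assumption, not dictionary-accurate):
    # one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_100 = 100 * sum(count_syllables(w) for w in words) / len(words)
    pct_hard = 100 * sum(1 for w in words if count_syllables(w) >= 3) / len(words)
    return {
        # Gunning Fog Index: approximate (American) reading grade level.
        "fog": 0.4 * (words_per_sentence + pct_hard),
        # Flesch Reading Ease: higher scores indicate easier text.
        "flesch_re": 206.835 - 0.846 * syllables_per_100
                     - 1.015 * words_per_sentence,
    }
```

Run on a six-word, monosyllabic sentence such as "The cat sat on the mat.", the Fog Index comes out at 2.4 (a very low grade level) and the RE score well above 100 (very easy), illustrating how both measures reward short sentences and short words.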
Today, with word-processing systems, it is much easier to apply the more complex readability formulas. For example, the readability program on my word processor can be applied to text to provide three sets of readability data derived from three different formulas. When this program was run on some 50 sentences of this chapter, the outcomes were as follows:
It can be seen that the predictions from the four formulas vary slightly and only give a rough estimate of reading difficulty. Such readability formulas have other obvious limitations. Some short sentences are difficult to understand (e.g., "God is grace"). Some technical abbreviations are short (e.g., DNA) but difficult for people unfamiliar with them. Some long words, because of their frequent use, are quite familiar (e.g., communication). The order of the words, sentences, and paragraphs is not taken into account, nor are the effects of other aids to understanding such as illustrations, headings, numbering systems, and typographical layout. Also, most importantly, the readers' motivation and prior knowledge of the topic are not assessed. All of these factors affect text difficulty. (See Davison & Green, 1987, for a fuller discussion.) Nonetheless, despite these problems, readability formulas can be useful tools for having a quick look at the likely difficulty of text that is being produced, and also (provided you use the same measure) for comparing the relative difficulty of two or more pieces of text. Comparison studies of original and revised texts have shown advantages for more-readable text in:
The difficulty with readability measures arises when people attempt to use them to change the way text is written. Text that has short, choppy sentences can be difficult to read (Armbruster & Anderson, 1985). Critics of readability formulas have had fun producing "more readable" versions of such famous texts as the Declaration of Independence or the Lord's Prayer to highlight the limitations of the formulas in these respects. Davison and Green (1987) provide one of the best critiques of readability formulas currently available. Studies by Beck and her colleagues are also interesting to note in this connection (e.g., Beck, McKeown & Worthy, 1995; Loxterman, Beck & McKeown, 1994): the more-readable texts in these studies score as less readable on readability formulas.

27.3.8 Revising Written Text

There are numerous guidelines on how to write clear text and also on how to revise one's own text, or text written by someone else (e.g., see Bellquist, 1993; Kahn, 1991). In my own work with 11- to 13-year-old schoolchildren, I have used the guidelines given in Figure 27-7. These guidelines are based on theoretical work conducted by psychologists and others on the nature of the writing process.
27.3.8.1 Computer-Aided Revision

Several computer programs have now been developed to help writers revise both technical and conventional text (e.g., see Hartley, 1992). Many of these programs were originally designed to be run after the text had been written, to analyze it and to make suggestions for improvement. Today, however, we may expect writers to use such programs concurrently with their writing. Such programs point to potential difficulties and offer on-screen advice. Figure 27-8 provides an illustration of the advice given to an author who had a "dangling modifier" in her text. One typical suite of such programs at the time of writing is Grammatik 5. The number of facilities available is currently being expanded, but Figure 27-9 lists some of them. One difficulty here is whether the novice writer can cope with all the information provided. Another appears to be that writers often need to understand sophisticated grammar in order to follow the advice offered by the authors of such programs! Evaluation studies of different programs are now beginning to appear (e.g., see Kohut & Gorman, 1995).
In 1984, I published a report of how useful one such set of computer-aided writing programs (The Writer's Workbench) had been to me in revising a particular article. I compared the suggestions made by nine colleagues with the suggestions made by the computer programs. The human and the computer aids to writing differed in two main ways. My colleagues were more variable than the computer programs: different colleagues picked on different things to comment on, and none made comments in all of the 14 categories of comment that I derived in the inquiry. The computer programs were more thorough and more consistent than my colleagues, but over a narrower range of six categories. The programs picked up every misspelling; they drew attention to every sentence that was over 30 words long; they indicated that I had missed out a bracket, but did not say where; and they provided me with 85 suggestions for better wording! Thus the computer programs were excellent at doing the donkey work of editing; my colleagues excelled at using their knowledge to point out inconsistencies and errors of fact, and to suggest better examples. The final version of the article thus benefited from the combined use of both sources of information.
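Two of the mechanical checks described above are easy to mimic. The sketch below is my own illustration and not The Writer's Workbench itself: it flags every sentence of over 30 words and reports, without locating, an unmatched bracket, just as in the account above.

```python
import re

def review(text: str) -> list[str]:
    """Mimic two narrow-but-thorough checks of a computer writing aid."""
    warnings = []
    # Draw attention to every sentence that is over 30 words long.
    for sentence in re.split(r"[.!?]+", text):
        words = re.findall(r"[A-Za-z']+", sentence)
        if len(words) > 30:
            warnings.append(
                f"Long sentence ({len(words)} words): {sentence.strip()[:40]}..."
            )
    # Report an unmatched bracket, but (as in the original account) not where.
    if text.count("(") != text.count(")"):
        warnings.append("Unmatched bracket somewhere in the text.")
    return warnings
```

Like the programs described above, such a checker is thorough and consistent over its narrow range, but it cannot spot an error of fact or suggest a better example; that remains the human reviewer's strength.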
In 1993, I replicated this study with a journalist colleague (Dorner & Hartley, 1993). The conclusions that we reached were much the same, despite the advances made in computer-aided writing programs.