Uncovering the Key Skills of Reading

Roger Beard. Handbook of Early Childhood Literacy. Editors: Nigel Hall, Joanne Larson, Jackie Marsh. Sage Publications. 2003.

This chapter attempts to provide an overview of research that has been undertaken to uncover the key skills of reading. It will outline the debates that have been a feature in this area of literacy studies over many years and examine more recent theoretical stances that have tried to reconcile the earlier debates. The chapter will also briefly consider the implications of the advent of hypertext for our understanding of what key reading skills comprise.

It needs to be stressed that the chapter is focused on the key skills of interpreting and comprehending text in printed or electronic media in written English. Similar discussions that are focused on other languages will raise different issues, particularly if the languages are not alphabetic ones like English. As will be seen, some of the key skills of reading written English are related to the fact that written English comprises a code in which alphabetic letters represent the phonemes of spoken language (Byrne, 1998).

Much of the research on reading skills has been of an experimental kind and the chapter will inevitably reflect this. However, the discussions in the chapter have been written to take account of the fact that the audience for this handbook is likely to be predominantly from an educational background. One of the few substantial naturalistic studies is by Bussis et al. (1985). Their conceptual framework regarding reading skills has much in common with the experimental work in the field, discussing the relative importance of decoding, anticipation of grammatical sequences and so on, although they report the use of observational and interview methodologies and place a relatively greater emphasis on the importance of constructing meaning from text.

There is a huge literature on this topic and the publications that the chapter focuses upon have been selected either because they have been particularly influential or because they are representative of a certain perspective. Where these references are rather dated, other more recent publications have been suggested. It is also important to note that, although the chapter focuses on certain technical skills of literacy, these skills are developed within social processes in homes, classrooms and the wider world. The chapter will conclude by noting that becoming a successful reader involves both key skills and meaningful social practices.

The Language Base of Literacy

It is widely agreed that language has generally preceded literacy, both historically in the evolution of humankind and individually in normal patterns of childhood development (Liberman, 1995). This precedence suggests that the understanding of reading will benefit from a consideration of language, on which literacy is in some ways ‘parasitic.’ In attempting to uncover the key skills of reading, the structure of language is a useful starting point. Key skills are taken to be the centrally important abilities that can be developed through training and/or practice, in a process described by Bussis et al. (1985: 67) as orchestrating personal resources to achieve a particular result.

Some influential linguists (e.g. Crystal, 1976) have shown how the structure of spoken English can be helpfully represented by three strands: pronunciation (sounds and spellings), grammar (syntax and morphology) and meaning (words and discourse).

Although different terms may be used, the three-strand distinction is common in linguistic description. Such a model is also likely to contain the elements for the ‘encoding’ of the language. In English this is centred on the use of the graphology, the system of vowels, consonants etc., to represent the phonology, the system of sounds (phonemes). Written language often involves a more deliberate use of vocabulary and grammar, in order to meet the demands of the sociocultural contexts which have led to the decision to use written language in the first place. Written language also follows certain conventions of space and direction: in English normally left to right, down the page. Written language uses various kinds of punctuation that compare, in a limited way, with pauses in speech. Written language is also pragmatically different. The reader is typically distant from the creator of the written language in both space and time, whereas the use of speech normally involves a face-to-face interaction. The respective genres and registers used often reflect this.

It is important to add that subdivisions such as the one outlined above are rarely all-embracing and some strands of language may cut across others. For instance, grammatical rules apply within words (morphology) as well as between words (syntax). Meaning is conveyed at word level (vocabulary) as well as at discourse or text level. Punctuation is part of the graphology but also plays an important role in confirming the grammatical rules that are being used.

Analysing the strands of language in this way provides a helpful starting point for attempts to uncover the key skills of reading. Taking the three strands in turn, the letter-sound relationships of unfamiliar words have to be decoded; grammatical sequences have to be followed; words have to be recognized; and the meaning attributed to them has to be understood. Such an analysis seems to imply that successful reading and writing may involve effective use of information from a range of sources. Descriptions of these sources of information have often fallen into one of two groups, often described as ‘bottom-up’ or ‘top-down.’ The two groups approach the language base of literacy from different directions. The former gives emphasis to the code that is used in written language to represent the spoken; the latter emphasizes the meaning that is conveyed by the written language. The distinction is clearly made by Jeanne Chall (1983: 28-9). She explains that ‘bottom-up’ approaches are those that view the reading process as developing from perception of letters, spelling patterns, and words, to sentence and paragraph meaning.

‘Top-down’ approaches stress the first importance of language and meaning for reading comprehension and also for word recognition. The reader theoretically samples the text in order to confirm and modify initial hypotheses. Each approach will now be briefly discussed in turn before a third approach is considered, one that deals with the interactive nature of reading skills.

Are the Key Skills ‘Bottom-Up’ Ones?

One of the best-known bottom-up models of reading has been outlined by Philip Gough (1972) and is focused on ‘one second of reading.’ The following summary of Gough’s model is derived from that provided by David Rumelhart (1985). The visual information (written characters) from the text is seen and registered in an ‘icon’ that holds the information while it is scanned. The written characters are then processed by a ‘pattern recognition’ device. This device identifies the letters, which are then read into the ‘character register.’ While the letters are being held in this way, a ‘decoder’ converts the character strings into the underlying phonemic representation. The phonemic representation of the original character strings serves as an input to a ‘librarian,’ which matches up these phonemic strings against the vocabulary store (lexicon). The resulting entries are then fed into ‘primary memory.’ The four or five lexical items held in primary memory at any one time provide input to a ‘wondrous mechanism,’ which applies its knowledge of the syntax and semantics to determine the meaning of the input. This ‘deep structure’ of meaning is sent on to the final memory register, TPWSGWTAU (‘the place where sentences go when they are understood’). When all inputs of the text have found their final resting place in TPWSGWTAU, the reading is complete.
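The strictly serial character of this model can be made concrete in a short sketch. The Python fragment below is illustrative only: the stage names are Gough’s, but the toy lexicon and the representations passed between the stages are invented here, and no stage receives feedback from any later one, which is precisely the property of the model that its critics seize upon.

```python
# A minimal sketch of Gough's serial pipeline. The stage names follow Gough;
# the representations and the toy lexicon are invented for illustration.

LEXICON = {"the": "determiner", "cat": "noun", "sat": "verb"}

def pattern_recognizer(icon):
    """Identify the letters held in the visual 'icon' (here, just characters)."""
    return list(icon)

def decoder(character_register):
    """Convert the character string into a phonemic representation (toy rule)."""
    return "".join(character_register).lower()

def librarian(phonemic_string):
    """Match the phonemic string against the lexicon."""
    return (phonemic_string, LEXICON.get(phonemic_string, "unknown"))

def wondrous_mechanism(primary_memory):
    """Apply syntactic and semantic knowledge to the buffered lexical items."""
    return " ".join(word for word, _tag in primary_memory)

def read_text(words):
    tpwsgwtau = []       # 'the place where sentences go when they are understood'
    primary_memory = []  # holds four or five lexical items at any one time
    for word in words:
        letters = pattern_recognizer(word)          # icon -> character register
        phonemes = decoder(letters)                 # characters -> phonemes
        primary_memory.append(librarian(phonemes))  # lexicon lookup
        if len(primary_memory) == 5:                # buffer full: interpret, then clear
            tpwsgwtau.append(wondrous_mechanism(primary_memory))
            primary_memory = []
    if primary_memory:
        tpwsgwtau.append(wondrous_mechanism(primary_memory))
    return tpwsgwtau

print(read_text(["The", "cat", "sat"]))  # -> ['the cat sat']
```

Note that information flows in one direction only; the syntactic and semantic knowledge in the ‘wondrous mechanism’ can never influence the earlier letter and word stages.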

Gough goes on to consider the implications of this model for the young learner. He emphasizes that children of school age bring several important capacities to learning to read. They can produce and understand sentences; they have a vocabulary; they have the ability to understand language; and they have a phonological system. Gough also concedes that identifying letters, ‘blank, stark, immovable, without form or comeliness,’ does not come naturally. He stresses that letter recognition has to be accomplished, ‘whether by means of alphabet books, blocks, or soup.’ However, the fundamental challenge in learning to read, according to Gough, arises in what is commonly referred to as ‘decoding,’ converting characters into phonemes. After discussing the limitations of teaching approaches that use word recognition (‘look-and-say’) techniques, Gough argues that children should be helped to map the letter-sound code from the start, while assuring them that the code is solvable. Gough stresses that he does not see phonics as a method of teaching children grapheme-phoneme correspondence rules. The rules that children learn are not the rules they must master, but rather heuristics for locating words through auditory means. The lexical representations of those words then provide data for the induction of the real character-phoneme rules. Skill in phonics provides children with a valuable means of data collection about the writing system. According to Gough, the reader appears to go from print to meaning as if by magic. But this is an illusion. The reader really plods through the sentence, letter by letter, word by word. Good readers need not guess what the text conveys; poor readers should not.

Another well-known bottom-up model is that of David LaBerge and Jay Samuels (1974). This model consists of three memory systems. As the text is processed, the three systems hold different representations. The visual memory system holds visually based representations of the different elements that make up the text (letters, spelling groups, and so on). The phonological and semantic memory systems similarly hold phonological and semantic representations. Visual information is analysed by a set of specialized ‘feature detectors,’ most of which are fed directly into ‘letter codes.’ The letter codes feed into ‘spelling pattern codes,’ which in turn feed into ‘visual word codes.’ There are a number of routes whereby words can be mapped into meanings, either directly (necessary to distinguish between homophones like ‘pair’ and ‘pear’) or through word group and phonetic codes (necessary for a word group meaning like ‘time out’).
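The need for a direct route can be shown in miniature. In the following sketch the dictionaries are invented for illustration; the point it demonstrates is the one made above, that homophones such as ‘pair’ and ‘pear’ share a phonological code, so only the direct visual-to-meaning route can tell them apart.

```python
# Toy illustration of the two routes from visual word codes to meanings in
# the LaBerge-Samuels model. All dictionaries are invented for demonstration.

PHONOLOGY = {"pair": "/pear/", "pear": "/pear/"}           # homophones share a code
MEANING_BY_SPELLING = {"pair": "two of a kind", "pear": "a fruit"}
MEANING_BY_SOUND = {"/pear/": "two of a kind OR a fruit"}  # irreducibly ambiguous

def direct_route(word_code):
    """Visual word code -> meaning, bypassing phonology."""
    return MEANING_BY_SPELLING[word_code]

def phonological_route(word_code):
    """Visual word code -> phonological code -> meaning."""
    return MEANING_BY_SOUND[PHONOLOGY[word_code]]

print(direct_route("pear"))        # 'a fruit': the homophone is resolved
print(phonological_route("pear"))  # ambiguous: sound alone cannot resolve it
```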

Some Criticisms of Bottom-Up Models

In discussing these two bottom-up models, Rumelhart draws attention to several research findings that are difficult to account for with either model.

First, the perception of letters often depends on the surrounding letters. An ambiguous symbol, such as a poorly written w, may be misread as an e followed by a v, so that went becomes event. The reader is saved from this ambiguity if the symbol appears in a sentence like ‘Jack and Jill went up the hill.’

Secondly, the perception of words depends on the syntactic environment in which we encounter them. Studies of oral reading errors made by children and adults show that over 90% of reading errors are grammatically consistent with the sentence to the point of error (Weber, 1970). Rumelhart argues that such findings are difficult to reconcile with the bottom-up models discussed above. In the Gough model, for example, syntactic processing occurs later in the sequence than the findings imply.

Thirdly, the perception of words depends on the semantic environment in which we encounter them. This environment can be seen at work in our perception of homonyms (‘wind up the clock’ versus ‘the wind was blowing’) and also in resolving ambiguities (‘They are eating apples’ could mean either ‘The children are eating apples’ or ‘The apples may be eaten’). The importance of semantic context in letter identification was noted as early as 1886 by Cattell. He found that skilful readers can perceive whole words as quickly and easily as single letters, and whole phrases as quickly and easily as strings of three or four unrelated letters (cited by Adams, 1990: 95).

The authors of the most influential bottom-up theories have in time accepted that their original theories have been overtaken by evidence. In subsequent publications, Gough, and LaBerge and Samuels, have conceded the weaknesses in their models of reading. Gough (1985) accepts that predictable texts facilitate word recognition, although he warns that most words are not predictable and so can only be read bottom-up. He accepts that his model did not pay sufficient heed to the problems of understanding text, but believes that it still points in the right direction. Similarly, LaBerge and Samuels’ model has been revised to include feedback loops from semantic memory to earlier stages of processing (Samuels, 1985).

Are the Key Skills ‘Top-Down’ Ones?

In contrast with the bottom-up models of reading, one of the best known of the top-down models actually includes the word ‘guessing.’ According to David Crystal (1976), ‘psycholinguistics’ is the study of language variation in relation to thinking and to other psychological processes within the individual. Kenneth Goodman (1967), however, has given the term a more radical connotation in literacy research. His ‘psycholinguistic guessing game’ model of reading assumes a close and direct parallel between the learning of spoken and written language. He asserts that learning to read is as natural as learning to speak. He suggests that the basis of fluent reading is not word recognition but hypothesis forming about ‘the meanings that lie ahead.’ He argues that reading involves the selection of maximally productive cues to confirm or deny these meanings.

In a later paper, Goodman (1985) stresses the tentative information processing that is involved in reading. He argues that reading is meaning seeking, selective and constructive. Inference and prediction are central. Readers use the least amount of available text information necessary in relation to their existing linguistic and conceptual schemata to get to meaning.
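The hypothesis-testing cycle that Goodman describes can be caricatured in a few lines. The sketch below is schematic only: the prediction table and the one-letter cue-sampling rule are placeholders standing in for the reader’s linguistic and conceptual schemata, not anything Goodman himself specified.

```python
# A schematic rendering of the 'psycholinguistic guessing game': predict the
# word ahead, sample a minimal visual cue to confirm, revise if it fails.
# The prediction table and one-letter cue are invented stand-ins.

PREDICTIONS = {"Jack and Jill went up the": "hill"}

def predict_next(context_words):
    """Guess the next word from syntactic and semantic context (stub)."""
    return PREDICTIONS.get(" ".join(context_words))

def sample_cue(word, n=1):
    """Sample a minimal visual cue: here, just the first n letters."""
    return word[:n]

def read_word(context_words, word_on_page):
    guess = predict_next(context_words)
    if guess and sample_cue(guess) == sample_cue(word_on_page):
        return guess     # hypothesis confirmed from the least text information
    return word_on_page  # hypothesis fails: fall back on fuller visual analysis

print(read_word("Jack and Jill went up the".split(), "hill"))  # -> 'hill'
```

The criticisms rehearsed below bear directly on this sketch: nothing in the model itself says how large the sampled cue must be, or what the reader does when the guess and the cue conflict.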

Frank Smith’s (1971; 1973) model of reading drew heavily on the seminal work of Noam Chomsky (1957). Chomsky had shown how human language acquisition could not be explained by a linear model. Children did not just learn language by imitation or by connecting together various bits of language (e.g. sounds, words or phrases). Using their inherited capacity for language learning (a kind of ‘language acquisition device’), children all over the world seemed to learn to speak by a process of hypothesis testing and discovery, through authentic interaction with others. Smith argued that precisely the same kind of argument may be applied to reading. A child is equipped with every skill that he or she needs in order to read and to learn to read. Given adequate and motivating experience with meaningful text, learning to read should be as natural as learning to talk.

As with Goodman’s model, there is a strong emphasis on the ‘non-visual’ information that the reader brings to the text. Reading comprises a process of ‘reducing uncertainty’ as hypotheses about the structure and meaning of the text are mediated by sentence, word and letter identification if they are needed. Smith argues that readers normally can and do identify meaning without, or ahead of, the identification of individual words. He further argues that skilful readers do not attend to individual words of text, that they do not process individual letters, and that spelling-sound translations are irrelevant for readers.

Some Criticisms of Top-Down Models

As with the bottom-up theories, there have been recurrent criticisms of the top-down ones. Eleanor Gibson and Harry Levin (1975) point out that Goodman’s model of reading does not explain how the reader knows when to confirm guesses and where to look to do so. Philip Gough (1981) has consistently challenged how predictable written language is. His studies suggest that, at most, we can only predict one word in four when all the succeeding text is covered. Furthermore, the words that are easiest to predict are often the words that are most easily recognized. When skilled adult readers read a text with content words missing, prediction rates may fall as low as 10% (Gough, 1983; see also Gough and Wren, 1999).
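Gough’s figures derive from cloze-style prediction tasks. The fragment below is a plausible reconstruction of that kind of measure, not his exact procedure: each word is covered in turn, a guess is elicited from the preceding words alone, and the proportion of exact hits is scored.

```python
# A rough sketch of a cloze-style predictability measure (a reconstruction
# for illustration, not Gough's actual materials or scoring).

def cloze_accuracy(words, guesser):
    """Proportion of words predicted exactly from the preceding context."""
    hits = sum(1 for i in range(1, len(words)) if guesser(words[:i]) == words[i])
    return hits / (len(words) - 1)

naive = lambda context: "the"  # a deliberately crude guesser, for demonstration
print(cloze_accuracy("the cat sat on the mat".split(), naive))  # -> 0.2
```

Even a far better guesser than this one would, on Gough’s evidence, rarely exceed one word in four, and far fewer for content words.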

One of the most searching critiques of Goodman’s theories has been provided by Jessie Reid (1993). She notes that Goodman oscillates between using the terms ‘predicting,’ ‘anticipating,’ ‘expecting’ and ‘guessing.’ These terms, though closely related, are not synonymous. Reid asks a number of questions: how does the reader know which cues will be most productive? Are the criteria fixed for any given word? If the predicted word is not on the page, what then? Can readers sample for the most productive cues in the word they did not expect? Most fundamentally, is the guessing game model optimally efficient?

One of the most detailed attacks on Smith’s theories was made by Marilyn Jager Adams in 1991. Adams acknowledges that Smith’s argument was, in some respects, insightful: he was correct in arguing that skilful reading does not proceed on the basis of identifying one letter or word at a time. But extending the ideas about the language acquisition device, the details of which were only speculative, was, according to Adams, an enormous and gratuitous leap.

Adams examines several of Smith’s assertions in the light of recent psychological research and shows how misleading they can be, including those mentioned earlier in this chapter. In examining the assertion that skilful readers do not attend to individual words of text, Adams refers to research involving computer-mediated eye movement technology. She cites evidence that fluent readers do skip a few words, mostly short function words, but that most words are processed either in eye fixations or in the peripheral vision of the saccades of eye movements (Just and Carpenter, 1987; see also Carver, 1990; Rayner, 1992). Adams feels that Smith is right in warning against an over-concentration on individual words, but wrong to imply that readers should not process them. Skilful readers have learned to process words and spellings very quickly but such automaticity comes from having read words, not from skipping them.

Adams also considers the assertion that skilful readers don’t process individual letters. Adams acknowledges that skilful readers do not look or feel as if they are processing individual letters of text as they read, but research has repeatedly shown that they do (McConkie and Zola, 1981; Rayner and Pollatsek, 1989). Individual letters and spelling patterns are processed interdependently as the text is perceived and comprehended, in a process of ‘parallel processing’ (McClelland and Rumelhart, 1986; Rumelhart and McClelland, 1986). According to Adams, to deny letter identification in reading is like saying that there is no such thing as a grain of sand. Skilled readers can process letters so quickly because of visual knowledge of words. This knowledge is based on their memories of the sequences of letters, which make up words. The more we read, the more this knowledge is reinforced and enriched.
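A toy example can convey the flavour of such parallel processing. The sketch below is in the spirit of the McClelland and Rumelhart account rather than a faithful implementation of it: the word list and the additive scoring rule are invented, but every letter position contributes simultaneously to the activation of every compatible word, so an ambiguous letter is resolved by all the others at once.

```python
# A deliberately small sketch of letters and words being processed in
# parallel. The toy lexicon and additive scoring are invented; the real
# interactive activation model uses continuous excitation and inhibition.

WORDS = ["went", "want", "west", "wind"]

def settle(percepts):
    """percepts: one set of candidate letters per position, e.g. [{'w'}, {'e','o'}, ...]."""
    scores = {}
    for word in WORDS:
        if len(word) != len(percepts):
            continue
        # every position contributes at once to the word's activation
        scores[word] = sum(letter in cands for letter, cands in zip(word, percepts))
    return max(scores, key=scores.get)

# A second letter ambiguous between 'e' and 'o' is resolved by the other letters:
print(settle([{"w"}, {"e", "o"}, {"n"}, {"t"}]))  # -> 'went'
```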

Unlike Gough, and LaBerge and Samuels, however, the most influential top-down theorists made no retraction. In his sardonically titled Phonics Phacts, Goodman (1993) reasserts the ‘natural’ view of learning to read. In responding to the many research studies reviewed in Adams’ Beginning to Read (1990) which, he acknowledges, sought to undermine his arguments, he replies that his is a ‘real-world’ (as opposed to an instructional and laboratory studies) view of reading. However, less convincingly, he relies for evidence on experiments with short, decontextualized and disfigured texts (i.e. not from the ‘real world’) in support of his case. Frank Smith similarly showed little willingness to accept any limitations in his theories, producing a fifth edition of his book Understanding Reading (1994) without any substantial changes to his original theories.

On both sides of the Atlantic, however, there is a clear consensus that the most influential top-down theories have also been overtaken by evidence. In three recent independent reviews of research commissioned by central government bodies in the UK, Roger Beard (1999), Jane Hurry (2000) and Colin Harrison (2002) all reach a similar conclusion. Hurry concludes as follows: ‘It is now very clear that Goodman and Smith were wrong in thinking that skilled readers pay so little attention to the details of print. Research on the reading process in the 1980s produced clear evidence that skilled readers attend closely to letters and words and in fact that it is the less skilled readers who rely more heavily on contextual clues to support their reading’ (2000: 9). Colin Harrison also spells out the implications of recent research in some detail:

What we now believe, on the basis of eye-movement research with equipment far more accurate and faster than used to be available, is that fluent readers fixate nearly every word as they read, and that, far from simply sampling letters on the page in a partial and semi-random fashion, and looking closely at the letters in a word when it seems necessary, a good reader processes just about every letter in every word, very rapidly and very accurately. This is almost the opposite of the model of a good reader that some of us read about in the works of Ken Goodman (1970; 1967) or Frank Smith (1971). (2002: 18)

Are the Key Skills Interactive Ones?

One of the most influential publications in support of an ‘interactive-compensatory’ model of reading was written by Keith Stanovich (1980). Drawing on over 180 sources, Stanovich argues that fluent reading is an interactive process in which information is used from several knowledge sources simultaneously (letter recognition, letter-sound relationships, vocabulary, knowledge of syntax and meaning). Various component subskills of reading can cooperate in a compensatory manner. For example, higher level processes can compensate for deficiencies in lower level processes: the reader with poor word recognition skills may actually be prone to a greater reliance on contextual factors because these provide additional sources of information.

Indeed, in contrast to the top-down theories, Stanovich shows that good readers do not use context cues more than poor readers do. Rather, it is weaknesses in word recognition that lead to relatively greater use of contextual cues as reading proficiency of continuous text develops. Better readers may appear to use context cues more effectively in cloze procedure activities when words are artificially deleted and the surrounding text is visible. But what is at issue here is not the presence of contextual knowledge in good readers, but their use of and reliance upon it in normal reading of continuous text (good readers may be more sensitive to context, and yet less dependent upon it, because information is more easily available to them from other sources). Stanovich draws on dozens of studies to show that fluent readers are distinguished by rapid word recognition and effective comprehension strategies.
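The compensatory trade-off can be expressed in a simple weighted sketch. The weights and evidence scores below are invented for illustration; what the fragment shows is only the structural point, that when the bottom-up word recognition signal is weak, contextual evidence carries relatively more of the burden.

```python
# A highly simplified sketch of the interactive-compensatory idea. The
# weighting scheme and all numbers are invented for demonstration.

def identify(candidates, recognition_skill):
    """candidates: {word: (visual_evidence, contextual_fit)}, each in [0, 1]."""
    w_visual = recognition_skill         # skilled readers: fast, reliable visual evidence
    w_context = 1.0 - recognition_skill  # weaker decoders lean on context instead
    scores = {w: w_visual * vis + w_context * ctx
              for w, (vis, ctx) in candidates.items()}
    return max(scores, key=scores.get)

# 'The jockey rode the h...': context favours 'horse'; the letters favour 'house'.
candidates = {"horse": (0.4, 0.9), "house": (0.6, 0.2)}
print(identify(candidates, recognition_skill=0.9))  # skilled reader -> 'house'
print(identify(candidates, recognition_skill=0.2))  # poor decoder  -> 'horse'
```

Read this way, the model predicts exactly what Stanovich reports: context use is not the mark of the good reader but the refuge of the reader whose visual evidence is unreliable.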

In the UK a similarly extensive research review has been brought together by Jane Oakhill and Alan Garnham (1988; see also Oakhill, 1993). Like Stanovich, they question top-down theories in the light of the relative speeds of the processes involved. They show how, in fluent reading, the use of contextual cues to help identify a word is usually unnecessary because words are recognized from visual information so quickly.

Perfetti (1995: 108) notes that research findings suggest that the role of contextual cues in word recognition and in comprehension is radically different from that assumed by top-down models: ‘the hallmark of skilled reading is fast context-free word identification combined with a rich context-dependent text understanding’ [author’s original italics].

Some Criticisms of the Interactive Model

Perhaps not surprisingly, the main criticisms of the interactive-compensatory model have come from the bottom-up and top-down theorists. Gough, for example, is concerned that:

It is easy to create a model which is ‘right’; all you need do is make one interactive or transactional enough such that everything in reading influences everything else. The result will be ‘right’ because it will be impervious to disproof; it will yield no falsifiable predictions. But my view has always been that such models are not really right, they are simply empty, for a model which cannot be disproved is a model without content. (1985: 687)

Smith is similarly unconvinced. In the third edition of Understanding Reading, Smith (1988: 193) suggests that no top-down theorist would want to claim that reading is not an interaction with the text. He goes on to warn, though, that many interaction theories tend to be ‘bottom-up in disguise; they sound more liberal but they still tend to give the basic power to the print.’

Nevertheless, the interactive-compensatory model does seem to be generally accepted by many in literacy education as one of the most valid ways of representing the key skills of reading. Stanovich (2000: 7) notes that his 1980 paper has received over 350 citations. Harrison (2002) reports that the paper is widely regarded as one of the most important of recent years and that it remains, broadly speaking, uncontested (for discussion of some of the implications for policy and practice, see Perfetti, 1995; Stanovich and Stanovich, 1995; Pressley, 1998).

Stanovich himself, in reviewing the powerful impact of his work, suggests that ‘at the gross level, the results have stood the test of time – as have the general theoretical analyses’ (2000: 8). He goes on to identify two studies that have advanced the theory. First, Nation and Snowling (1998) found that individuals with poor comprehension may be at risk for poor development in word recognition because they lack the language prediction skills that are needed to add contextual information to partial phonological information. Stanovich suggests that such individuals may reflect another form of the ‘Matthew effect’ that he himself has proposed in earlier publications: ‘the facilitation of further learning by a previously existing knowledge base that is rich and elaborated’ (2000: 185; see also Stanovich, 1986). Secondly, Tunmer and Chapman (1998: 60) report that good decoders do not need to rely on context so often because of their superior ability to recognize words. When such readers do use context they are much more likely to identify unfamiliar words than are less skilled readers (see also Share, 1995). Stanovich (2000: 11) suggests that this work elaborates and builds upon the interactive-compensatory model in an attempt to explain more of the variance in reading ability.

One Remaining Issue

This chapter has brought together a range of sources on a complex, and sometimes controversial, topic. There are signs that something of a consensus has now been reached among the majority of researchers in the field. One remaining issue, however, is how an interactive-compensatory model can be effectively summarized, perhaps diagrammatically, for wider dissemination among practitioners. For some years, a kind of overlapping circles diagram has been used, apparently being particularly promoted in the work of Routman (1988) and others. Adams (1998) describes the way in which this diagram has been adopted in teacher education and is concerned that it may sometimes be used to underplay the role of phonics, as ‘grapho-phonic cues’ are tucked away at the bottom of the model, perhaps suggesting that such cues are a last resort in reading. Instead, in her own publications, Adams prefers a triangular model, based on the work of McClelland and Rumelhart (1986), in which the role of the ‘phonological processor’ in the decoding of unfamiliar words is shown by a phonological loop. The analytic and iconic aspects of this issue may be worth considering further by readers of this chapter.

Consolidating the Learning of Key Skills

In recent years, research has examined in greater depth the particular importance of several of these key skills. Three in particular have received sustained attention: fluency, comprehension and phonics. While all three are in some ways assumed in the models discussed in this chapter, recent studies suggest that each of them can be positively developed by specific teaching approaches. The significance of all three has been recently confirmed by the report of the National Reading Panel (2000) in the United States of America.

As much of the Panel’s work was based on syntheses of existing research, reference to individual studies may be misleading; the interested reader is referred to the Panel’s full report and discussion. Improvements in reading fluency can be effected especially by guided oral reading, combined with feedback and guidance. These improvements have resulted in increases in overall achievement, affecting word recognition and comprehension. Independent silent reading is also likely to play a positive role, although the research base is less well established (2000: Section 3).

Similarly, reading comprehension can be promoted by a variety of techniques, including those that focus on vocabulary, on text comprehension instruction, and on teacher preparation and comprehension strategies instruction (2000: Section 4).

The National Reading Panel report also concludes that systematic phonics instruction makes a bigger contribution to children’s growth in reading than alternative programmes that provide unsystematic or no phonics teaching. Phonics teaching was found to be effective when taught individually, in small groups, and as classes. No one approach differed significantly from the others in this respect. On the basis of its research review, the Panel concludes that systematic phonics teaching produces the biggest impact on growth in reading before children learn to read independently. To be effective, systematic phonics teaching introduced in the kindergarten age range must be appropriately designed for learners and must begin with ‘foundational’ knowledge involving alphabet letters and phonemic awareness.

The research perspective adopted by the NRP has been criticized by Cunningham (2001), who argues that the Panel adopted an excessively positivist philosophy of science. At the same time, Cunningham generally endorses the Panel’s conclusions on all three aspects referred to above. In relation to phonics, for example, he argues that: ‘the preponderance of logic and evidence is against those who contend that it is all right to provide young school children with reading instruction containing little or no phonics, with any phonics included being taught unsystematically’ (2001: 332). On the role of guided oral reading in developing fluency, Cunningham concludes that the findings of the Panel seem likely to hold up over time in the real world. Furthermore, the section on comprehension he describes as more interesting and potentially valuable than the other parts of the Panel’s report because it does not adhere too closely to a priori methodological standards.

New Media: New Skills?

With the advent of electronic texts, new questions are raised about the nature of these texts and the skills that they require. According to David Reinking (1998: xxiv), hypertext has become the quintessential example of how printed and electronic texts differ. The former are generally linear and hierarchical; the latter are fluid, a set of different potential texts awaiting realization. Hypertext elements are verbal or graphic units and the links that join them. Hypertexts are multilinear, constructed so that the textual elements can be read in any order (Bolter, 1991).

Such distinctions suggest that the key skills in reading hypertext will need to be underpinned by an understanding of the availability of different routes through the material. These ‘navigational’ skills (Kamil and Lane, 1998) have been further delineated by Landow (1992) as an ability to deal with ‘departures’ (understanding where a particular link may take a reader) and ‘arrivals’ (the evaluation of a new textual location).
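These distinctions can be pictured with a minimal data structure. In the sketch below the node names, texts and links are all invented; it shows only the structural point that a hypertext is a set of textual elements joined by links, so that many reading orders are possible and each move involves a ‘departure’ and an ‘arrival.’

```python
# A minimal picture of hypertext as a linked graph of textual elements.
# The nodes, texts and links are invented for illustration.

HYPERTEXT = {
    "home":    {"text": "Welcome to the newsletter...", "links": ["history", "science"]},
    "history": {"text": "Reading research began...",    "links": ["home", "science"]},
    "science": {"text": "Eye-movement studies show...", "links": ["home"]},
}

def depart(node, choice):
    """A 'departure': anticipate where a link will take the reader, then follow it."""
    return HYPERTEXT[node]["links"][choice]

def arrive(node):
    """An 'arrival': evaluate the new textual location before reading on."""
    return (node, HYPERTEXT[node]["text"][:25])

# One of many possible reading orders through the same material:
path = ["home"]
path.append(depart(path[-1], 1))  # home -> science
path.append(depart(path[-1], 0))  # science -> home
print([arrive(node) for node in path])
```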

As with much electronic text, another key requirement is the ability to evaluate and make effective complementary use of visual images. This realization points to the new relations between word and image that may evolve over time in electronic media such as hypermedia and interactive television. Both have very different cultural implications from the print-based verbal culture to which educational institutions are still adapted (Bolter, 1998). Nevertheless, it is salutary to note that explorations of these issues are still largely located in printed texts, often with little visual supporting material (see Reinking, 1998, for a discussion of this apparent paradox).

There is less agreement about the particular reading skills required by hypertext. Horney and Anderson-Inman (1994) have categorized hypertext reading as involving skimming, checking, reading, responding, studying and reviewing, although these might also be described as reading strategies. However, an underlying issue, according to Kamil et al. (2000), is that when a reader encounters a hypertext link, there is often no way of predicting whether the information to be acquired is useful. This may explain why Gillingham (1993), even when working with adults, found that hypertext may interfere with comprehension when the goal is to answer specific questions.

A recent study has underlined the complex issues that need to be addressed in understanding the key skills of reading electronic texts. Pang and Kamil (2002) report a study of 18 third-grade students. The sample included good and poor readers; all were experienced with computers, and about half were experienced with hypertext. The students each read four passages selected from an Internet-based children’s newsletter, with hypertext links. Hard copy versions were also provided. The majority of the students expressed a preference for reading the hard copy versions of the texts. A majority also preferred reading the whole text first before exploring any hyperlinks.

In general, the advent of hypertext seems not to alter the significance of the key reading skills discussed in this chapter, but instead to raise new questions about their strategic use in the new information age contexts created by electronic media.

Conclusion

If something of a consensus about the key skills of reading has now been reached, this consensus can be used as a kind of conceptual infrastructure for policy and practice decisions. Although the chapter has inevitably drawn primarily on experimental research, the infrastructure can be accommodated and built upon by others who use different theoretical perspectives (see Oakhill and Beard, 1999). Moreover, other chapters in this handbook are testimony to the fact that becoming a successful reader involves the development of key skills and involvement in many social processes. These processes help learners, for example, to understand what reading is for and what it does; to develop positive attitudes towards reading; to link the acts of reading and writing; to have access to a range of rich and interesting texts; and to be essentially concerned with making meaning.

Definitions of literacy are changing as new kinds of communication skill evolve and are better understood. This chapter has shown how the study of reading processes has also evolved, leading to a more informed understanding of what goes on when we interact with written language.