Sharon Murphy. Handbook of Early Childhood Literacy. Editors: Nigel Hall, Joanne Larson, Jackie Marsh. Sage Publications, 2003.
The vitality of language lies in its ability to limn the actual, imagined and possible lives of its speakers, readers, and writers. It arcs toward the place where meaning may lie. (Toni Morrison, as cited in Himley, 2000: 131)
Any review of research on literacy assessment creates a double ‘language moment.’ This doubling arises because texts about literacy assessment are themselves being read and interpreted to investigate how reading and writing are assessed, and the manner in which reading and writing are assessed in any one instance in these texts tells us something about what the author-researchers thought about literacy. Each element—each move and assertion made within the literacy assessment texts themselves, and each inference a reader makes about what these literacy assessment texts represent—‘arcs towards where meaning [about literacy and assessment] may lie’ and, in doing so, reveals as much about the meaning makers as it does about literacy.
In this review, I intend to work with (or limn) the double language moment to make more apparent how the field of literacy assessment in early childhood education works. Specifically, I will situate my analysis in the idea that assessment, like language, is value laden and that values, in large part, determine what one sees as either literacy or assessment. Drawing upon this theoretical framework, I will go on to discuss, in similar terms, research relating to three literacy assessment archetypes in early childhood education and will then suggest future directions for research.
Values, Language, and Assessment
Varied sources highlight the fact that language is undergirded by values. Child developmentalists who study language recognize that a system of shared meaning permeates language. For instance, Katherine Nelson, in exploring the meaning of meaning in language, posits that ‘the greater the degree to which an individual’s semantic, conceptual, and script systems correspond to the conventional representations of the cultural (or subcultural) group to which the speakers belong, the greater the likelihood of establishing a high degree of shared meaning on any given occasion’ (1985: 10). In essence, she is saying that if you share discourse strategies and values with someone, then your likelihood of successfully communicating with them is heightened because the way you interpret the world is similar.
Bakhtin’s (1981) argument that the meaning of discourse is an artefact of social history sets the idea of a shared system of meaning and values within a time period lengthier than Nelson’s single interchange and highlights the importance of the social practices that become part of language. Linguists such as Simpson explore ‘point of view’ in texts by considering ‘the value systems and sets of beliefs which reside in texts; in other words, ideology in language’ (1993: 5). Critical theorists like Fairclough (1989) argue that shared values run so deep that they appear naturalized; that is, people are no longer aware of the ways in which shared values affect what they say, how they say it, and in what circumstances they say it. In essence, all of these perspectives illustrate that language is an interpretive value framework for encountering the world.
The language of assessment, as a subset of language itself, is open to the same varied influences as language is. Recognizing this foregrounds what is sometimes an unspoken assumption in discourse on assessment: the language of assessment is a language of values and, as such, it is an interpretive framework for viewing not only what is assessed but the nature of the assessment itself. One simple way to illustrate this point is to consider word choices in describing assessment options. For example, the words ‘formal’ and ‘informal’ are used within the field of literacy assessment to describe particular kinds of assessments (see, for example, Farr and Beck, 1991; Goodman, 1991). These descriptors, whether intended to or not, immediately attach connotations of worth or importance to the assessment. Formal assessments, like formal dress affairs, are considered by many to be performative rather than substantive and, some would argue, mask one’s true self. Informal assessments, like casual clothes, may be instruments of day-to-day activity but may be considered by some to be not quite refined enough. Of course, other interpretations are possible, as are other points of comparison between formal and informal assessment; my point is not to perform a comparative analysis but to illustrate that assumptions underlie the language of assessment and that these assumptions are value laden.
In addition to values being a part of assessment simply because the language of assessment is a subset of language, it is important to recognize that within the field of assessment there is considerable explicit theorization about values. At the heart of assessment is validity, a term which shares the same root meaning as value (Kaplan, 1964). If psychometric theorizing about validity is drawn upon, evidence for test interpretation and use is linked with the appropriateness of the consequences of that interpretation and use (Messick, 1985), and the latter is explicitly about values. For example, even though a test (which is the typical focus of psychometric writing) can be demonstrated to assess a domain of knowledge well, if the consequences of the test’s use are more than the test should bear, then the test should be considered invalid for that use. However, Messick (1985) readily acknowledges that there is considerable slippage between theoretical discourse about assessment and the practice of assessment. If values are an indicator of the ‘worthwhileness of implementation,’ then this slippage suggests that validity theorization is itself an arena for the display of values.
Assuming then that assessment is and inevitably will continue to be value laden, what are the consequences of this assumption for the field of assessment itself and for literacy assessment in early childhood education in particular? Operationally, I believe that values in large part determine how we ‘find’ (or see and evaluate) literacy. If our values predispose us to conceptualize literacy in particular ways, then we will tend to look for evidence of our conceptualizations while missing evidence for other types of conceptualizations. I will set the framework for a consideration of how values work in three literacy assessment archetypes by briefly considering the historical path that literacy assessment has taken in early childhood education.
Setting Archetypes against a Historical Perspective
The typical point of departure for most reviews of research on literacy assessment in early childhood education is a focus on the readiness movement (see, for example, reviews by Johnston and Rogers, 2001; Stallman and Pearson, 1990). Two strands interlace to form the substance of literacy assessment under this rubric:
- The testing movement, which arose out of the desire to create a more ‘scientific’ approach to assessment (Willis, 1991).
- The developmentalist perspective, with its underlying biologically deterministic orientation (Stallman and Pearson, 1990).
The explanations for the move towards a more ‘scientific’ approach to assessment typically include the desire of what is now the discipline of psychology to become just that, a discipline, with its own associated scientific methods (Gardner, 1984), and the desire to stem criticisms of bias (in the forms of poorly constructed examinations or the influence peddling of high-status society members) that plagued the evaluation of performance in schools (Brooks, 1920; 1921). One study commonly associated with the onset of readiness testing was conducted in 1931 by Morphett and Washburne (1983). This report, which correlated reading ability and intelligence, recommended that, so as to reduce the possibility of failure, reading instruction should begin only when children are ready, which in this research meant a mental age of six and a half. The readiness construct remained a relatively intransigent one for three-quarters of the twentieth century, embedded as it was not only in most reading tests (Stallman and Pearson, 1990) but also in the general textbooks developed for pre-service early childhood educators (e.g. Seefeldt, 1980).
Even though accounts reveal that there was debate over whether readiness was biologically or environmentally determined, these accounts also indicate that the concept itself held sway (e.g. Banton-Smith, 1951-2). Indeed, typically little is posited to have changed in literacy assessment until the mid-1960s, when theories of literacy learning began to be influenced by fields such as cognitive psychology, linguistics, and anthropology. As these newer theories of literacy became refined and articulated, so too did a critique of the substance of literacy assessment. Alternative frameworks and assessment methodologies for literacy assessment were proposed as part of this ‘new literacy’ (Willinsky, 1990) movement. A literacy theorization Zeitgeist appeared to be at work during this period, with similar theoretical moves afoot globally in Argentina (e.g. Ferreiro and Teberosky, 1982), the United States (e.g. Calkins, 1983; Gentry, 1987; Goodman and Burke, 1972; Goodman and Goodman, 1980; Teale and Sulzby, 1986), Australia (e.g. Holdaway, 1979), New Zealand (e.g. Clay, 1972), Israel (e.g. Tolchinsky Landsman, 1990), Italy (e.g. Pontecorvo and Zucchermaglio, 1990), Brazil (e.g. Grossi, 1990), and Britain (e.g. Meek, 1982; 1988; Rosen and Rosen, 1973). Underlying these different bodies of work were the following premises:
- The child is constantly hypothesizing about the world.
- The tasks one uses to describe/assess literacy can conceal as much as they can reveal.
- Tasks that are situated in activities of the everyday are more likely to yield fuller demonstrations of children’s early literacy.
- The relationship between teaching and learning is not always a causal one; children may learn much that they weren’t taught.
Out of these assumptions grew assessment initiatives that related more to the literacy activities of the everyday in both format and substance. These initiatives began to be developed in the 1970s, proliferated in the 1980s and early 1990s, and have reached a relative stability of sorts since that time.
This thumbnail sketch of the history of literacy assessment in early childhood is itself a bit misleading. Indeed, the thumbnail sketch is an example of what occurs when values and assumptions are left unexamined. For instance, absent from most commentaries on the history of early childhood literacy assessment is any reference to the child study movement which began in the nineteenth century and ebbed and flowed in popularity across the twentieth century (Cohen and Stern, 1978). In Britain, in the 1930s, people like Susan Isaacs (1966) advocated the value of qualitative record keeping based on observation. Isaacs’ sample observational notes include lengthy descriptions and summarizations of the literacy activities of children.
Many interpretations can be brought to bear to explain the absence of the child study movement in reviews of research on early childhood literacy assessment:
- The child study movement did not focus specifically on the theorization of literacy but was more interested in the whole child (although this argument is not persuasive: many standardized assessments of reading used across the twentieth century had weak to non-existent grounding in literacy theories, yet they are referenced).
- Psychically, the readiness concept and its testing were so dominant in the field of early childhood assessment that other assessment activities were but footnotes to the field.
- Disciplinary boundaries were such that early childhood and literacy educators did not interact.
Each of these interpretations is not only a statement about values but also an illustration of how the literacy that is ‘found’ flows out of the interpretation itself. So, for example, if literacy theory was not valued in the child study movement, what was valued instead, and what would literacy look like as a result of such a perspective? Similarly, what view of literacy is found if reading readiness tests are the lens? Rather than answer such questions in relation to the history of literacy assessment, I will explore similar questions by drawing upon research relating to three early childhood education literacy assessment archetypes.
Literacy Assessment Archetypes in Early Childhood Education
Three archetypical methods for data collection exist in the discourse on literacy assessment in early childhood education: standardized or large-scale group tests, observation and documentary methods, and responsive listening methods. Standardized or large-scale group methods include both commercially available tests and state-run tests. Observation and documentation methods include a variety of observational techniques accompanied by some form of record keeping, and the collection of documents generated by children that can illustrate literacy knowledge. Responsive listening involves observation and documentation, but it is dynamic in relation to the child’s learning and involves a legitimization of the child’s point of view (Gandini, as cited in Rinaldi, 1998). I have deliberately avoided categorizations such as formal and informal, or external (to the classroom—a reference to accountability) and internal (to the classroom) (Paris et al., 2001), because of the significant ideological weight these terms carry for how one finds literacy and in what circumstances.
How Standardized Large-Scale Group Methods ‘Find’ Literacy
Critique abounds with respect to the use of standardized large-scale group methods of assessment (e.g. Murphy, 1997; Murphy et al., 1998; Sacks, 1999). Professional associations such as the American Educational Research Association (2000) and the International Reading Association (1999) have been so concerned that they have issued papers on this topic. Yet there appear to be national (Hoffman et al., 2001) and global moves afoot (e.g. in Japan: Johnston and Rogers, 2001; in Canada: Murphy, 2001; in England: DfES, 2002) to increase the use of standardized testing. These moves, in part, are being fed by societal reform occurring as a consequence of economic and media globalization (Barlow and Robertson, 1994). Countries that at one time were bemused by the newspaper publication of school-by-school standardized test results in the United States now find their own newspapers vying for eye-catching headlines and politicians trying to trump each other with test scores (Murphy, 2001). Given such a context, one would imagine that the way in which standardized large-scale group methods assess literacy must be compelling.
The rhetorical moves (and the values underlying them) associated with standardized testing account for some of the longevity and popularity of this method. Standardized tests are usually referred to as measures (e.g. Farr and Carey, 1986), a term suggesting accuracy and precision. The reporting method is usually numerical, which is further suggestive of exactness. Systematic analyses of standardized tests designed for younger readers reveal an emphasis on micro-text elements such as words and word parts (Murphy et al., 1998; Stallman and Pearson, 1990), inherently turning literacy into sets of smaller and smaller components. Indeed, Stallman and Pearson report that for most of the readiness tests they reviewed, ‘the clear emphasis (almost half of the subtests) is on sound-symbol knowledge’ (1990: 36).
The content of tests, another index of values, determines what is named as literate behaviour. Test content is manifested by the architecture of the test (its format) and the substance of the test (the material covered). Stallman and Pearson (1990) report that the overarching architectural motif for readiness tests is that of multiple-choice fill-in-the-bubble format, with an emphasis on the recognition rather than the production of elements. Others have found similar patterns (see Murphy et al., 1998). This architectural structuring of standardized tests has many consequences, two of which are particularly significant for the present discussion:
- The tests have embedded in them such relatively narrow a priori assumptions about what evidence of literacy is and how it must be demonstrated that the literacy knowledge of some children is lost to the architecture (e.g. by design, partial knowledge is not given credit on these tests).
- The tests so constrain the ways in which literacy is manifested that they make it relatively easy to create literacy programmes which mimic the tests and, unsurprisingly, the children engaged in such programmes make significant gains on the tests.
The substance of standardized literacy tests can be addressed at many levels. I have analysed such tests to determine whether they hold up to good psychometric design principles and found many problems with the tests (Murphy et al., 1998). However, analyses such as mine are often counteracted with the comment that the analysis offered is that of an adult reader and that, somehow, a child who has different values would not fall prey to the traps I or another adult might see. Until recently, it was difficult to counteract this argument with evidence. Only a small sampling of work (e.g. Fillmore, 1982) exists in which children were used as informants in the critical analysis of standardized test items. More recently, however, Hill and Larsen (2000) have included extensive item-by-item responses from children in a comprehensive analysis of a pilot-test edition of the Gates-MacGinitie test for eight-year-olds. Using linguistic, genre, and discourse approaches to the analysis of test items (which could be published in full because the items were pilot items), Hill and Larsen illustrate how the texts within tests operate as unusual text forms that are systematically biased against certain groups. They take multiple routes to uncovering what children know and what they might believe a text says and, in doing so, reveal the questionable validity of the texts in the Gates-MacGinitie: right answers are achieved for the wrong reasons, wrong answers (when explained) have an inherent and justifiable logic, and some of the answers that the test makers indicate are correct make no sense at all.
What then, is the literacy found by this assessment method? In short, the literacy found is a facsimile of sorts. Superficially it may appear to be not only literacy, but a precise estimate of literacy knowledge; however, when this literacy is probed, as in the case of the Hill and Larsen (2000) research, it disintegrates, and one is left with fragments of possibility but not much else.
How Observation and Documentary Methods ‘Find’ Literacy
While standardized tests ‘make up people’ (Hacking, 1990) by using language and numbers that imply a precision inconsistent with the nature of the phenomenon under study, observation and documentary methods make up people by interpreting their surface actions and the residue they leave behind in the form of literacy artefacts. Observation and documentation methods include selected individualized standardized instruments, portfolio assessments, and observation-based schemes. I include selected individualized standardized instruments here because, although these instruments have made some attempt at standardization in the data-collection tasks that are the focus of the interaction between an adult and a child around a literacy activity, typically the tasks are similar to regular classroom activities.
As the field of emergent literacy (Teale and Sulzby, 1986) came into being, so too did a wealth of assessment tools drawing from that theory. Typically, these tools involve: (1) anecdotal records made by the teacher based on interactions with the child, (2) the collection of literacy artefacts that stand as tokens of literacy development, and (3) observations based on interventions that take the form of tasks typical of classroom activity, such as requesting that a child read aloud. Examples of popular tools include The Primary Language Record (Barrs et al., 1988; Barrs, 1993), An Observation Survey of Early Literacy Achievement (Clay, 1993) or its earlier variants (e.g. Clay, 1972), and Reading Miscue Inventory: Alternative Procedures (Goodman et al., 1987) or its earlier variants (e.g. Goodman and Burke, 1972). Examples of research, the results of which became the material of checklists (e.g. Rhodes, 1993), include Sulzby’s (1985) work on storytelling, Doake’s (1985) work on reading-like behaviour, Goodman and Altwerger’s (1981) work on print awareness, Gentry’s (1987) work on spelling, and the work of people like Harste et al. (1984) or Ferreiro and Teberosky (1982) on early literacy development. The nature of the activities associated with these tools, or with checklists derived from such studies, is very much in keeping with the types of activities that occurred in the child study movement; however, the interpretation of these ‘kidwatching’ (Goodman, 1978) assessments is informed by theories about literacy learning for young children.
The rhetoric that accompanies these assessment activities is of two sorts: (1) a justification rhetoric, and (2) a descriptive rhetoric based in theories of emergent literacy. The justification rhetoric essentially amounts to an argument for the validity of the activities; it is implicitly an argument against the vacuousness of standardized large-scale group tests. Typical language associated with this argument includes terms such as ‘authentic’ (e.g. Hiebert, 1994) and ‘performance-based’ (e.g. Kapinus et al., 1994). The descriptive rhetoric usually draws upon observational research or ethnographic research as a further justification argument.
The format or architecture of observation and documentation methods is wide-ranging: from the a priori structuring of the tasks, activities, and knowledge deemed to be literate behaviour (as in standardized instruments and protocols) to an absence of any a priori structuring (in which case the structuring of what counts is left up to the observer). It can range from a temporal ordering of indicators of literate behaviour to a description derived out of one or more theories of literacy. In short, much can vary within and among assessment methods of this type.
But the architecture of observation and documentation methods has an additional element that must be considered. That element, the significance of the demonstration of behaviour, is related to two key assumptions built into observation and documentation methods: (1) the knowledge children have about literacy will be demonstrated in observable ways, and (2) the environment is conducive to allowing such demonstrations to happen. When behaviours are not demonstrated in observable ways, the question facing those interpreting observation and documentation methods is whether or not this absence of demonstration should be regarded as indicative of some type of ‘lack’ on the part of the child. When observations are interpreted, or when standardized observation and documentation methods are used, these methods become as much an assessment of the environment as they are of the child. So, for example, if there is no opportunity for the child to demonstrate knowledge of an element of literacy such as quotation marks, because the instructional environment provided no opportunity for their use, then the problem resides with the setting and not with the child. Indeed, using such a rationale, some have advocated theoretically based observation and documentation methods as a means of changing teacher practices (e.g. Searle and Stevenson, 1987).
Like standardized tests, the substance of observation and documentation methods can be addressed at many levels. An in-depth analysis of a large set of observation and documentation methods was conducted by Meisels and Piker (2001). Of the 89 teacher-identified early literacy assessments (for children aged five to nine) that they studied, only 7% were designed to be administered to a group. Therefore, while there is some ‘noise’ in their findings as a result of the inclusion of these group assessments, many of the patterns described by Meisels and Piker (2001) are useful in considering how literacy is thought about in these assessments. These assessments are at once similar to and distinct from their standardized group assessment peers. Similarities include a high incidence of assessment of phonics (61% included some aspect of this element), comprehension (58%), and reading (57%). Unlike in the standardized tests, print awareness was assessed in 47% of the instruments and reading strategies in 42%. Also of note is the fact that writing makes a significant appearance in these assessments: writing conventions are examined in 57% and writing process in 48%. The theoretical values of emergent literacy appear to have had some impact, in methods such as these, upon what is defined as literate behaviour.
So what kind of literacy is found by observation and documentation methods? A simple answer might be, ‘What you see is what you get.’ That is, observation and documentation methods are as much about the behaviour being displayed as they are about the ability of the viewer/interpreters to understand what the display is about.
How Responsive Listening Methods ‘Find’ Literacy
Responsive listening methods are not named as such in the literature on early childhood literacy assessment. Indeed, responsive listening methods, like the methods associated with the child study movement, are methods that are the product of childhood education and study, rather than literacy study. Responsive listening methods emerge out of the Reggio Emilia approach to early childhood education originating in Italy. In this approach, children are viewed as:
hav[ing] their own questions and theories, and they negotiate their theories with others. Our duty is to listen to the children, just as we ask them to listen to one another. Listening means giving value to others, being open to them and what they have to say. Listening legitimizes the other person’s point of view, thereby enriching both listener and speaker. (Rinaldi, 1998: 120)
The effect of a listening stance, and of the expectation that the child is working from a base of knowledge, is dynamic in relation to the child’s learning. As the teachers in Reggio Emilia schools place documentation, in the form of charts, diaries, tapes, and slides, accessibly throughout the environment, their children ‘become even more curious, interested, and confident as they contemplate the meaning of what they have achieved’ (Malaguzzi, 1998: 70).
Even though literacy learning per se has not been a focus of the published reports on the Reggio Emilia approach, many of the observation and documentation approaches within literacy assessment in early childhood education can be, or have been, extended so that they adhere to the principles of this approach. For instance, the retelling component of the Reading Miscue Inventory (Goodman et al., 1987) is quite open to being treated in a responsive listening fashion, whereas the coding of miscues, which is technical and theoretically driven (Murphy, 1999), would be less open to such techniques with young children. Similarly, Hill and Larsen’s (2000) investigations with children as to what formed the basis for their answers to the questions on a multiple-choice standardized test involved a type of listening in which the adults were open to learning what the children thought about standardized testing.
If there is any type of rhetorical move embedded in the responsive listening assessment approach, it is the tendency to focus on uncovering children’s knowledge (Giudici et al., 2001). The assumption is that much knowledge lies waiting to be revealed. In literacy assessment, one example that incorporates some aspects of responsive listening is the research of Ferreiro and Teberosky (1982) in which they demonstrate the great range of literacy knowledge children have at relatively young ages. These researchers, whose method is adapted from Piagetian-based clinical interviews, describe children’s knowledge, not so much as fixed facts, but as sets of hypotheses from which they are working: hypotheses about genre, the role of graphic elements, and what elements of language can be represented in print. However, it should be noted that responsive listening typically does not occur in the format of a clinical interview setting, especially given the literature that indicates the care that must be taken in interviewing young children (e.g. Ginsburg, 1997); rather, for the Reggio Emilia classrooms it occurs in the day-to-day activity of classrooms.
The architecture of the responsive listening approach lies embedded in the structuring of conversation and the response of the adult. The skills of the adult in conversing with or responding to the child yield glimpses into children’s thinking that, depending on those skills, can reveal either the rarely seen or the mundane and predictable. The responsive listening approach isn’t standardized, because it is about response to children, but it can be routinized insofar as there is attentiveness and care given to what are assumed to be knowledgeable children. An example of the extent to which this perspective is taken can be found in a discussion of the application of the work of Vygotsky (1978) to teaching and learning. Malaguzzi, in reflecting on Vygotsky’s concept of the zone of proximal development (the space between the independent ability of the child and what the child can do with the support of others), comments that:
The matter [of the zone of proximal development] is ambiguous. Can one give competence to someone who does not have it? In such a situation [in which the child is about to see what the adult already sees] the adult can and must loan to the children his judgement and knowledge. But it is a loan with a condition, namely, that the child will repay. (1998: 83-4)
The substance of responsive listening very much depends on the adult’s finding the places in conversation where the child’s hypotheses can be let loose and where a lending of judgement might be made. Even then, the responsive listening approach is not about getting the child to demonstrate proficiencies (although it surely does that); it is more about trying to understand how children think about the world and why they might think that way. The approach works away at eroding common assumptions about literacy learning to reveal what was previously unthought-of in children’s knowledge of print conventions in relation to both reading and writing.
So, what kind of literacy is found by the responsive listening method? Again, a simple answer: an uncommon kind, one that resists being defined by adults’ predisposed biases towards conventional literacy representations and interpretations and that allows for different insights to emerge.
Within and beyond Archetypes
For the three archetypes presented, fundamental differences flow from who defines (and values) literacy at the outset. With traditional, standardized approaches, the definitions are established a priori, and a narrow interpretation of what counts as literate behaviour defines both the experience and the result of the assessment. With the observation and documentation methods, a priori assumptions exist, but they allow for some interaction with the environment; ultimately, however, these methods rest upon description and, perhaps, the interpretive lenses of literacy theories. The responsive listening method relies less on a priori conventional understandings of literacy and asks instead what the child’s understanding of literacy is.
For each of the archetypes, values also feed into what any literacy assessment might mean. On the one hand, we are all witness to the considerable rhetoric suggesting that assessments ought to be related to the purposes to which they are to be put (we get the formal and informal assessment language out of that argument). But, fundamentally, if a literacy assessment is just that—a literacy assessment—then the first criterion to which it must be held, above any other, is the quality of its assessment of literacy. All of the archetypes presented suffer in this regard. None provides a complete picture of the literacy knowledge of children, and perhaps no single assessment type can. But some clearly provide better information than others.
As for the standardized test archetype, one point raised by Crossland (1994) bears remembering: the pace at which children’s literacy knowledge changes in early childhood is so rapid as to create an educational Doppler effect. That is, by the time we have gathered information about a child’s literacy and interpreted what it might mean, the child has moved on to new understandings. The annual or semi-annual standardized literacy test seems especially likely to suffer from such Doppler effects.
As for the other archetypes, to some extent the observation/documentation and the responsive listening methods complement each other in together providing the broadest picture. But the picture is only as large as our minds allow it to be. Two examples of how the literacy picture, and the complementary assessments, might be enlarged come to mind.
The first has to do with the theoretical models that inform the interpretation of any observation/documentation. By and large, the models that are used are those with psycholinguistic or sociopsycholinguistic rootings. Newer interpretive frameworks, such as the conceptualization of literacy as social practice, might bring different interpretations to bear on the same observations and documentation. For instance, Watrous and Willett considered the matter of the reading identity of a student; in their words, their method:
looked at the process of group formation and maintenance and at the individual as an aspiring member of the classroom community. It did not highlight John as a failure, as a non-member with little chance of gaining group acceptance. It did not measure his progress against predetermined criteria indicating success or failure as a reader. Instead, it looked at the strategies John employed to establish membership and the degree to which he was successful in his efforts. It also examined the ways the community both hindered and facilitated those efforts. (1994: 85-6)
Such perspectives locate literacy learning, and ultimately the assessment of literacy learning, not simply in the head of the learner but in the community in which it is practised.
Even when the values of literacy are located more strictly in the learner, perspectives are missing from the current interpretive frameworks. For a second example, consider, for a moment, what an assessment method might look like if it emphasized affect. I am not talking here about superficial concepts of ‘fun’ or behaviouristic conceptions of ‘motivation,’ in which external referents are the driving force, but about psychoanalytically driven assessments such as those found in the work of Jones (1996). Such work teases out the complex relations between attachments to texts and the roles such attachments have for readers. Consider, for instance, that for Jones’ (1996) children, books became transitional objects, objects which could be loved and hated but which were treasured in the same way that a favourite toy might be. What a different relationship her children have with books compared with others who have not had such experiences.
Both the ‘literacy as social practice’ and the psychoanalytic perspectives reflect a switch from the underlying question of most assessments, ‘Who do we want you to become as a literate person?’, toward a more interesting question: ‘Who are you as a literate person?’ Maybe this is the question that ought to guide research and thinking about literacy assessment in early childhood education.