Topics in Language Disorders, January 2002, Volume 22, Issue 2: Enhancing Academic Performance of Students with LLD

Assessing Curriculum‐Based Reading and Writing Samples

Nelson, Nickola Wolf PhD; Van Meter, Adelia M. MS

Author Information

Professor, Department of Speech Pathology and Audiology, Western Michigan University, Kalamazoo, Michigan (Nelson)

Clinic Co-Coordinator, Department of Speech Pathology and Audiology, Western Michigan University, Kalamazoo, Michigan (Van Meter)

This article is based partially on work completed as part of the Writing Lab Outreach Project, a collaborative effort of Western Michigan University and Kalamazoo Public Schools (KPS) supported by grant No. H324R980120 from the U.S. Department of Education (N.W. Nelson and C.M. Bahr, Co-Directors; A.M. Van Meter, Project Coordinator), and in part by Project CONNECT, supported by grant No. H029B40183 from the U.S. Department of Education (N.W. Nelson and M.J. Clark, Co-Directors).

Abstract

Curriculum-based language assessment requires tools that differ from those used for traditional assessment. Analysis of reading and written language samples can provide information about curriculum-based language strengths and needs that can be used recursively to establish goals and benchmarks, provide intervention, evaluate change, and begin the next round of planning—all aimed at influencing students' progress in the general education curriculum. This article presents methods and tools for conducting these analyses and a case example to illustrate their use.

CURRICULUM-BASED READING AND WRITING ASSESSMENT

In any assessment activity, the first question should be: what is the purpose? The assessment purpose addressed in this article is an analysis of a student's language strengths and needs within curricular contexts to establish appropriate intervention goals and benchmarks, to guide the intervention process, and to evaluate therapeutic outcomes.

Curriculum-based language assessment is not just a good idea; it is the law. The 1997 reauthorization of the Individuals with Disabilities Education Act (IDEA; U.S. Congress, 1997) specified that individualized education programs (IEPs) for students with disabilities “must include a statement of measurable annual goals, including benchmarks or short-term objectives, related to meeting the child's needs that result from the child's disability” and “to enable the child to be involved in and progress in the general curriculum” (Sec. 614(d)(1)(A)).

Curriculum-based language assessment (CBLA) is defined as the “use of curriculum contexts and content for measuring a student's language intervention needs and progress” (Nelson, 1989, p. 171). CBLA is criterion-referenced and can be used recursively; this distinguishes it from eligibility assessment, which relies on diagnostic norm-referenced tests administered only infrequently.

CBLA also differs from other forms of curriculum-based measurement (CBM). The purposes of CBM are to “measure directly student's skill achievements at specified grades” (Idol, Nevin, & Paolucci-Whitcomb, 1986, p. v), assess “student performance within the course content” (Tucker, 1985, p. 200), and provide a database for “making special education decisions” (Deno, 1989, p. 1). For such purposes, Deno and his colleagues validated techniques to quantify growth at frequent intervals by using quick, short sample measurements of reading (Deno, Mirkin, & Chiang, 1982; Fuchs, Fuchs, & Maxwell, 1988; Jenkins & Jewell, 1993) and writing (Deno, Marston, & Mirkin, 1982; Espin et al., 2000). Short sample analysis has clear advantages for repeated measurement, but it yields too little detailed information for language intervention. The CBM question, “Is the student learning the curriculum?” is fundamentally different from the CBLA question, “Does the student have the language skills to learn the curriculum?” (Nelson, 1994, 1998).

CBLA starts with interviews of students, parents, and teachers about the areas of the curriculum that are of greatest concern (Nelson, 1994, 1998). Materials from those contexts are then used for assessment and intervention. When asked, school-aged students, teachers, and parents often identify reading and writing problems as more important, academically and socially, than speech difficulties. This contrasts with evidence that speech-language pathologists (SLPs) more often identify and treat speech-production problems than language problems (Zhang & Tomblin, 2000), but it is consistent with the roles in literacy development that SLPs may appropriately play (ASHA, 2001). Once curricular contexts of concern have been identified, a set of four questions is used to guide the recursive cycles of assessment and intervention using curricular tasks from those contexts (see Table 1).

Table 1

THE THEORETICAL MODEL UNDERLYING THESE TOOLS

Any assessment activity reflects the theoretical model underlying it. In this case, we base assessment and intervention on a model of spoken and written language processing described by Nelson (1998) as the “pinball wizardry” model (Figure 1). The model represents multi-level, interactive neuro-cognitive and linguistic processes, with meaning at the center. It includes three linguistic knowledge systems (phonological-orthographic, syntactic, semantic), three related systems (pragmatic, discourse, and world knowledge), and arrows representing neural connections and working memory mechanisms. In the model, higher level processing systems receive bottom-up information from auditory and visual perceptual input sources for listening and reading, and they supply information to sensorimotor output processes for speaking and writing. At the top of the model are metacognitive elements that serve top-down strategic and executive control functions (Denckla, 1994; Graham & Harris, 1999; Singer & Bashir, 1999).

Figure 1

This model is used for guiding analysis of reading and writing samples. It also informs the scaffolding and mediating processes of dynamic assessment and language intervention. Dynamic assessment approaches “are characterized by guided learning for the purpose of determining a learner's potential for change” (Palincsar, Brown, & Campione, 1994, p. 132). Such approaches use a test-teach-test paradigm to provide “scaffolds” (i.e., mediating supports) for activities students cannot perform without assistance. The goal is to mediate a student to higher levels of functioning by framing, focusing, guiding, and feeding back (Feuerstein, 1979) key information about the task (i.e., “missed cues”); then to measure the student's independent abilities after supports are withdrawn (Gutierrez-Clellen, 2000). Such approaches are particularly recommended for assessing culturally diverse learners (Peña, Iglesias, & Lidz, 2001), but they are useful for all students.

The tools described in this article are designed to identify aspects of language and information processing systems and strategies that students can engage independently, aspects they can perform with support, and aspects they cannot yet do at all. This information is then used by the clinician and collaborating teachers to design a plan to bridge the student from current abilities (observed response [OR]) to higher-level abilities (expected response [ER]).

APPLYING THE MODEL TO CURRICULUM-BASED READING ASSESSMENT

Making use of the model for reading assessment, the clinician starts with step one of CBLA by constructing the ER for a curriculum-based activity, such as reading a section of a science book successfully. For example, a competent student might draw on metacognitive skill, discourse knowledge, and long-term memory to recall the topic of the chapter, while skimming illustrations, headings, and highlighted words to activate prior knowledge. When looking at the text and moving her eyes from left to right to take in visual information, the student would take a direct route to meaning for some well-known words but make reference to phonological-orthographic knowledge (perhaps using pronunciation keys) for others. While decoding print into language, the student would use knowledge of syntactic and semantic constraints to understand new vocabulary in the context of sentences and to judge whether sentences make sense, simultaneously trying to grasp larger concepts the author is attempting to convey. When transition words (e.g., next, the primary reason, in summary), connectors (e.g., when, if…then, because, therefore, except for), pronouns, or other cohesive devices are used, the student would immediately relate them to other elements across sentence boundaries to connect ideas logically. When a passage does not make sense, the effective student would back up and reread. This student would be able to paraphrase the information accurately and answer both concrete and abstract/inferential questions about the content.

When analyzing the OR of a struggling reader, the clinician might observe that the student omits the preparatory steps and plunges directly into attempting to decode the first word, but demonstrates inadequate knowledge of phonological-orthographic relationships to decode even common words (e.g., Gillam & Carlile, 1997). Perhaps this student uses only the first consonant and apparently guesses at the rest of the word, not using information about syntax or semantics to figure out what might make sense in the sentence and discourse. When attempting to decode unfamiliar vocabulary, this student might demonstrate poor concepts of syllables and how to shift pronunciation until a real word or probable new word results. When asked to paraphrase what was read, the student might show incomplete comprehension. Concrete and inferential questions also might elicit evidence of incomplete and inaccurate understandings.

After observing the responses the student makes independently, the clinician would use a dynamic assessment paradigm to scaffold higher level performance and learn more about what the student can do with support. The clinician could probe for metacognitive strategies by asking first if the student knows what the selection is about. If the student indicates uncertainty, the clinician could ask how the student might figure that out before she starts to read. If the student does not volunteer possibilities, the clinician could frame headings and other discourse level cues to see if the student could take advantage of them. To find out more about what the student knows about decoding, the clinician might frame a syllable at a time and begin to inventory the student's knowledge of sound-symbol correspondence and morphological elements (e.g., -ing, -ed, -tion). To learn whether comprehension problems stem primarily from decoding difficulties or a more basic language problem, the clinician might read a passage to the student and then check the ability to paraphrase or answer comprehension questions. The result would be information about what to target in intervention and how to go about it.

APPLYING THE MODEL TO CURRICULUM-BASED WRITING

In the case of writing, the clinician might use the model to construct the ER for an assignment to write a persuasive essay about a favorite activity. To accomplish this, a successful student would need to reflect on what it means to persuade someone of something, select a topic the student knows and feels something about, and plan a general discourse structure (Scott, 1999; Westby, 1999). Then the student must use the plan to begin drafting. At that point, he would generate sentences internally and use knowledge of phonological-orthographic relationships to produce intended words in meaningful, well-constructed sentences. He would organize the sentences into paragraphs designed to establish the main point and to provide multiple arguments to support it. Along the way, he would reread the essay, making revisions and edits as necessary.

When observing a struggling writer, the clinician might notice how long it takes for the student to generate an idea. If the student produces nothing after an extended period of time, the clinician, as a regular participant in classroom writing activities, might approach this student, as well as others who also are struggling, and ask some questions that would help them isolate topics. If the student is having difficulty organizing, the teacher and clinician could introduce semantic webbing strategies, then probe later to see if the student uses them independently. If the student struggles with drafting, the clinician might ask the student to show what he or she usually does when attempting to spell an unfamiliar word or to compose a sentence orally first. Then the clinician can probe knowledge of sound-symbol and syllabic-morphological information, making note of the need to perform a full inventory later (e.g., Torgesen, 1999). If the student still does not write, the clinician might take some dictation initially, then offer the pen back to the student if appropriate. On subsequent probes and in-class writing, the clinician would scaffold more independent learning attempts, then assess the results when supports are minimized.

GATHERING READING AND WRITING SAMPLES

When gathering either reading or writing samples, the goal is to observe the most advanced abilities the student can demonstrate independently. As illustrated, when students are not able to produce adequate samples with complete independence, dynamic assessment techniques can be used and the difference observed. Even in cases in which greater independence is possible, strategic questioning and other dynamic assessment techniques can illuminate strength areas that can support change, as well as areas of need that should be addressed directly in intervention. This evidence is important for establishing prognosis and planning instruction.

The analyses described here may be performed on samples gathered either as special probes or in the course of on-going curriculum-based intervention. We have used the tools in after-school homework labs (Nelson & Van Meter, 1996) and inclusive computer-supported writing labs (Nelson, Van Meter, Chamberlain, & Bahr, 2001; Nelson, Bahr, & Van Meter, in press), both for individualized planning and for documenting group outcomes.

Gathering Reading Samples

Depending on areas of the curriculum identified by students, parents, and teachers as being of greatest concern, reading samples may be drawn either from narrative or expository texts. Students' actual grade level texts should be used if possible. This is consistent with the goal to keep students in the general education curriculum. It differs, however, from the approach often used by reading specialists, which starts by identifying a student's reading level and then assesses skills using reading-level texts. The advantage of using grade-level texts is that they present curriculum-based opportunities to judge not only a student's ability to decode print at grade level, but also to understand the language of those texts. Language specialists have a role in teaching students with language disorders to make sense of grade-level texts even if the students cannot read them independently. When placed in general education for science and social studies, for example, such students must be able to learn from such texts.

When students are reading so far below grade level that adequate samples are impossible, alternatives are needed. Overly difficult passages also may inhibit best efforts until trust can be established. In such cases, reading selections might be drawn from books or materials closer to the child's current reading level, or just a little higher. Although we prefer to use students' actual classroom texts, standard graded passages of informal reading inventories (IRI) or graded trade books could be used. For example, Leslie and Caldwell (2000) provided narrative and expository reading passages with comprehension questions for use with students from the pre-primer through junior-high level. Whatever the source, the material should be new to the student. Examiners can learn little about a student's decoding knowledge if the material being read has been memorized or is too simple.

Although one can gain useful information by asking students to read a passage silently and answer questions about it, read-aloud samples yield information about decoding that can be gathered in no other way. When introducing the read-aloud sampling activity, the examiner explains that finding ways to help the student read better requires first learning what the student can do without help. If a student struggles with the independent attempt and becomes anxious, the examiner can offer assistance or select an easier sample. Some children routinely wait for help when they get stuck or say, “I don't know.” This behavior, which may represent learned helplessness (Winograd & Niquette, 1988), should be noted; then dynamic assessment techniques can be used to differentiate knowledge, skills, and strategies the student has yet to learn from those the student knows but is not currently bringing to the task at hand.

Gathering Writing Samples

Samples of children's writing may target narrative or expository discourse (or other genres). Clinicians, however, can only provide instructions that are likely to result in a certain genre. Actual genre types produced by students may vary according to students' ability levels or natural preferences (Westby & Clauser, 1999).

The choice as to which genre to attempt to sample in writing should be made in collaboration with teachers. The decision is based on interviews with students and parents and on general curricular objectives. In the early elementary years, narrative samples work well to give a picture of students' knowledge of a genre that is important both for reading and writing, and one in which prior knowledge is an implicit expectation of the school culture curriculum (Nelson, 1998). That is, children who have proficiency with European tradition story grammar have an important piece of background knowledge that can assist them to understand other stories with similar structure. To gather narrative samples, we prefer open-ended sampling techniques over story starters because open-ended techniques can reveal more about students' story-telling abilities than those that specify characters and setting (Bahr, Nelson, & Van Meter, 1996; Swoger, 1989). An open-ended probe designed to elicit narratives involves telling students: We are interested in the stories students write. Your story should tell about a problem and what happens. It can be real or imaginary.

In later grades, although narrative ability remains important, teachers often express a preference to know more about their students' expository writing. Expository writing is particularly challenging for many students (Scott, 1999; Westby & Clauser, 1999). It also plays a prominent role in the high-stakes testing many states now use to evaluate students' academic abilities. Expository writing might be assessed in the context of real curricular assignments. For gathering probes in a single session without the reading and research required for authentic reports, students can be asked to: Think about a topic that's interesting to you. Plan a report on your topic and write about it. In response to this probe, fourth to sixth grade students in our writing labs have chosen such topics as balloons, dinosaurs, and building go-carts.

Whether gathering narrative or expository samples, general directions are: You can print or use cursive. You have about an hour to plan and organize, draft, revise, and edit. Use the plain sheet of paper to do your planning and the lined paper to draft. It is a good idea to skip every other line. We're giving you a pen because we want to see the changes you make. Making revisions and edits is part of being an author. Spell the best that you can.

For students who produce few or no written words within the first 10–15 minutes, instructors can scaffold or take dictation. This may induce the reluctant student to begin writing independently, or it may yield a spoken sample of the student's discourse, which can provide a place to start intervention. Notes are made on the transcript to indicate what kind of assistance was provided. When finished, we use a routine in which students read their samples aloud to an adult team member, who listens appreciatively and writes in words that may be unintelligible. This is intended to show that we value students' ideas and are focused on meaningful communication. When repeating the probes in writing lab classrooms, letting the students know they will be able to read their work in the author chair can add to students' motivation to give their best effort (Nelson, Bahr, & Van Meter, in press).

ANALYZING READING SAMPLES

Creating a Transcript for a Read-Aloud Sample

To produce a read-aloud transcript, the clinician records any spoken discrepancies directly on a photocopy of the actual reading passage or a typed transcript of it. Audiotape recording is helpful as a backup to on-line transcription. Figure 2 presents a portion of a coded sample for a child we call Melissa. She was a third-grade student in a classroom that was part of a project in which SLPs collaborated with teachers to provide inclusive computer-supported writing lab activities three days per week (Nelson et al., 2001). At the same time, Melissa was undergoing formal testing to see if she would qualify for special education services. Our assessments were conducted as informal probes to contribute to the team's decision making and to document writing lab outcomes. The sample in Figure 2 shows the observed responses for Melissa's reading of a story from her third-grade reading textbook.

Figure 2

Several approaches have been described for marking and analyzing transcribed read-aloud samples, including “miscue analysis,” as described by Goodman (1973) and his colleagues (Goodman, Watson, & Burke, 1987), and “running records,” as described by Clay (1979). Assessment and intervention approaches such as these, which target word recognition in context, rather than in isolation, have been criticized (e.g., Lyon, 1995; Scanlon & Vellutino, 1996). Part of the concern is that such approaches lead to interventions that encourage readers to predict or “guess” words at the expense of learning to decode them. That does not necessarily follow, however, and a full language assessment should yield information about how students use multiple language systems to recognize words and comprehend simultaneously. Gillam and Carlile (1997) found that children with specific language impairments, in particular, had more difficulty integrating information from multiple language systems than children matched for single-word reading level. By comparing a student's OR with the ER for curricular tasks, a language specialist can contribute important information to the instructional team about how to improve reading across the curriculum.

When coding the read-aloud transcript, multiple attempts are recorded. Repetitions or self-corrections are indicated with the letter “r” or “c” in a small circle and a tail under the word(s) repeated. Omitted words or morphemes are circled. Extreme pauses may be marked with a circled “p.” Changes that match a student's spoken dialect can be marked with a circled “d.” Words assisted by the examiner can be indicated with square brackets. Coding conventions may be customized. The point is to create a transcript that can yield insights into what a student knows about reading. The result should be a profile of strengths, in terms of language abilities the student uses regularly, and needs, in terms of abilities occurring only with scaffolding or not at all.

Analyzing Read-Aloud Samples for Decoding

The worksheet in Appendix A organizes information from the analysis of Melissa's transcript. As the heading data indicate, she read 157 of 201 words (78%) correctly the first time without error or repetition. The interpretation of percentages of words read correctly is influenced by the difficulty and familiarity of the material. IRIs categorize students' reading as independent (98% words decoded and 90% comprehension responses correct), instructional (90% words decoded and 70% comprehension responses correct), or frustration (fewer than 90% words decoded and less than 70% comprehension responses correct) (Leslie & Caldwell, 2000).
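
For clinicians who wish to script this bookkeeping, the following minimal Python sketch (ours for illustration, not part of the published worksheet; the comprehension figure in the example is hypothetical) applies the IRI cutoffs just described.

# Illustrative sketch: classify a read-aloud sample against the IRI cutoffs
# cited above (Leslie & Caldwell, 2000). Percentages range from 0 to 100.

def classify_iri_level(pct_words_decoded: float, pct_comprehension: float) -> str:
    """Return 'independent', 'instructional', or 'frustration'."""
    if pct_words_decoded >= 98 and pct_comprehension >= 90:
        return "independent"
    if pct_words_decoded >= 90 and pct_comprehension >= 70:
        return "instructional"
    return "frustration"

# Melissa read 157 of 201 words correctly (78%); the comprehension figure
# below is hypothetical. Her decoding alone places the sample in the
# frustration range.
print(classify_iri_level(157 / 201 * 100, 70))  # -> frustration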

Although Melissa's percentage of 78% would be categorized in the IRI “frustration range,” she showed little evidence of frustration while reading. In fact, Melissa approached the task willingly and wanted to read more at the end of the story. This was interpreted as a relative strength in her use of metacognitive controls despite decoding difficulties. Unless students demonstrate extreme frustration and reluctance, encouraging them and supporting them to read the same “difficult” material as their classmates can convey confidence in their ability to progress in the general education curriculum and can help them achieve that goal.

In addition to quantifying the student's reading fluency, the clinician can analyze qualitative evidence of language knowledge the student is using while decoding. Figure 2 shows a miscue summary (Clay, 1979; Gillam & Carlile, 1997; Goodman, 1973; Goodman et al., 1987; Weaver, 1994). It tallies the degree to which any observed (OR) discrepancies match expected (ER) words in Meaning [Ask: Does the inaccurate word make sense, and does it change the meaning?], Syntax [Ask: Can you say that?], and Grapho-Phonemic characteristics [Ask: To what degree does the inaccurate word match the way the target word looks and sounds?]. Self-corrected miscues are automatically coded with a “+” in all three columns, and nonsense words typically are coded as “+” for grapho-phonemic match but “−” for meaning and syntax. Other miscues require individual analysis within sentence contexts.
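
A small scripted version of this tally might look like the following Python sketch (illustrative only; the function names and the three-miscue example are ours, not the authors' form). It applies the two default rules stated above and counts the judgments in each column.

from collections import Counter

def code_miscue(m: str, s: str, g: str,
                self_corrected: bool = False,
                nonsense_word: bool = False) -> dict:
    """Return '+', '~', or '-' for the Meaning, Syntax, and Grapho-Phonemic
    columns of one miscue, applying the two default rules stated above."""
    if self_corrected:   # self-corrections are credited in all three columns
        return {"M": "+", "S": "+", "G": "+"}
    if nonsense_word:    # nonsense words match the print but not the language
        return {"M": "-", "S": "-", "G": "+"}
    return {"M": m, "S": s, "G": g}

def summarize(miscues: list) -> dict:
    """Tally the judgments in each column across a sample."""
    return {col: Counter(m[col] for m in miscues) for col in ("M", "S", "G")}

# Hypothetical three-miscue sample: a self-correction, a nonsense word, and a
# real-word substitution that fits the syntax but changes the author's meaning.
sample = [
    code_miscue("", "", "", self_corrected=True),
    code_miscue("", "", "", nonsense_word=True),
    code_miscue("~", "+", "+"),
]
print(summarize(sample))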

By definition, all inaccurate word productions differ grapho-phonemically to some degree from the intended text. Evidence of the degree to which a student's observed responses maintain grapho-phonemic similarity to the text provides a key to understanding decoding difficulties and for planning intervention. An impressive body of research (summarized by Catts & Kamhi, 1999; Lyon, 1995; Snow, Burns, & Griffin, 1998) points to phonological awareness and knowledge of the alphabetic principle as central to decoding ability. That is, a child's knowledge of phonological patterns and how orthographic patterns represent them is a primary indicator of successful decoding and fluent reading. Melissa's inaccurate word productions matched the text closely 30 times, partially 12 times, and not at all only 2 times. Although Melissa apparently has grasped the alphabetic principle and is relying on grapho-phonemic cues more than semantic or syntactic ones, she is not always successful. Thus, it is important to analyze her word-level decoding strategies more closely.

The middle column of the worksheet guides the examiner to consider the student's demonstration of knowledge of word shapes, including initial sounds, or the “onset” of the word to the first vowel, and “rime” from the vowel to the end of the word. Observations also can be made of whether the student has advanced beyond attempts to sound out difficult words phoneme-by-phoneme to using “chunks” of words, including consonant clusters (e.g., skl, str), digraphs (e.g., sh, th, ou), common “word family” components (-ake, -ent, -at, -ite, -ight) or morphemes (e.g., un-, -tion, -ly). The clinician also notes whether the student has multiple strategies for changing pronunciation of non-words, seeking real words that fit the context. Melissa's data suggested that she was using the beginnings of words more than their endings, although she did show some evidence of using morphological endings (e.g., reading numbers for members). She left some nonsense words uncorrected (e.g., crible for cradle in miscue #4) but did make multiple attempts for other words, in some cases self-correcting (e.g., my /m/ /emi/ Amy, Amy in miscue #36). She also showed evidence of using some consonant cluster and other syllable chunks, as in the crible for cradle example. However, that example also suggests a weakness in medial vowels and a possible b/d confusion (which appeared in her written sample as well).

When analyzing meaning, a first-level analysis of whether the miscue makes sense in context can be followed by a second-level analysis of whether it changes the author's meaning. In Melissa's sample, a “+” is used in the “M” column if the miscue maintains meaning under both criteria (e.g., Let's start a best friend club, for Let's start a best friends club); “∼” if it fits semantically but changes the author's meaning (e.g., We can meet under our porch, for We can meet under your porch); and “−” if it does neither (e.g., Who are the numbers? for Who are the members?). Analysis revealed that 28 misread words maintained meaning fully, 2 partially, and 14 not at all, suggesting that she uses meaning to decode successfully in a majority of cases but not all the time.

Analysis of syntactic fit was completed for Melissa's sample by reading each sentence as she read it and marking the “S” column as grammatical (+28), partially grammatical (∼1), or agrammatical (−15). Melissa's proportions of inaccurate word productions suggest that she has some ability to use syntactic knowledge to identify words in the text and to confirm her decoding accuracy, but again, she does not use it consistently. For her, a good scaffolding technique may be for the clinician to stumble deliberately in comprehension and feed back anomalous sentences (e.g., “Wait a minute; I'm confused; Can you say Lizzie shared her truck or teach with candy with Harold?”).

A second pass through the transcript yields more information about Melissa's use of language knowledge to predict and confirm her word decoding efforts. To judge language prediction skills, the clinician reads the transcribed words up to and including the inaccuracy. If the result is acceptable linguistically, the clinician tallies it as a “yes.” Seven of Melissa's 13 miscues (54%) fit linguistically with previous words in the sentence, suggesting that she was actively predicting words to fit the language only on about half of them. Her confirmation skills were judged by counting the number of uncorrected and inaccurately decoded words that fit the context of words that followed in a sentence. This is based on the assumption that she would attempt to correct words that did not make sense if she were actively monitoring meaning. Melissa corrected only 16 of her 44 inaccurate words (36%). Only 4 of her 13 uncorrected words (31%) fit grammatically with words that followed in the same sentence. These results suggest that Melissa could benefit from learning a self-talk strategy to ask whether what she is reading makes sense and returning to self-correct when it does not.
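
The arithmetic behind these percentages can be expressed in a brief Python sketch (illustrative; it assumes the clinician has already tallied the counts reported for Melissa).

def rate(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against an empty tally."""
    return 100 * numerator / denominator if denominator else 0.0

self_correction = rate(16, 44)  # corrected miscues / all inaccurate words -> ~36%
prediction = rate(7, 13)        # miscues fitting the preceding context    -> ~54%
confirmation = rate(4, 13)      # uncorrected miscues fitting what follows -> ~31%

print(f"self-correction {self_correction:.0f}%, "
      f"prediction {prediction:.0f}%, confirmation {confirmation:.0f}%")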

Assessing Reading Comprehension

Measuring language comprehension (spoken or written) is challenging. It can be assessed only indirectly and requires multiple methods (Westby, 1999). One approach is to elicit retellings or paraphrases. Another is to pose questions that can be answered only if the student has understood the text. If a student has already retold or paraphrased text language successfully, questions may be redundant. In her retelling, Melissa was able to describe the characters and to capture the gist, but she omitted the initiating event of the two children deciding to form a best friends club and she failed to maintain the story's temporal sequence. She also had difficulty drawing the logical inference when asked why Harold knew he would be invited to the party (Douglas's mother said he could have his whole class to the party, and Harold was in the class), but Lizzie would not (“She was in a different class”).

Dynamic assessment techniques are used to tap more deeply into the student's language comprehension skills. The clinician might probe whether a student can use context to infer vocabulary meanings or can relate pronouns or other cohesive devices to their prior referents. For example, when asked who “she” was in one sentence, Melissa correctly identified Christina [pronounced “Kristen”] as the referent, rather than the adjacent “Lizzie.” In this instance, she demonstrated strength in language knowledge that might be used to support her decoding efforts.

Comparing decoding and comprehension can lead to one of the profiles described by Catts & Kamhi (1999). For example, students with “dyslexia” demonstrate low-level word decoding in conjunction with high-level language comprehension. Students with “hyperlexia” demonstrate high-level decoding in conjunction with low-level comprehension.

ANALYZING WRITTEN LANGUAGE SAMPLES

After gathering samples of students' written language, it is helpful to photocopy originals for coding. Melissa's mid-year narrative probe, which she produced at about the same time as her read-aloud sample, appears in Figure 3. In inclusive writing lab contexts (Nelson et al., 2001), we have implemented a continuous loop of assessment and intervention, each informing the other.

Figure 3

Analyzing Writing Processes

The writing sample worksheet (Appendix B) includes sections for assessing writing processes, written products, and spoken language. Written samples can be used for assessing written products, but active observation is required to assess writing processes and spoken communication.

The top section is used to describe the writing processes—planning and organizing, drafting, and revising and editing. Observations of the student while writing are guided by questions about the degree to which the student uses writing processes in a recursive and strategic manner. This may reveal strengths and needs that are not reflected in written products.

In the area of planning and organizing, the examiner observes the independence, confidence, reflection, and other executive strategies (Denckla, 1994; Singer & Bashir, 1999) with which the student approaches the writing process. Notes are made about whether the student brainstorms possible topics, draws a picture, writes a title or notes, uses self-talk while planning, or needs to dictate to an instructor. If a student uses a graphic organizer, the observer describes its type and complexity. Melissa arrived at her topic independently but did not draw a picture and showed no other evidence of planning.

In the area of drafting, the observer records whether the student refers to planning notes while writing, revises during the drafting process, and depends on others for spelling, or proceeds quickly and independently. Although Melissa had been dependent on others for spelling earlier in the school year, for this probe, she was more willing to spell independently.

In the area of revising and editing, the observer records whether there are moments of reworking ideas and planning along the way, and whether the student rereads his or her work, looking for opportunities to revise for content as well as to edit spelling and punctuation. Melissa originally ended her story at the bottom of the first page. With minimal encouragement, she revised by extending the story by three sentences. She edited two words for spelling and one for a capital letter at the beginning of a sentence.

Assessing Written Products

At the discourse level, fluency is recorded on the worksheet as the total number of words in the sample. Melissa produced 102 words in the body of her story, not counting the two-word title, “My baer” or “the end,” and counting “for ever” as one word. Structural organization is also documented, as well as evidence of narrative maturity.

Although a variety of approaches could be used to analyze narrative maturity (see, e.g., Hughes, McGillivray, & Schmidek, 1997), coding for the presence of story grammar elements (Hedberg & Westby, 1993; Stein & Glenn, 1982; Westby, 1999) can lead directly to goals and benchmarks for missing elements. Not all children are socialized to produce classical western style narratives, however (Hester, 1996; Westby, 1994). Story structure also may vary when children from diverse cultures produce spoken versus written narratives (Hyter & Westby, 1996; Michaels, 1981). Expository discourse can vary by sub-genre as well. Westby and Clauser (1999) provided a chart for analyzing the many varieties of expository discourse.

Given these caveats, story grammar developmental scales (e.g., Hedberg & Westby, 1993; Hughes et al., 1997) remain helpful in leading to “next step” goals in planning. For example, narratives can be rated as: (1) isolated descriptions when students describe or mention isolated people, places, or events; (2) temporal sequences when students connect events temporally but without conveying cause and effect relationships; (3) reactive sequences when students add cause-effect relationships but do not indicate a problem or imply that their characters have goals; (4) abbreviated episodes when students add a clearly stated problem and imply or state their characters' aims or intentions; (5) complete episodes when students state clearly their characters' plans to achieve goals related to the problem and provide an ending to bring closure; or (6) complex/multiple episodes when students add obstacles in the goal path or write more than one abbreviated or complete episode. Melissa's bear story was evaluated as a temporal sequence. She expressed no cause-effect relationships and offered no clearly stated problem or evidence of goal setting or planning. For Melissa, a discourse goal at the next higher level was established to include the causal elements of a reactive sequence.
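
Where records are kept electronically, the six levels can be captured in a simple lookup such as the following Python sketch (the enumeration names are ours; the scale itself follows the sources cited above).

from enum import IntEnum

class NarrativeLevel(IntEnum):
    ISOLATED_DESCRIPTION = 1  # isolated people, places, or events
    TEMPORAL_SEQUENCE = 2     # events ordered in time, no cause-effect
    REACTIVE_SEQUENCE = 3     # cause-effect added, no stated problem or goal
    ABBREVIATED_EPISODE = 4   # clearly stated problem; goals implied or stated
    COMPLETE_EPISODE = 5      # explicit plans toward the goal plus an ending
    COMPLEX_EPISODE = 6       # obstacles added, or multiple episodes

current = NarrativeLevel.TEMPORAL_SEQUENCE  # Melissa's bear story
goal = NarrativeLevel(current + 1)          # next-step goal: reactive sequence
print(f"current: {current.name}, goal: {goal.name}")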

Also within the discourse level, elements are rated that suggest the student's sense of audience. These include judgments as to whether aspects of the piece were creative and original and whether they had the intended effect, such as attempts at humor, drama, or persuasion. The author's inclusion of appropriate and relevant information can be judged, as well as dialogue or other literary devices, such as rhetorical questions, lists, repetitions for effect, or hyperbole. Cohesion can be judged for such elements as pronoun use or verb tense.

Melissa's “story” attempt was rather mundane. Her description of the bear, other than the mention of the brown hat, was not judged as particularly original. She did show sensitivity to pronoun cohesion and rules for using indefinite and definite articles by introducing a baer in the first sentence, referring to it four more times in the same sentence, and changing to the baer in the following sentence. She maintained past tense throughout the story, but confused the sequence and used the home when my home would have been more appropriate at the end of the story.

At the sentence level, the worksheet summarizes data about advancing syntactic maturity. To compute T-unit length, the clinician marks divisions between independent clauses, including any embedded or subordinated clauses. T-units are the “minimal terminable” or stand-alone units described by Hunt (1965, 1970, 1977). T-units, rather than sentences, are used as the unit of measure to avoid over-crediting students for strings of independent clauses joined with the coordinating conjunctions and, but, or, and so. Fragments or incomplete but stand-alone words or phrases (C-units; Loban, 1976) are included in computing mean length of T-unit (MLT-unit). MLT-unit indexes advancing maturity by reflecting greater embedding and subordination (Hunt, 1977; Scott, 1988, 1994). Melissa produced 11 T-units in her 102-word sample, for a mean of 9.3 words per T-unit, which is higher than the usual expectation for third graders (Scott, 1988).
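
The MLT-unit computation itself is simple division once the T-units have been segmented, as in the following Python sketch (illustrative; the two-T-unit example is hypothetical and the segmentation is assumed to be done by hand).

def mlt_unit(t_units: list) -> float:
    """Mean words per T-unit for a list of hand-segmented T-units."""
    counts = [len(t.split()) for t in t_units]
    return sum(counts) / len(counts) if counts else 0.0

# Hypothetical two-T-unit example; Melissa's full sample works out to
# 102 words over 11 T-units, or about 9.3 words per T-unit.
example = ["the bear was brown and had a hat",
           "when I went to bed I gave my bear a kiss"]
print(round(mlt_unit(example), 1))
print(round(102 / 11, 1))  # 9.3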

T-unit length is an important and accepted measure, but it is not always as sensitive as clinicians need (Scott, 1999), it does not reflect grammatical accuracy, and it may over-credit complex syntax when simpler forms would be better. Additionally, teachers may be unfamiliar with the jargon of T-units. They may relate more readily to a system that codes sentences as “simple” or “complex” and “correct” or “incorrect.” In this system, a sentence is coded as “complex” if it includes more than one verb phrase or a secondary verb (infinitive, gerund, or participle), allowing a maximum of two independent clauses in a compound sentence (more may be considered “run on”). A sentence is coded as “correct” based on the rules of standard edited English. This decision is consistent with the general education literacy curriculum and supportive of the aim for academic success for all students (Delpit, 1995). To avoid calling dialectal features “incorrect,” however, sentences can be coded as “correct” and marked with a circled “d” for dialect if they are acceptable in a student's home dialect.

As students mature and develop proficiency with standard edited English, they produce fewer incorrect and more complex sentences along the continuum: simple incorrect [si], simple correct [sc], complex incorrect [ci], complex correct [cc]. Variations in this progression are possible, however. For example, increases in [ci] sentences may be viewed as a sign of growth for students who took few risks previously (Weaver, 1982), and increases in [sc] sentences may signal growth past a stage of linking strings of independent clauses with “and.” Melissa's sentences were coded as 1 [si], 2 [sc], 2 [ci], and 4 [cc]. She used compound verb phrases frequently (over-relying on them, in fact) and introduced one sentence with a subordinate clause, When I went to bed I gave my baer a kiss and went to ded that night.
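
A brief Python sketch (ours, not the authors' worksheet) shows how sentence codes can be tallied and converted to percentages once each sentence has been coded by hand; Melissa's counts are used as the example.

from collections import Counter

# Melissa's nine coded sentences: 1 [si], 2 [sc], 2 [ci], 4 [cc].
codes = ["si"] + ["sc"] * 2 + ["ci"] * 2 + ["cc"] * 4
tally = Counter(codes)
complex_pct = 100 * (tally["ci"] + tally["cc"]) / len(codes)
correct_pct = 100 * (tally["sc"] + tally["cc"]) / len(codes)
print(tally)
print(f"{complex_pct:.0f}% complex, {correct_pct:.0f}% correct")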

At the word level, choices of unusually mature or interesting words are noted. Word knowledge also is reflected in a student's spelling attempts. Such knowledge may be quantified as the percentage of words spelled correctly. This measure is influenced, however, by the words the student elects to use. Students who use only words they know how to spell may increase the percentage of words spelled correctly at the expense of other elements of maturity. Melissa misspelled 12 of the 102 words in the body of her story (12%), but five of these were baer/bear.

Beyond percentages, several schemata have been suggested for rating spelling maturity (Ehri, 2000). Although developmental scales are somewhat controversial (Treiman & Bourassa, 2000a, 2000b), they can contribute to goal setting if individual differences are considered. The progression we use (adapted from Ehri, 1986; Gentry, 1982) codes: (1) scribble writing for letter-like sequences with pretend meanings [in which case, dictation would be taken]; (2) prephonetic spelling for unrelated letters strung together to convey meaning; (3) semi-phonetic spelling for letters forming words with only a few sounds represented, often first and last (e.g., fid for friend, propm for policeman); (4) phonetic spelling for clear use of sound-symbol relationships to capture most of the phonetic structure (e.g., deteshin for detention, jrownding for drowning, chrip for trip); (5) transitional spelling for evidence of morphological awareness (e.g., -ed, -tion) and visual orthographic patterns (e.g., -ight); and (6) conventional spelling for use of multiple strategies, including context-dependent spelling of homophones such as their and there. Melissa's spelling patterns showed she still used some phonetic strategies (e.g., baer/bear; stoer/store; cut/cute), while also showing emerging evidence of transitional level spelling (e.g., fase/face; wened, whened/went).
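
At this level, the worksheet entries reduce to a percentage and a stage label, as in the following Python sketch (illustrative; the stage labels follow the progression above, and Melissa's counts are used as the example).

SPELLING_STAGES = ("scribble", "prephonetic", "semi-phonetic",
                   "phonetic", "transitional", "conventional")

def percent_spelled_correctly(total_words: int, misspelled: int) -> float:
    """Percentage of words spelled correctly in the body of the sample."""
    return 100 * (total_words - misspelled) / total_words if total_words else 0.0

# Melissa: 12 of 102 words misspelled, i.e., about 88% spelled correctly,
# with attempts spanning the phonetic and transitional stages.
print(round(percent_spelled_correctly(102, 12)), SPELLING_STAGES[3:5])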

Finally, percentages may be computed for writing conventions. Capitalization, end punctuation, commas, apostrophes, and quotation marks are judged in this area. Formatting also may be assessed for the presence of such elements as paragraph indenting and special formatting for such distinct genres as letters and poetry. Melissa generally capitalized words at the beginning of sentences. She showed inconsistent capitalization of “I,” using the uppercase form in the early parts of the story, but “i” later.

Assessing Related Spoken Language Abilities

Spoken language skills assessed in classroom contexts include listening and comprehension in reception, and manner, topic maintenance, and grammatical ability in expression. Strengths in these areas are used to support the student's written language development. Problems are addressed reciprocally in spoken and written forms.

Observations of Melissa's spoken language abilities showed generally strong communicative manner and language comprehension. Areas of need appeared in spoken communication organization, topic maintenance, and specific vocabulary. These were consistent with problems that appeared in the analysis both of her reading and writing samples. Topic organization and higher-level abstract language use (Nippold, 1998; Nippold, Allen, & Kirsch, 2001; Nippold, Moran, & Schwarz, 2001) are areas that could be targeted for Melissa.

ESTABLISHING GOALS, PROVIDING INTERVENTION, AND MEASURING CHANGE

After data from samples are summarized on worksheets, the examiner uses the information to make decisions about intervention goals and objectives. Plans are individualized to meet the student's needs, while taking advantage of his or her strengths. IEP objectives and general curricular objectives are integrated in this process. Repeated probes or classroom samples can be reanalyzed to document progress at report card time.

In Melissa's case, the results of her reading and writing assessments and related planning are shown in the assessment summary and objectives forms (Appendices C and D). At the discourse level, both samples suggested a need to improve narrative construction skills beyond the temporal sequence. This became one of Melissa's writing lab goals, and scaffolding was designed to help her relate her ideas causally across discourse genres.

One curricular assignment, writing an animal report, provided opportunities to connect reading and writing skills. Melissa chose pandas as her topic and gathered information through shared reading and scaffolded use of decoding and comprehension strategies. This gave her opportunities to work on the use of syntax and meaning cues while reading, so that she could make sense of what she read, take notes, and then draft original sentences from her notes. She summarized information and wrote her notes with the support of a planning worksheet that provided an organizational framework. With this support, Melissa wrote a three-paragraph report detailing the panda's appearance, diet, and habits. Working from notes, she was able to address the goal of more specific vocabulary use—

It looks, furry, black, white and looks friendly. it feels soft it doesn't smell ever good.

It eats fish and BamBoo shoots, and mice and birds.

Pandas use their claws to hold on to the BamBoo shoots when they eat.

At the end of the year, Melissa wrote a final independent narrative probe that told of roller-skating with her mom. Again, most of the narrative was organized as a temporal sequence, telling about going around “ten times,” then…

…20 more time. Then we went home and went to bed. The neKt day we went roller skating agin and we got on skats and went roller skating then we went around 100 times. Then we eat hot dogs and pop then we went around 200 more times and then [inserted] I boke my lage and then I can not go roller skating agin. I had to get cruchies and I did not like roller skating agine. The End.

Although Melissa still had room for improvement in her discourse skills (including verb tense cohesion), she did independently conclude this story with a problem, elements of a reactive sequence, and comments regarding her emotional response to the events.

At the sentence level, Melissa's reduced use of compound verb phrases formed with and in the year-end narrative probe resulted in an increase in simple correct sentences, but she continued her tendency toward run-on sentences with multiple independent clauses. She also showed an over-reliance on beginning sentences with “then” and “and then.” Although she was conveying causal relationships, she did not use the word “because” for subordination.

At the word level, when Melissa produced her final probe, she was more solidly in the stage of transitional spelling and moving into conventional spelling. In this sample, she spelled “went” correctly in all instances, and she correctly inflected “skats” and “skating” relative to their syntactic roles in the sentence. She used editing processes and showed improved phonemic awareness and sequencing to change her initial attempt, “lost,” to the intended “lots.” She consistently capitalized “I” and the first words of sentences.

In multiple contexts, Melissa used rereading when words or sentences did not make sense. Although formal testing by the school psychologist and school SLP did not result in a decision of eligibility for special education services, Melissa was able to improve her language skills as a result of the classroom-based writing lab activities (Nelson et al., 2001) and to experience success in the general education curriculum.

CONCLUSION

When language specialists collaborate with general education teachers for the benefit of all students, curriculum-based reading and writing assessments contribute to the intervention process. The reading and written language analysis tools presented in this article are designed to support the clinical procedures of assessment, planning, and outcome monitoring. They can be embedded within traditional related-service delivery models. They also can help stimulate systemic change in the ways in which “related” services are provided. The tools offer advantages of being highly individualized and leading to decisions about what to do next in curriculum-relevant ways, targeting goals that are important to students, teachers, and parents, and achieving outcomes that make a difference in children's lives.

REFERENCES

American Speech-Language-Hearing Association. (2001). Roles and responsibilities of speech-language pathologists with respect to reading and writing in children and adolescents (position statement, technical report, and guidelines). Rockville, MD: Author.

Bahr, C., Nelson, N.W., & Van Meter, A. (1996). The effects of text-based and graphics-based software tools on planning and organizing of stories. Journal of Learning Disabilities, 29, 355–370.

Catts, H.W., & Kamhi, A.G. (Eds.). (1999). Language and reading disabilities. Boston: Allyn & Bacon.

Clay, M.M. (1979). The early detection of reading difficulties (3rd ed.). Auckland, New Zealand: Heinemann.

Deno, S.L. (1989). Curriculum-based measurement and special education services: A fundamental and direct relationship. In M.R. Shinn (Ed.), Curriculum-based measurement: Assessing special children. (pp. 1–17). New York: Guilford Press.

Deno, S.L., Marston, D., & Mirkin, P.L. (1982). Valid measurement procedures for continuous development of written expression. Exceptional Children, 48, 368–371.

Deno, S.L., Mirkin, P.L., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36–45.

Delpit, L. (1995). Other people's children: Cultural conflict in the classroom. New York: The New Press.

Denckla, M.B. (1994). Measurement of executive function. In G.R. Lyon (Ed.), Frames of reference for the assessment of learning disabilities (pp. 117–142). Baltimore: Paul H. Brookes.

Ehri, L.C. (1986). Sources of difficulty in learning to read and spell. In M.L. Wolraich & D. Routh (Eds.), Advances in developmental and behavioral pediatrics (Vol. 7, pp. 121–195). Greenwich, CT: JAI Press.

Ehri, L.C. (2000). Learning to read and learning to spell: Two sides of a coin. Topics in Language Disorders, 20(3), 19–36.

Espin, C., Shin, J., Deno, S.L., Skare, S., Robinson, S., & Benner, B. (2000). Identifying indicators of written expression proficiency for middle school students. Journal of Special Education, 34, 140–153.

Feuerstein, R. (1979). The dynamic assessment of retarded performers. Austin, TX: Pro-Ed.

Fuchs, L.S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20–28.

Gentry, J.R. (1982). An analysis of developmental spelling in GNYS AT WRK. The Reading Teacher, 36, 192–200.

Gillam, R.B., & Carlile, R.M. (1997). Oral reading and story retelling of students with specific language impairment. Language, Speech, and Hearing Services in Schools, 28, 30–41.

Goodman, K.S. (1973). Analysis of oral reading miscues: Applied psycholinguistics. In F. Smith (Ed.), Psycholinguistics and reading (pp. 158–176). New York: Holt, Rinehart and Winston.

Goodman, Y.M., Watson, D.J., & Burke, C.L. (1987). Reading miscue inventory: Alternative procedures. New York: Richard C. Owen.

Graham, S., & Harris, K.R. (1999). Assessment and intervention in overcoming writing difficulties: An illustration from the self-regulated strategy development model. Language, Speech, & Hearing Services in Schools, 30, 255–264.

Gutierrez-Clellen, V.F. (2000). Dynamic assessment: An approach to assessing children's language-learning potential. Seminars in Speech and Language, 21, 215–222.

Hedberg, N.L., & Westby, C.E. (1993). Analyzing story telling skills: Theory to practice. Austin, TX: Pro-Ed.

Hester, E.J. (1996). Narratives of young African American children. In A.G. Kamhi, K.E. Pollock, & J.L. Harris (Eds.), Communication development and disorders in African American children (pp. 227–245). Baltimore: Paul H. Brookes.

Hughes, D., McGillivray, L., & Schmidek, M. (1997). Guide to narrative language. Eau Claire, WI: Thinking Publications.

Hunt, K.W. (1965). Grammatical structures written at three grade levels. Urbana, IL: National Council of Teachers of English.

Hunt, K.W. (1970). Syntactic maturity in school children and adults. Monographs of the Society for Research in Child Development, Serial No. 134.

Hunt, K.W. (1977). Early blooming and late blooming syntactic structures. In C.R. Cooper & L. Odell (Eds.), Evaluating writing: Describing, measuring, judging (pp. 91–106). Urbana, IL: National Council of Teachers of English.

Hyter, Y., & Westby, C.E. (1996). Oral narratives of African American children. In A.G. Kamhi, K.E. Pollock, & J. L. Harris (Eds.), Communication development and disorders in African American children (pp. 245–265). Baltimore: Paul H. Brookes.

Idol, L., Nevin, A., & Paolucci-Whitcomb, P. (1986). Models of curriculum-based assessment. Rockville, MD: Aspen Publishers.

Jenkins, J.R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and mazes. Exceptional Children, 59, 421–432.

Leslie, L., & Caldwell, J. (2000). Qualitative reading inventory-III. New York: Longman.

Loban, W.D. (1976). Language development: Kindergarten through grade twelve. Urbana, IL: National Council of Teachers of English.

Lyon, G.R. (1995). Toward a definition of dyslexia. Annals of Dyslexia, 45, 3–27.

Michaels, S. (1981). Sharing time: Children's narrative styles and differential access to literacy. Language in Society, 10, 423–442.

Nelson, N.W. (1989). Curriculum-based language assessment and intervention. Language, Speech, and Hearing Services in Schools, 20, 170–184.

Nelson, N.W. (1994). Curriculum-based language assessment and intervention across the grades. In G.P. Wallach & K.G. Butler (Eds.), Language learning disabilities in school-age children and adolescents (pp. 104–131). Boston: Allyn & Bacon.

Nelson, N.W. (1998). Childhood language disorders in context: Infancy through adolescence. Boston: Allyn & Bacon.

Nelson, N.W., Bahr, C.M., & Van Meter, A.M. (in press). The writing lab approach to language intervention. Baltimore: Paul H. Brookes.

Nelson, N.W., & Van Meter, A.M. (1996, November). Language-based homework lab: Helping preadolescents make language connections. Presented at the annual conference of the American Speech-Language-Hearing Association, Seattle, WA.

Nelson, N.W., Van Meter, A.M., Chamberlain, D.M., & Bahr, C.M. (2001). The speech-language pathologist's role in a writing lab approach. Seminars in Speech and Language, 22(3), 209–220.

Nippold, M.A. (1998). Later language development: The school-age and adolescent years (2nd ed.). Austin, TX: Pro-Ed.

Nippold, M.A., Allen, M.A., & Kirsch, D.I. (2001). Proverb comprehension as a function of reading proficiency in preadolescents. Language, Speech, and Hearing Services in Schools, 32, 90–100.

Nippold, M.A., Moran, C., & Schwarz, I.E. (2001). Idiom understanding in preadolescents: Synergy in action. American Journal of Speech-Language Pathology, 10, 169–179.

Palincsar, A.S., Brown, A.L., & Campione, J.C. (1994). Models and practices of dynamic assessment. In G.P Wallach & K.G. Butler (Eds.), Language learning disabilities in school-aged children and adolescents (pp. 132–144). Boston: Allyn & Bacon.

Peña, E., Iglesias, A., & Lidz, C. (2001). Reducing test bias through dynamic assessment of children's word learning ability. American Journal of Speech-Language Pathology, 10, 138–154.

Scanlon, D.M., & Vellutino, F.R. (1996). Prerequisite skills, early instruction and success in first grade reading: Selected results from a longitudinal study. Mental Retardation and Developmental Disabilities, 2, 54–63.

Scott, C.M. (1988). Spoken and written syntax. In M. Nippold (Ed.), Later language development: Ages nine through nineteen (pp. 49–95). Austin, TX: Pro-Ed.

Scott, C.M. (1994). A discourse continuum for school-age students. In G.P. Wallach & K.G. Butler (Eds.), Language learning disabilities in school-aged children and adolescents (pp. 219–252). Boston: Allyn & Bacon.

Scott, C.M. (1999). Learning to write. In H.W. Catts & A.G. Kamhi (Eds.), Language and reading disabilities (pp. 224–258). Boston: Allyn & Bacon.

Singer, B.D., & Bashir, A.S. (1999). What are executive functions and self-regulation and what do they have to do with language-learning disorders. Language, Speech, and Hearing Services in Schools, 30, 265–273.

Snow, C.E., Burns, S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington D.C.: National Academy Press.

Stein, N., & Glenn, C. (1982). Children's concept of time: The development of story schema. In R. Freedle (Ed.), New directions in discourse processing (Vol. 2, pp. 255–282). Norwood, NJ: Ablex.

Swoger, P.A. (1989). Scott's gift. English Journal, 78, 61–65.

Torgesen, J.K. (1999). Assessment and instruction for phonemic awareness and word recognition skills. In H.W. Catts & A.G. Kamhi (Eds.), Language and reading disabilities (pp. 128–153). Boston: Allyn & Bacon.

Treiman, R., & Bourassa, D. (2000a). Children's written and oral spelling. Applied Psycholinguistics, 21, 183–204.

Treiman, R., & Bourassa, D.C. (2000b). The development of spelling skill. Topics in Language Disorders, 20(3), 1–18.

Tucker, J.A. (1985). Curriculum-based assessment: An introduction. Exceptional Children, 52, 199–204.

Individuals with Disabilities Education Act Amendments of 1997, Pub. L. No. 105-17, §614.

Weaver, C. (1982). Welcoming errors as signs of growth. Language Arts, 59, 438–444.

Weaver, C. (1994). Reading process and practice (2nd ed.). Portsmouth, NH: Heinemann.

Westby, C.E. (1994). The effects of genre, structure, and style of oral and written texts. In G.P. Wallach & K.G. Butler (Eds.), Language learning disabilities in school-age children and adolescents (pp. 180–218). Boston: Allyn & Bacon.

Westby, C.E. (1999). Assessing and facilitating text comprehension problems. In H.W. Catts & A.G. Kamhi (Eds.), Language and reading disabilities (pp. 154–223). Boston: Allyn & Bacon.

Westby, C.E., & Clauser, P.S. (1999). The right stuff for writing: Assessing and facilitating written language. In H.W. Catts & A.G. Kamhi (Eds.), Language and reading disabilities (pp. 259–313). Boston: Allyn & Bacon.

Winograd, P., & Niquette, G. (1988). Assessing learned helplessness in poor readers. Topics in Language Disorders, 8(3), 38–55.

Zhang, X., & Tomblin, J.B. (2000). The association of intervention receipt with speech-language profiles and social-demographic variables. American Journal of Speech-Language Pathology, 9, 345–357.

Appendix A. Reading Assessment Worksheet

Appendix B. Writing Process and Product Worksheet

Appendix C. Reading Assessment Summary and Objectives

Appendix D. Writing Assessment Summary and Objectives

Keywords:

curriculum-based language assessment; literacy benchmarks; literacy outcomes; reading assessment; writing assessment

© 2002 Aspen Publishers, Inc.
