As a newly credentialed speech-language pathologist, I evaluated young children whose parents were concerned about their speech and language development. In graduate school, I had learned about Developmental Sentence Scoring (DSS; Lee & Canter, 1971) from its developer, Laura Lee, herself. The procedure relied on a tape-recorded sample of 50 consecutive complete and intelligible utterances obtained during a play-based conversation with an adult. Once the sample was transcribed, clinicians examined each sentence for the presence of major syntactic features (pronouns, verbs, negatives, conjunctions, questions, etc.), with results informing disorder identification and intervention. I remember being thrilled to have an assessment tool based on samples of real, connected speech and language. At the same time, I found myself wondering whether utterances from a play session in a strange place with a strange person would be representative of a child's “true” language. Would DSS results hold if I had followed the child around at home for a few days? So, I developed a research project in which preschool children were fitted with a wireless microphone/transmitter so that they could be recorded, unencumbered, as they talked with their mothers at home (Scott & Taylor, 1978). Compared with clinical language samples from the same children, sentences at home were significantly longer and had higher frequencies of structures reflecting the content of home conversations (e.g., past tense verbs, questions, and complex sentences). The use of language samples for clinical purposes has continued to motivate much of my research agenda to the present time.
Since the early 1970s, clinical applications tied to language sample analysis (LSA) have expanded considerably. For instance, grammar-based analysis systems in addition to DSS appeared (e.g., Index of Productive Syntax [IPSyn]; Scarborough, 1990). Beginning in the 1980s, Jon Miller has been a strong advocate for the use of language samples by researchers and clinicians who work with children with language disorders (Miller, 1981). Miller and colleagues developed a computer program, now widely used, that provides automatic calculation of lexical and syntactic features along with normative information in the form of reference databases (Systematic Analysis of Language Transcripts [SALT]; Miller & Iglesias, 2016). Interest in naturalistic language characteristics expanded beyond young children to older school-age children and adolescents; beyond conversation during play to narrative and expository discourse; beyond spoken language to written language; and beyond assessment to progress monitoring during intervention. There is a robust literature addressing methodological issues that affect the reliability and validity of LSA (e.g., the sample size needed for reliability, relationships between LSA and norm-referenced test outcomes, sensitivity and specificity of various measures, and comparisons of computer-based programs). This high level of interest in LSA continues to the present (see Pezold, Imgrund, & Storkel, 2020, for a comparison of computer analyses of language samples from preschool children).
Although LSA has found broad use among researchers of spoken and written language disorders, surveys of clinical and educational use are disappointing. Because survey respondents often cite time constraints as a major obstacle, some research has addressed this concern by documenting time requirements in approaches that streamline LSA in various ways. The articles in this issue take a different approach. Each article explores a new and/or neglected topic of LSA that authors believe should stimulate both research and clinical interest in LSA applications. Collectively, the first three articles cover topics pertinent to preschool, school-age, and adult uses of LSA. Two articles are devoted to LSA in clinical practice with bilingual children. A final contribution centers on LSA of writing in school-age populations.
In her article on LSA with preschool children, Eisenberg (2020) argues for the use of three measures that are seldom discussed: use of word combinations (particularly those involving a verb and another word), use of required and optional sentence constituents (e.g., SVO), and use of complex sentences (two or more clauses). She makes a compelling case for the developmental significance of these measures and their importance as indicators of intervention outcomes, as contrasted with more commonly reported measures including mean length of utterance and morphosyntactic accuracy. Although difficulty with morphosyntax is a common feature of language impairment in preschool children and, consequently, a common, if not predominant, focus of intervention, Eisenberg argues that the measures she describes are critical. They increase a child's ability to convey information (content) and are consistent with the principle that communicative informativeness should be a central goal of intervention even before grammatical correctness. Underutilized to date in the preschool LSA literature, all three measures await study of their reliability and validity.
Turning to school-age children and adolescents, Lundine's (2020) contribution is motivated by the gap between the language skills required to succeed in school, where learning depends on the comprehension and production of information encoded in expository discourse, and the availability of tools to assess such skills. Language sample analysis of expository discourse has the potential to help bridge this gap; yet, surveys show that even when clinicians use LSA with older students, they are more likely to assess conversational or narrative discourse. Lundine offers a roadmap for LSA of expository discourse that includes discussion of elicitation techniques and observations at the word, morphological, sentence, and discourse levels, particularly those characteristic of expository discourse. For example, at the morphological level, it is important to note a student's facility with derivational affixes needed for systems such as nominalization. Finally, Lundine considers how an LSA assessment of a student's expository discourse can be used cooperatively by clinicians and educators in real classrooms to support student learning.
In the next article, Spencer, Bryant, and Colyvas (2020) address the issue of LSA reliability. The variability inherent in naturalistic language poses problems both for researchers addressing issues such as differential diagnosis and for clinicians interested in whether a client's change over a course of intervention represents meaningful change. The authors provide an extensive review of how sample length variability and time-based variability (successive samples from the same individual) have been handled by researchers of both child and adult populations. Finding methodological problems, the authors propose a new method for determining whether measures in repeated language samples from individuals represent substantive change or simply normal variation. Their method, captured in a formula termed the Reliable Change Index (RCI), is complex but should stimulate considerable interest for both research and clinical applications.
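Spencer et al.'s exact formulation appears in their article; for orientation only, the general shape of a reliable change index can be illustrated with the classic Jacobson-Truax version. The sketch below, in Python, uses purely illustrative numbers; the function name and all values are assumptions for this example, not the authors' method.

```python
import math

def reliable_change_index(score_1, score_2, sd_reference, reliability):
    """Classic Jacobson-Truax Reliable Change Index.

    score_1, score_2: a measure (e.g., mean length of utterance) from two
        language samples of the same individual.
    sd_reference: standard deviation of the measure in a reference group.
    reliability: test-retest reliability of the measure (0 to 1).
    """
    # Standard error of measurement for a single observation
    sem = sd_reference * math.sqrt(1 - reliability)
    # Standard error of the difference between two observations
    se_diff = math.sqrt(2) * sem
    return (score_2 - score_1) / se_diff

# Illustrative: MLU rises from 4.0 to 5.2; reference SD = 1.0, reliability = .80.
# By convention, |RCI| > 1.96 suggests change beyond normal
# sample-to-sample variation at the .05 level.
rci = reliable_change_index(4.0, 5.2, 1.0, 0.80)
print(round(rci, 2))
```

In this hypothetical case, the index falls just short of 1.96, so the apparent gain could not be distinguished from ordinary sampling variability.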
Two articles in the issue are concerned with LSA in bilingual children. As if the variability inherent in monolingual speakers were not complicated enough, additional questions and methodological issues surface when using LSA with bilingual speakers. Parameters of the child's language experience (for instance, whether L1 and L2 are learned simultaneously or sequentially, and whether L1 fades as L2 develops), among other factors, must be navigated when devising elicitation methods, quantitative measures, and uses of LSA for decision making. Drawing from an expanding literature on LSA of narrative discourse in bilingual children, Ebert (2020) walks the reader through these issues as she addresses procedural considerations such as elicitation, coding, and analysis (e.g., how to handle instances of code switching). The use of LSA in the identification of developmental language disorder in bilingual children is addressed next via a literature review centered on word-, sentence-, and discourse-level measures. Finally, Ebert considers the use of LSA to determine language strengths and weaknesses that would, in turn, inform intervention. In a second article on LSA in bilingual children, Guiberson (2020) reports data from his own research aimed at determining associations among two alternative (and less time-consuming) LSA measures, traditional LSA measures, and a norm-referenced language test in a population of preschool bilingual children with and without language impairment. The alternative measures were clinician-reported and parent-reported longest utterance(s). Interesting patterns of associations among the three domains pointed to potential clinical uses for the alternative measures, although the strength of the associations differed according to whether clinicians or parents were reporting.
The last article in this issue centers on LSA of writing samples in school-age children and adolescents. Although writing has been less studied than speaking, Scott (2020) uncovered a body of work that allows at least tentative answers to questions that arise for practitioners. She reports on the sensitivity of common measures of writing to questions concerning developmental change, language ability differences, relation to quality ratings, practical utility, and effects of genre and task. Scott also encourages writing-specific observations such as literate vocabulary use, unique syntax patterns, and spelling.
The authors have provided rich examples and case studies that should assist practitioners interested in clinical and classroom applications. Even though the LSA topics in this issue are new and/or neglected, all of the authors have addressed ways their information can inform practice. Language sample analysis has come a long way from its early days, when its main use was to count a small set of grammatical structures, to a point where we learn about language use at many levels, in multiple genres and modes, and for different types of speakers and writers.
—Cheryl M. Scott, PhD
Ebert K. D. (2020). Language sample analysis with bilingual children: Translating research to practice. Topics in Language Disorders, 40(2), 182–201.
Eisenberg S. L. (2020). Using general language performance measures to assess grammar learning. Topics in Language Disorders, 40(2), 135–148.
Guiberson M. (2020). Alternatives to traditional language sample measures with emergent bilingual preschoolers. Topics in Language Disorders, 40(2), E1–E6.
Lee L. L., Canter S. M. (1971). Developmental sentence scoring: A clinical procedure for estimating syntactic development in children's spontaneous speech. Journal of Speech and Hearing Disorders, 36, 315–340.
Lundine J. P. (2020). Assessing expository discourse abilities across elementary, middle, and high school. Topics in Language Disorders, 40(2), 149–165.
Miller J. (1981). Assessing language production in children. Boston: Allyn & Bacon.
Miller J., Iglesias A. (2016). Systematic analysis of language transcripts (SALT): Research (Version 16) [Computer software]. Middleton, WI: SALT Software LLC.
Pezold M. J., Imgrund C. M., Storkel H. L. (2020). Using computer programs for language sample analysis. Language, Speech, and Hearing Services in Schools, 51, 103–114.
Scarborough H. (1990). Index of Productive Syntax. Applied Psycholinguistics, 11, 1–22.
Scott C. M. (2020). Language sample analysis of writing in children and adolescents: Assessment and intervention contributions. Topics in Language Disorders, 40(2), 202–220.
Scott C. M., Taylor A. (1978). A comparison of home and clinic gathered language samples. Journal of Speech and Hearing Disorders, 43, 482–495.
Spencer E., Bryant L., Colyvas K. (2020). Minimizing variability in language sampling analysis: A practical way to calculate text length and time variability and measure reliable change when assessing clients. Topics in Language Disorders, 40(2), 166–181.