Generalizability of Writing Scores and Language Program Placement Decisions: Score Dependability, Task Variability, and Score Profiles on an ESL Placement Test

Eskin, Daniel

Second Language (L2) testing has increasingly relied on performance assessment to evaluate test takers' practical command of the language. However, such forms of assessment entail more complex task design and subjective human scoring judgments (Bachman, 2004), raising challenges for score dependability and score use due to variability associated with task design (Deville & Chalhoub-Deville, 2006; In'nami & Koizumi, 2016), differences in rater behavior (Bachman, Lynch, & Mason, 1995), and rating rubric functionality, especially when the rubric consists of multiple subscales (Grabowski & Lin, 2019; Sawaki, 2007; Xi, 2007). The current study illustrates the use of Multivariate Generalizability Theory (MG-Theory) analyses for examining score variability and dependability for written performance assessment on an ESL placement test, rated using an analytic rubric with three subscales. In particular, this study identified task-related variability that reduced the dependability of the writing scores yielded by this test. At the same time, this variability can be substantively justified as an artifact of representing the construct of L2 writing ability in a sufficiently broad manner. Simply put, should we expect test takers to show equivalent levels of proficiency when writing a review of an experience as a customer and when writing an argumentative essay as a student?



Also Published In

Studies in Applied Linguistics and TESOL

Published Here
August 29, 2022


Keywords: MG-Theory, Score Profiles, Score Dependability, Task Variability