The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
Please refer to the submission guidelines on this page for information for authors.
Volume 3, Issue 2, 2007
Issues in Large-Scale Writing Assessment: Perspectives from the National Assessment of Educational Progress
This article reviews the development of the framework for the 2011 National Assessment of Educational Progress in writing. Drawing on an issue paper commissioned by the National Assessment Governing Board, it considers a number of continuing issues in the large-scale assessment of writing: the definition of the domain of writing tasks; which tasks should be assessed at which grade levels; the relationship of the assessment to postsecondary demands; the role of commonly available tools, such as word-processing software, in the construct of writing achievement; the specification and measurement of achievement; the development of appropriate writing topics; the time allotted for writing; and accommodations for English learners, students with disabilities, and low achievers.
This article examines the CUNY-ACT as a high-stakes, standardized exit exam for developmental writing students at one CUNY school, Kingsborough Community College. I chart the political conditions at CUNY that led to the establishment of the exam and its disruption of the assessment procedures already in place at Kingsborough. I present examples of ACT prompts and describe the test-preparation course for students who have failed the exam numerous times. I critique the report presenting the rationale and procedure of testing delivered to Kingsborough by the central office of CUNY, consisting of the CUNY Board of Trustees and Chancellor, in conjunction with New York City politicians, including the then mayor, Rudolph W. Giuliani. I explain the role of the CUNY central office in forcing the ACT to be implemented at Kingsborough without consideration of well-established research on validity in the area of writing assessment. I describe the destructive effects that the ACT has on Kingsborough students, especially those who are non-native speakers and writers. Finally, I argue for better assessment procedures at Kingsborough derived from research in the area of writing assessment, and call for greater coordination among Kingsborough students, faculty, and the CUNY central administration to establish an assessment policy that rests on appropriate pedagogical practice and sound validity evidence.
This article describes a hybrid first-year composition program--part online instruction, part classroom instruction--that relies on anonymous assessment of and response to student writing. Writing program administrators (WPAs) at Texas Tech University designed this program largely in response to budgetary constraints: ever-increasing student enrollments and stagnant departmental funding. I examine the precarious balance between university pragmatics and classroom pedagogy, suggesting that when faced with budgetary restrictions, composition programs should not let the economic and pragmatic question of how teachers assess and respond to student writing precede the more important question of why: Why do we grade and respond to student writing? While both how and why we grade and respond to student writing remain important questions, this article considers how university administrators, WPAs, and instructors might keep the why center stage by engaging in productive, proactive dialogue using what Porter et al. (2000) call "rhetorical action": "engaging in situated theorizing and relating that theorizing through stories of change and attempted change" (p. 631).
This annotated bibliography focuses on issues surrounding minorities and writing assessment, including issues associated with various ethnic groups as well as with gender studies and with minorities in special education. In addition to reporting on minorities and testing consequences, several selections make recommendations for revising or constructing testing instruments to address some of the special issues minorities face with classroom and large-scale assessments. Several selections also pose important questions that classroom instructors, school administrators, and writing assessment specialists might ask before conducting assessments that involve minority students. Over the past 30 years, studies of the SAT, ACT, and other high-stakes tests have reported that scores for certain minority groups continue to lag behind those of other test takers. Several of the selections annotated below offer reasons for these differences, along with fresh insights and analyses of trends in minority test scores.