The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
Please refer to the submission guidelines on this page for information for authors.
Volume 1, Issue 2, 2003
During the last ten years, it has been my pleasure and privilege to help found and edit two journals, first Assessing Writing and now, The Journal of Writing Assessment. The experience has taught me more than I anticipated: as an editor, you develop a view of the field that, I think, is simply impossible otherwise. And of course, in the process of editing, I've had the pleasure of meeting and working with many, many smart--and gracious--colleagues. I am in their debt.
Validity of Automated Scoring: Prologue for a Continuing Discussion of Machine Scoring Student Writing
Writing assessment has developed along two separate lines, one centered in professional organizations for writing teachers and the other centered in professional organizations for the broader assessment community. As the controversy about automated scoring continues to develop, it is important for writing teachers and researchers to become fluent in the discourse of the broader assessment community. Continuing to label the work of the broader assessment community as positivist and continuing to ignore it will only result in a continuing sense of defeat as automated assessment is adopted more widely. On the other hand, an examination of the literature on educational assessment will reveal that the theoretical base for assessment is quite consistent with the principles adopted by the writing assessment community.
The Politics of High-Stakes Writing Assessment in Massachusetts: Why Inventing a Better Assessment Model is Not Enough
What happens when government officials conspire with a national testing company to control literacy standards for teacher preparation students on a statewide level? This essay documents the politics of the Massachusetts teacher test story, focusing on the flawed process that led to a writing test that excluded the participation and negotiation of stakeholders. I argue that as a discipline, we need to learn to play politics better, faster, and with a strong disciplinary commitment to promoting assessment models that are fairly negotiated. Writing professionals should organize in order to participate directly in good faith discussions with powerful interests so as to promote locally developed and decentralized assessment models.
Portfolios Across the Centuries, a review of Liz Hamp-Lyons and William Condon: Assessing the Portfolio
An examination of the status and uses of writing portfolios in university writing programs at the close of the 20th century, Assessing the Portfolio grew out of the firsthand experiences of two writing program administrators (WPAs) who worked together in the mid-1980s at the University of Michigan, just as Belanoff and Elbow (1986) published their germinal piece on the demise of timed writing tests and the birth of university writing portfolios as exit measures at Stony Brook, ushering in a period of profound interest in and attention to portfolios in the writing classroom. Pat Belanoff and Marcia Dickson's (1991) anthology Portfolios: Process and Product and Kathleen Blake Yancey's (1992) Portfolios in the Writing Classroom began a half decade or so of conferences and publications that helped establish the portfolio as a mainstay in the writing classroom and as a viable option for large-scale assessment as well.
In this, our second installment of the bibliography on assessment, we survey the literature on reliability and validity in the first of a two-part series that will continue in the next issue of JWA. The works we annotate focus primarily on the theoretical and technical definitions of reliability and validity--and in particular, on the relationship between the two concepts. We summarize psychometric scholarship that explains, defines, and theorizes reliability and validity in general and within the context of writing assessment. Later installments of the bibliography will focus on specific sorts of assessment practices and occasions, such as portfolios, placement assessments, and program assessment--all practices whose successful implementation depends on an understanding of reliability and validity.