
About

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

Information for authors is available in the submission guidelines on this page.

Articles

Afterword: Volume 5, 2012

Traditionally, editors write an introduction to each issue of a journal, linking articles, identifying themes, and explaining what unifies the volume. However, because our volumes evolve organically, with articles published online as they complete the peer-review process, that traditional approach doesn't work. Instead, we have opted for an Afterword, an opportunity to look back, to see what links and themes have emerged in the volume.

College Students' Use of a Writing Rubric: Effect on Quality of Writing, Self-Efficacy, and Writing Practices

Fifty-six college students enrolled in two sections of a psychology class were randomly assigned to use one of three tools for assessing their own writing: a long rubric, a short rubric, or an open-ended assessment tool. Students used their assigned self-assessment tool to assess drafts of a course-required, five-page paper. There was no effect of self-assessment condition on the quality of students' final drafts or on students' self-efficacy for writing. However, there was a significant effect of condition on students' writing beliefs and practices, with long rubric users reporting more productive use of self-assessment than students using the open-ended tool. In addition, across conditions, most students reported that being required to assess their writing shaped their writing practices in desirable ways. Keywords: rubrics, self-efficacy, self-assessment, working memory, writing quality, writing beliefs, college writers

Response Rethought…Again: Exploring Recorded Comments and the Teacher-Student Bond

The argument has long been made that audio-recorded response to student writing provides more commentary than does traditional written response. However, an analysis of one instructor's audio comments suggests that audio response differs not only in degree but also in kind. A taxonomy of comments primarily found in audio response and tied to the temporal aspect of the teacher-student relationship is proposed, featuring three kinds of responses:

• retrospective: comments that refer to previous shared experiences in the writing course
• synchronous: comments that refer to the teacher/reader's current reading experience in responding to a student's text
• anticipatory: comments that refer to future shared activities in the writing course

Comments of these kinds can be found in the literature on response, but they have for the most part gone unidentified and been neglected. An analysis of the respondents in Straub's (1999) Sourcebook underscores the greater frequency of temporal comments in audio response than in written comments.

The importance of these temporal comments is that they offer a potential explanation for the enhanced bonding of teachers and students reported by S. Sipple (2007), because they emphasize the ongoing connection between classroom activities and teacher response. Further research possibilities include comparative studies of temporal comments' frequency and impact in a single classroom where both audio and written comments are employed; an examination of student response to temporal comments; and studies of teacher intention in employing temporal comments through speak-aloud reflection-on-action.

Big Rubrics and Weird Genres: The Futility of Using Generic Assessment Tools Across Diverse Instructional Contexts

"All-purpose" assessment of students' writing and/or speaking appeals to many teachers and administrators because it seems simple and efficient, offers a single set of standards that can inform pedagogy, and serves as a benchmark for institutional improvement. This essay argues, however, that such generalized standards are unproductive and theoretically misguided. Drawing on situated approaches to the assessment of writing and speaking, as well as many years of collective experience working with faculty, administrators, and students on communication instruction in highly specific curricular contexts, we demonstrate the advantages of shaping assessment around local conditions, including discipline-based genres and contexts, specific and varied communicative goals, and the embeddedness of communication instruction in particular "ways of knowing" within disciplines and subdisciplines. By sharing analyses of unique genres of writing and speaking at our institutions, and the processes that faculty and administrators have used to create assessment protocols for those genres, we support contextually based approaches to assessment and argue for the abandonment of generic rubrics.

The Empirical Development of an Instrument to Measure Writerly Self-Efficacy in Writing Centers

Post-secondary writing centers have struggled to produce substantial, credible, and sustainable evidence of their impact in the educational environment. The objective of this study was to develop a college-level writing self-efficacy scale that can be used across repeated sessions in a writing center, as self-efficacy has been identified as an important construct underlying successful writing and cognitive development. A 20-item instrument (PSWSES) was developed to evaluate writerly self-efficacy, and 505 university students participated in the study. Results indicate that the PSWSES has high internal consistency and reliability across items, as well as construct validity, supported by a correlation between tutor perceptions of client writerly self-efficacy and client self-ratings. Factor analysis revealed three factors: local and global writing process knowledge, physical reaction, and time/effort. Additionally, across repeated sessions, clients' PSWSES scores showed the expected increase in overall writerly self-efficacy. Ultimately, this study offers a new paradigm for conceptualizing the daily work in which writing centers engage, and the PSWSES offers writing centers a meaningful avenue for quantitative program assessment by (1) redirecting focus from actual competence indicators to perceived competence development and (2) allowing for replication, causality, and sustainability for program improvement. Keywords: self-efficacy, writing, cognition, perceived competence development, writing center, assessment, student learning, post-secondary

An Annotated Bibliography of Writing Assessment: Machine Scoring and Evaluation of Essay-length Writing

This installment of the JWA annotated bibliography focuses on the phenomenon of machine scoring of whole essays composed by students and others. "Machine scoring" is defined as the rating of extended or essay writing by means of automated, computerized technology. Excluded is scoring of paragraph-sized free responses of the sort that occur in academic course examinations. Also excluded is software that checks only grammar, style, and spelling. Included, however, is software that provides other kinds of evaluative or diagnostic feedback along with a holistic score. While some entries in this bibliography describe, validate, and critique the ways computers "read" texts and generate scores and feedback, other sources critically examine how these results are used. The topic is timely, since the use of machine scoring of essays is rapidly growing in standardized testing, sorting of job and college applicants, admission to college, placement into and exit out of writing courses, content tests in academic courses, and value-added study of learning outcomes.

Introduction from the New Editors

Welcome to the first volume of JWA under our editorship. For the last 14 months, we have been working with Brian Huot (the former editor), Hampton Press (the former publisher), and technology support to move JWA to an online, open-access format. Because of everyone's cooperation, we were able to get all of the JWA archives online. We also helped to publish Volume 4, Brian's last volume as editor.