About
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate assessment-related topics such as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement, as well as other relevant topics. Articles are welcome from a variety of areas, including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
Please refer to the submission guidelines on this page for author information and submission requirements.
Volume 9, Issue 2, 2016
Articles
Editors' Introduction: Volume 9 Issue 2
Globalizing Plagiarism & Writing Assessment: A Case Study of Turnitin
This article examines the plagiarism detection service Turnitin.com's recent expansion into international writing assessment technologies. Examining Turnitin's rhetorics of plagiarism alongside scholarship on plagiarism detection illuminates the company's efforts to globalize definitions of and approaches to plagiarism. If Turnitin succeeds in advancing its positions on plagiarism, its products could be proffered as a global model for writing assessment. The proceedings of a Czech Republic conference partially sponsored by Turnitin demonstrate troubling constructions of the "student plagiarist," as well as a binary model of West and non-West that stigmatizes non-Western institutions and students. These findings support ongoing attention to the global cultural work of corporate plagiarism detection and assessment.
Recognizing Multiplicity and Audience across the Disciplines: Developing a Questionnaire to Assess Undergraduates' Rhetorical Writing Beliefs
How do students feel about expressing uncertainty in their academic writing? To what extent do they think about their readers as they compose? Understanding the enactment of rhetorical knowledge is among the goals of many rich qualitative studies of students' reading and writing processes (e.g., Haas & Flower, 1988; Roozen, 2010). The current study seeks to provide a quantitative assessment of students' rhetorical beliefs based on a questionnaire. The study reports on (1) the development of the Measure of Rhetorical Beliefs and (2) a demonstration of the measure's construct validity and utility through comparison of undergraduates' rhetorical and epistemological beliefs, as well as their composing processes, across different majors. The new Measure of Rhetorical Beliefs (MRB) was administered to engineering, business, and liberal arts and science majors, along with the Inventory of Processes in College Composition (Lavelle & Zuercher, 2001) and the Epistemological Belief Inventory (Schraw, Bendixen, & Dunkle, 2002). Findings suggest that rhetorical writing beliefs are a measurable construct distinct from, but related to, epistemological beliefs and composing practices, and that students from different majors may hold different rhetorical beliefs and composing practices. Implications for use of the Measure of Rhetorical Beliefs are discussed, including further validation of the instrument and its potential use for research, program evaluation, and instructional practice.
Multimodal Assessment as Disciplinary Sensemaking: Beyond Rubrics to Frameworks
This study argues that organizational studies scholar Karl Weick's concept of sensemaking can help to integrate competing scales of multimodal assessment: the pedagogical attention to the purposes, motivations, and needs of composing students; the programmatic desire for consistent outcomes and expectations; and the disciplinary mandate to communicate collective (though not necessarily consensual) values to composition scholars and practitioners. It addresses an ongoing debate about the prevalence of common or generic rubrics in conducting multimodal assessment: while some scholars argue that multimodal assessment is compatible with common, and even print-oriented, programmatic rubrics, others insist that only assignment-specific, context-driven assessments can account for the rich diversity of multimodal processes and texts. Adopting sensemaking theory, by contrast, calls for multimodal assessment efforts to attend to cross-programmatic and disciplinary frameworks: plastic, scalable assessment categories that can be adapted to local contexts. An analysis of current multimodal assessment research and practice demonstrates how emergent sensemaking frameworks integrate global (cross-programmatic) and local (classroom- or assignment-specific) scales of assessment.
Keywords: sensemaking, multimodal assessment
Contract Grading in a Technical Writing Classroom: A Case Study
The subjectivity of assessing writing has long been an issue for instructors, who carefully craft rubrics and other assessment instruments while students grapple with understanding what constitutes an "A" and how to meet instructor-generated criteria. Prompted by student frustration with traditional grading practices, this case study of a 20-student technical writing classroom in the Northeast employed teacher-as-researcher observation and student surveys to examine how students collaborated to generate criteria for judging the quality of their writing assignments. The study indicates that although students perceive greater involvement in the grading process, they resist participating in crafting criteria as a class and prefer traditional grading by an "expert," considering it a normative part of the grading process. The study concludes with implications for integrating contract grading into the technical writing classroom.
Keywords: technical writing, contract grading, assessment, student feedback
ePortfolios: Foundational Measurement Issues
Using performance information obtained for program assessment purposes, this quantitative study reports the relationship of ePortfolio trait and holistic scores to specific academic achievement measures for first-year undergraduate students. Attention is given to three evidential categories: consensus and consistency evidence related to reliability/precision; convergent evidence related to validity; and score-difference and predictive evidence related to fairness. Interpretative challenges of ePortfolio-based assessments are identified in terms of consistency, convergent, and predictive evidence. Benefits of these assessments include the absence of statistically significant differences in ePortfolio scores across race/ethnicity subgroups. The discussion emphasizes the need for principled design and contextual information as prerequisites to score interpretation and use. The instrumental value of the study suggests that next-generation ePortfolio-based research must be alert to sample size, design standards, replication issues, measurement of fairness, and reporting transparency.
Keywords: ePortfolios, fairness, program assessment, reliability, validity