About
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
Please refer to the submission guidelines on this page for information for authors.
Volume 17, Issue 2, 2024
Articles
Editor’s Introduction: The “Accidental California Issue” – Critical Questions about Fairness and Equity in Writing Assessment and Placement
JWA 17.2 features five articles that explore evolving practices and critical questions around fairness and equity in writing assessment and placement. Daniel Gross (2024) examines the implications of construct validity in the discontinuation of the Analytical Writing Placement Examination (AWPE) at the University of California. Julia Voss, Loring Pfeiffer, and Nicole Branch (2024) share how they used interviews from programmatic assessment to understand student learning outcomes in ways that value minoritized students’ experiential knowledge. Edward Comstock (2024) investigates the interplay between self-efficacy and programmatic assessment, emphasizing the value of qualitative methods in evaluating writing programs. Sarah Hirsch, Kenneth Smith, and Madeleine Sorapure (2024) present Collaborative Writing Placement (CWP), a model that partners with students in the placement process. Julie Prebel and Justin Li (2024) critique a first-year writing portfolio assessment through the lenses of equity, curricular design, performance, and reliability.
Construct Validity and the Demise of the Analytical Writing Placement Examination (AWPE) at the University of California: A Tale of Social Mobility
In 2021, the University of California System ended its decades-old timed writing assessment for course placement, due in part to challenges presented by the COVID-19 pandemic. Beyond the practical crisis, however, the event marks a sea change in educational philosophy, away from the universalizing model of cognitive development that dominated in the 1970s and 1980s and toward a concern for social mobility and student self-assessment. The article explores the historical factors that led to this change, including the emergence of the social mobility index as a new method for evaluating student success. It also unpacks UC's discourse on preparatory education and levels of proficiency, emphasizing instead fairness in writing assessment.
Assessment is Constructed and Contextual: Identity, Information Literacy, and Interview-Based Methodologies in the First-Year Writing Classroom
Over the past twenty years, the field of writing assessment has moved from critical theories that question traditional models of validity and objectivity (Huot, 2002; Lynne, 2004) to scholarship that exposes how traditional assessment perpetuates inequality (Inoue, 2015, 2019) and advocates new approaches that take social justice as their central goal (Poe et al., 2018). We report on a collaboration between two writing instructors and one librarian that assessed first-year writing (FYW) students' information literacy when researching and writing with popular news sources. In addition to the typical practice of analyzing students' written work, this project used interviews as an assessment methodology. This research produced three important findings: 1) minoritized students demonstrated superior critical information literacy skills compared to majoritized students; 2) these differences were made visible through the use of multiple measures (written artifacts and interviews); and 3) the use of interviews is an assessment methodology that invites students to engage in counterstory and draw on personal experiences, revealing new sources of knowledge and countering narratives of deficit. Ultimately, we argue that interviews hold promise for antiracist revamping of student learning outcomes as well as assessment practices.
- 1 supplemental ZIP
The Strange Loop of Self-Efficacy and the Value of Focus Groups in Writing Program Assessment
It’s long been presumed that increases in self-efficacy are correlated with other “habits of mind,” including more effective metacognitive strategies that enable writing skills to transfer to different situations. Similarly, it’s long been understood that high self-efficacy is associated with more productive habits of mind and more positive emotional dispositions toward writing tasks. However, this two-year assessment of College Writing classes at a private, mid-sized, urban four-year university complicates these assumptions. By supplementing substantial survey data with the analysis of data collected in focus groups, we found that the development of self-efficacy does not necessarily correlate with the development of more sophisticated epistemological beliefs—beliefs about how learning happens—nor with the development of rhetorically effective “writing dispositions.” In short, suggesting the value of focus groups in assessment, we discovered a “strange loop” of self-efficacy in which gains in self-efficacy frequently have an unanticipated, complex, and problematic relation to our desired learning outcomes.
- 1 supplemental ZIP
Collaborative Writing Placement: Partnering with Students in the Placement Process
This paper will discuss how the Writing Program at the University of California, Santa Barbara “flipped the script” on placement by implementing a model that emphasizes the importance of student voices. Our Collaborative Writing Placement (CWP) shares many similarities with Directed Self-Placement (DSP) in that its instrument consists of survey questions and reflective writing opportunities (Aull, 2021; Gere et al., 2013). But it differs from DSP in that students work with writing faculty in choosing the first-year course that is the best fit for them. Through an examination of our initial data and the first two years of CWP’s implementation, our paper will discuss how CWP offers another avenue for promoting student agency and generating more equitable placement outcomes.
- 1 supplemental ZIP
Multifaceted Equity: Critiquing a First-Year Writing Assessment through Curricular, Performance, and Reliability Lenses
This article examines whether a college’s new portfolio-based first-year writing assessment process is equitable. We build on the existing literature by arguing that equity must be assessed through multiple, complementary facets within the writing assessment ecology. We present and operationalize three lenses through which to examine the equity of a first-year writing assessment process. The curricular lens shows that the new writing assessment is better aligned with, and has improved, the classroom pedagogy of our first-year seminars. The performance lens shows ongoing disparities among students from different demographic backgrounds. Finally, the reliability lens reveals differences in how faculty interpret the writing rubric. We conclude that while the new portfolio-based writing assessment is more equitable, it is also constrained by institutional structures and systems of power that prevent it from being equitable, period.