Complex Assessments, Teacher Inferences, and Instructional Decision-Making
The purpose of this study is to understand how teachers make sense of data from a complex set of reading assessments and how the inferences they derive from those data affect decisions about instruction. Two key research questions guide the study: 1) How does a teacher's developing expertise with a complex set of assessments (assessment literacy) affect the quality of the inferences they make about student learning? 2) How does the extent of a teacher's Pedagogical Content Knowledge affect the range and quality of instructional decisions made on the basis of assessment results?
Six English teachers from a middle school and a high school in Northern California administered a set of reading assessments constructed from core elements of the reading domain. The resulting data were organized to facilitate analysis of how teacher thinking progressed with respect to each of the reading dimensions and to assessment and instruction. Measures of assessment literacy, a series of coded interviews, and professional development sessions with the teachers provided data about the development of their Pedagogical Content Knowledge (PCK) of reading. The teachers' ability to make sense of this complex set of student reading data in making instructional decisions was examined over time.
The skill and sophistication of the teachers' PCK of reading, and of their interpretation of the assessment evidence, increased as they delved deeper into close analysis of both the reading measures and the individual student data. The results have implications for a teacher-based "community of judgment" as a key element in effective school accountability systems.