This dissertation explores conceptual and methodological issues related to measuring implementation in quantitative educational research. Implementation, defined as the enactment of interventions in educational contexts, plays a pivotal role in understanding how programs operate and affect student outcomes. Despite its importance, there is significant variability in how implementation is conceptualized, measured, and accounted for in research, reflecting the complex relationship between interventions, schools, and outcomes.

The study examines these issues through a two-part approach. First, I conduct a systematic review to analyze the frameworks and constructs used to define implementation in a sample of grants awarded by the Institute of Education Sciences (IES). The review highlights the tensions between fidelity (adherence to program design) and adaptation (modifications to the original design). It also assesses the data collection instruments, methods of data reduction, and analytical strategies used to incorporate implementation into outcome evaluations.
Second, the dissertation uses data from the implementation of the Success for All (SFA) program to illustrate the practical consequences of methodological choices. Using correlations and hierarchical linear modeling, I estimate the association between different operationalizations of implementation and student outcomes. I also explore how accounting for implementation affects estimates of the program’s impact, with a particular focus on English Language Learner (ELL) and Special Education (SPED) students. The findings indicate that while higher implementation fidelity is associated with improved outcomes for some groups, it may not benefit others equally, raising concerns about potential inequities.
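To give a sense of the modeling framework referenced above, the display below sketches a minimal two-level hierarchical linear model of the kind commonly used for such analyses, with students nested within schools and a school-level implementation measure entering the intercept equation; the notation is illustrative and does not reproduce the dissertation’s exact specification.

\[
\begin{aligned}
\text{Level 1 (student } i \text{ in school } j\text{):}\quad & Y_{ij} = \beta_{0j} + r_{ij}, \qquad r_{ij} \sim N(0, \sigma^2) \\
\text{Level 2 (school } j\text{):}\quad & \beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{Impl}_{j} + u_{0j}, \qquad u_{0j} \sim N(0, \tau_{00})
\end{aligned}
\]

Here \( \mathrm{Impl}_{j} \) stands for whichever operationalization of implementation is adopted for school \( j \), so that comparing estimates of \( \gamma_{01} \) across operationalizations makes the consequences of the measurement choice visible.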
This work underscores the need for nuanced approaches to measuring and accounting for implementation that reconcile fidelity and adaptation frameworks to better understand the schools and classrooms where interventions are delivered. In the context of school improvement, it is especially important to recognize that adaptations to program design reflect teacher agency and should not be conceptualized solely as deviations from the intended intervention.