Chetty, Friedman, and Rockoff (2014a, 2014b) study value-added (VA) measures of teacher effectiveness. CFR (2014a) exploits teacher switching as a quasi-experiment, concluding that student sorting creates negligible bias in VA scores. CFR (2014b) finds that VA scores are useful proxies for teachers’ effects on students’ long-run outcomes. I successfully reproduce each study’s results in North Carolina data. But I find that the quasi-experiment is invalid: teacher switching is correlated with changes in student preparedness. Adjusting for this, I find moderate bias in VA scores, perhaps 10–35% as large, in variance terms, as teachers’ causal effects. The long-run results are sensitive to the choice of controls and cannot support strong conclusions.