“Value-added models” (VAMs) purport to identify a teacher’s causal effect from data on students’ test scores. But when students are not randomly assigned to teachers, some teachers may be penalized and others rewarded based on the students they teach rather than on their own effectiveness.

In a recent paper, “Revisiting the impact of teachers,” I show that even the most sophisticated VAMs remain substantially biased. I also show that recent conclusions that high-VAM teachers have dramatic effects on students’ long-run outcomes are unsupported: student sorting accounts for much, if not all, of these teachers’ apparent long-run effects, and sorting-adjusted long-run impacts of high-VAM teachers cannot be distinguished from zero. Policies that use VAM scores for teacher evaluation will need to account for the biases that sorting introduces, and will have more modest effects – if any – on students’ long-run outcomes than has been claimed.
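The sorting problem can be seen in a toy simulation. The sketch below is my own illustration, not the paper’s model: two teachers with identical (zero) true effects, and a deliberately naive value-added estimate (the classroom’s mean end-of-year score). Under random assignment the estimated gap between the teachers is near zero; when higher-ability students are sorted to one teacher, the same estimator rewards that teacher for her students rather than her teaching.

```python
# Toy illustration (not the paper's model): with non-random sorting,
# a naive value-added estimate credits teachers for the students
# they are assigned, not for their own effectiveness.
import random

random.seed(0)

N = 1000  # students per teacher (approximate, under sorting)
TRUE_EFFECT = {"A": 0.0, "B": 0.0}  # both teachers are equally effective

def naive_vam(sorted_assignment: bool) -> dict:
    """Return the naive VAM estimate (mean end-of-year score) per teacher."""
    scores = {"A": [], "B": []}
    for _ in range(2 * N):
        ability = random.gauss(0, 1)
        if sorted_assignment:
            teacher = "A" if ability > 0 else "B"  # high-ability students -> A
        else:
            teacher = random.choice("AB")          # random assignment
        # End-of-year score: student ability + (zero) teacher effect + noise.
        score = ability + TRUE_EFFECT[teacher] + random.gauss(0, 0.5)
        scores[teacher].append(score)
    return {t: sum(s) / len(s) for t, s in scores.items()}

random_vam = naive_vam(sorted_assignment=False)
sorted_vam = naive_vam(sorted_assignment=True)
gap_random = random_vam["A"] - random_vam["B"]
gap_sorted = sorted_vam["A"] - sorted_vam["B"]
print(f"estimated A-B gap, random assignment: {gap_random:+.2f}")
print(f"estimated A-B gap, sorted assignment: {gap_sorted:+.2f}")
```

Real VAMs condition on prior scores and other covariates rather than using raw means, but the paper’s point is that even those richer controls do not fully remove bias of this kind when sorting is on factors the model does not observe.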