Value-added modeling is a research methodology designed to be used at the building or district level to gauge the effectiveness of instructional strategies, materials, and other aspects of how a school teaches. It was never designed to be used at the individual teacher level. In a briefing from leading education researchers to policymakers, three reasons were given why this is a bad idea:
1: Value-added models of teacher effectiveness are highly unstable
Researchers have found that teachers’ effectiveness ratings differ substantially from class to class and from year to year, as well as from one statistical model to the next...
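To see why small, noisy samples produce unstable ratings, here is a minimal sketch (not any official value-added model): a single hypothetical teacher with a fixed "true effect" on score gains, measured through one class of 25 students per year. The class size, effect size, and noise level are all assumed for illustration.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 2.0   # hypothetical true contribution to student score gains
CLASS_SIZE = 25     # assumed class size
STUDENT_SD = 10.0   # assumed spread of student-level gains unrelated to the teacher

def yearly_estimate():
    """Estimate the teacher's effect as the mean gain of one year's class."""
    gains = [random.gauss(TRUE_EFFECT, STUDENT_SD) for _ in range(CLASS_SIZE)]
    return statistics.mean(gains)

# The same teacher, ten different years: the estimates swing widely
# even though the true effect never changes.
estimates = [yearly_estimate() for _ in range(10)]
print([round(e, 1) for e in estimates])
print("range:", round(min(estimates), 1), "to", round(max(estimates), 1))
```

With realistic student-level noise, the year-to-year estimates for an identical teacher span several points, which is exactly the instability the researchers describe.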
2: Teachers’ value-added ratings are significantly affected by differences in the students who are assigned to them.
VA models require that students be assigned to teachers randomly. However, students are not randomly assigned to teachers – and statistical models cannot fully adjust for the fact that some teachers will have a disproportionate number of students who have greater challenges (students with poor attendance, who are homeless, who have severe problems at home, etc.) and those whose scores on traditional tests may not accurately reflect their learning (e.g. those who have special education needs or who are new English language learners).
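The non-random-assignment problem can be sketched the same way. In this hypothetical, two teachers have identical true effects, but teacher B's class carries an unmeasured "challenge penalty" (the kind of disadvantage the briefing lists) that the statistical model cannot see; all numbers are assumptions for illustration.

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 2.0        # both teachers are equally effective
CHALLENGE_PENALTY = 4.0  # assumed unmeasured drag on teacher B's students
CLASS_SIZE = 25
STUDENT_SD = 3.0         # assumed student-level noise

def class_mean_gain(penalty):
    """Mean measured gain for one class, after an unobserved penalty."""
    gains = [random.gauss(TRUE_EFFECT - penalty, STUDENT_SD)
             for _ in range(CLASS_SIZE)]
    return statistics.mean(gains)

teacher_a = class_mean_gain(0.0)
teacher_b = class_mean_gain(CHALLENGE_PENALTY)
print("teacher A:", round(teacher_a, 2), " teacher B:", round(teacher_b, 2))
# A naive value-added comparison ranks A above B despite equal true effects.
```

Because the penalty attaches to the students, not the teaching, the ranking reflects who was assigned to whom rather than who taught better.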
3: Value-added ratings cannot disentangle the many influences on student progress
It is impossible to fully separate out the influences of students’ other teachers, as well as school conditions, on their reported learning. No single teacher accounts for all of a student’s learning. Prior teachers have lasting effects, for good or ill, on students’ later learning, and current teachers also interact to produce students’ knowledge and skills.
It looks like policymakers and others are starting to see the problem with using value-added measures. The Ohio Senate introduced a bill this week that lowers the weight of value-added measures in evaluations for teachers in grades 4-8 from 50% to 35%. TechCrunch.com (a site dedicated to covering the tech side of the news and startup culture) has posted an article about the LA Times shaming teachers. The TechCrunch author discusses a couple of problems with the use of this data.
Data is good, and data is important, but data based only on standardized testing should not be the only data informing your decisions. Note that I didn't say driving your decisions.