As the availability of student data has increased, value-added measures (VAMs) have been developed to estimate a teacher's causal effect on student test scores. Over the past several years, policymakers have incorporated such measures into decisions about teacher evaluation, promotion, compensation, and dismissal. Yet there are concerns about the validity and reliability of VAMs for high-stakes personnel decisions. To probe these limitations, researchers Marianne Bitler, Sean Corcoran, Thurston Domina, and Emily Penner use student data from New York City and apply VAMs to student height, an outcome that teachers cannot plausibly affect.
They discuss the results of their analysis in Teacher Effects on Student Achievement and Height: A Cautionary Tale. The estimated teacher "effects" on height are nearly as large as those on math and reading achievement. These results can be explained by the typically small number of students used to estimate an individual teacher's impact, which produces large sampling variation, also known as "noise." The spurious height effects largely disappear when researchers pool multiple years of data or use models that adjust for this imprecision. The researchers caution against using value-added measures without multiple years of data and without models that account for such noise.
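The noise mechanism can be illustrated with a small simulation. In this sketch (not the authors' actual model, just an illustrative assumption), each student's outcome is pure random variation with no teacher influence at all, and a teacher's "effect" is estimated as the class mean. With one class of about 25 students, the estimated effects spread widely across teachers; pooling many more students per teacher shrinks that spread toward zero.

```python
import random
import statistics

random.seed(0)

def estimated_teacher_effects(n_teachers, students_per_teacher):
    """Simulate an outcome (like height) that no teacher influences:
    each student's standardized score is pure noise with SD 1.
    The estimated 'teacher effect' is the class mean, so any spread
    across teachers is sampling variation, not a real effect."""
    effects = []
    for _ in range(n_teachers):
        scores = [random.gauss(0, 1) for _ in range(students_per_teacher)]
        effects.append(statistics.mean(scores))
    return effects

small = estimated_teacher_effects(200, 25)    # one class of ~25 students
large = estimated_teacher_effects(200, 250)   # ~ten years of classes pooled

# The spread of estimated "effects" shrinks with more students per
# teacher (the standard error of a mean falls as 1/sqrt(n)), even
# though the true effect of every teacher is exactly zero.
print(round(statistics.stdev(small), 3))
print(round(statistics.stdev(large), 3))
```

This mirrors the paper's point: single-year "effects" on an outcome teachers cannot change look sizable purely by chance, and averaging over multiple years of data damps them.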