
Getting Teachers On Board With Evaluations

After nearly a decade of teacher evaluation reform, we are in a position to ask two key questions: How have teachers and administrators responded to the attempts to make teacher evaluation more meaningful? And are the new evaluation systems leading us down a path of improvement?

To explore the answers to those questions, our research team from the University of Southern California and Tulane University studied eight New Orleans schools implementing Louisiana’s Compass multiple-measure teacher evaluation system, which, like many of the new state systems, includes observations rated on a standard rubric, progress toward student learning goals, and value-added measures. Before the Compass system, Louisiana used a very simple observation procedure, which resulted in 99 percent of teachers receiving a “satisfactory” rating.

Several of the schools viewed the new Louisiana evaluation requirements as time-consuming, "top-down" mandates. Teachers reported "going through the motions" but didn't typically change their teaching in response to evaluations. In other schools, staff undertook strategic actions to enhance their evaluation results, such as setting low goals for student learning or modifying their instruction only during observations to get high marks. In these cases, teachers were not focused on improvement but on "gaming the system."

But in three schools the teachers and administrators we interviewed saw the evaluation process as a valuable opportunity for growth and reported using the new, more comprehensive evaluation results to improve instruction, such as learning to better facilitate small group instruction and to integrate a range of low-stakes quizzes and tests into daily instruction.


Why did these schools respond more reflectively? While we can’t offer a definitive recipe, we identified several ingredients that we suspect contributed to the constructive responses.

First, schools that were allowed to contribute to the design of their evaluation systems tended to be more invested in reform. We found that educators' buy-in was stronger in two schools that developed new evaluation systems that included more frequent observations, a much more detailed rubric for assessing classroom instruction, and extensive one-on-one coaching to support teachers in areas identified for growth.

Second, we found that in schools where administrators and teachers shared responsibilities for observing teachers, evaluators had the time to complete more thorough observations and to provide coaching.

Finally, dedicated time for teacher collaboration played a part. Opportunities for teachers to work together helped them address the feedback they received during evaluations. Teachers in one school established goals based on their evaluations—such as improving student performance on interim math tests—and met weekly to review student work, model instructional techniques, and find and discuss new instructional strategies to meet their goals.

Our study suggests that simply tinkering with state evaluation systems may not move the needle. If we want the tremendous investment in improved teacher evaluations to pay off (and, let's be clear, observing, evaluating, providing feedback to, and supporting teachers is labor-intensive work), we need to look beyond minor adjustments to the standards being assessed, the measures used, or even the number of classroom observations required.

Let's engage school educators to help develop the tools and processes. Let's ensure that school leaders have help with their other job responsibilities so they or others have the time to devote to the evaluation process (instruction, after all, is the heart of schooling, isn't it?). And let's guarantee that teachers have regular opportunities to collaborate with peers, receive coaching to respond to feedback, and, ultimately, achieve the standards on which they are being evaluated.

Susan Bush-Mecenas is a PhD candidate and Provost's Fellow at the University of Southern California's Rossier School of Education. Her research interests include organizational learning, district capacity building, and accountability policy.

Julie A. Marsh is an Associate Professor of Education Policy at the University of Southern California's Rossier School of Education who specializes in research on K-12 policy. Her research blends perspectives in education, sociology, and political science.

Their study was published in Educational Evaluation and Policy Analysis, and their research informs a related policy brief.