By Kevin Hart
One of the country’s leading economists is warning that a Gates Foundation study on value-added teacher evaluation not only fails to meet key academic standards but dangerously misinterprets its own data.
Last month, the Gates Foundation released the first report of the Measures of Effective Teaching project, and the report claimed to find strong evidence for a value-added teacher evaluation model, where teachers are evaluated based on student progress on standardized tests.
But an analysis by University of California, Berkeley economist Jesse Rothstein, a former chief economist at the U.S. Department of Labor and a former senior economist for the Council of Economic Advisers, found that the report rested on flawed research and on conclusions contradicted by the study’s own data.
“In fact, the preliminary MET results contain important warning signs about the use of value-added scores for high-stakes teacher evaluations,” Rothstein wrote in his analysis of the research. “These warnings, however, are not heeded in the preliminary report, which interprets all of the results as support for the use of value-added models in teacher evaluation… This limits the report’s value and undermines the MET Project’s credibility.”
Rothstein’s critique of the study, scathing by academic standards, found that some of the correlations presented and conclusions reached in the MET research were “shockingly weak.”
In particular, Rothstein questioned how the report could reach its main conclusion that “a teacher’s past track record of value-added is among the strongest predictors of their students’ achievement gains in other classes and academic years,” when the report did not attempt to study the strength of several other possible predictors.
Rothstein’s analysis seems to validate a concern that many educators had about the report when it was released – that it was designed to reach predetermined conclusions. The conclusions that should have been reached, Rothstein wrote, cast serious doubt on whether a value-added model is useful at all in teacher evaluation.
For example, 40 percent of the teachers who scored in the bottom quartile based on their students’ state standardized test scores actually placed in the top half of teachers when an alternative assessment was used.
That means, Rothstein wrote, that a value-added model based on standardized state test scores is only slightly more reliable than flipping a coin when used to determine whether a teacher is effective.
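The coin-flip comparison comes down to simple arithmetic: if state-test value-added scores carried no information at all, random chance would place 50 percent of bottom-quartile teachers in the top half on a second assessment. The report’s own figure is 40 percent. A minimal sketch of that reasoning (the numbers come from the report as described above; the variable names are illustrative, not from the MET data files):

```python
# If state-test value-added were pure noise, a random 50% of
# bottom-quartile teachers would land in the top half on an
# alternative assessment (a coin flip).
chance_rate = 0.50

# Share of bottom-quartile teachers who actually placed in the
# top half on the alternative assessment, per the MET report.
observed_rate = 0.40

# How much better than a coin flip the state-test ranking performs
# at identifying genuinely weak teachers.
advantage_over_chance = chance_rate - observed_rate
print(f"Advantage over a coin flip: {advantage_over_chance:.0%}")
```

The ranking beats pure chance by only about 10 percentage points, which is the basis for Rothstein’s “slightly more reliable than flipping a coin” characterization.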
“In particular, the correlations between value-added scores on state and alternative assessments are so small that they cast serious doubt on the entire value-added enterprise,” Rothstein wrote.
Previous studies have uncovered serious problems with the reliability of value-added teacher evaluations, and the Gates Foundation data, properly interpreted, raise the same concerns. But, Rothstein wrote, the report did not interpret them properly.
“The [data] analyses do not support the report’s conclusions,” he wrote. “Interpreted correctly, they undermine rather than validate value-added-based approaches to teacher evaluation.”
The real danger, of course, is that the study was heralded by several media outlets and is being used by policymakers and school districts to push for a value-added teacher evaluation model. But using the Gates Foundation research as a basis to push for value-added teacher evaluation is a mistake, Rothstein wrote.
“The design of the MET study … places sharp limits on its ability to inform policy,” Rothstein wrote. He added that it is “especially troubling” that the Gates Foundation is circulating a stand-alone policy brief arguing for value-added evaluations, even though analysis of the MET data does not lend support to the model.
“Even careful readers [of the policy brief] will be unaware of the weak evidentiary basis for its conclusions,” Rothstein wrote.
Value-added teacher evaluations have become a significant topic of debate in public education. The Los Angeles Times last year released a report on the performance of Los Angeles teachers based on value-added data, and a judge recently ruled that New York City media outlets had the right to publish a similar analysis of city teachers.
Rothstein’s review was produced by the National Education Policy Center (NEPC), housed at the University of Colorado at Boulder School of Education, with funding from the Great Lakes Center for Education Research and Practice.