As 2013 closed out, the education world was roiled by yet another controversy over the calculation and interpretation of statistical data used to govern teachers and school services.
This controversy, coming to us from the nation’s capital, involved, according to the report in The Washington Post, “Faulty calculations of the ‘value’ that D.C. teachers added to student achievement in the last school year.”
“The evaluation errors,” noted reporter Nick Anderson, “underscore the high stakes of a teacher evaluation system that relies in part on standardized test scores to quantify the value a given teacher adds to the classroom.”
This controversy falls into a long line of previous ones stretching across the year. Now that the results from tests are being used to judge just about anything having to do with education, debates over education policy have become an endless back-and-forth over whether the data are reliable and what, if anything, they reveal.
Whether it’s “white suburban moms” disputing their children’s standardized test results or pundits parsing out the meaning of PISA, the nation has descended into a heated crossfire over the impact and relevance of education statistics brandished by “reform” advocates.
While these arguments rage over the relevance of test scores in policy making, some are now questioning, to use the operative phrase in Anderson’s sentence above, whether it’s even possible or preferable “to quantify the value” in education.
The whole idea that teaching and learning is a pursuit that can be expressed and judged by numbers and rankings, which seems to be a foregone conclusion to policy makers and economists, is increasingly an unsettled matter to most Americans. What they see instead looks more and more like a nation turning its back on the well-being of students – especially those who are most in need.
The Impact Of IMPACT?
The reported problems with D.C.’s teacher evaluation system are just the latest example of the problems that occur when test data become a source for policy direction.
The mistake affected 44 teachers, or about 10 percent of the faculty to whom the calculations apply. But the overall effect is far more significant when taking into account the numbers of students who are linked to each teacher.
Further, any report of flaws with the teacher evaluations in D.C. is apt to reverberate across the country. The district’s system, known as IMPACT, was created under the administration of Michelle Rhee and has been touted by education advocates aligned with Rhee as a model for the nation.
As the Post’s Valerie Strauss, who also reported on the IMPACT controversy, noted, “Such evaluation has become a central part of modern school reform … In some places around the country, teachers received evaluations based on test scores of students they never had.”
The Truth Behind TUDA?
The reported problems with IMPACT came on the heels of yet another statistical data dump from the week before.
That statistical disgorge is known as the Trial Urban District Assessment, or TUDA, which analyzed the performance of students in some large cities that took part in the National Assessment of Educational Progress.
The education reporter for The Huffington Post, Joy Resmovits, noted, “Washington, D.C. – a standard bearer for what’s known as the education reform movement since former school chancellor Michelle Rhee’s tumultuous tenure at D.C. Public Schools – was the only city to show score increases in both grades in both subjects since 2011.”
So Michelle Rhee’s organization, StudentsFirst, immediately issued a press release claiming D.C. schools as one of the “bright spots” that show “what we can learn” from TUDA. First among the lessons was, you guessed it, IMPACT.