by Diana Senechal
A new study by the Measures of Effective Teaching (MET) Project, funded by the Bill & Melinda Gates Foundation, finds that students’ perceptions of their teachers correlate with the teachers’ value-added scores; in other words, “students seem to know effective teaching when they experience it.” The correlation is stronger for mathematics than for ELA; this is one of many discrepancies between math and ELA in the study. According to the authors, “outside the early elementary grades when students are first learning to read, teachers may have limited impacts on general reading comprehension.” This peculiar observation should raise questions about curriculum, but curriculum does not come up in the report.
When the researchers combined student feedback and math value-added (from state tests) into a single score, they found that “the difference between bottom and top quartile was .21 student standard deviations, roughly equivalent to 7.49 months of schooling in a 9-month school year.” For ELA, the difference between top- and bottom-quartile teachers was much smaller, at .078 student-level standard deviations.
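The report does not spell out how effect sizes translate into “months of schooling,” but the conversion can be reverse-engineered from the quoted pair of numbers. The sketch below is only a back-of-envelope check: the annual-gain figure is not stated in the report and is inferred here from the .21 SD ≈ 7.49 months equivalence, so treat it as an assumption. Applying the same implied conversion to the ELA gap of .078 SD yields under three months of schooling.

```python
# Back-of-envelope conversion of value-added effect sizes (in student-level
# standard deviations) to "months of schooling" in a 9-month school year.
# The report does not state the typical one-year gain in SD units; here it
# is inferred from the quoted equivalence (0.21 SD ~ 7.49 months), which
# works out to roughly a quarter of a standard deviation per year.

SCHOOL_YEAR_MONTHS = 9
IMPLIED_ANNUAL_GAIN_SD = 0.21 * SCHOOL_YEAR_MONTHS / 7.49  # ~0.25 SD/year (assumption)

def effect_to_months(effect_sd: float) -> float:
    """Convert an effect size in SD units to equivalent months of schooling."""
    return effect_sd / IMPLIED_ANNUAL_GAIN_SD * SCHOOL_YEAR_MONTHS

print(round(effect_to_months(0.21), 2))   # math gap: 7.49 months (by construction)
print(round(effect_to_months(0.078), 2))  # ELA gap: about 2.78 months
```

On this rough yardstick, the top-to-bottom-quartile gap in ELA amounts to less than half a school year of the equivalent gap in math, which is part of what makes the ELA results so puzzling.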
What are students learning in ELA? Beginning in fourth grade, students appear to gain just as much in reading comprehension from April to October as from October to April—that is, the summer months away from school do not seem to affect their gains. According to the researchers, “the above pattern implies that schooling itself may have little impact on standard reading comprehension assessments after 3rd grade.” They posit, somewhat innocently, that “literacy includes more than reading comprehension … It involves writing as well.” The weakness of teacher effects appeared mainly on the state tests: when the researchers administered the written Stanford 9 Open-Ended Assessment in ELA, the teacher effects were larger than those for math.
What explains the relatively low teacher effects on the ELA state tests? The researchers offer two possibilities: (a) teacher effects on reading comprehension are small after the early elementary years and (b) the tests themselves may fail to capture the teachers’ impact on literacy. Both of these hypotheses seem plausible but tangential to the central problem: this amorphous concept of “literacy.” Why should schools focus on “literacy” in the first place? Why not literature and other subjects?
A curious detail may offer a clue to the problem: the correlation between value-added on the state tests and on the Stanford 9 in ELA is low (0.37). That is, teachers whose students see gains on the ELA state tests are not especially likely to see gains on the Stanford 9 as well. (The researchers do not state whether the reverse is true.) The researchers thought some of this might be due to the “change in tests in NYC this year.” When they removed NYC from the analysis, the correlation was significantly higher. (But the New York math tests changed this year as well, and this apparently did not affect things; the correlation in math between value-added on the state tests and on the Balanced Assessment in Mathematics (BAM) is “moderately large” at 0.54.)
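To get an intuitive sense of what a correlation of 0.37 means here, one can ask: if two value-added measures are correlated at r, how often does a teacher land on the same side of the median on both? Under the simplifying assumption that the two scores are jointly normal (an assumption of this illustration, not a claim from the report), Sheppard's quadrant formula gives the answer.

```python
import math

def same_side_probability(r: float) -> float:
    """Probability that two standard bivariate-normal scores fall on the
    same side of their medians, by Sheppard's quadrant formula:
    P(same side) = 1/2 + arcsin(r) / pi.
    Assumes joint normality; purely illustrative."""
    return 0.5 + math.asin(r) / math.pi

# r = 0.37: state-test vs. Stanford 9 value-added in ELA
print(round(same_side_probability(0.37), 2))  # about 0.62
# r = 0.54: state-test vs. BAM value-added in math
print(round(same_side_probability(0.54), 2))  # about 0.68
```

Under that assumption, a teacher above the median on the ELA state test has only about a 62 percent chance of being above the median on the Stanford 9—barely better than a coin flip, which underscores how loosely the two measures of “effective ELA teaching” agree.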
Is it not possible that NYC suffers from a weak or nonexistent ELA curriculum, more so than the other districts in the study? Certainly curriculum should be considered, if an entire district shows markedly different results from the others.
In math, there usually is a curriculum. It may be strong or weak, focused or scattered, but there is actual material that students are expected to learn. In ELA, this may or may not be the case. In schools and districts with a rigorous English curriculum (as opposed to a literacy program), students read and discuss challenging literary works, study grammar and etymology, write expository essays, and more. In the majority of New York City public schools, by contrast, this kind of concrete learning is eschewed; lessons tend to focus on a reading strategy, and students practice the strategy on their separate books. New York City has taken the strategy approach since 2003 (and in some cases much earlier); Balanced Literacy, or a version of it, is the mandated ELA program in most NYC elementary and middle schools. The MET researchers do not consider curriculum at all; they seem to assume that a curriculum exists in each of the schools and that it is consistent within a district.
In short, when analyzing teacher effects on achievement gains, the researchers forgot to ask: achievement of what? This is not a trivial question; the answers could shed light on the value-added results and their implications. It may turn out that the curricular differences are too slight or vague to make a difference, or that they do not significantly affect performance on these particular tests. Or the investigation of such differences may turn the whole study upside down. In any case, it is a mistake to ignore the question.
Diana Senechal taught for four years in the New York City public schools and holds a Ph.D. in Slavic languages and literatures from Yale. Her book, Republic of Noise: The Loss of Solitude in Schools and Culture, will be published by Rowman & Littlefield Education in late 2011.