The New York Times is agog about a move, funded by the Gates Foundation, to “develop a better system for evaluating classroom instruction.” The project involves scrutinizing tens of thousands of hours of classroom teaching to find correlations between particular teaching practices and “student achievement” (higher test scores). “The effort has also become a large-scale field trial of using classroom video, to help teachers improve and to evaluate them remotely,” the Times notes.
“Video lasts,” Dr. McClellan [a director for the Educational Testing Service who is overseeing the process] said, creating possibilities for dialogue among teachers about improving classroom techniques. “Colleagues can watch your video and say, ‘Right here — where you did that — try this next time.’ So the teacher learns a new skill.”
That could happen. Then again, maybe not. A tool — any tool — is only as good as the person using it. Same thing for data, including videotape. It is not an intrinsic good; it is useful only insofar as it is used to diagnose perceptively and respond wisely. And that’s not a sure thing. Talk to a teacher in a struggling school and chances are good they’ve had this conversation, or a variation of it:
“Make sure you have math manipulatives out when the instructional supervisor visits on Thursday. And make sure your students are working in groups.”
“She likes to see group work and manipulatives for differentiation. The research shows it helps kids learn.”
“It doesn’t matter. She wants to see manipulatives and group work. Make sure that’s what she sees in your room.”
Classroom teachers are too often the last recipient in a long game of telephone. Sophisticated and subtle research, larded with variables, caveats and judgment calls, gets passed from the field to journals to trade publications to conferences and seminars to district officials to instructional superintendents to principals to APs and coaches, until it arrives in the classroom teacher’s ear in the form of a bumper sticker, or worse: a civil service compliance item on a checklist. Something we do because “they” want to see it.
Over at his blog, Larry Ferlazzo describes a process of videotaping and professional development as “one of the most significant professional development experiences I’ve had.”
“Our school, led by principal Ted Appel, has begun having Kelly Young, an extraordinarily talented consultant on instructional strategies who we have been working with for years, videotape our lessons (I’ve written much about Kelly in this blog). He then meets with us to review an edited version of the tape, with us initially giving our own critique and reflections followed by his comments. This process is entirely outside of the official evaluation process, and is focused on helping teachers improve their craft.”
Clearly, videotape can be a powerful tool for improving teaching, but so can observations. The issue is whether they’re used, as Larry suggests, to help teachers improve, or as just another “accountability” measure or a piece of evidence in a gotcha game. How soon before this happens:
“Mr. Pondiscio, we’ve reviewed the videotapes of your classroom for the last two weeks, and we noticed you’ve only used math manipulatives twice in your last ten lessons. We’ve also seen three instances of whole-class instruction, including asking students to read out loud, which as you know is an instructional practice we frown upon. I’m afraid we’re going to have to write you up and put a letter in your file.”
The lead of the Times piece points out that over 90% of teachers get top marks in evaluations. The clear implication is that there is something wrong with the tools used to evaluate teachers. But the old saying is true: a sloppy worker blames his tools. The dirty little secret about evaluations is that they are typically used to validate and reinforce the observer’s existing take on a teacher. If your principal or AP likes or values you, you get a good observation. If not, the evaluation becomes a tool to irritate you, harass you, and make you want to seek a job elsewhere.
So videotape teachers. Go nuts. Put an unblinking eye in every classroom. I’m not impressed with state-of-the-art data collection. In the absence of state-of-the-art data interpretation and dissemination, and an honest commitment to improvement, it doesn’t amount to much. If videotape is used to “improve practice,” will the findings be delivered to teachers in a more timely, coherent and useful way than research findings are now? If it is used for evaluation, will it be handled any differently than existing observations, in which what is seen is what the observer wishes to see?
Color me skeptical.