Et tu, Yglesias?

by Robert Pondiscio
June 16th, 2011

Oh no he didn’t!

American Progress pundit Matthew Yglesias commits the unpardonable sin of repeating what has to be the most wrong-headed idea in all of education: that teaching kids content can wait until they’ve learned the “skill” of reading.  He wraps up a column on the just-released NAEP history scores with this jaw-dropper:

“What we’re seeing, in particular, is that trying to teach history to kids who can’t read is a fool’s errand. Focusing more clearly on making sure that kids aren’t falling behind in their core skills is helping the worst-off kids do better across the board even at history.”

Teacher/blogger Rachel Levy sets Yglesias straight, so I don’t have to.

Turning Decoders Into Readers

by Robert Pondiscio
June 9th, 2011

I’m a fan of PBS’s John Merrow.  He is the rare television journalist—perhaps the only one—who has the interest, background and sufficient airtime to give thorny education topics the nuanced treatment they deserve.  The other night he devoted nearly ten minutes of PBS’s NewsHour to an intriguing question:  Can a good school have bad test scores?  To answer it, he and his producer Cat McGrath visited P.S. 1, a South Bronx school that appears to be filled with bright, eager learners and devoted teachers, yet is “failing” as judged by its terrible reading scores.  What’s going on?

“We discovered that the FIRST graders at that school were reading confidently and competently,” Merrow writes on his blog, “but the fourth graders weren’t according to the results of the state test. Is this a paradox, or a full-blown contradiction?”  Merrow attempted to figure out where things go off the rails between first and fourth grades–an earnest, but ultimately frustrating piece that correctly diagnoses the problem, but fails to uncover or sufficiently examine its root causes.

Merrow starts by correctly pointing out that there is a big difference between “reading” in the first grade and “reading” in fourth grade.  Indeed, they’re hardly the same activity. Observing a phonics lesson in a first grade classroom, he points out that “Ms. Hunt’s students seem to be getting it. What they are doing is called decoding, but decoding is only half the battle. Understanding what the words mean is a much harder skill called comprehension. It’s where many children fall flat.”

For starters, comprehension is not a “skill” at all.  Your ability to read with comprehension depends on many things.  You must be able to decode.  You must know all (or nearly all) of the words.  And you must know at least a little about the subject matter of the text to construct a mental model that allows you to make meaning correctly.  “My dog is sleeping on the couch” is easy to understand; “My Havanese is snoozing on the divan” means more or less the same thing, as long as you know about dogs and furniture and understand that “snooze” is a synonym for “sleep.”

Only 18% of P.S. 1’s 4th graders are reading at or above grade level.  The good decoders have failed to become strong readers. What happened?  One 4th grade teacher says the children’s home lives start to take a toll.

“They’re not as innocent anymore. They’re realizing the things that are affecting their schoolwork. You know, I mean, I have homeless students in my room. I have students with fathers in jail. There’s drugs. So, that obviously comes into play at a certain point as well.”

Another 4th grade teacher suggests the grind of test prep and test anxiety is the issue.  “The system takes the fun out of reading,” observes Brenda Cartagena.  “I want them to read for enjoyment. I want them to grab that book because it’s fun. I tell them, reading, you travel, you meet new friends, you learn how to do new things. But it’s very difficult, you know? They take the joy out. And it’s hard to infuse it back.”

Full disclosure: I spent a significant amount of time talking to producer McGrath as she and Merrow prepared the piece.  I stressed the importance of vocabulary and background knowledge, how reading comprehension, unlike decoding, is not a transferable “skill” at all, and how the tests children take are de facto tests of general knowledge.  To what degree, I wondered, does the instruction these South Bronx kids receive reflect that?  Having taught at a school a few blocks away in the same district, I suspected the answer was “not at all.”  To their credit, Merrow and McGrath looked at the tests.  Merrow writes on his blog:

“We looked over past tests, and, sure enough, the passages were about subjects that poor kids in the south Bronx may not be familiar with (cicadas or dragonflies were two of the subjects, for example). Answering the questions did require inferential leaps, just as we had been told.

“So we asked to talk with a couple of fourth graders who were reading below grade level, and here’s where it got complicated.  As you will see in the NewsHour piece, both children, one age 9 and the other 11, handled the passages and answered all the questions. Maybe the personal attention helped, but they read easily and drew inferences correctly. We only ‘tested’ a couple of kids, but both were below grade-level, their teacher assured us.”

Again, did they “read easily?”  Or did they decode easily?  And I’m not as confident as Merrow that they “drew inferences correctly.”  Here’s what viewers saw Monday night on the NewsHour:

JOHN MERROW: I wondered how the fourth-grade class might perform on the state test this year, and asked Ms. Cartagena to send me two of her students who were reading below grade level.

Jeannette, who is 9, came first.

STUDENT: So far, I have hoped to find many new species.

JOHN MERROW: I asked her to read a passage about dragonflies from last year’s state test.

STUDENT: About 5,500 dragonfly species buzz around the world. Who doesn’t like — like looking at these amazing insects?

JOHN MERROW: What are species?

BRENDA CARTAGENA: Many kinds.

JOHN MERROW: Kinds. It’s kinds of species. Right. Exactly. Yes.

Exactly right?  It is impossible to know, based on this exchange, whether the child understands “species” as well as Merrow assumes, or whether she has a sufficient grasp of what a dragonfly is to apply the concept.  As a teacher, I’d want to probe more for understanding: “If you’re looking at two dragonflies, how can you tell if they are different species?” you might ask.  If she said they might be different colors or have different shaped wings, I’d feel reasonably confident that she understands the basic idea.  If she said “one’s male and one’s female,” or couldn’t explain the difference at all, then the concept is still shaky, or she might not know enough about dragonflies to apply it. Either way, it would affect her ability to draw inferences and make meaning from the passage.

Given that the achievement gap long predates test-driven accountability, you could sensibly argue that testing makes the problem worse, but it cannot be the root cause.  Similarly, the idea that “real life catches up with kids” by 4th grade is unsatisfying.  Even if reading comprehension were a skill like riding a bike or throwing a ball through a hoop (it’s not), it is not an ability you would suddenly lose if your father were sent to prison or you were evicted from your home.

What Merrow either didn’t probe or didn’t air is what, exactly, the instruction given to these children in 2nd, 3rd and 4th grade looks like.  Are they being steeped in a content-rich curriculum that would make it less likely that concepts like cicadas, dragonflies and species would be unfamiliar at test time?  Or is the school operating, as most do, on the incorrect assumption that reading comprehension is a transferable skill?  That decoding + engagement + content-free reading strategies is enough to guarantee success? When this formula fails, as it inevitably must, it is natural to point to outside factors like poverty, fractured families and test anxiety as root causes.  These things certainly work against student engagement and achievement, but they are clearly not the cause of failure.

Merrow is due a lot of credit for taking a nuanced view of reading and asking the right question: why doesn’t early decoding success automatically turn into comprehension success?  But ultimately the piece doesn’t provide the answer.

Study Finds Lectures Worth Insulting

by Guest Blogger
June 2nd, 2011

by Diana Senechal

I am fond of the old-fashioned lecture. It gives me something to sink into, something to think about. It’s often supplemented with discussions and labs, so students don’t just sit and listen. If it is taught well, it can be intriguing, even rousing, even lingering. I remember those packed lecture halls in college, and other superb lecture courses as well.

But I must defer to research-based research. Research has just shown that certain research-based methods bring greater learning gains in physics than the lecture approach. Sarah D. Sparks describes the study in an Education Week blog, but I got curious and decided to read the report for myself (Science, May 13, 2011, available by subscription or purchase only).

Yes, indeed. Researchers at the University of British Columbia in Vancouver conducted a week-long experiment near the end of a year-long physics course. They found—

Wait—for a week? Near the end of a full year?

Don’t interrupt. This blog doesn’t get interactive until I’m done.

Yes, ahem, as I was saying, the students had been taking a lecture course in physics. The lectures were supplemented throughout the year with labs, tutorials, recitations, and assignments. In week 12 of the second semester, the researchers conducted an experiment with two of the three sections of this course. There was a control section (267 students) and an experimental section (271 students).  The instructor of the control section continued teaching through lectures. The instructors of the experimental section used “deliberate practice”—in this case, “a series of challenging questions and tasks that require the students to practice physicist-like reasoning and problem solving during class time while provided with frequent feedback.”

The experimental group did much better than the control group on the test, which was administered in the first class session of week 13. All students were informed that this test would not affect their grade but would serve as good practice for the final exam. (Wait—what? —No interruptions. This is your second warning.) In the control section, 171 of the 267 students (64 percent) attended class on the day of the test; 211 out of the 271 students in the experimental section (78 percent) attended. The control section scored an average of 41 percent on the test; the experimental section, 74 percent. Victory for experimental things! Students in both sections took an average of 20 minutes to complete the test. (All this stir over a twenty-minute quizzy-poo that doesn’t affect the grade? —I’ve already warned you. If you interrupt again, I’m calling your parents.)

The researchers state confidently at the end:

“In conclusion, we show that use of deliberate practice teaching strategies can improve both learning and engagement in a large introductory physics course as compared with what was obtained with the lecture method. Our study compares similar students, and teachers with the same learning objectives and the same instructional time and tests. This result is likely to generalize to a variety of postsecondary courses.”

Or, as they put it succinctly in the abstract: “We found increased student attendance, higher engagement, and more than twice the learning in the section taught using research-based instruction.”

I am convinced. It doesn’t matter that all of the students had been learning through lecture, lab, tutorial, and recitation all year long. What matters is what happened in this one week. The present is now. What happened was magical. There was learning. Even more learning in the experimental group—oh, much more—than in the control group. What this means—if you can just hold your horses for a moment—I’m telling you, I’m serious, I’ve got my cell phone here—what this means is that we should expand the findings to other courses. We should expand them everywhere! We should get rid of lectures altogether, or, at the very least, insult them.

Sarah D. Sparks seems to agree with the researchers: “While the study focused only on one section of college students, it gives yet more support for educators moving away from lecture-based instruction.” (One does this just as one might slide away from a misfit at a party.) According to Sparks, this study suggests that “interactive learning can be more than twice as effective as lecturing.” Take that, lecture!

Well, anything can be anything, except when it can’t. But that isn’t the point. The point is that lots of people are excited about this, and we really shouldn’t let them down. If I were to be reasonable about it, I’d suggest that “deliberate practice” of this sort works well when students already have a strong foundation. They need to know what they’re practicing. To get rid of the lectures would be simply reckless. But why be reasonable? Insulting can be fun. Bad lecture! Good experiment! More effective! Chopped thoughts! Research-based!

Diana Senechal’s book, Republic of Noise: The Loss of Solitude in Schools and Culture, will be published by Rowman & Littlefield Education in November 2011.