A Visit to the Core Knowledge Auto Body Shop

by Robert Pondiscio
February 14th, 2012

The New York Times offers up a piece about a New York City school that has put building background knowledge at the heart of its curriculum.  P.S. 142, a school in lower Manhattan hard by the Williamsburg Bridge “has made real life experiences the center of academic lessons,” the paper notes, “in hopes of improving reading and math skills by broadening children’s frames of reference.”

“Experiences that are routine in middle-class homes are not for P.S. 142 children. When Dao Krings, a second-grade teacher, asked her students recently how many had never been inside a car, several, including Tyler Rodriguez, raised their hands. ‘I’ve been inside a bus,’ Tyler said. ‘Does that count?’”

This is not a Core Knowledge school, but the teachers and staff clearly understand the critical connection between background knowledge, vocabulary and language proficiency.  The Times describes the school’s “field trips to the sidewalk,” with children routinely visiting parking garages and auto body shops, or examining features of everyday life.

“In early February the second graders went around the block to study Muni-Meters and parking signs. They learned new vocabulary words, like ‘parking,’ ‘violations’ and ‘bureau.’ JenLee Zhong calculated that if Ms. Krings put 50 cents in the Muni-Meter and could park for 10 minutes, for 40 minutes she would have to put in $2. They discovered that a sign that says ‘No Standing Any Time’ is not intended for kids like them on the sidewalk.”
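(A quick sanity check of JenLee’s proportional reasoning, sketched in a couple of lines of Python; the rate comes from the Times excerpt above, and the variable name is mine.)

```python
# A quick check (mine, not the Times') of JenLee's arithmetic:
# 50 cents buys 10 minutes at the Muni-Meter, so the rate is $0.05 per minute.
rate_per_minute = 0.50 / 10
print(rate_per_minute * 40)  # 2.0 -- forty minutes costs the $2 JenLee calculated
```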

The “no standing” example illustrates perfectly how easily a lack of shared references and experiences conspires to thwart comprehension.  It is simply inconceivable that a non-driver would connect the act of balancing on two feet with the act of idling by the curb in a car.  Our language is deeply idiomatic and context driven.  Even a simple word like “shot” means something different on a basketball court, in a doctor’s office, or when the repairman says your dishwasher is “shot.”

Obvious?  Sure it is.  To you. But you’re not a low-income kid who has never sat in a car.  Or stood in one.  These things either need to be taught explicitly or experienced first-hand.

“Reading with comprehension assumes a shared prior knowledge,” the Times notes.  It’s gratifying to see this point rendered as if it’s widely known in our schools.  Still, the piece ends on a bittersweet note.  A local superintendent says he wishes more principals would adopt the program but that they’re fearful. “There is so much pressure systematically to do well on the tests, and this may not boost scores right away,” Daniel Feigelson said. “To do this you’d have to be willing to take the long view.”

The long view should win out simply because there is no short view. At least not one that has been proven effective. Language growth is a slow-growing plant, E.D. Hirsch points out.  There is no shortcut to building the vocabulary and background knowledge that drive comprehension. All the reading strategies instruction in the world can’t compensate.

Here’s my suggestion:  Although I love the phrase, P.S. 142 should immediately stop calling these activities “field trips to the sidewalk.”

Call it “test prep.”  Because that’s what it really is.

A Little More Text, A Little Less Self

by Robert Pondiscio
December 19th, 2011

When studying a story or an essay, is it possible to be too concerned with what the author is saying? In an opinion piece in Education Week, Maja Wilson and Thomas Newkirk complain that the publishers’ criteria for the Common Core State Standards are overly “text dependent,” discouraging students from bringing their own knowledge and opinions to bear on their reading.

Wilson, a former high school English teacher, and Newkirk, a University of New Hampshire English professor, applaud the guidelines’ “focus on deep sustained reading—and rereading.” However, they pronounce themselves “distressed” by the insistence that students should focus on the “text itself.”

“There is a distrust of reader response in this view; while the personal connections and judgments of the reader may enter in later, they should do so only after students demonstrate ‘a clear understanding of what they read.’ Publishers are enjoined to pose ‘text-dependent questions [that] can only be answered by careful scrutiny of the text … and do not require information or evidence from outside the text or texts.’ In case there is any question about how much focus on the text is enough, ‘80 to 90 percent of the Reading Standards in each grade require text-dependent analysis; accordingly, aligned curriculum materials should have a similar percentage of text-dependent questions.’”

Consider me undistressed. If this means less reliance on the creaky crutch that is “reader response” in ELA classrooms, then I’m very nearly overjoyed.

The very worst that can be said about an over-reliance on text-dependent questions is that it’s an overdue market correction. As any teacher can tell you, it’s quite easy to glom on to an inconsequential moment in a text and produce reams of empty “text-to-self” meandering using the text as nothing more than a jumping off point for a personal narrative. The skill, common to most state standards, of “producing a personal response to literature” does little to demonstrate a student’s ability to read with clarity, depth and comprehension.

Indeed, educator, author and occasional Core Knowledge Blog contributor Katharine Beals points out in a response to the piece that Wilson and Newkirk have it precisely backwards: research from cognitive science suggests that making external associations during reading can actually worsen comprehension. She cites a paper by Courtenay Frazier Norbury and Dorothy Bishop which found that “poor readers drew inferences that were distorted by associations from their personal lives. For example, when asked, in reference to a scene at the seashore with a clock on a pier, ‘Where is the clock?’ many children replied, ‘In her bedroom.’”

“Norbury and Bishop propose that these errors may arise when the child fails to suppress stereotypical information about clock locations based on his/her own experience. As Norbury and Bishop explain it: ‘As we listen to a story, we are constantly making associations between what we hear and our experiences in the world. When we hear “clock,” representations of different clocks may be activated, including alarm clocks. If the irrelevant representation is not quickly suppressed, individuals may not take in the information presented in the story about the clock being on the pier. They would therefore not update the mental representation of the story to include references to the seaside which would in turn lead to further comprehension errors.’”

Struggling readers in particular would benefit from a lot more text and a lot less self. As Beals explains, “Text-to-self connections, in other words, may be the default reading mode (emphasis mine) and not something that needs to be taught. What needs to be taught instead, at least where poor readers are concerned, is how not to make text-to-self connections.”

Wilson and Newkirk illustrate their concern about over-reliance on text by describing their preferred way of teaching Nicholas Carr’s 2008 essay from The Atlantic, “Is Google Making Us Stupid?”

“Before assigning the essay, we would have students log their media use for a day (texts, emails, video games, TV, reading, surfing the Internet) and share this 24-hour profile with classmates. We might ask students to free-write and perhaps debate the question: “What advantages or disadvantages do you see in this pattern of media use?” This ‘gateway’ activity would prepare students to think about Carr’s argument. As they read, they’d be mentally comparing their own position with Carr’s. Surely, we want them to understand Carr’s argument, but we’d help them do that by making use of their experiences and opinions.”

It’s critical to understand that this approach to teaching Carr’s essay would not be verboten under CCSS publishing guidelines, which have nothing whatsoever to say about teaching methods. In fact, there’s much to recommend Wilson and Newkirk’s approach. But the test of whether the students understand Carr’s line of argument has nothing to do with the “gateway” activity, which serves mostly as an engaging hook to draw students into Carr’s thesis. Students cannot be said to have understood the piece—or any piece—of writing without the ability to show internal evidence.

Thus if publishers are “enjoined to pose text-dependent questions [that] can only be answered by careful scrutiny of the text,” that is, at heart, not a teaching question; it’s an assessment question that probes whether or not the student understands the text.

All those connections—to our own experience, to other works of literature—make the study of literature thrilling and rewarding. But for those connections to be deep and meaningful requires more than the superficial, paper-thin connections that too often pass for “personal response.”

What often gets lost in our rush to engage young readers and make their reading personally relevant is the simple fact that text has communicative value. When someone commits words to print, they mean to communicate facts, ideas, imagery or opinions. They should expect, if they’ve done their job well, to be understood. Might the reader have a response? Let’s hope so. But unless the reader has understood the author’s words and intent clearly, any response is less than satisfying and may not be particularly relevant as a “response.”

The bottom line: Demonstrating comprehension based on what a text says is not a problem. It’s a baseline skill for any literate human being.

Reading Solution “Hiding in Plain Sight”

by Robert Pondiscio
July 14th, 2011

Sol Stern shines a welcome spotlight on New York City’s Core Knowledge Language Arts (CKLA) pilot program in a Daily News op-ed.  Launched to considerable fanfare under then-Chancellor Joel Klein three years ago, the program has quietly continued in ten low-income elementary schools.  It represents “a ray of reading hope in the city,” says Stern, and one that stands in sharp contrast to other initiatives “including giving cash bonuses to teachers and principals and paying minority children to show up in class and behave.”

Two large (and largely overlooked) problems remain at the root of the reading crisis:  a lack of a coherent elementary school curriculum, and a stubborn insistence on teaching and testing reading comprehension as a how-to “skill.”  Comprehension is highly correlated with general knowledge—the more you know, the greater your ability to read, write, speak and listen with fluency and comprehension.  Thus an essential component of reading comprehension instruction must be a focused commitment to build broad background knowledge in a coherent manner from the earliest days of school–precisely what CKLA seeks to do. Stern elaborates on how the curriculum differs from the dominant approach in most classrooms:

“Fourth-grade reading scores around the country improved somewhat over the past decade thanks to greater emphasis on phonics and word decoding in early grades. But the effect wore off by the eighth grade, as children had to show greater comprehension of more difficult texts. What was missing, E.D. Hirsch believed, was greater attention in the early grades to building students’ background knowledge.  So Hirsch and his foundation created a reading program for the early grades that contained the necessary phonics drills as well as the background knowledge that students need to improve their reading comprehension.”

Perhaps most significantly, the New York City pilot program also includes a study of 10 matched control schools for comparison.  Stern points out that the program has produced stunning results to date:

“After the first year, Klein announced the early results: On a battery of reading tests, the kindergartners in the Core Knowledge program had achieved gains five times greater than those of students in the control group. The second-year study showed that the Core Knowledge kids made reading gains twice as great as those of students in the control group. The results of the third-year study, now that the children have completed second grade, won’t be announced until sometime this autumn, probably at about the same time as the 2011 NAEP reading results are made public. It is probable that the Core Knowledge program will continue to show promising results, while scores on the NAEP eighth-grade reading test will be as stagnant as ever.”

Stern, a senior fellow at the Manhattan Institute and contributing editor at City Journal, where his piece will also appear, argues that New York should keep the program in place, “showing the education authorities that the solution to the city’s reading problem is in plain sight.”

Unfortunately, rationality is usually in short supply at the Department of Education; Klein has moved on, and it’s not clear whether Hirsch’s reading program remains on the department’s agenda. Right now, there’s no guaranteed funding for continuation of the program.

Hacking at Branches

by Robert Pondiscio
July 11th, 2011

“There are a thousand hacking at the branches of evil to one who is striking at the root.”  — Henry David Thoreau

As of Friday, your humble blogger completed a travel jag that had him on the road for all but one week since Memorial Day.  I was pleased to attend the 2011 National Charter Schools Conference in Atlanta, the TEAM CFA conference, and the annual Education Commission of the States Forum in Denver along the way. 

The blogging has been light to non-existent during this stretch, which I regret on the one hand.  But on the other, I’m happy to have had an excuse to sit on the sidelines during the ongoing rhetorical summer heat wave.  Like another July battle 150 years ago, lines have been drawn, and the big guns have come out to boom and blast at each other from fixed positions, losing sight now, as they did then, of the fact that what unites us ought to be more important than what divides us.  All wars end eventually, and common purpose, one hopes, will one day be restored to the combatants in the “education wars” — a dispiriting term being tossed about with greater frequency of late.

Speaking at the ECS conference was a particular privilege.  I was pinch-hitting for E.D. Hirsch on the topic “What is holding back reading achievement?” and addressed the need for state-level education and elected officials to understand the problems embedded in the skills-driven, how-to approach to teaching reading comprehension that dominates elementary education.  The main message:  reading comprehension is not a skill (despite how we typically teach it and test it), and a vision of education reform that does not account for the absolute necessity to build student knowledge and vocabulary as a means of enhancing reading comprehension tacitly encourages poor classroom practice.

Hack, hack, hack…

Turning Decoders Into Readers

by Robert Pondiscio
June 9th, 2011

I’m a fan of PBS’s John Merrow.  He is the rare television journalist—perhaps the only one—who has the interest, background and sufficient airtime to give thorny education topics the nuanced treatment they deserve.  The other night he devoted nearly ten minutes of PBS’s NewsHour to an intriguing question:  Can a good school have bad test scores?  To answer it, he and his producer Cat McGrath visited P.S. 1, a South Bronx school that appears to be filled with bright, eager learners and devoted teachers, yet is “failing” as judged by its terrible reading scores.  What’s going on?

“We discovered that the FIRST graders at that school were reading confidently and competently,” Merrow writes on his blog, “but the fourth graders weren’t, according to the results of the state test. Is this a paradox, or a full-blown contradiction?”  Merrow attempted to figure out where things go off the rails between first and fourth grades–an earnest but ultimately frustrating piece that correctly diagnoses the problem, but fails to uncover or sufficiently examine its root causes.

Merrow starts by correctly pointing out that there is a big difference between “reading” in the first grade and “reading” in fourth grade.  Indeed, they’re hardly the same activity. Observing a phonics lesson in a first grade classroom, he points out that “Ms. Hunt’s students seem to be getting it. What they are doing is called decoding, but decoding is only half the battle. Understanding what the words mean is a much harder skill called comprehension. It’s where many children fall flat.”

For starters, comprehension is not a “skill” at all.  Your ability to read with comprehension depends on many things.  You must be able to decode.  You must know all (or nearly all) of the words.  And you must know at least a little about the subject matter of the text to construct a mental model that allows you to make meaning correctly.  “My dog is sleeping on the couch” is easy to understand; “My Havanese is snoozing on the divan” means more or less the same thing, as long as you know about dogs and furniture, and understand that “snooze” is a synonym for sleep.

Only 18% of P.S. 1’s 4th graders are reading at or above grade level.  The good decoders have failed to become strong readers. What happened?  One 4th grade teacher says the children’s home lives start to take a toll.

“They’re not as innocent anymore. They’re realizing the things that are affecting their schoolwork. You know, I mean, I have homeless students in my room. I have students with fathers in jail. There’s drugs. So, that obviously comes into play at a certain point as well.”

Another 4th grade teacher suggests the grind of test prep and test anxiety is the issue.  “The system takes the fun out of reading,” observes Brenda Cartagena.  “I want them to read for enjoyment. I want them to grab that book because it’s fun. I tell them, reading, you travel, you meet new friends, you learn how to do new things. But it’s very difficult, you know? They take the joy out. And it’s hard to infuse it back.”

Full disclosure: I spent a significant amount of time talking to producer McGrath as she and Merrow prepared the piece.  I stressed the importance of vocabulary and background knowledge, and how reading comprehension, unlike decoding, is not a transferable “skill” at all.  How the tests children take are de facto tests of general knowledge.  To what degree, I wondered, does the instruction these South Bronx kids receive reflect that?  Having taught at a school a few blocks away in the same district, I suspected the answer is “not at all.”  To their credit, Merrow and McGrath looked at the tests.  Merrow writes on his blog:

“We looked over past tests, and, sure enough, the passages were about subjects that poor kids in the south Bronx may not be familiar with (cicadas or dragonflies were two of the subjects, for example). Answering the questions did require inferential leaps, just as we had been told.

“So we asked to talk with a couple of fourth graders who were reading below grade level, and here’s where it got complicated.  As you will see in the NewsHour piece, both children, one age 9 and the other 11, handled the passages and answered all the questions. Maybe the personal attention helped, but they read easily and drew inferences correctly. We only ‘tested’ a couple of kids, but both were below grade-level, their teacher assured us.”

Again, did they “read easily?”  Or did they decode easily?  And I’m not as confident as Merrow that they “drew inferences correctly.”  Here’s what viewers saw Monday night on the NewsHour:

JOHN MERROW: I wondered how the fourth-grade class might perform on the state test this year, and asked Ms. Cartagena to send me two of her students who were reading below grade level.

Jeannette, who is 9, came first.

STUDENT: So far, I have hoped to find many new species.

JOHN MERROW: I asked her to read a passage about dragonflies from last year’s state test.

STUDENT: About 5,500 dragonfly species buzz around the world. Who doesn’t like — like looking at these amazing insects?

JOHN MERROW: What are species?

BRENDA CARTAGENA: Many kinds.

JOHN MERROW: Kinds. It’s kinds of species. Right. Exactly. Yes.

Exactly right?  It is impossible to know, based on this exchange, whether the child understands “species” as well as Merrow assumes, or whether she has a sufficient grasp of what a dragonfly is to apply the concept.  As a teacher, I’d want to probe more for understanding: “If you’re looking at two dragonflies, how can you tell if they are different species?” you might ask.  If she said they might be different colors or have different shaped wings, I’d feel reasonably confident that she understands the basic idea.  If she said “one’s male and one’s female” or couldn’t explain the difference at all, then the concept is still shaky, or she might not know enough about dragonflies to apply it. Either way, it would affect her ability to draw inferences and make meaning from the passage.

Given that the achievement gap long predates test-driven accountability, you could sensibly argue that testing makes the problem worse, but it cannot be the root cause.  Similarly, the idea that “real life catches up with kids” by 4th grade is unsatisfying.  If reading comprehension were a skill like riding a bike or throwing a ball through a hoop (it’s not), it would not be an ability you suddenly lose because your father was sent to prison or you were evicted from your home.

What Merrow either didn’t probe or didn’t air is what, exactly, the instruction given to these children in 2nd, 3rd and 4th grade looks like.  Are they being steeped in a content-rich curriculum that would make it less likely that concepts like cicadas, dragonflies and species would be unfamiliar at test time?  Or is the school operating, as most do, on the incorrect assumption that reading comprehension is a transferable skill?  That decoding + engagement + content-free reading strategies is enough to guarantee success? When this formula fails, as it inevitably must, it is natural to point to outside factors like poverty, fractured families and test anxiety as root causes.  These things certainly work against student engagement and achievement, but they are clearly not the root cause of the failure.

Merrow is due a lot of credit for taking a nuanced view of reading and asking the right question: why doesn’t early decoding success automatically turn into comprehension success?  But ultimately the piece doesn’t provide the answer.

The MET Research Paper: Achievement of What?

by Guest Blogger
December 19th, 2010

by Diana Senechal

A new study by the Measures of Effective Teaching (MET) Project, funded by the Bill and Melinda Gates Foundation, finds that students’ perceptions of their teachers correlate with the teachers’ value-added scores; in other words, “students seem to know effective teaching when they experience it.” The correlation is stronger for mathematics than for ELA; this is one of many discrepancies between math and ELA in the study. According to the authors, “outside the early elementary grades when students are first learning to read, teachers may have limited impacts on general reading comprehension.” This peculiar observation should raise questions about curriculum, but curriculum does not come up in the report.

When the researchers combined student feedback and math value-added (from state tests) into a single score, they found that “the difference between bottom and top quartile was .21 student standard deviations, roughly equivalent to 7.49 months of schooling in a 9-month school year.” For ELA, the difference between top and bottom quartile teachers was much smaller, at .078 student-level standard deviations.
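To put those two numbers side by side, here is a back-of-the-envelope sketch of the conversion between standard deviations and months of schooling. The conversion factor is simply the one implied by the report’s own equivalence (0.21 SD ≈ 7.49 months over a 9-month year); applying it to the ELA figure is my extrapolation, not a calculation the MET report performs.

```python
# Back-of-the-envelope conversion of effect sizes to "months of schooling,"
# using only the equivalence quoted above (0.21 SD ~= 7.49 months in a 9-month year).
# The ELA conversion below is an extrapolation, not a figure from the MET report.
implied_annual_gain_sd = 0.21 * 9 / 7.49   # ~0.25 SD of growth per school year

def sd_to_months(effect_sd: float) -> float:
    return effect_sd / implied_annual_gain_sd * 9

print(round(sd_to_months(0.21), 2))    # 7.49 months (math, by construction)
print(round(sd_to_months(0.078), 2))   # ~2.78 months (ELA, extrapolated)
```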

What are students learning in ELA? Beginning in fourth grade, students appear to gain just as much in reading comprehension from April to October as from October to April—that is, the summer months away from school do not seem to affect their gains. According to the researchers, “the above pattern implies that schooling itself may have little impact on standard read­ing comprehension assessments after 3rd grade.” They posit, somewhat innocently, that “literacy includes more than reading comprehension … It involves writing as well.” The lack of teacher effects applied mainly to the state tests;  when the researchers administered the written Stanford 9 Open-Ended Assessment for ELA, the teacher effects were larger than for math.

What explains the relatively low teacher effects on the ELA state tests? The researchers offer two possibilities: (a) teacher effects on reading comprehension are small after the early elementary years and (b) the tests themselves may fail to capture the teachers’ impact on literacy. Both of these hypotheses seem plausible but tangential to the central problem: this amorphous concept of “literacy.” Why should schools focus on “literacy” in the first place? Why not literature and other subjects?

A curious detail may offer a clue to the problem: the correlation between value-added on state tests and the Stanford 9 in ELA is low (0.37). That is, teachers whose students see gains on the ELA state tests are not very likely to see gains on the Stanford 9 as well. (The researchers do not state whether the reverse is true.) The researchers thought some of this might be due to the “change in tests in NYC this year.” When they removed NYC from the equation, the correlation was significantly higher. (But the New York math tests changed this year as well, and this apparently did not affect things—the correlation for math between the state and BAM value-added is “moderately large” at 0.54.)

Is it not possible that NYC suffers from a weak or nonexistent ELA curriculum, more so than the other districts in the study? Certainly curriculum should be considered, if an entire district shows markedly different results from the others.

In math, there usually is a curriculum. It may be strong or weak, focused or scattered, but there is actual material that students are expected to learn. In ELA, this may or may not be the case. In schools and districts with a rigorous English curriculum (as opposed to a literacy program), students read and discuss challenging literary works, study grammar and etymology, write expository essays, and  more. In the majority of New York City public schools, by contrast, this kind of concrete learning is eschewed; lessons tend to focus on a reading strategy, and students practice the strategy on their separate books. New York City has taken the strategy approach since 2003 (and in some cases much earlier); Balanced Literacy, or a version of it, is the mandated ELA program in most NYC elementary and middle schools. The MET researchers do not consider curriculum at all; they seem to assume that a curriculum exists in each of the schools and that it is consistent within a district.

In short, when analyzing teacher effects on achievement gains, the researchers forgot to ask: achievement of what? This is not a trivial question; the answers could shed light on the value-added results and their implications. It may turn out that the curricular differences are too slight or vague to make a difference, or that they do not significantly affect performance on these particular tests. Or the investigation of such differences may turn the whole study upside down. In any case, it is a mistake to ignore the question.

Diana Senechal taught for four years in the New York City public schools and holds a Ph.D. in Slavic languages and literatures from Yale. Her book, Republic of Noise: The Loss of Solitude in Schools and Culture, will be published by Rowman & Littlefield Education in late 2011.

Confirmation Bias: When Educators Underestimate Children

by Robert Pondiscio
November 10th, 2010

Guest blogger Katharine Beals, PhD, is the author of “Raising a Left-Brain Child in a Right-Brain World: Strategies for Helping Bright, Quirky, Socially Awkward Children to Thrive at Home and at School.”  She teaches at the University of Pennsylvania Graduate School of Education and at the Drexel University School of Education, specializing in the education of children on the autistic spectrum.  She blogs about education at Kitchen Table Math and on her own blog, Out in Left Field.

By Katharine Beals

Why underestimate what children understand?

Recent anecdotes from parents and recommendations from educators suggest that the underestimation of American children is alive and well in the world of K-12 education. In particular, more and more teachers and education experts  seem convinced that kids don’t really understand the words they read or the numbers they manipulate nearly as well as their parents claim they do. Thus, one mother learns from her daughter’s 2nd grade teacher that her child doesn’t understand the chapter books she’s been reading for pleasure since kindergarten. She should be reading picture books instead. Another mother learns that the multi-digit arithmetic that her 3rd grade son has been doing since preschool is mere calculation, devoid of conceptual understanding. He should be doing simpler calculations using manipulatives and repeated addition.

How, and why, have so many educators become so skeptical about children’s understanding?

How to become skeptical is child’s play. Simply ask the child a question that ostensibly probes comprehension, but is either vague enough, open-ended enough, or verbally challenging enough that the child is unlikely to give the “correct” answer: What is that? What is it about? Why did you do that? If further probing seems necessary, ask equally difficult follow-up questions.

Ground-breaking math education theorist Constance Kamii  has shown how this works with place value in particular:

1. Show the child a number like this: 27

2. Place your finger on the left-most digit and ask the child what number it is.

3. When the child answers “two” rather than “twenty,” immediately conclude that he or she doesn’t understand place value.

4. Banish from your mind any suspicion that a child who can read “27” as “twenty-seven” might simultaneously (a) know that the “2” in “27” is what contributes the value of twenty to twenty-seven and (b) be assuming that you were asking about “2” as a number rather than about “2” as a digit.

How might you convince yourself that a 3rd grader doesn’t understand multi-digit arithmetic? Why not tap into her immature verbal skills? Ask her to explain how she subtracted 562 from 831. When she stumbles, ignore any suspicion that articulating why one borrowed from the 8 in the hundreds place and reduced the 8 to a 7 is beyond the verbal skills of your typical 8- or 9-year-old.
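For the record, here is a minimal sketch (mine, not Beals’s) of the column subtraction the child is being asked to put into words, with the regrouping steps the paragraph describes called out in comments.

```python
# Standard column subtraction with borrowing (regrouping), traced for 831 - 562.
# Illustrative sketch only; assumes the minuend is at least as large as the subtrahend.
def column_subtract(minuend: int, subtrahend: int) -> int:
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).zfill(len(top))]
    digits = []
    for i in range(len(top) - 1, -1, -1):   # work right to left, one column at a time
        if top[i] < bottom[i]:              # can't subtract in this column...
            top[i] += 10                    # ...so borrow ten from the next column
            top[i - 1] -= 1                 # e.g. the 8 in the hundreds place becomes a 7
        digits.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(digits)))

print(column_subtract(831, 562))  # 269
```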

How might you convince yourself that a 2nd grader doesn’t understand his above-grade level chapter book? Here, a sufficiently open-ended question may do the trick. Ask him what the book is about, or what will happen next, or how the text relates to himself. Then interpret any hesitation, stumbling, vagueness, or reluctance to respond as an unequivocal sign of deficient comprehension. Dismiss any suspicion that this line of reasoning implies that a teenager who answers “What did you do today?” with “I don’t know” doesn’t comprehend his day.

Perhaps less obvious is why some educators seem determined to underestimate understanding. Here are a couple of possibilities. First, doing so may level the range of apparent abilities in a class of twenty-odd children. Parents might think their children are ahead academically, but if they don’t really understand what they are doing, there’s less pressure to provide them with an accelerated curriculum. There’s also less of an apparent achievement gap to be troubled by.

Underestimating comprehension may also serve to avoid or postpone teaching harder material that, frankly, can be a pain in the neck to teach. Believing that children don’t understand place value, for example, gives you an excuse not to teach those pesky standard algorithms of arithmetic. Why? Because if children don’t understand place value, then they can’t understand borrowing and carrying (regrouping), let alone column multiplication and long division. And unless they understand how these procedures work from the get-go, educators claim (though mathematicians disagree), using them will permanently harm their mathematical development.

What’s particularly striking about this underestimation is how much it seems to have permeated the establishment’s take even on those children it identifies as “gifted.” For example, at the recent New England Conference on the Gifted and Talented, most of the math talks expressed concerns about children’s comprehension of place value, advocated the use of manipulatives in place of abstract math, or both. The mathematically gifted kids I know, however, grasp place value and other aspects of arithmetic with only minimal exposure to manipulatives, and quickly advance to higher levels of abstraction by the time they hit first or second grade.

So, indeed, do children in other developed countries around the world (see examples on my blog, Out in Left Field here, here and here)–whether or not we’d consider them “mathematically gifted.”

To stop holding our students back relative to their international peers, we need to stop asking them the wrong questions. Sometimes, indeed, no questions are necessary. If a child enjoys reading a particular book, then even if she fails to tell you what it’s about, she probably has a reasonable understanding of its content.  If his multi-digit calculations are error-free, then even if he can’t clearly explain his steps in words, he probably has a reasonable understanding of his calculations. Comprehension may not be perfect—when is it ever so? — but the fact that it may need refinement is reason to encourage a child forward, not to stand in his or her way.

Data-Driven…Off a Cliff

by Robert Pondiscio
October 20th, 2010

Miami English teacher Roxanna Elden makes a compelling case for how “data-driven instruction” can be misleading and self-defeating.  Writing at Education Next, Elden describes a nonfiction passage about owls on a practice test for the state’s FCAT: Which of the owls’ names is the most misleading? Is it the screech owl, “because its call rarely approximates a screech”? Or is it the long-eared owl, “because its real ears are behind its eyes and covered by feathers”?

Each question on the practice test supposedly corresponds to a specific reading skill or benchmark. “Teachers are supposed to discuss test results in afterschool ‘data chats’ and then review weak skills in class,” Elden writes.  Like so:

First Teacher: Well, it looks like my students need some extra work on benchmark LA.910.6.2.2: The student will organize, synthesize, analyze, and evaluate the validity and reliability of information from multiple sources (including primary and secondary sources) to draw conclusions using a variety of techniques, and correctly use standardized citations.

Second Teacher: Mine, too! Now let’s work as a team to help students better understand this benchmark in time for next month’s assessment.

Third Teacher: I am glad we are having this “chat.”

Forget for a moment that people only speak like this after they fall asleep next to a pod.   Here’s how Elden’s actual “data chat” went:

First Teacher: My students’ lowest area was supposedly synthesizing information, but that benchmark was only tested by two questions. One was the last question on the test, and a lot of my students didn’t have time to finish. The other question was that one about the screech owl having the misleading name, and I thought it was kind of confusing.

Second Teacher: We read that question in class and most of my students didn’t know what approximates meant, so it really became more of a vocabulary question.

Third Teacher: Wait … I thought the long-eared owl was the one with the misleading name.

Language arts teachers, Elden points out, “know that answering comprehension questions correctly does not rest on just one benchmark.”  That may work for math, but, she correctly observes, “reading is different.”

“After students have mastered basics like decoding, reading cannot be taught through repeated practice of isolated skills. Students must understand enough of a passage to utilize all the intricately linked skills that together comprise comprehension. The owl question, for example, tests skills not learned from isolated reading practice but from processing information on the varying characteristics of animal species. (The correct answer, by the way, is the screech owl.)”

Data-driven instruction says teach the skill?  Well, data-driven instruction is wrong.  Reading is not a transferable skill with components that can be separated like an egg yolk from the egg white. Comprehension is a function of interwoven skill, prior knowledge and vocabulary.   Expecting teachers to tease out a specific skill from the question Elden cites is like asking them to separate the yolk from a scrambled egg.

“Unfortunately, strict adherence to data-driven instruction can lead schools to push aside science and social studies to drill students on isolated reading benchmarks. Compare and contrast, for example, is covered year after year in creative lessons using Venn diagrams. The result is students who can produce Venn diagrams comparing cans of soda, and act out Venn diagrams with Hula-Hoops, but are still lost a few paragraphs into a passage about owls. When they do poorly on reading assessments, we pull them again from subjects that give them content knowledge for more review of Venn diagrams. Many students learn to associate reading with failure and boredom.”

The expectation that teachers should use data in a way that belies what we know about reading is a prime example of what Rick Hess called The New Stupid – “a reflexive and unsophisticated reliance on a few simple metrics.”

“It’s impossible to teach kids to read well while denying them the knowledge they need to make sense of complex material,” Elden concludes. “Following the data often forces teachers to do just that.”

Yet Another Study to Ignore

by Robert Pondiscio
August 5th, 2010

Another blow to metacognitive reading strategies.

A study by a team from the University of York in the U.K. sought to learn which of three interventions led to lasting improvement among 8- and 9-year-olds with reading comprehension difficulties.  One intervention relied heavily on reading strategies; a second emphasized vocabulary and relied exclusively on spoken language; the third blended the two approaches.  Science Daily reports that the children were assessed before the program began and nearly a year after it ended.

“The results showed that while all three of the training programs helped to improve reading comprehension, the largest long-term gains occurred for children who were in the oral language training group.  According to the authors, ‘The [oral language] and [combined] groups also showed improvements in knowledge of the meanings of words that they had been taught and these improvements, in turn, helped to account for these children’s improved reading comprehension skills.’”

Among those least surprised by the findings:  the developers of the Core Knowledge Language Arts program, which has been piloted in New York City and elsewhere with promising results.  The program relies heavily on building vocabulary and content knowledge via a “listening and learning” component.  Interestingly, children in the oral language group showed greater lasting gains than the blended group, which suggests “the total amount of time devoted to oral-language training may be crucial for overcoming reading-comprehension difficulties.”

“Deficits in oral vocabulary may be one important underlying cause of children’s reading-comprehension problems,” the study concludes.

Just so.  In fact, there’s so much evidence for this, I predict this is exactly the kind of thing DOE will throw millions at when the i3 grants are announced…Er…what?  Last night?   Who??  You’re kidding.  Seriously?!?

I keep forgetting that DOE already knows what works for kids.  It has nothing to do with curriculum. Right. 

OK, folks, show’s over.  Nothing more to see here.  Everybody go on back to your homes.

There’s No Such Thing as a Reading Test

by Robert Pondiscio
June 16th, 2010

“Children who do not learn to read proficiently by the end of third grade are unlikely ever to read at grade level,” writes Sara Mead in the July/August issue of The American Prospect.  The issue features a special section titled “Reading By Grade Three” that examines the crisis in early childhood literacy.  In addition to Sara’s piece, which lays out the case for national action on early childhood literacy, Cornelia Grumman, executive director of the First Five Years Fund, looks at the need to get kids off to a good start before formal schooling even begins; the New America Foundation’s Lisa Guernsey cautions against letting the clear need for improved early literacy translate into classrooms that are all skills and no play; Gordon MacInnes of The Century Foundation describes how providing low-income kids with “stable, high-quality preschool and kindergarten” has made a difference in New Jersey.  Lots more good reads; the whole package can be found here.  E.D. Hirsch and I contributed a piece as well, a version of which is below.  It looks at how a fundamental misconception about the nature of reading leads to mischief in how we teach and test it.

There’s No Such Thing as a Reading Test
E.D. Hirsch Jr. and Robert Pondiscio

It is among the most common nightmares.  You dream of taking a test for which you are completely unprepared, having never studied or even attended the course.  For millions of American schoolchildren, however, it is a nightmare from which they cannot wake, a Kafkaesque trial visited upon them each year when they are required by law to take a reading test with little preparation.  Eyebrows are already being raised.  Not prepared!?  Why, preparing for reading tests has become more than just an annual ritual for schools.  It is practically their raison d’être!

Schools and teachers may indeed be making a Herculean effort to raise reading scores, but paradoxically these efforts do little to improve reading achievement or to prepare children for college, career and a lifetime of productive, engaged citizenship.  The effort is wasted not because, as many would have it, our teachers are lazy or of low quality.  Rather, too many of our schools labor under fundamental misconceptions about reading comprehension: how it works, how to improve it, and how to test it.

Reading is not a Skill

Reading, like riding a bike, is an ability we acquire as children and generally never lose.  Some of us are more confident on two wheels than others and some of us, we believe, are better readers than others.  We view reading ability as a broad, generalized skill that is easily measured and assessed.  We judge our schools and increasingly individual teachers based on their ability to improve the reading ability of our children.  When you think about your ability to read—if you think about it at all—the chances are good that you perceive it as not just a skill, but a readily transferable skill.  Once you learn how to read you can competently read a novel, a newspaper article, or the latest memo from corporate headquarters.  Reading is reading is reading.  Either you can do it, or you cannot. 

This view of reading is only partially correct. The ability to translate written symbols into sounds, commonly called “decoding,” is indeed a skill that can be taught and mastered.  This explains why you are able to “read” nonsense words such as “rigfap” or “churbit.”  Once a child masters “letter-sound correspondence,” or phonics, we might say she can “read” since she can reproduce the sounds represented by written language.  But clearly there’s more to reading than making sounds.  To be fully literate is to have the communicative power of language at your command—to read, write, listen and speak with understanding.  As nearly any elementary school teacher can attest, it is possible to decode skillfully yet struggle with comprehension.  And reading comprehension, the ability to extract meaning from text, is not transferable.

Cognitive scientists describe comprehension as “domain specific.”  If a baseball fan reads “A-Rod hit into a 6-4-3 double play to end the game” he needs not another word to understand that the New York Yankees lost when Alex Rodriguez came up with a man on first base and one out; he hit a groundball to the shortstop, who threw to the second baseman, who relayed to first in time to catch Rodriguez for the final out.  If you’ve never heard of A-Rod or a 6-4-3 double play and cannot reconstruct the game situation, you are not a poor reader.  You merely lack the domain specific knowledge of baseball to fill in the gaps.  

Even simple texts, like the ones our children read on their all-important reading tests, are filled with gaps—presumed domain knowledge—that the writer assumes the reader knows.  Research also tells us that familiarity with domain knowledge increases fluency, broadens vocabulary (you can pick up words in context), and enables deeper reading and listening comprehension.

A simple model, then, would be to think of reading as a two-lock box, requiring two keys to open. The first key is decoding skills. The second key is oral language, vocabulary and domain-specific or background knowledge sufficient to understand what is being decoded.   Even this simple understanding of reading enables us to see that the very idea of an abstract skill called “reading comprehension” is ill-informed.  Yet most U.S. schools teach reading as if both decoding and comprehension are transferable skills (more on that in a moment). Worse, we test our children’s reading ability without regard to whether or not we have given them the requisite background knowledge they need to be successful.    

Who is a “good reader?”

Researchers have consistently demonstrated that in order to understand what you’re reading, you need to know something about the subject matter.  Students who are identified as “poor readers” often comprehend with relative ease when asked to read passages on familiar subjects, outperforming even “good readers” who lack relevant background knowledge.  One well-known study looked at junior high school students judged to be either good or poor readers in terms of their ability to decode or read aloud fluently. Some knew a lot about baseball, while others knew little. The children read a passage written at an early 5th-grade reading level, describing the action in a game. As they read, they were asked to move models of ballplayers around a replica baseball diamond to illustrate the action in the passage. If reading comprehension were a transferable skill that could be taught, practiced and mastered, then the students who were “good” readers should have had no trouble outperforming the “poor” readers.  In fact (and perhaps intuitively) just the opposite happened.  Poor readers with high content knowledge outperformed good readers with low content knowledge.  Such findings should challenge our very idea of who is or is not a good reader: if reading is the means by which we receive ideas and information, then the good reader is one who best understands the author’s words.

You have probably experienced the uncomfortable sensation of feeling like a poor reader when struggling to understand a new product warranty, directions for installing a computer operating system, or some other piece of writing where your lack of background knowledge left you feeling out of your depth.  Your rate of reading slows.  You find yourself repeating sentences over and over to make sure you understand.  If this happens only rarely to you, it is because you possess a broad range of background knowledge—the more you know, the more you are able to communicate and comprehend. The implications of this insight for teaching children to read should be obvious: The more domain knowledge our children receive, the more capable readers they will become.

The message has not reached American classrooms, however.  A stubborn belief in reading comprehension as a transferable skill, combined with the immense pressures of testing and accountability, has created something like a perfect storm—ever more time is being wasted (and wasted is not too strong a word) on scattered, trivial and incoherent reading.  A study sponsored by the National Institute of Child Health and Human Development found that only four percent of 1st grade class time in American elementary schools is spent on science, and two percent on social studies.  In third grade, about five percent of class time goes to each of these subjects.  Meanwhile, a whopping 62 percent of class time in 1st grade and 47 percent in 3rd is spent on language arts.

Most young American children spend anywhere from 90 minutes to two-and-a-half hours a day in something educators call the “literacy block,” an extended period which might include reading aloud, small group “guided reading,” independent writing, and other activities aimed at increasing children’s verbal skills.  Reading instruction largely focuses on teaching and practicing all-purpose “reading comprehension strategies”—helping students to find the main idea of a passage, make inferences or identify the author’s purpose.  The general idea is to arm young readers with a suite of all-purpose tricks and tips for thinking about reading that can be applied to any text the child may encounter.  Careful readers may be thinking, “If the ability to understand what you read is a function of your domain-specific background knowledge, then how is it possible to teach all-purpose reading strategies?”  It’s a question well worth asking.  And one that seldom is.

Reading strategies figured prominently in the 2000 report of the National Reading Panel, and reading strategies work, to a point. Reading comprehension scores tend to go up after instruction in strategies, but it’s a one-time boost.  The major contribution of such instruction is to help beginning readers know that text, like speech, is supposed to make sense.  If someone says something you don’t understand, you can always ask that person to repeat, explain, or give an example.  Reading strategies offer similar workarounds for print.  They’re not useless, but repeated practice seems to have little or no effect.

“The mistaken idea that reading is a skill—learn to crack the code, practice comprehension strategies and you can read anything—may be the single biggest factor holding back reading achievement in the country,” wrote Daniel T. Willingham, Professor of Psychology at the University of Virginia, recently in the Washington Post.  “Students will not meet standards that way. The knowledge base problem must be solved.”

Tests Worth Teaching To

Once the connection between content and comprehension becomes clear, two conclusions come almost unbidden.  The first: our present system of testing reading ability is inherently unfair.  Since reading comprehension is not a transferable skill, unless we ensure that all children have access to the same body of knowledge, the student with the greater store of background knowledge will always have a strong advantage.  The second conclusion is even worse: the content-neutral way that we teach reading, as a discrete set of skills and strategies, is counterproductive and even irresponsible.

If our schools understood and acted upon the clear evidence that domain-specific content knowledge is foundational to literacy, reading instruction might look very different in our children’s classrooms.  Rather than idle away precious hours on trivial stories or randomly chosen nonfiction, reading, writing and listening instruction would be built into the study of ancient civilizations in first grade, for example, Greek mythology in second, or the human body in third.  Recently, the Core Knowledge Foundation has been piloting precisely such a language arts program in a small number of schools in New York City and elsewhere.  Initial results are promising; however, it is crucial to remember that building domain knowledge is a long-term proposition.  All reading tests are cumulative.  The measurable benefit of broad background knowledge can take years to reveal itself.

At present, teachers are tacitly discouraged from taking the long view.  Indeed, what incentive would a 2nd grade teacher have to emphasize content that might not show up on a test until 6th grade, if even then?  There is more upside for teachers in doing exactly what they chiefly do now – test prep, skills and strategies – unless we actively incentivize a domain-specific approach to language arts.

Let us propose a reasonable, simple, even elegant alternative to replace the vicious circle of narrowed curriculum and comprehension skills of limited efficacy, which over time depress reading achievement.  By tying the content of reading tests to specific curricular content, the circle becomes virtuous.  Here’s how it would work:  let’s say a state’s 4th grade science standards include the circulatory system, atoms and molecules, electricity, Earth’s geologic layers and weather; its social studies standards include world geography, Europe in the Middle Ages, the American Revolution and the U.S. Constitution, among other domains.  The state’s reading tests should include not just fiction and poetry, but nonfiction readings on those topics and others culled from those specific curriculum standards.  Teachers would still teach to the test, emphasizing domain-specific knowledge (because it might be on the test), but no one would object, since it would help students not only pass the current year’s test but also build the broad background knowledge that enables them to become stronger readers in general.

The benefits of such “curriculum-based reading tests” would be many:  tests would be fairer, and a better reflection (teacher quality advocates take note) of how well a student had learned the particular year’s curriculum.  The tests would also exhibit “consequential validity,” meaning they would actually improve education.  Instead of wasted hours of mind-numbing test prep and reading strategy lessons of limited value, the best test-taking strategy would be to spend time learning the material in the curriculum standards—a true virtuous circle.

By contrast, let’s imagine what it is like to be a 4th grade boy in a struggling South Bronx elementary school, sitting for a high-stakes reading test.  If you do not pass, you are facing summer school or repeating the grade.  Because the school has large numbers of students below grade level, it has drastically cut back on science, social studies, art, music—even gym and recess—to focus on reading and math.  You have spent the year learning and practicing reading strategies.  Your teacher, worried about her performance, has relentlessly hammered test-taking strategies for months.

The test begins and the very first passage concerns the customs of the Dutch colony of New Amsterdam.  You do not know what a custom is; neither do you know who the Dutch were, or even what a colony is.  You have never heard of Amsterdam, old or new.  Certainly it’s never come up in class.  Without background knowledge, you struggle with most of the passages on the test.  You never had a chance.  Meanwhile, across town, more affluent students take and pass the test with ease. They are no brighter or more capable than you are, but because they have wider general knowledge—as students who come from advantaged backgrounds so often do—the test is not much of a challenge.  Those who think reading is a transferable skill and take their background knowledge for granted may well wonder what all the fuss is about.  Those kids and teachers in the Bronx struggle all year and fail to get ready for this? Why, all the answers are right there on the page!

It ends, as it inevitably must, in the finger pointing that plagues American education.  Do not blame the tests.  Taxpayers are entitled to know if the schools they support are any good, and reading tests, all things considered, are quite reliable.  Do not blame the test writers.  They have no idea what topics are being taught in school and their job is done when tests show certain technical characteristics. It is unfair to blame teachers, since they are mainly operating to the best of their ability using the methods in which they were trained.  And let’s certainly not blame the parents of our struggling young man in the South Bronx.  Is it unreasonable to assume that a child who dutifully goes to school every day will gain access to the same rich, enabling domains of knowledge that more affluent children take for granted?

It’s not unreasonable at all.  That’s what schools are supposed to be for.  The only thing unreasonable is our refusal to see reading for what it really is, and to teach and test reading accordingly.