Second Thoughts on Pineapplegate

by Robert Pondiscio
May 4th, 2012

Writing in his TIME Magazine column, Andy “Eduwonk” Rotherham offers up a largely exculpatory take on Pineapplegate.  The media jumped all over a bowdlerized version of the test passage, he notes.  New York state officials should have been clearer in explaining that nothing makes its way onto standardized tests by accident.  And in the end, Andy writes, what is needed is “a more substantive conversation rather than a firestorm” over testing.

Very well, let’s have one.

In the unlikely event you haven’t heard, a minor media frenzy was ignited a few weeks back when the New York Daily News got hold of a surreal fable, loosely modeled on the familiar tale of the Tortoise and the Hare, which appeared on the just-administered New York State 8th grade reading test.  In the test passage, a talking pineapple challenges a hare to a foot race in front of a group of woodland creatures, loses the race (the pineapple’s lack of legs proving to be a fatal competitive disadvantage)  and gets eaten by the other animals.

Rotherham points out that the passage picked up by the paper was not the actual test passage, but a second-hand version plucked from an anti-testing website. “The passage the paper ran was so poorly written that it would indeed have been inexcusable,” he wrote.  Perhaps, but the correct passage wasn’t exactly a model of clarity and coherence either.  Indeed, the fable’s author mocked the decision by the testing company, Pearson, to create multiple choice questions about his story on a state test.  “As far as I am able to ascertain from my own work, there isn’t necessarily a specifically assigned meaning in anything,” Daniel Pinkwater told the Wall Street Journal. “That really is why it’s hilarious on the face of it that anybody creating a test would use a passage of mine, because I’m an advocate of nonsense. I believe that things mean things but they don’t have assigned meanings.”

Ultimately the real version of the test passage was released by the state to quiet the controversy.  But it did little to reverse the impression that this was a questionable measure of students’ ability.  Rotherham’s big “get” in Time is a memo from Pearson to New York State officials detailing the question’s review process as well as its use on other states’ tests as far back as 2004.  The message:  nothing to see here, folks.  Show’s over.  Go on back to your schools, sharpen those No. 2 pencils and get ready for more tests.

“Standardized tests are neither as bad as their critics make them out to be nor as good as they should be,” Rotherham concludes.  Perhaps, but they’re bad enough.  The principal problem, which Pineapplegate underscores vividly, is that we continue to insist on drawing conclusions about students’ reading ability based on a random, incoherent collection of largely meaningless passages concocted by test-makers utterly disconnected from what kids actually learn in school all day.  This actively incentivizes a form of educational malpractice, since reading tests reinforce the mistaken notion that reading comprehension is a transferable skill and that comprehension is independent of subject matter.  But we know this is not the case, as E.D. Hirsch and Dan Willingham have pointed out time and again, and as we have discussed on this blog repeatedly.

So this is not a simple case of an uproar based on bad information and sloppy damage control.  What Rotherham misses in a somewhat strident defense of standardized tests and testing is that we are suffering generally from a case of test fatigue. The entire edifice of reform rests on testing, and while the principle of accountability remains sound, the effects of testing on schools have proven deleterious, to put it charitably. Thus the conditions were ripe for people to overreact to perceived absurdity in the tests. And that’s exactly what happened here.

Was the story blown out of proportion by some people playing fast and loose with the facts?  Perhaps.  But the facts, once they became clear, were more than bad enough.

15 Comments

  1. Thanks for this post, Robert. Yesterday, I made my Beginning English Language Learner class take the regular state English test. It is one of the more morally questionable acts I’ve done in my life.

    Comment by Larry Ferlazzo — May 4, 2012 @ 4:50 pm

  2. I don’t have a problem with using a nonsense story. The issue to me was taking questions that were like a book-group discussion-starter and giving them absolute right/wrong answers. That’s so wrong that every state should fire Pearson now and sue for a full refund. (Well, that should happen anyway.)

    The fact that it was a nonsense story (though that’s technically irrelevant to the issue) helped make the point effectively to the public, despite the mega-PR firepower behind the testing advocates and the total absence thereof on the test critics’ side.

    Just for starters, it’s obviously likely that a smart kid would overthink the “pineapples don’t have sleeves” point by noting that having “something up (one’s) sleeve” doesn’t refer to a literal sleeve.

    The other thing I personally view as deplorable is having some functionary at Pearson World HQ rewrite a highly regarded author’s story. That actually has drawn insufficient criticism amid the brouhaha.

    Comment by CarolineSF — May 4, 2012 @ 5:05 pm

  3. Quick aside: “…plucked from an anti-testing website.” Let me guess. Monty, Lisa, and Robert S from the People’s Republic of Fair Test? This is the same crowd that lists over 800 “colleges/universities” that do not require the use of the SAT test in their admissions process…until you get a gander at their “list.” Over 40 ITT Techs? Colleges and universities? Then examine some of the other non-Ivys on their “list.” Talk about a doozie!

    Missing for me here and in a number of articles on “Pineapplegate” (Core Knowledge reference here, folks: if you are not familiar with contemporary history, you have no clue why “gate” is added to “pineapple”) is the absurdity, not so much of the passage, which as DP asserts himself is nonsense, but of the use of such a passage as a connection to a teacher’s evaluation. While I am a strong advocate of the value added by a teacher to their students’ state test scores, I find “nonsense” items on the test, well…NONSENSE, especially if they’re going to be connected in any way to high stakes for teachers.

    Comment by Paul Hoss — May 5, 2012 @ 6:55 am

  4. The Pineapple story made for a good laugh, and it’s a great example of the wrongheadedness of trying to assess reading comprehension as an all-purpose skill that can be applied to any text.

    But how can we assess the reading abilities of students so that the public isn’t hoodwinked by self-serving Edworld employees? Think of the high school “graduates” 20 years ago who couldn’t read a simple newspaper article, or the retired pro football player sitting in an elementary classroom learning to read, even though he played football at a four-year college.

    I’m a mere layman – are there actually meaningful assessments that can’t be gamed by teachers and administrators?

    Comment by John Webster — May 5, 2012 @ 9:56 am

  5. @ John:

    I love standardized tests – just not the ones that purport to test reading and writing ability. Bring on the history, science, Spanish, literature, math, and geography tests (so long as they’re linked to content-specific standards).

    The problem with the reading tests is that they’re trying to test something that does not exist.

    Comment by Ponderosa — May 5, 2012 @ 11:34 am

  6. “The problem with the reading tests is that they’re trying to test something that does not exist.”

    Damn, I wish I’d said that. But I will from now on!

    Comment by Robert Pondiscio — May 5, 2012 @ 12:30 pm

  7. I love that story!

    Had never read it.

    Comment by Catherine — May 5, 2012 @ 3:15 pm

  8. My goodness – the questions are crazy!

    Which animal is the wisest?

    What???

    I missed that one, and I recently scored an 800 on the SAT Verbal test, SO THERE.

    (Seriously. I did.)

    Comment by Catherine — May 5, 2012 @ 3:49 pm

  9. (Caveat: I don’t intend this as an unqualified defense of “Hare and the Pineapple”, but…)

    #1 “reading” problem I have observed in my 15 years as a teacher of English: lack of background knowledge. Widely observed on this blog. Should be fairly obvious by now.

    #2 “reading” problem I have observed: inability to be fluent in moving between the literal and the metaphorical.

    “HatP” disproportionately taxes those who cannot differentiate between the literal and the metaphorical. (As per the story: “Well, you know what I mean.”) I like the fact that something that might at first seem like nonsense in fact makes some sense. (I do believe that it does, especially — as has been pointed out on this blog — as a rejoinder to the Aesop fable to which it alludes in a clever way.)

    Without accepting this test as-is, which I do not encourage, what can we do to encourage the development of testing that assays critical thinking in and out of the literal realm?

    Comment by Carl Rosin — May 5, 2012 @ 4:09 pm

  10. This is crazy!

    First of all, the story itself is delightful; as a parent and a taxpayer, I certainly do want to know whether students can read and understand it. (The fact that this story appears on an 8th grade test is distressing, but that’s another story.)

    Also, as far as I can tell, understanding a popular narrative **is** a transferable skill. All stories have the “Four Cs” of narrative (Conflict, Character, Causality, and Complication), and these components don’t change across genres to my knowledge. Offhand, I don’t see any reason why a standardized test **can’t** test students’ ability to understand fiction.

    The problem is that the test writers don’t seem to understand the story themselves.

    Two of the questions are fine:

    Number 6, which asks about the sequence in which the events are related

    and

    Number 11 – What does ‘something up his sleeve’ mean? – which tests background knowledge of idiomatic expressions

    The rest of the questions treat the story as if it were a straightforward, non-ironic piece of dramatic fiction, which it isn’t. It’s a parody, and the test should contain questions about parody.

    A proper set of questions might ask what the tone of the story is, what kind of story it is, which fable it refers to, etc., etc. If you really wanted to get fancy, you could probably come up with a set of alternative morals and ask which one fits best – or perhaps which moral is the opposite of “Pineapples don’t have sleeves.” You could ask where the first sign that the story is a parody appears. You could ask all kinds of things!

    What you can’t ask is: “Which animal spoke the wisest words?”

    Comment by Catherine — May 5, 2012 @ 4:54 pm

  11. “Finally, the owl declares that “Pineapples don’t have sleeves,” which is a factually accurate statement. This statement is also presented as the moral of the story, allowing a careful reader to infer that the owl is the wisest animal.”

    http://ideas.time.com/2012/05/04/pineapplegate-exclusive-memo-detailing-the-hare-and-the-pineapple-passage/

    Well, this would be true if you take “careful” to mean “literal-minded.”

    Comment by Catherine — May 5, 2012 @ 4:58 pm

  12. Robert,

    For “proven to be deleterious” you link to your post about test prep.

    I agree that many schools that have historically failed to get kids to read, when faced with testing, respond with dumb strategies like test prep.

    But that’s their response to every stimulus. English Language Learners? The old way was to “teach” them in their native language and isolate them. The new way is fake training for teachers in “ELL strategies.”

    So my question is: do you believe the tests within these schools are deleterious (I’m guessing you do, but want to confirm), or simply ineffective? Essentially, are you arguing that things were better for bad-reading poor kids before state testing began, and that the testing has made things worse?

    Comment by MG — May 6, 2012 @ 9:17 am

  13. Mike,

    I think the key phrase is that testing incentivizes bad practice. To my mind, what is most needed for low-achieving kids is to build word knowledge and fluency in the early grades and build up background knowledge from the first days of school since, as Don Hirsch has long observed, every reading test is a de facto test of background knowledge. To come at it another way, think of the achievement gap as a knowledge gap. You can’t remediate or close that very quickly. An important goal should be to do so over time, but it’s naive to think it can be done quickly, or at an early age.

    By being content-agnostic, reading tests do nothing to address the fact that kids who are poor readers in general are (also in general) missing the broad general knowledge and vocabulary that “good” readers bring to the task. By attaching high stakes to these fundamentally unfair instruments, the tacit message to teachers is “don’t waste your time investing in language growth (vocabulary and background knowledge). Just get the scores up ASAP.” Thus it’s pretty natural to resort to ratcheting up the test prep and strategies.

    It’s interesting, Mike, that you say “that’s their response to every stimulus.” Well yes. Because that’s how we train teachers. I make this point repeatedly: the story of failing schools is often a story of good people trying their best and failing. And they are not failing despite doing what they were trained to do. They are failing BECAUSE they are doing what they were trained to do.

    If you want teachers — and we should — to focus on building knowledge and vocabulary (language competence), then it makes more sense to have tests that actively encourage the teaching of what kids need to know to close the language gap. This is as simple as it is improbable: give teachers a list of the knowledge domains and works of literature in each grade from which test items will be culled (the three branches of government, the water cycle, Ancient Egypt, The Grapes of Wrath and The House on Mango Street + 100 others). Then teachers are incentivized to build instruction around that content, not strategies.

    I’m not arguing that things were better before, and I’m frustrated, frankly, that the most vocal critics of testing and accountability tend to argue (essentially) that we should go back to the way things were. We need a commitment to content-rich education and accountability structures that favor it. Not because I want to be Content Czar. But because that’s how you drive achievement for all kids.

    Comment by Robert Pondiscio — May 6, 2012 @ 10:50 am

  14. @ Mike, @ Robert,

    I think you deserve credit for pointing it out first, Robert, but one of the other proof points in favor of teaching content over teaching strategies is that these same standardized tests (despite their errors) indicate our children are consistently better at math than at reading.

    One strong hypothesis for this is that math instruction, however flawed it may be, does involve the delivery of a lot more agreed-upon ‘content.’

    There is not a lot of argument on the value of knowing the multiplication facts that lead to understanding division, factoring and a whole host of high level math skills.

    So while we argue about the best way to get there in the elementary grades, the goal is not in dispute.

    And surprise, surprise, many of our children (even the needy ones) can achieve these goals, led there by the same teachers who are denounced for their inability to teach these same children to read or write.

    Comment by matthew — May 6, 2012 @ 11:21 am

  15. I’m surprised no pundits have called it a “smart” story yet. But it’s early yet.

    Comment by Barry Garelick — May 6, 2012 @ 9:24 pm
