Can Knowledge Level The Learning Field For Children?

by Guest Blogger
December 2nd, 2013

By Esther Quintero

Esther Quintero is a senior research fellow at the Albert Shanker Institute. This post first appeared on the Shanker Blog.

How much do preschoolers from disadvantaged and more affluent backgrounds know about the world, and why does that matter? One recent study by Tanya Kaefer (Lakehead University), Susan B. Neuman (New York University), and Ashley M. Pinkham (University of Michigan) provides some answers.

The researchers randomly selected children from preschool classrooms in two sites, one serving kids from disadvantaged backgrounds, the other serving middle-class kids. They then set out to answer three questions:

  1. Do poor and middle-class children possess different knowledge about the world?
  2. Do differences in knowledge influence the children’s ability to learn in the classroom?
  3. If differences in preexisting knowledge were neutralized, would the two groups of children learn similarly?

To answer the first question, the researchers determined how much children from both groups knew about birds and the extent to which they were able to make inferences about new words based on such knowledge.

Not surprisingly, lower-income children had significantly less knowledge about birds and bird behaviors than did their middle-class peers. To rule out the possibility that these differences were the result of disparities in language proficiency, Kaefer et al. measured the children’s receptive vocabularies. This way, they were able to establish that poor kids knew less about birds, not merely because they knew fewer words related to birds, but because they had less information about the domain in general.

To answer the second question — whether differences in knowledge influence the kids’ ability to learn in the classroom — a second study evaluated children’s ability to understand words out of context and to comprehend a story that was read to them. As predicted, children from middle-class backgrounds, who had greater knowledge about the domain category (i.e., birds), performed better in these two tasks than children with more limited knowledge about the domain.

It may not be obvious to adults, but learning words from books is not an automatic or straightforward task for young children. In fact, argue the authors of the paper, one of the factors influencing this process is children’s preexisting knowledge. Previous research (cited in the paper) has established that children with larger vocabularies acquire new words implicitly from storybooks more readily than children with smaller vocabularies. At least two mechanisms might explain the relationship between vocabulary and learning.

First, the authors note, one possible explanation is that metalinguistic factors (e.g., verbal IQ, working memory) explain the relationship between vocabulary knowledge and implicit word learning.

Alternatively, if children’s vocabulary is viewed as an indicator (or “reflection”) of their general background knowledge, it may be the breadth and depth of their preexisting knowledge that influences their implicit word learning.

The logic of the second mechanism is as follows: Children’s preexisting knowledge creates a framework that facilitates the acquisition of new information; knowing more words and concepts scaffolds children’s ability to slot new information in the “right places,” and to learn related words and concepts more efficiently.

To recap, the first study discussed above established that children from disadvantaged backgrounds know less about a topic (i.e., birds) than their middle-class peers. Next, in study two, the researchers showed that differences in domain knowledge influenced children’s ability to understand words out of context, and to comprehend a story. Moreover, poor kids—who also had more limited knowledge—performed worse on these tasks than did their middle-class peers. But could additional knowledge be used to level the playing field for children from less affluent backgrounds?

In study three, the researchers held the children’s prior knowledge constant by introducing a fictitious topic—i.e., a topic that was sure to be unknown to both groups. When the two groups of children were assessed on word learning and comprehension related to this new domain, the researchers found no significant differences in how poor and middle-class children learned words, comprehended a story or made inferences.

These results:

  • Add to the body of research showing that preexisting knowledge shapes incidental vocabulary learning and comprehension for children, and that this is true for children as young as preschool age;
  • Highlight the need to build children’s background knowledge more systematically and strategically, and suggest that procedures to activate children’s prior knowledge—e.g., storybook reading—may prove fruitless when such knowledge does not exist.

While this research, like all research, has limitations—see the paper for a discussion of these—the results taken together suggest that one powerful way to level the “learning field” for all children is to facilitate poor kids’ access to “taken for granted” knowledge that middle-class children, on average, are more likely to possess, primarily because they have been exposed to it in the first place.

When poor and middle-class children are given the same opportunities to assimilate new knowledge, their subsequent learning is comparable. Of course, this is only one study, but the main finding and its implications are extremely powerful. It suggests that if preschool programs are not making a difference for children from disadvantaged backgrounds, it might be the case that the programs are not tackling an important but solvable problem: a deficit in knowledge.


Best of the Blogs: Dumbing Down and Building Up

by Guest Blogger
March 27th, 2013

Good sense, sound research, and cultivated open-mindedness—these three things help us all live healthier, happier lives. But they tend to be in short supply.

Not so yesterday in blogdom: E. D. Hirsch shared his good sense, Daniel Willingham offered a guide to sound research, and Diana Senechal revealed the joys of cultivated open-mindedness. I hope you’ll read their posts in full, but here are just a few highlights.

Over at the Huffington Post, E. D. Hirsch asks, “Are Schools Dumbing Down the Common Core Standards?”

The arguments against [the Common Core State Standards] grow ever more fierce — as if … schools were being forced to descend from their current level of excellence to study “informational texts” like tax codes which will drive Langston Hughes and Emily Dickinson out of the curriculum.

None of the horrid scenarios need happen — given an ounce or even a milligram of common sense. Since the standards do not prescribe a definite curriculum, many different curricula could fulfill them. It’s no more reasonable to claim that Langston Hughes and Emily Dickinson will be excluded than to claim that they will be required. One could easily insist that within language arts courses “informational texts” such as historical ones must qualify as “literature” — a word that is not limited to fiction and poetry, yet does exclude tax codes.

Moreover many of the current criticisms aren’t really directed against the standards themselves but against the frantic directives that principals and superintendents are sending out to teachers. I agree that some school administrators are reacting to the coming of the standards in strange and unproductive ways — just as they did when No Child Left Behind became law. But the standards don’t require folly — against which the gods themselves struggle in vain….

The Core Knowledge example proves that effective curricula can be based on the new standards. It will be up to the critics and the practitioners themselves to create effective curricula. The fault, dear Brutus, is not in the standards but in ourselves, if we should fail in this unique new chance to improve our schools.

On his Science and Education blog, Daniel Willingham explores “A New Push for Science in Education in Britain.”

Basic scientific knowledge gleaned from cognitive and developmental psychology (and other fields) can not only help us to interpret the results of randomized trials, that knowledge can be useful to teachers on its own. Just as a physician uses her knowledge of human physiology to diagnose a case, a teacher can use her knowledge of cognition to “diagnose” how to best teach a particular concept to a particular child.

I don’t know about Britain, but this information is not taught in most American schools of Education. I wrote a book about cognitive principles that might apply to education. The most common remark I hear from teachers is surprise (and often, anger) that they were not taught these principles when they trained.

Elsewhere I’ve suggested we need not just a “what works” clearinghouse to evaluate interventions, but a “what’s known” clearinghouse for basic scientific knowledge that might apply to education….

When building a house an architect must respect certain basic facts set out by science. Physics and materials science will loom large for the architect; for educators it might be psychology, sociology et al. The rules represent limiting conditions, but so long as you stay within those boundaries there are lots of ways to get it right. Just as physics doesn’t tell the architect what the house must look like, so too cognitive psychology doesn’t tell teachers how they must teach.

Guest blogging for Joanne Jacobs, Diana Senechal considers “The pull and counter-pull of teaching.”

Education is filled with opposing principles, where neither is absolutely correct…. Most teachers have certain leanings, but those leanings are not the whole of their understanding or of the truth. Often I find that when I tip just a little bit against myself, interesting things happen.

For instance, my philosophy courses have focused on reading and discussion of texts—for good reasons. The texts are compelling, and the students approach them thoughtfully and enthusiastically. Yet when I give students a chance to take off with their own ideas, I find that they bring forth some of their best work. The moral is not that I should abandon the texts, but rather that I should vary the type of assignment now and then.

My ninth-grade students are studying rhetoric and logic. Most recently, they read G. K. Chesterton’s essay “The Fallacy of Success.” We examined how Chesterton takes apart the idea of success, and how his reference to the myth of King Midas enhances his argument. They did well with this.

Then I thought: why not have them take apart a concept themselves? … Much came out of this exercise. Yet it was informed by our reading and discussion of “The Fallacy of Success.” There need not be a contradiction between analyzing someone else’s essay and writing your own (with your own ideas). In the best of scenarios, the two support each other.


Blame the Tests

by E. D. Hirsch, Jr.
January 15th, 2013

In Praise of Samuel Messick 1931–1998, Part III

The chief practical impact of NCLB has been its principle of accountability. Adequate yearly progress, the law stated, must be determined by test scores in reading and math—not just for the school as a whole, but for key groups of students.

Now, a decade later, the result of the law, as many have complained, has been a narrowing of the school curriculum. In far too many schools, the arts and humanities, and even science and civics, have been neglected—sacrificed on the altar of tests without any substantial progress nationwide on the tests themselves. It is hard to decide whether to call NCLB a disaster or a catastrophe.

But I disagree with those who blame this failure on the accountability principle of NCLB. The law did not specify what tests in reading and math the schools were to use. If the states had responded with valid tests—defined by Messick as tests that are both accurate and have a productive effect on practice—the past decade would have seen much more progress.

Since NCLB, NAEP’s long-term trend assessment shows substantial increases in reading among the lowest-performing 9-year-olds—but nothing comparable in later grades. It also shows moderate increases in math among 9- and 13-year-olds.

So, it seems that a chief educational defect of the NCLB era lay in the later-grades reading tests; they simply do not have the same educational validity as the tests in early-grades reading and in early- and middle-grades math.

It’s not very hard to make a verbal test that predicts how well a person will be able to read. One accurate method used by the military is the two-part verbal section of the multiple-choice Armed Forces Qualification Test (AFQT), which is known for its success in accurately predicting real-world competence. One section of the AFQT Verbal consists of 15 items based on short paragraphs on different subjects and in different styles to be completed in 13 minutes.  The other section of the AFQT Verbal is a vocabulary test with 35 items to be completed in 11 minutes. This 24-minute test predicts as well as any verbal test the range of your verbal abilities, your probable job competence and your future income level. It is a short, cheap and technically valid test. Some version of it could even serve as a school-leaving test.

Educators would certainly protest if that were done—if only because such a test would give very little guidance for classroom practice or curriculum. And this is the nub of the defects in the reading tests used during the era of NCLB: They did not adequately support curriculum and classroom practice. The tests in early-grades reading and in early- and middle-grades math did a better job of inducing productive classroom practice, and their results show it.

Early-grades reading tests, as Joseph Torgesen and his colleagues showed, probe chiefly phonics and fluency, not comprehension. Schools are now aware that students will be tested on phonics and fluency in early grades. In fact, these crucial early reading skills are among the few topics for which recent (pre-Common Core) state standards had begun to be highly specific. These more successful early reading tests were thus different from later ones in a critical respect:  They actually tested what students were supposed to be taught.

Hence in early reading, to its credit, NCLB induced a much greater correlation than before between standards, curriculum, teaching and tests. The tests became more valid in practice because they induced teachers to teach to a test based on a highly specific subject matter—phonics and fluency. Educators and policymakers recognized that teaching swift decoding was essential in the early grades, tests assessed swift decoding, and—mirabile dictu—there was an uptick in scores on those tests.

Since the improvements were impressive, let’s take a look at what has happened over the past decade among the lowest-performing 9-year-olds on NAEP’s long-term trend assessment in reading.

Note that there is little to no growth among higher-performing 9-year-olds, presumably because they had already mastered phonics and fluency.

Similarly, early- and middle-grades math tests probed substantive grade-by-grade math knowledge, as the state standards had become ever more specific in math. You can see where I’m going: Early reading and math improved because teachers typically teach to the tests (especially under NCLB-type accountability pressures), and the subject matter of these tests began to be more and more defined and predictable, causing a collaboration and reinforcement between tests and classroom practice.

In later-grades reading tests, where we have failed to improve, the tests have not been based on any clear, specific subject matter, so it has been impossible to teach to the tests in a productive way. (The lack of alignment between math course taking and the NAEP math assessment for 17-year-olds is similarly problematic.) Of course, there are many reasons why achievement might not rise. But specific subject matter, both taught and tested, is a necessary—if not sufficient—condition for test scores to rise.

In the absence of any specific subject matter for language arts, teachers, textbook makers, and test makers have conceived of reading comprehension as a strategy rather than as a side effect of broad knowledge. This inadequate strategy approach to language arts is reflected in the tests themselves. I have read many of them.  An inevitable question is something like this: “The main idea of this passage is….” And the theory behind such a question is that what is being tested is the ability of the student to strategize the meaning by “questioning the author” and performing other puzzle-solving techniques to get the right answer. But, as readers of this blog know, that is not what is being tested. The subject matter of the passage is.

This mistaken strategy-focused structure has made these tests not only valueless educationally, but worse—positively harmful. Such tests send out the misleading message that reading comprehension is chiefly strategizing. That idea has dominated language arts instruction in the past decade, which means that a great deal of time has been misspent on fruitless test-taking activities. Tragically, that time could have been spent on science, humanities and the arts—subjects that would have actually increased reading abilities (and been far more interesting).

The only way that later-grades reading tests can be made educationally valid is by adopting the more successful structure followed in early reading and math. An educationally valid test must be based on the specific substance that is taught at the grade level being tested (possibly with some sampling of specifics from previous and later grades for remediation and acceleration purposes). Testing what has been taught is the only way to foster collaboration and reinforcement between tests and classroom practice. An educationally valid reading test requires a specific curriculum—a subject of further conversations, no doubt.

Study Finds Lectures Worth Insulting

by Guest Blogger
June 2nd, 2011

by Diana Senechal

I am fond of the old-fashioned lecture. It gives me something to sink into, something to think about. It’s often supplemented with discussions and labs, so students don’t just sit and listen. If it is taught well, it can be intriguing, even rousing, even lingering. I remember those packed lecture halls in college, and other superb lecture courses as well.

But I must defer to research-based research. Research has just shown that certain research-based methods bring greater learning gains in physics than the lecture approach. Sarah D. Sparks describes the study in an Education Week blog, but I got curious and decided to read the report for myself (Science, May 13, 2011, available by subscription or purchase only).

Yes, indeed. Researchers at the University of British Columbia in Vancouver conducted a week-long experiment near the end of a year-long physics course. They found—

Wait—for a week? Near the end of a full year?

Don’t interrupt. This blog doesn’t get interactive until I’m done.

Yes, ahem, as I was saying, the students had been taking a lecture course in physics. The lectures were supplemented throughout the year with labs, tutorials, recitations, and assignments. In week 12 of the second semester, the researchers conducted an experiment with two of the three sections of this course. There was a control section (267 students) and an experimental section (271 students).  The instructor of the control section continued teaching through lectures. The instructors of the experimental section used “deliberate practice”—in this case, “a series of challenging questions and tasks that require the students to practice physicist-like reasoning and problem solving during class time while provided with frequent feedback.”

The experimental group did much better than the control group on the test, which was administered in the first class session of week 13. All students were informed that this test would not affect their grade but would serve as good practice for the final exam. (Wait—what? —No interruptions. This is your second warning.) In the control section, 171 of the 267 students (64 percent) attended class on the day of the test; 211 out of the 271 students in the experimental section (78 percent) attended. The control section scored an average of 41 percent on the test; the experimental section, 74 percent. Victory for experimental things! Students in both sections took an average of 20 minutes to complete the test. (All this stir over a twenty-minute quizzy-poo that doesn’t affect the grade? —I’ve already warned you. If you interrupt again, I’m calling your parents).

The researchers state confidently at the end:

“In conclusion, we show that use of deliberate practice teaching strategies can improve both learning and engagement in a large introductory physics course as compared with what was obtained with the lecture method. Our study compares similar students, and teachers with the same learning objectives and the same instructional time and tests. This result is likely to generalize to a variety of postsecondary courses.”

Or, as they put it succinctly in the abstract: “We found increased student attendance, higher engagement, and more than twice the learning in the section taught using research-based instruction.”

I am convinced. It doesn’t matter that all of the students had been learning through lecture, lab, tutorial, and recitation all year long. What matters is what happened in this one week. The present is now. What happened was magical. There was learning. Even more learning in the experimental group—oh, much more—than in the control group. What this means—if you can just hold your horses for a moment—I’m telling you, I’m serious, I’ve got my cell phone here—what this means is that we should expand the findings to other courses. We should expand it everywhere! We should get rid of lectures altogether, or, at the very least, insult them.

Sarah D. Sparks seems to agree with the researchers: “While the study focused only on one section of college students, it gives yet more support for educators moving away from lecture-based instruction.” (One does this just as one might slide away from a misfit at a party.) According to Sparks, this study suggests that “interactive learning can be more than twice as effective as lecturing.” Take that, lecture!

Well, anything can be anything, except when it can’t. But that isn’t the point. The point is that lots of people are excited about this, and we really shouldn’t let them down. If I were to be reasonable about it, I’d suggest that “deliberate practice” of this sort works well when students already have a strong foundation. They need to know what they’re practicing. To get rid of the lectures would be simply reckless. But why be reasonable? Insulting can be fun. Bad lecture! Good experiment! More effective! Chopped thoughts! Research-based!

Diana Senechal’s book, Republic of Noise: The Loss of Solitude in Schools and Culture, will be published by Rowman & Littlefield Education in November 2011.

In Praise of The Concord Review

by Robert Pondiscio
January 10th, 2011

“Most kids don’t know how to write, don’t know any history, and that’s a disgrace,” says the redoubtable Will Fitzhugh. “Writing is the most dumbed-down subject in our schools.” 

He should know.  Since 1987, Fitzhugh has published The Concord Review, the only academic journal to publish history papers written by high school students:  924 of them penned by teenagers from 44 states and 39 nations, according to the New York Times, which gives Fitzhugh a long-overdue star turn.  But as the Times points out, Fitzhugh’s labor of love is falling on hard times.  The Review’s reputation, writes Sam Dillon, has always been bigger than its revenues.

Last year, income from 1,400 subscriptions plus charitable donations totaled $131,000 — about $5,400 short of total expenses, even though Mr. Fitzhugh paid himself only $18,000. This year, with donors less generous in the recession, Mr. Fitzhugh had to stop printing hard copies of the review, publishing its most recent issues only online, at

Fitzhugh tells the story of a history department chair at one school who no longer assigns research papers, but has students do PowerPoint presentations instead.  “Researching a history paper,” Fitzhugh observes, “is not just about accumulating facts, but about developing a sense of historical context, synthesizing findings into new ideas, and wrestling with how to communicate them clearly — a challenge for many students, now that many schools do not require students to write more than five-paragraph essays.”

Fitzhugh is clearly on to something.  There is broad agreement that one of the competencies crucial for college success is academic writing.  So if the ability to produce a good research paper is so important, why does The Concord Review struggle to keep its head above water?  The Times suggests Fitzhugh’s “cantankerous” personality is an issue.  Or perhaps some educators see it as a showcase only for an elite.

All but four of the 22 essays published in the two most recent issues, for example, were by private school students.  But it was not always so. In the review’s first decade, more than a third of the essays were from public school students. Mr. Fitzhugh said he would love to publish more from public school students, but does not get many exemplary submissions.

“It’s not my fault,” Fitzhugh said. “They’re not doing the work.”

You call that cantankerous?  If so, give us more cantankerous educators.  Lots more.

The MET Research Paper: Achievement of What?

by Guest Blogger
December 19th, 2010

by Diana Senechal

A new study by the Measures of Effective Teaching (MET) Project, funded by the Bill and Melinda Gates Foundation, finds that students’ perceptions of their teachers correlate with the teachers’ value-added scores; in other words, “students seem to know effective teaching when they experience it.” The correlation is stronger for mathematics than for ELA; this is one of many discrepancies between math and ELA in the study. According to the authors, “outside the early elementary grades when students are first learning to read, teachers may have limited impacts on general reading comprehension.” This peculiar observation should raise questions about curriculum, but curriculum does not come up in the report.

When the researchers combined student feedback and math value-added (from state tests) into a single score, they found that “the difference between bottom and top quartile was .21 student standard deviations, roughly equivalent to 7.49 months of schooling in a 9-month school year.” For ELA, the difference between top and bottom quartile teachers was much smaller, at .078 student-level standard deviations.

What are students learning in ELA? Beginning in fourth grade, students appear to gain just as much in reading comprehension from April to October as from October to April—that is, the summer months away from school do not seem to affect their gains. According to the researchers, “the above pattern implies that schooling itself may have little impact on standard reading comprehension assessments after 3rd grade.” They posit, somewhat innocently, that “literacy includes more than reading comprehension … It involves writing as well.” The lack of teacher effects applied mainly to the state tests; when the researchers administered the written Stanford 9 Open-Ended Assessment for ELA, the teacher effects were larger than for math.

What explains the relatively low teacher effects on the ELA state tests? The researchers offer two possibilities: (a) teacher effects on reading comprehension are small after the early elementary years and (b) the tests themselves may fail to capture the teachers’ impact on literacy. Both of these hypotheses seem plausible but tangential to the central problem: this amorphous concept of “literacy.” Why should schools focus on “literacy” in the first place? Why not literature and other subjects?

A curious detail may offer a clue to the problem: the correlation between value-added on state tests and the Stanford 9 in ELA is low (0.37). That is, teachers whose students see gains on the ELA state tests are not very likely to see gains on the Stanford 9 as well. (The researchers do not state whether the reverse is true.) The researchers thought some of this might be due to the “change in tests in NYC this year.” When they removed NYC from the equation, the correlation was significantly higher. (But the New York math tests changed this year as well, and this apparently did not affect things—the correlation for math between the state and BAM value-added is “moderately large” at 0.54.)

Is it not possible that NYC suffers from a weak or nonexistent ELA curriculum, more so than the other districts in the study? Certainly curriculum should be considered, if an entire district shows markedly different results from the others.

In math, there usually is a curriculum. It may be strong or weak, focused or scattered, but there is actual material that students are expected to learn. In ELA, this may or may not be the case. In schools and districts with a rigorous English curriculum (as opposed to a literacy program), students read and discuss challenging literary works, study grammar and etymology, write expository essays, and more. In the majority of New York City public schools, by contrast, this kind of concrete learning is eschewed; lessons tend to focus on a reading strategy, and students practice the strategy on their separate books. New York City has taken the strategy approach since 2003 (and in some cases much earlier); Balanced Literacy, or a version of it, is the mandated ELA program in most NYC elementary and middle schools. The MET researchers do not consider curriculum at all; they seem to assume that a curriculum exists in each of the schools and that it is consistent within a district.

In short, when analyzing teacher effects on achievement gains, the researchers forgot to ask: achievement of what? This is not a trivial question; the answers could shed light on the value-added results and their implications. It may turn out that the curricular differences are too slight or vague to make a difference, or that they do not significantly affect performance on these particular tests. Or the investigation of such differences may turn the whole study upside down. In any case, it is a mistake to ignore the question.

Diana Senechal taught for four years in the New York City public schools and holds a Ph.D. in Slavic languages and literatures from Yale. Her book, Republic of Noise: The Loss of Solitude in Schools and Culture, will be published by Rowman & Littlefield Education in late 2011.

Growing Up Gadgety

by Robert Pondiscio
November 22nd, 2010

Is prolonged, focused attention a 21st Century skill? 

“Students have always faced distractions and time-wasters,” notes the New York Times.  “But computers and cellphones, and the constant stream of stimuli they offer, pose a profound new challenge to focusing and learning.”

“Growing Up Digital, Wired For Distraction,” a major Times thumbsucker, is long enough to challenge the attention span not just of teens but Trappist monks.  But it’s must-reading for educators.  Behind the undeniable lure of technology is a risk that “developing brains can become more easily habituated than adult brains to constantly switching tasks — and less able to sustain attention.” 

“Their brains are rewarded not for staying on task but for jumping to the next thing,” says Michael Rich of Harvard Medical School, the executive director of the Center on Media and Child Health in Boston. “The worry is we’re raising a generation of kids in front of screens whose brains are going to be wired differently.”

The tension, of course, is that at the same time researchers are raising red flags about children immersed in a digital bath, education is redoubling its efforts to increase technology use in the classroom for engagement, customization, and efficiency. The Times makes much of a research study, familiar to readers of this blog, finding that reading and academic performance go down, not up, when computers arrive in the home.

The result is one of those Rorschach tests of an article, virtually guaranteed to confirm your biases (The world is going to digital hell! We’ll never engage kids if we don’t embrace technology!). The most interesting section of the piece is the Times’ look at current research on “what happens to the brains of young people who are constantly online and in touch.”

The researchers looked at how the use of these media affected the boys’ brainwave patterns while sleeping and their ability to remember their homework in the subsequent days. They found that playing video games led to markedly lower sleep quality than watching TV, and also led to a “significant decline” in the boys’ ability to remember vocabulary words. The findings were published in the journal Pediatrics.

Other studies cited by the Times suggest that “periods of rest are critical in allowing the brain to synthesize information, make connections between ideas and even develop the sense of self.” “Downtime is to the brain what sleep is to the body,” observes Dr. Rich. “But kids are in a constant mode of stimulation.”

“The headline is: bring back boredom,” says Dr. Rich, who, the Times points out, recently gave a speech to the American Academy of Pediatrics entitled, “Finding Huck Finn: Reclaiming Childhood from the River of Electronic Screens.”

Where’s My Damn Juicebox?

by Robert Pondiscio
September 24th, 2010

Good news! New research shows young children are coming to school with more words at their command than ever before. Words most of us wish they didn’t know, unfortunately.

Children are swearing at an earlier age and more often than children did just a few decades ago, according to Timothy Jay, a psychology professor at Massachusetts College of Liberal Arts.  “By the time kids go to school now, they’re saying all the words that we try to protect them from on television,” says Jay. “We find their swearing really takes off between (ages) three and four.”

The rise is not surprising and mirrors a rise in swearing among adults. “Nearly two-thirds of the adults surveyed who had rules about their children swearing at home found they broke their own rules on a regular basis,” notes a report on PsychCentral. Jay has also found that swearing accounts for between 0.3% and 0.7% of all utterances.

Holy $#%!

Cheater Pants!

by Robert Pondiscio
May 14th, 2010

A new study shows that most high school students cheat. But they have inconsistent notions about what is or is not cheating. Researchers at the University of Nebraska-Lincoln surveyed 100 members of the junior class of a large midwestern high school, according to Science News. Nearly nine out of ten said glancing at someone else’s answers during a test was cheating, but 87 percent admitted doing so anyway. Nearly all of them (94 percent) agreed that giving answers to someone during a test was cheating, but 74 percent admitted to doing so.

Less than half (47 percent) agreed that providing test questions to a fellow student who had yet to take a test was cheating.   “The results suggest that students’ attitudes are tied to effort. Cheating that still required students to put forth some effort was viewed as less dishonest than cheating that required little effort,” said Kenneth Kiewra, professor of educational psychology at UNL, one of the study’s authors.

Hey, that’s not cheating.  It’s group work!

Lies, Damned Lies and Science

by Robert Pondiscio
January 8th, 2010

Let’s face it, writes Stephen Battersby at the New Scientist, science is boring.  Discoveries of new planets, medical advances and potential environmental disasters leave the impression that science is exciting and cutting edge.  Not so. 

It is now time to come clean. This glittering depiction of the quest for knowledge is… well, perhaps not an outright lie, but certainly a highly edited version of the truth. Science is not a whirlwind dance of excitement, illuminated by the brilliant strobe light of insight. It is a long, plodding journey through a dim maze of dead ends. It is painstaking data collection followed by repetitious calculation. It is revision, confusion, frustration, bureaucracy and bad coffee.

Science may be boring, but Battersby’s essay is a hoot. Especially his description of his own inglorious research career, which involved months of sifting data from a telescope and finding…nothing.

I tip my hat, though, to New Scientist’s San Francisco bureau chief, who spent nearly three years watching mice sniff each other in a room dimly lit by a red bulb. “It achieved little,” he confesses, “apart from making my clothes smell of mouse urine.” And the office prize for research ennui has to go to the editor who says, “I once spent four weeks essentially turning one screw backwards and forwards. It was about that time that I decided I didn’t want to be a working scientist.”

Let’s keep this to ourselves and not mention it to the children, shall we?  After all, our economy and national security are at stake.

Update:  Not bored yet?  Joanne Jacobs asks “Do children need to be bored?”  Insightful Willingham response in the comments.