Study Finds Lectures Worth Insulting

by Guest Blogger
June 2nd, 2011

by Diana Senechal

I am fond of the old-fashioned lecture. It gives me something to sink into, something to think about. It’s often supplemented with discussions and labs, so students don’t just sit and listen. If it is taught well, it can be intriguing, even rousing, even lingering. I remember those packed lecture halls in college, and other superb lecture courses as well.

But I must defer to research-based research. Research has just shown that certain research-based methods bring greater learning gains in physics than the lecture approach. Sarah D. Sparks describes the study in an Education Week blog, but I got curious and decided to read the report for myself (Science, May 13, 2011, available by subscription or purchase only).

Yes, indeed. Researchers at the University of British Columbia in Vancouver conducted a week-long experiment near the end of a year-long physics course. They found—

Wait—for a week? Near the end of a full year?

Don’t interrupt. This blog doesn’t get interactive until I’m done.

Yes, ahem, as I was saying, the students had been taking a lecture course in physics. The lectures were supplemented throughout the year with labs, tutorials, recitations, and assignments. In week 12 of the second semester, the researchers conducted an experiment with two of the three sections of this course. There was a control section (267 students) and an experimental section (271 students). The instructor of the control section continued teaching through lectures. The instructors of the experimental section used “deliberate practice”—in this case, “a series of challenging questions and tasks that require the students to practice physicist-like reasoning and problem solving during class time while provided with frequent feedback.”

The experimental group did much better than the control group on the test, which was administered in the first class session of week 13. All students were informed that this test would not affect their grade but would serve as good practice for the final exam. (Wait—what? —No interruptions. This is your second warning.) In the control section, 171 of the 267 students (64 percent) attended class on the day of the test; 211 out of the 271 students in the experimental section (78 percent) attended. The control section scored an average of 41 percent on the test; the experimental section, 74 percent. Victory for experimental things! Students in both sections took an average of 20 minutes to complete the test. (All this stir over a twenty-minute quizzy-poo that doesn’t affect the grade? —I’ve already warned you. If you interrupt again, I’m calling your parents.)

The researchers state confidently at the end:

“In conclusion, we show that use of deliberate practice teaching strategies can improve both learning and engagement in a large introductory physics course as compared with what was obtained with the lecture method. Our study compares similar students, and teachers with the same learning objectives and the same instructional time and tests. This result is likely to generalize to a variety of postsecondary courses.”

Or, as they put it succinctly in the abstract: “We found increased student attendance, higher engagement, and more than twice the learning in the section taught using research-based instruction.”

I am convinced. It doesn’t matter that all of the students had been learning through lecture, lab, tutorial, and recitation all year long. What matters is what happened in this one week. The present is now. What happened was magical. There was learning. Even more learning in the experimental group—oh, much more—than in the control group. What this means—if you can just hold your horses for a moment—I’m telling you, I’m serious, I’ve got my cell phone here—what this means is that we should expand the findings to other courses. We should expand it everywhere! We should get rid of lectures altogether, or, at the very least, insult them.

Sarah D. Sparks seems to agree with the researchers: “While the study focused only on one section of college students, it gives yet more support for educators moving away from lecture-based instruction.” (One does this just as one might slide away from a misfit at a party.) According to Sparks, this study suggests that “interactive learning can be more than twice as effective as lecturing.” Take that, lecture!

Well, anything can be anything, except when it can’t. But that isn’t the point. The point is that lots of people are excited about this, and we really shouldn’t let them down. If I were to be reasonable about it, I’d suggest that “deliberate practice” of this sort works well when students already have a strong foundation. They need to know what they’re practicing. To get rid of the lectures would be simply reckless. But why be reasonable? Insulting can be fun. Bad lecture! Good experiment! More effective! Chopped thoughts! Research-based!

Diana Senechal’s book, Republic of Noise: The Loss of Solitude in Schools and Culture, will be published by Rowman & Littlefield Education in November 2011.

31 Comments

  1. Among other things you might check your spelling, since most “tuturials” lead to better spelling than yours! The most painful paradox of your argument is your intolerance for the paradox that lectures involve laboratories involve “practice” and reports of practice, and that the purported alternatives of lecture and practice are hardly exclusive. What kind of drivel is it to complain about a methodology when (a) you can’t spell it and (b) you so deliberately misunderstand even the option you seem to choose with such exclusionary passion. Sure lectures are fine, from brilliant lecturers, and, if they’re that brilliant, they can even be online. And surely not ALL lectures have the SAME impact on EVERY student, which, most obviously, suggest that there be alternatives available to every treatment. That ain’t so subtle, Diana, and I do hope in your Republic of Noise there is enough time to sleep before you next open an issue as clouded as this one!

    Comment by Joe Beckmann — June 2, 2011 @ 11:27 am

  2. The typo was my error, not Diana’s, in transcription. Thanks for pointing it out. I’ll let Diana respond to the substance of your comment, such as it is.

    Comment by Robert Pondiscio — June 2, 2011 @ 11:29 am

  3. Wow, they found out that practice helps students lern! (I misspelled learn so that if you disagree with me you’ll have reason to ignore me)

    I read about this in the Economist (can’t get the actual study from Science) so not sure about the specifics of the study. Weren’t the students required to read and go over the material before class (stuff usually in a lecture)? Does not seem applicable to dealing with k-12 education (and even higher ed) when dealing with the average student in our classrooms. Hey Johnny, go home and prepare for tomorrow’s class by reading your textbook. Yeah, right.

    Got to lecture, preferably interactive lecture, and then provide the problem solving portion. Of course this is nothing new or profound.

    Comment by Parker — June 2, 2011 @ 11:38 am

  4. Joe,

    I think you may have misunderstood my point. Was I complaining about a methodology? Was I choosing an option with exclusionary passion? I don’t think so.

    Comment by Diana Senechal — June 2, 2011 @ 11:38 am

  5. Parker,

    Yes, the students were assigned brief readings before each class. The study reads:

    In the control section, students were instructed to read the relevant chapter in the textbook before class.

    In the experimental section, students were assigned a three- or four-page reading before each class, and they “completed a short true/false online quiz on the reading.”

    Also, the study says:

    “To avoid student resistance, at the beginning of the first class, several minutes were used to explain to students why the material was being taught this way and how research showed that this approach would increase their learning.” (This refers to the experimental section, of course.)

    I agree that lectures and problem-solving sections could easily complement each other (and compliment each other, too, if the spirit so moved them). What’s absurd is using a one-week experiment, near the end of the school year, as proof (or even strong indication) of the superiority of one approach over another.

    Comment by Diana Senechal — June 2, 2011 @ 12:10 pm

  6. Just to be clear: in the last comment, I am paraphrasing the study except where I explicitly quote it.

    Comment by Diana Senechal — June 2, 2011 @ 12:12 pm

  7. Whoa! Could we first agree that this wasn’t really a “Study”? Trying something different at the end of a year-long course might wake up any group. Next year they need to try “Lecture” at the end and see if the scores and attendance are affected.

    I know some of the people involved; and am embarrassed for them. Are they now using a non-math approach to physics? Or is 74 now at least close to twice 41?

    I’m going to also assume that 41% is not a normal degree of achievement for the three classes during the year.

    So much assumed with so little explained.

    Comment by ewaldh — June 2, 2011 @ 2:17 pm

  8. Maybe Joe was annoyed you did not rely on the EdWeek summary and looked to the actual report.

    Not only are they frequently quite different but the difference is always a revealing insight into the desired spin. Sometimes the difference is so great it is the opposite of the EdWeek summary.

    Diana, you were not supposed to notice the weakness in the report. Its purpose is to be cited as the up-to-date research for justifying some administrator or principal insisting that more K-12 and higher ed move away from lectures. Especially in those Education Leadership doctorate programs where most of what the candidates are told is factually wrong and quite nonsensical.

    They get their credentials and thus the power to implement the nonsense on the basis of flawed research. Now when someone googles that Edweek and Science story it may pull up this blog posting on the story. And thus become familiar with the weaknesses of the actual study.
    And the racket that is so-called Education research and “best practices”.

    No wonder Joe wanted to emphasize the typos. Change the subject fast.

    Comment by Student of History — June 2, 2011 @ 2:47 pm

  9. Taking a break from preparing my physics lecture to write this…

    I’ve been doing lectures for two quarters now, including “clicker” questions that use an automated polling system. I was somewhat skeptical at first, but I’ve become a fan, at least for classes like this (intro physics, 200+ students).

    I tell my class that physics lectures are a bit like cooking shows on TV — it’s easy to watch and nod and think everything makes sense – until you actually try to use the information, and discover you have all sorts of questions. So doing things in class that help bring those questions to the surface sooner rather than later tends to help.

    Comment by Rachel — June 2, 2011 @ 2:51 pm

  10. Ewaldh,

    I was puzzled by the claim of “more than twice” as well. The report gives the following explanation:

    “The average scores were 41 +/- 1% in the control section and 74 +/- 1% in the experimental section. Random guessing would produce a score of 23%, so the students in the experimental section did more than twice as well on this test as those in the control section.”
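
    Presumably the arithmetic behind “more than twice” is taken relative to that chance baseline: (74 − 23) / (41 − 23) ≈ 2.8, so the experimental section’s average score above random guessing was nearly three times the control section’s.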

    I am not convinced by this reasoning. And I agree with your first point: that a different approach at the end of the year might wake up a group, just by virtue of its difference (within reason, of course). At the very least, this possibility should be considered.

    Also, as I mentioned before, the students in the experimental group were told that research had shown that this approach would increase their learning. The students in the lecture section were not told this.

    Comment by Diana Senechal — June 2, 2011 @ 2:58 pm

  11. Rachel,

    Yes, I see your point. Clickers in themselves are not the problem. I audited a physics course where clickers were used, and I found them unobtrusive (the way this professor used them) and at times informative. And it was fun to have a question tossed out at us now and then.

    There are some downsides. The results can be deceptive. When a large number of students got the answer wrong, the professor had them talk to their neighbors about it. A minute or so later, he’d pose the same question. The percentage of correct answers would soar. This didn’t necessarily reflect greater understanding. The student knew who understood the material.

    Also, much of the material presented in a physics course needs to be sorted out and pondered. A student might answer incorrectly in class, simply because he or she hasn’t had a chance to think about it yet. That doesn’t mean the lecture should be made easier.

    But one of the most serious problems surrounding clickers is that some of the people pushing them really do want to do away with lectures. They see the clickers as part of a transformation of instruction. Now, I’m all for giving students a chance to work on problems with feedback from instructors. But not at the expense of lectures. Get rid of lectures, and you lose something very important. I see no reason why problem-solving can’t take place in section work outside of lecture.

    There are two other curiosities of this study: the researchers’ insistence that the experimental approach is “research-based” (and, by implication, that the lecture approach is not) and their use of the term “deliberate practice.” They cite an article on deliberate practice by K. A. Ericsson, R. Krampe, and C. Tesch-Romer. The article describes in detail the practice habits of highly proficient musicians. Turning this into group work for physics class is a bit of a stretch.

    In general terms, deliberate practice is regular, effortful, focused practice within a discipline, in between lessons. It is usually solitary, though it benefits from frequent feedback. I am not convinced that the experimental group’s activities over three class sessions qualify as “deliberate practice.” Structured practice with feedback, yes. But why it had to be called “deliberate practice,” I am not sure.

    Comment by Diana Senechal — June 2, 2011 @ 3:35 pm

  12. Correction: in the second paragraph of the last comment, it should read “The students knew,” not “The student knew.”

    Comment by Diana Senechal — June 2, 2011 @ 3:37 pm

  13. Sometimes we can’t spell our own names; except Joe, of course.

    Diana, your comments expose more and more of this farce. I have lectured (in addition to other teaching methods) for forty years. So I’m not opposed to its use as an efficient means of transferring information in an organized manner. We learned a century ago that there was not time for every generation to learn that the stove is hot.

    But now you say that the week was painted in shades of Hawthorne; and there wasn’t even a real quiz upon which these statements were based?

    Comment by Ewaldoh — June 2, 2011 @ 4:02 pm

  14. Ewaldoh,

    There was a quiz. Now, just how real it was, I couldn’t say.

    It didn’t affect the grade. It was not spectacularly well attended.

    It consisted of twelve questions. Just what those questions were, or how closely they corresponded to the instruction and homework in both sections, I don’t know.

    At the end of the article, the researchers deny that there was a Hawthorne effect. But they do not address the possibility that the timing of this experiment (near the end of the year) might have made a difference. That, combined with the change of approach, might well have had an effect.

    Also, two of the three authors of the paper were the two instructors for the experimental section. The instructor of the control (lecture) section was not one of the authors. The third author of the paper is Carl Wieman, a Nobel laureate and strong proponent of peer instruction and similar approaches.

    Comment by Diana Senechal — June 2, 2011 @ 4:36 pm

  15. I don’t really like the “talk to your neighbors” approach — if students seem confused the first time round I give a few hints and let them try again. There’s definitely a herd mentality. If most people get the wrong answer the first time, without some guidance, even more people will get it wrong the second.

    Comment by Rachel — June 2, 2011 @ 4:44 pm

  16. “an efficient means of transferring information in an organized manner”-

    For a sizable number of so-called professionals in both higher ed and K-12, your perfectly reasonable statement and assumption about the purpose of classrooms is the problem.

    We are getting these manufactured studies about what methods work best because if they said explicitly in a public forum for widespread distribution that the whole point is to largely prevent the transmission of the cultural knowledge we take for granted, they might lose the power to implement the nonsense. So they say it in papers most do not read or at professional forums where everyone attending agrees with a more nonacademic emphasis. But they do say it. Repeatedly.

    “Emotional well-being can be addressed by reducing the academic dominance in schools and by increasing the social and creative aspects in what students do”.

    Yes. Because civilizations can continue to thrive when you gut the intellectual foundations on which they rest.

    Comment by Student of History — June 2, 2011 @ 4:45 pm

  17. Actually, it *was* a study.

    A study with major validity issues and dubious conclusions, but a study nonetheless.

    Thank you for exposing this, Diana. I know this is already being cited far and wide as evidence of the end of the lecture.

    Comment by Hainish — June 2, 2011 @ 6:43 pm

  18. I love how studies can be as partisan as politics at times. Here’s another study that seems to suggest that lectures and traditional teaching styles are highly effective: http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG10-15_Schwerdt_Wuppermann.pdf

    Comment by Mark — June 2, 2011 @ 6:54 pm

  19. This is a great blog for a number of reasons. Shoot, if only Robert hadn’t messed up with the typo. :)

    1. You’re right — there’s no way that the authors, like Ericcson*, of the Cambridge Manual of Expertise and Expert Performance would agree with the authors of this lame study about their use of “deliberate practice.” They’re the originators of 10,000 hours that Gladwell uses in his book!

    2. I’ve called before for a simple typology of RCT education experiments, so people don’t “overclaim” simply because they have a randomized control group. Some version of

    a. Small dosage, single setting.

    b. Simple repeat of “a” — do you get similar results?

    c. We’ve taken a small dosage experiment and tried it on a “medium” level, because it was promising — i.e., in this case, maybe 5 very different interventions of lecture versus practice; in various types of schools; maybe with very high-rated lecturers; etc.

    d. The occasional big daddy expensive study to test the best of “c.”

    That’s what happens, more or less, in oncology research.

    *Just keeping Joe on his toez.

    Comment by MG — June 2, 2011 @ 7:03 pm

  20. Lectures are generally anathema to student needs/differences. Lecture a class of how many? How many differences in this class of ability, readiness, motivation, etc.? Sure, it’s convenient for the teacher because it’s one lesson at the teacher’s pace. So what does that do for all the differences in the audience?

    Diana, don’t take this as any form of personal criticism. You know by now where I’m coming from and where I’m going.

    Comment by Paul Hoss — June 3, 2011 @ 6:23 am

  21. Rachel,

    That’s very interesting. I agree with you about the potential for a “herd mentality”–not just in physics but in other subjects. It’s an easy way of getting out of thinking for oneself.

    To be clear, the study isn’t about clickers vs. absence of clickers; clickers were used throughout the course (for summative evaluations at the end of class). In the experimental section, clickers were used during class. An instructor would pose a clicker question. Students would discuss the question in pairs (or groups of three or more) and then submit their answers. They would also perform small-group tasks that required a written response.

    Thanks to everyone for the comments so far. I wonder what the instructor of the control section thought of this whole thing. It seems a little unfair that the instructors of the experimental section were authors of the study, but the instructor of the control section was not.

    Comment by Diana Senechal — June 3, 2011 @ 1:47 pm

  22. “It seems a little unfair that the instructors of the experimental section were authors of the study, but the instructor of the control section was not.”

    That’s one of the factors that prompted my statement that it was not really a STUDY.

    There were multiple variables and no clear indication that the intent existed from the beginning of the year. “Hey, let’s try something next week” is not the beginning of a study.

    As for “herd mentality”, we see it in education, politics, and economics. Gasoline price increases are in step with all fuels, food, clothing, housing, and many salaries. They all show the typical 20x increase since I got out of school. We focus on them because they’re posted on the street and discussed to death by the media.

    Teacher salaries are one of the exceptions, having only about a 10x factor. We get by when making twice the starting salary during our career. The bottom of the scale, however, is why new teachers have a hard time without some outside support.

    Comment by Ewaldoh — June 3, 2011 @ 2:26 pm

  23. No physics professor worth his or her salt would accept results from a “qualitative” study such as this, and neither would Grover J. “Russ” Whitehurst, late of the What Works Clearinghouse. It is laughably bogus.

    Comment by TM Willemse — June 4, 2011 @ 11:56 am

  24. TM Willemse, I hope that you are right. I am sure that any physics professor who read it closely would see how bogus it was. Some may be too busy to read it closely.

    I have been asking myself two questions: why did Science publish it, and why did Edweek accept its conclusions so cheerily?

    Well, for question 1, I suspect that the presence of a Nobel laureate among the authors may have had something to do with it. Also, the study makes itself sound more serious than it is. It brings in various calculations, tables, figures–but that’s basically window-dressing, given that students took an average of 20 minutes to answer the 12 questions on the quiz, which was administered during week 13 of the second semester and didn’t count toward the final grade.

    As for question 2, Sparks may have read the study a bit too quickly, may have been partial to its conclusions, or both (or neither). She left out key details–for instance, the fact that it took place near the end of a year-long course and the fact that the instructors (plural) of the experimental section were also authors of the study.

    Comment by Diana Senechal — June 4, 2011 @ 12:23 pm

  25. “To avoid student resistance, at the beginning of the first class, several minutes were used to explain to students why the material was being taught this way and how research showed that this approach would increase their learning.”

    Ha ha, looks like the study used the conclusion it was trying to prove as its very premise.

    Let alone that, mathematically, ‘false’ implies anything. And leave aside the ‘placebo effect’ – where patients feel better if they are given an ineffective treatment as long as they are told by the doctor “research shows that this treatment works”.

    No, I think the flaw in this study is more basic: the claim that students were not ‘lectured’ in the study does not hold. Students were told, without any experimental proof to back it up, that “…research showed that this approach would increase their learning”.

    Comment by andrei radulescu-banu — June 5, 2011 @ 9:46 pm

  26. Great example of the Hawthorne effect.

    And the experimental group was told that “research showed that this approach would increase their learning,” while the other wasn’t? That’s almost an unethical interference with the experiment. None of the behavior researchers I know in my field would dare do that with a group of subjects.

    Just based on those two things alone, I would completely discount the results of this “study.” Sigh.

    Comment by Jenifer Tidwell — June 5, 2011 @ 10:19 pm

  27. [...] that how you teach is more important than who does the teaching, has received quite a bit of criticism, much of it warranted. (In many respects, Weiman’s research design violates the 101 of sound [...]

    Pingback by Shanker Blog » Great Expectations — June 9, 2011 @ 8:28 pm

  28. Very interesting blog! It’s funny how some students will respond differently to the same lecture types given by different professors.

    Comment by Steve S — July 13, 2011 @ 5:14 pm

  29. What struck me here was the use of deliberate practice; I find some schools equate deliberate practice with rote memorization or what is negatively referred to as “kill and drill” methods.

    And if students are not experiencing joy while deliberately practicing, then teachers are, oftentimes, told their lessons are simply not engaging them.

    Lectures allow me time to dig deep, ponder, reflect, consider later and be with my own thoughts before I have to go off and do something with it.

    Comment by S. Bridget — March 26, 2012 @ 8:04 pm

  30. [...] research rarely “shows” what the researchers or the media claim it shows. (See, for instance, an egregiously flawed study that purportedly shows the superiority of “deliberate practice” to the lecture [...]

    Pingback by Why the Lecture Isn’t Obsolete « Diana Senechal — August 25, 2012 @ 6:22 pm

  31. [...] What about the study of philosophical works? They don’t learn any Kant beyond a few bullet points, she acknowledged, but they get to talk and succeed. “Most people in the real world don’t know Kant either,” she explained. “It isn’t a necessary skill. What matters is the social interaction around an essential question. These students are proving my own thesis that this works.” A research report, she said, would be published soon in Science. [...]

    Pingback by The Cronk of Higher Education - College Drops Kant, Purchases Clickers — December 4, 2012 @ 12:01 pm
