Imagine reading about a new half-hour writing activity that can boost your students’ motivation and course performance. How great would that be?

Jeff Kosovich

In fact, a variety of social-psychological interventions have been found to provide these kinds of benefits to students. Many educators' first reaction may be to adopt these interventions immediately for all of their students. However, these interventions are not magic, as Yeager and Walton noted in a 2011 paper.

It's best to ask a few questions before implementing the latest idea in your classrooms, even when its effectiveness is backed by evidence.

  • How strong is the evidence that an intervention works at all?
  • What is the evidence that an intervention that was effective in one classroom will work elsewhere?

Answering these questions requires researchers and practitioners to work together.

Chris Hulleman

Recently, we began a partnership with a public college through the Carnegie Alpha Lab Research Network in an effort to help struggling students in developmental (remedial) math courses. Our aim was to use a social-psychological intervention focused on beliefs about math's relevance and usefulness to boost student success.

Although there are numerous ways to determine whether a particular intervention or program is effective, the simplest is a randomized experiment. Randomizing individuals (e.g., by pulling numbers out of a hat) into an intervention or comparison group lets us assume that both groups would have similar academic performance over time, all else being equal. The benefit of this random assignment is that we can give the intervention to one group (the Treatment Group), compare them to the other group (the Control Group), and be confident that any difference in academic success is caused by the intervention. This is because the only differences between the two groups are random chance and the intervention activity.
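For readers who like to see the mechanics, here is a minimal sketch of random assignment in Python. The student names, the even two-way split, and the `randomly_assign` helper are illustrative assumptions, not our study's actual procedure.

```python
# A minimal sketch of random assignment, not the study's actual procedure.
# The names and the 50/50 split below are illustrative placeholders.
import random

def randomly_assign(students, seed=None):
    """Shuffle students and split them into Treatment and Control groups."""
    rng = random.Random(seed)
    shuffled = students[:]      # copy so the original roster is untouched
    rng.shuffle(shuffled)       # the digital version of pulling numbers out of a hat
    midpoint = len(shuffled) // 2
    return {
        "Treatment": shuffled[:midpoint],  # completes the relevance-writing activity
        "Control": shuffled[midpoint:],    # completes the comparison (summary) activity
    }

groups = randomly_assign(["Ana", "Ben", "Cal", "Dee", "Eli", "Fay"], seed=42)
for group_name, members in groups.items():
    print(group_name, members)
```

Fixing a seed, as in the example, simply makes the assignment reproducible for record-keeping; the assignment itself is still random with respect to anything that matters for the outcome.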

Normally, when students are put in a Control group, they complete an activity that is similar to the Treatment activity but without the beneficial content. In our research, for example, Treatment students wrote about the usefulness and relevance of their course material, whereas Control students summarized course material.

In past research that Chris has been involved in, this relevance activity improved high school science students' and college psychology students' perceived value of the course, grades, and interest compared to the summary activity. We did the same thing in a recent pilot study in an effort to help students see value in their coursework, expecting results similar to those found in other classrooms.

You can imagine our surprise when the pass rate turned out to be 8% higher for the Control group than for the Treatment group! Did we actually harm students? Our initial surprise turned to curiosity: maybe there was more to the Control activity than we thought? After all, summarizing material is a common learning activity in the typical science or psychology classroom. Because summarization activities are less common in a typical math class, we may have inadvertently tested two interventions rather than just one.

With this in mind, we compared the Treatment and Control students to the pass rate of students who didn't participate at all. Using this as our comparison group, the Treatment activity increased pass rates by 4%, and the Control activity increased pass rates by 13%. We also looked at historical pass rates in the course.
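To make the baseline's role concrete, here is a small Python sketch. The pass rates are hypothetical placeholders chosen to roughly echo the figures above (the published percentages involve rounding), not the study's raw data.

```python
# Illustrative arithmetic only: these pass rates are hypothetical placeholders,
# not the study's actual data. The point is how the chosen baseline changes the story.
def percentage_point_diff(rate, baseline):
    """Difference between two pass rates, in percentage points."""
    return round((rate - baseline) * 100, 1)

pass_rates = {
    "no_activity": 0.50,  # hypothetical baseline: students who didn't participate
    "treatment":   0.54,  # hypothetical: relevance-writing group
    "control":     0.63,  # hypothetical: summary-writing group
}

# Against the Control group alone, the Treatment looks harmful...
print(percentage_point_diff(pass_rates["treatment"], pass_rates["control"]))     # -9.0

# ...but against non-participants, both activities helped.
print(percentage_point_diff(pass_rates["treatment"], pass_rates["no_activity"])) # 4.0
print(percentage_point_diff(pass_rates["control"], pass_rates["no_activity"]))   # 13.0
```

The arithmetic is trivial, but it shows why the same Treatment effect can look negative or positive depending entirely on which comparison group sits in the denominator of the question.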

As it turns out, the average pass rates from the prior spring and from similar classes over the past 10 years led us to the same conclusion: the relevance activity helped a little, but the summary activity helped a lot.

The results of our pilot study demonstrate two important points. First, comparison groups matter. An activity that is relatively innocuous for one group may be helpful (or even harmful) for another.

Second, we can't just assume an intervention works everywhere without testing it. The relevance intervention was highly effective in high school science and college psychology classes, but developmental math classes (and their students) are very different.

As psychological interventions spread quickly throughout education, it's important to consider the implications of adopting them without careful scrutiny.

Before adopting a new intervention into your school or classroom at a large scale, ask yourself:

  • “Are the students who have previously been aided by this intervention similar to my students?”
  • “Are the academic subjects where this intervention has been used before taught in the same way as my subject?”

If the answer to both questions is yes, you're ready to try some small-scale pilot testing to see whether your students will also benefit.