Under the leadership of Dean Bob Pianta, the Curry School has made a significant commitment to bringing research to bear on education policy and practice. (For example, see Pianta’s recent commentary in Ed Week, “Academics Can’t Shy Away From Public Role.”)
Curry professors and alumni alike often invest their entire careers in the pursuit of scientific knowledge because they want their work ultimately to improve the quality and accessibility of education for all children.
“I’m really interested in changing education,” says Patricia Jennings, associate professor of education, whose research focuses on the social and emotional contexts for learning. “That’s why I’m so glad to be at Curry. I know it’s committed to changing education in a positive way that’s based on science.”
“I want to influence the field,” agrees Bill Therien, professor of special education at Curry and former co-director of the University of Iowa’s Center for Disability Research and Education. “We are absolutely in a position to have an influence. If not us, then who?”
Yet moving implications for policy and practice out of peer-reviewed research reports and into the public discourse can feel risky for some scientists. How do education researchers bring the weight of their evidence to bear while maintaining their reputation as objective scientists who follow the data? We asked some Curry faculty members for their insights on this topic.
Rely on Evidence, Then Leap
“Despite having some reticence to jump in with set conclusions, I don’t think education scientists should be shy when there is plenty of research evidence to suggest a practice or policy is effective and meaningful,” says Jason Downer, associate professor and director of the Center for Advanced Study of Teaching and Learning. “It helps, however, when this evidence cuts across multiple research groups and investigators, so that it’s not resting solely on one group’s work.”
“The skill is in being able to state policy or practice implications in usable terms while not misrepresenting what the science says and its limits of authority, that is, how clear the results are,” says Patrick Tolan, professor and director of Youth-Nex: The U.Va. Center to Promote Effective Youth Development.
“Use careful language and discuss limitations,” offers Jennings. “‘Promising’ is a good word or ‘best evidence we have.’ Policy makers can use that.”
Faculty members sometimes differ in their thinking about when the evidence is strong enough to warrant making a recommendation, however.
“I rely on cumulative evidence across settings and time,” says Sara Rimm-Kaufman, a professor in our Educational Psychology: Applied Developmental Science program.
“Ultimately, there is still a leap of faith. We always can shy away from making assertions by saying we need more research. However, as researchers, we need to consider the broad range of existing evidence and look at patterns. We need to consider signal versus noise and consider the quality of the existing evidence. Then, ultimately, we make a decision and a recommendation.”
Jennings considers herself a bit more liberal on the question: “If there’s a randomized controlled trial that says something has an effect, policymakers should consider adopting it. So little policy is based on any evidence. If there’s a decently studied program and it’s feasible, it’s certainly worth considering.”
On the other hand, Julie Cohen, a first-year assistant professor of education, believes that the accumulation of knowledge is slow and should not be rushed, a caution that may reflect how early she is in her career. “I would rather go slow and provide more nuanced evidence than advocate for a particular approach,” she says.
Differences in the research itself may be a factor in this dilemma. Cohen says that the kinds of research she does (on teacher quality measurement and teacher preparation) may not be generalizable at the national level. She is very cautious about the limitations and representativeness of her findings.
Therien, like Jennings, does more applied intervention research. “I feel very comfortable with discussing implications and writing in practitioner journals,” he says. It helps that his practitioner pieces produce more feedback than the research articles do, and they are included in the syllabi of other professors. “That’s reinforcing,” he says. “It feels like my work is having an impact.”
There are also times in the process of policy making when an awareness of the arc of educational reform history is instructive. Derrick Alridge, professor of the history of education in our Social Foundations of Education program, views the subject of objectivity in an entirely different way.
“For historians, objectivity is not about being detached from the phenomena under study. Instead, it entails offering rigorous interpretations of phenomena or events based on primary sources—documents, oral history, archives—and a rigorous explication of how other historians have discussed and debated an issue. For me, objectivity is also about forthrightly acknowledging my subjectivities as a researcher and not detaching myself from my research.”
Black, White, or Gray?
Once a researcher feels the time is right to make a recommendation, other considerations come into play. Jim Wyckoff, professor and director of EdPolicyWorks, the Center on Education Policy and Workforce Competitiveness, is adamant that in the policy maker’s world of competing values, his own preferences as to policy outcomes should not be privileged over anyone else’s.
“I feel comfortable summarizing and discussing the evidence on a particular topic but don’t feel comfortable being an advocate for any particular policy,” he says.
From Downer’s perspective, one of the biggest challenges is that education science often operates in the gray, while policymakers and practitioners are looking for black-or-white answers.
“All too often a policymaker will ask something like, ‘How do we reduce the achievement gap in our public schools?’ As an education scientist, it is challenging to respond to this question with a succinct answer without feeling like you’re moving away from data-driven evidence.”
The response based in science, he says, is more likely of the “it depends” variety. That is, it depends on factors such as the characteristics of the leadership in the school district, the composition of the students and families, existing resources, and the district’s flexibility for making changes.
Yet the tension between policy makers’ desire for good evidence on which to base reforms and their sense of urgency to improve outcomes for students, who will not get a second chance at their education, is not lost on Wyckoff.
“My approach has been to share with policymakers as clearly as I can what I believe is known about the potential effects of their intended policy, including most likely outcomes and the range and likelihood of potential alternative outcomes,” Wyckoff says.
Ultimately, he thinks researchers may be best guided by the Hippocratic Oath to abstain from doing harm. Sometimes, even though evidence does not meet the standard for peer-reviewed publication, informed judgment suggests encouraging or discouraging the implementation of a policy.
“In other cases where effects are so uncertain,” he adds, “no advice may be the best approach.”
For some researchers, writing for education practitioners and policy makers in itself is a big leap outside their comfort zone. Many agree that this kind of writing is not a skill that academia cultivates or much values.
“Researchers don’t do a good job of creating bumper stickers for policymakers,” as Therien puts it.
Writing opinion pieces or policy recommendations requires academics to change the voice and structure of their arguments, explains Cohen. “It’s hard to shift gears, and it’s time consuming. If it’s not rewarded in the profession, it’s hard at the junior level to commit time to it.”
Even when faculty members know that the institution’s leadership wants their research in the public domain exerting influence on policy, such work may not be valued in the academic field, notes Peter Youngs, associate professor of education.
“Peer-reviewed journal articles are still what are rewarded by external letter writers for promotion and tenure. They also tend to be what is valued by professors training their doctoral students, and it’s what doctoral students are often socialized to value at professional conferences,” he says.
Youngs, whose academic career started in 2003, generally puts more emphasis on publishing in refereed research journals than on translating journal articles into forms accessible to policy makers and practitioners. He agrees, though, that it is important for tenured professors to look for ways to make their research accessible to broader audiences, which might involve “publishing in practitioner-oriented journals, writing reports for think tanks, and presenting to audiences of policy makers and practitioners.”
When researchers get involved in testifying, writing op-eds, and entering the public debate, they need to be clear about how such work differs from academic research. In pursuing such activities, Youngs says, “researchers need to make sure they are appealing to established findings from multiple studies by different groups of researchers, not simply to a finding from a single study or simply to their own values and convictions.”
Clearly, much work remains to be done to ensure that policy and practice are guided by science. “Styles of teaching and views on education swing like a pendulum,” says Rimm-Kaufman. “With each swing, there are key policies or practices that stick and become fully realized in education policy and practice on a long-term basis. The goal is that those key policies or practices are grounded in evidence, not solely intuitive notions of what works or doesn’t work.”
Keep the Science Coming
Yet another role remains for the education sciences, one that may resolve some of the reticence around making recommendations. The science doesn’t have to end with initial research and development, says Downer. It can continue into the evaluation stage during implementation at a school, district, or state level.
“The application of any new educational initiatives—whether that be a new teacher evaluation system, or a coaching model for improving mathematics instruction—serves as a learning opportunity and a way to enact continuous quality improvement that involves data-based decision-making.”
Conservative standards for “enough” evidence may inhibit the diffusion of knowledge and leave too much room for politics and intuition to influence policy and practice. Yet Downer believes that this critical role for rigorous evaluation leaves room for education scientists to partner with policymakers and practice leaders, keeping the science going as new interventions and policies roll out.
It’s a perspective, perhaps, that takes a little of the risk out of taking a stand.
See which nine Curry professors showed up in the 2015 RHSU Edu-Scholar Public Influence Rankings. The rankings, published in Education Week, recognize faculty members who “contribute most substantially to public debates about education.”