Curry Education Research Lectureship Series
All lectures are FREE and open to the public. No registration is required.
Bagels and coffee will be available.
Parking is available at the Central Grounds Parking Garage.
For recommended readings or other questions about the series, please contact CurryVEST@virginia.edu.
Unless noted, lectures are sponsored by the Virginia Education Sciences Training (VEST) Program, supported by the U.S. Department of Education Institute of Education Sciences (IES), and the Curry School of Education Dean’s Office.
Natural Opportunities for Academic Learning and Mental Health in Urban Schools: Evidence from Intervention Trials
Elise Cappella, Associate Professor of Applied Psychology, New York University
Friday, February 26, 2016, 11:00 AM–12:30 PM
Holloway Hall (Rm 116), Bavaro Hall
Keynote for the Curry Research Conference (CRC).
Elise Cappella is an Associate Professor of Applied Psychology at NYU’s Steinhardt School of Culture, Education, and Human Development, and Interim Deputy Director of NYU’s Institute of Human Development and Social Change. She is co-PI of NYU’s Institute of Education Sciences (IES) Predoctoral Interdisciplinary Research Training Fellowship (PIRT) dedicated to training the next generation of education scientists. Dr. Cappella’s work focuses on understanding and promoting mental health and academic achievement among children in urban low-income schools. She studies teaching practices, peer relationships, and school contexts that influence child and youth development, with a focus on students with disruptive behavior problems. Dr. Cappella’s research has been recognized in grants from the National Institute of Mental Health, Spencer Foundation, Institute of Education Sciences, and Foundation for Child Development. The ultimate goal of Dr. Cappella’s work is to strengthen education science and enable more schools to fulfill their mission to enhance development for all students.
Abstract: In schools, mental health goals are too often poorly aligned with academic learning goals. This paper describes two innovative models designed to strengthen contexts of academic learning and mental health for students with and without behavioral difficulties. Links to Learning and BRIDGE were developed via community-university-school partnerships and delivered by existing school mental health professionals in urban elementary schools. Intent-to-treat analysis within two randomized trials revealed short-term effects on observed teaching practices and on student academic and psychosocial outcomes. A secondary analysis of classroom peer contexts demonstrated the need to better understand and target peer social networks and academic norms. Efforts to extend these models to include peer contexts as intervention targets and to embed these approaches into broader education and mental health systems will be discussed.
What Should Teachers Know About the Basic Science of Psychology?
Daniel Willingham earned his B.A. from Duke University in 1983 and his Ph.D. in Cognitive Psychology from Harvard University in 1990. He is currently Professor of Psychology at the University of Virginia, where he has taught since 1992. Until about 2000, his research focused solely on the brain basis of learning and memory. Today, all of his research concerns the application of cognitive psychology to K-16 education. He writes the “Ask the Cognitive Scientist” column for American Educator magazine, and is the author of Why Don't Students Like School?, When Can You Trust the Experts?, and Raising Kids Who Read.
Abstract: Although most teacher education programs include instruction in the basic science of psychology, teachers later report that this training has low utility. Researchers have considered what sort of information about children’s thinking, emotion, and motivation would be useful for teachers’ practice. Here I take a different tack. I begin by considering three varieties of statements in basic science: observations, theoretical statements, and epistemic assumptions. I suggest that the first of these can support classroom application, but the latter two cannot. I use that conclusion as a starting point for considering preservice teacher instruction in psychology.
Randomized Trial Meets the Real World: Exploring and Explaining Null Results in Recent Federally Funded RCTs
Heather C. Hill is the Jerome T. Murphy Professor of Education at the Harvard Graduate School of Education. Her primary work focuses on teacher and teaching quality and the effects of policies aimed at improving both. She is also known for developing instruments for measuring teachers’ mathematical knowledge for teaching (MKT) and the mathematical quality of instruction (MQI) within classrooms. She was co-director of the National Center for Teacher Effectiveness and also principal investigator of a five-year study examining the effects of Marilyn Burns Math Solutions professional development on teaching and learning. Her other interests include research use within the public sector and the role that language plays in the implementation of public policy. She has served on the editorial boards of Journal of Research in Mathematics Education and the American Educational Research Journal. She is the coauthor, with David K. Cohen, of Learning policy: When state education reform works (Yale Press, 2001).
Abstract: Since 2002, IES has invested heavily in rigorous trials of programs designed to improve educational outcomes for children. Over a decade later, the Coalition for Evidence-Based Policy (CEBP) announced that of 77 methodologically strong IES-commissioned (contract) studies, only 7 – or 9% – had a significant positive impact. This presentation explores these surprising findings in two ways. First, it examines the prevalence of null effects in IES’s competitive grant programs, seeking to replicate or revise the findings from the CEBP study. Second, we describe and analyze treatment fidelity among these studies, examining the extent to which problems with implementation, often cited as a barrier to program success, explain null results. Following the example of Rimm-Kaufman, Wanless, and colleagues (2013; 2014), we argue for building study capacity to investigate reasons for implementation failure.