
Correlation is Not Causation and Other Boring but Important Cautions for Interpreting Education Research

Journalists, as a general rule, use accessible language. Researchers, as a general rule, do not. So journalists who write about academic research and scholarship, like the reporters at Chalkbeat who cover school spending studies, can help disseminate research to education leaders because they write more plainly.

But the danger is that research can easily get lost in translation. Researchers may use language that appears to imply that some practice or policy causes an outcome. Journalists can be misled when terms like “effect size” are used to describe the strength of an association, even though the associations described are not always causal effects.

To help journalists make sense of research findings, the Education Writers Association recently put together several excellent resources for journalists exploring education research, including 12 questions to ask about studies. For journalists (as well as practitioners) reading studies that imply that some program or policy causes the outcomes described, I would add one important consideration (a variation on question 3 from this post): if a study compares two groups, how were people assigned to the groups? This question gets at the heart of what makes it possible to say whether a program or policy caused the outcomes examined, as opposed to simply being correlated with those outcomes.

Randomly assigning people creates a strong research design for examining whether a policy or program causes certain outcomes. Random assignment minimizes pre-existing differences among the groups, so that differences in the outcomes can be attributed to the treatment (program or policy) instead of different characteristics of the people in the groups. In the image below, random assignment results in having similar-looking treatment and control groups.
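The balancing effect of random assignment is easy to see in a quick simulation. The sketch below uses made-up student records (the names and numbers are illustrative, not from any real study): after shuffling students into two groups at random, the groups end up with nearly identical average prior test scores, so any later difference in outcomes can more credibly be attributed to the treatment.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

# Hypothetical student records, each with a prior-year test score (illustrative data).
students = [{"id": i, "prior_score": random.gauss(70, 10)} for i in range(1000)]

# Random assignment: shuffle the list, then split it in half.
random.shuffle(students)
half = len(students) // 2
treatment = students[:half]
control = students[half:]

# With random assignment, the two groups should look alike on pre-existing
# characteristics -- here, their average prior scores are nearly equal.
t_mean = statistics.mean(s["prior_score"] for s in treatment)
c_mean = statistics.mean(s["prior_score"] for s in control)
print(f"treatment mean: {t_mean:.1f}, control mean: {c_mean:.1f}")
```

By contrast, if students were sorted into groups by some non-random rule (say, whoever volunteers first joins the treatment group), the groups could differ on motivation or prior achievement before the program even begins, and the comparison would confound those differences with the program's effect.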

Which Aspects of the Work Environment Matter Most for New Teachers?

As a member of Bellwether’s evaluation practice, there’s nothing I love more than connecting research with policy and practice. Fortunately, I’m not alone: The National Center for Analysis of Longitudinal Data in Education Research (CALDER) has launched several initiatives to succinctly describe empirical research on contemporary topics in education and encourage evidence-based policymaking.

At CALDER’s recent 12th annual conference, I had the opportunity to serve as a discussant in a session on the career trajectories of teachers. The papers in this session illustrated the potential for research to inform policy and practice, but also left me wondering about the challenges policymakers often face in doing so.

“Taking their First Steps: The Distribution of New Teachers into School and Classroom Contexts and Implications for Teacher Effectiveness and Growth” by Paul Bruno, Sarah Rabovsky, and Katharine Strunk uses data from the Los Angeles Unified School District to explore how classroom and school contexts, such as professional interactions, are related to teacher quality and teacher retention. Their work builds on prior research that suggests school contexts are associated with the growth and retention of new teachers. As my Bellwether colleagues have noted, to ensure quality teaching at scale, we need to consider how to restructure initial employment to support new teachers in becoming effective.

In “Taking their First Steps,” the researchers developed four separate measures to understand the context in which new teachers were operating. The measure of “instructional load” combined twelve factors, including students’ prior-year performance, prior-year absences, prior-year suspensions, class size, and the proportion of students eligible for free or reduced-price lunch, eligible for special education services, or classified as English learners. “Homophily” was measured by a teacher’s similarity to students, colleagues, and administrators in terms of race and gender. “Collegial qualifications” consisted of attributes such as years of experience, National Board certification, and evaluation measures. “Professional culture” was a composite of survey responses regarding the frequency and quality of professional interactions at teachers’ school sites.
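One common way to combine many factors into a single index like “instructional load” is to standardize each factor and average the results. The sketch below is a generic illustration of that approach with made-up classroom data, not the authors’ actual method or weighting, which the paper itself would need to be consulted for.

```python
import statistics

# Hypothetical classroom records (illustrative; the real measure combines twelve factors).
classrooms = [
    {"prior_score": 62.0, "class_size": 32, "pct_frl": 0.85},
    {"prior_score": 74.0, "class_size": 24, "pct_frl": 0.40},
    {"prior_score": 68.0, "class_size": 28, "pct_frl": 0.60},
]

def zscores(values):
    """Standardize a list of values to mean 0, standard deviation 1."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Sign each factor so that a higher value always means a heavier load,
# then average the standardized factors into one composite index.
factor_signs = {
    "prior_score": -1,  # lower prior achievement -> heavier load
    "class_size": 1,    # larger class -> heavier load
    "pct_frl": 1,       # more students in poverty -> heavier load
}
standardized = {k: zscores([c[k] for c in classrooms]) for k in factor_signs}
load = [
    statistics.mean(sign * standardized[k][i] for k, sign in factor_signs.items())
    for i in range(len(classrooms))
]
print([round(x, 2) for x in load])
```

In this toy example, the first classroom (lowest prior scores, largest class, most students in poverty) gets the highest load score, which is the basic property any such composite should have.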

Which of these factors were related to teachers’ observation ratings and attendance? As seen in the figures below, instructional load had a significant negative relationship with teachers’ observation ratings, meaning teachers with higher instructional loads (such as students with lower prior performance, more prior absences and suspensions, or larger class sizes) received lower ratings. On the other hand, professional culture had a significant positive relationship with observation ratings, meaning that in schools where teachers had more and higher-quality professional interactions, new teachers received higher observation ratings. Instructional load also had a strong negative relationship with attendance rates, meaning teachers with higher instructional loads took more personal days or used more sick leave.

Figure based on Katharine Strunk’s presentation from January 31, 2019.
