Author Archives: Cara Jackson

Correlation is Not Causation and Other Boring but Important Cautions for Interpreting Education Research

Journalists, as a general rule, use accessible language. Researchers, as a general rule, do not. So journalists who write about academic research and scholarship, like the reporters at Chalkbeat who cover school spending studies, can help disseminate research to education leaders since they write more plainly.

But the danger is that research can easily get lost in translation. Researchers may use language that appears to imply some practice or policy causes an outcome. Journalists can be misled when terms like “effect size” are used to describe the strength of an association, even though such estimates are not always causal effects.

To help journalists make sense of research findings, the Education Writers Association recently put together several excellent resources for journalists exploring education research, including 12 questions to ask about studies. For journalists (as well as practitioners) reading studies that imply that some program or policy causes the outcomes described, I would add one important consideration (a variation on question 3 from this post): if a study compares two groups, how were people assigned to the groups? This question gets at the heart of what makes it possible to say whether a program or policy caused the outcomes examined, as opposed to simply being correlated with those outcomes.

Randomly assigning people creates a strong research design for examining whether a policy or program causes certain outcomes. Random assignment minimizes pre-existing differences among the groups, so that differences in outcomes can be attributed to the treatment (the program or policy) rather than to the characteristics of the people in each group. In the image below, random assignment results in similar-looking treatment and control groups.
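The logic of random assignment can be illustrated with a small simulation. In the hypothetical sketch below (the population, scores, and selection rule are all invented for illustration), a program has no true effect, yet comparing self-selected groups produces a large apparent “effect,” while randomly assigned groups do not:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0  # the program genuinely does nothing

def outcome(prior, treated):
    # Outcomes depend on prior achievement plus noise; the program adds nothing.
    return prior + TRUE_EFFECT * treated + random.gauss(0, 5)

# Hypothetical population: each person has a prior-achievement score.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Scenario 1: self-selection -- higher achievers opt into the program.
treated_self = [outcome(p, True) for p in population if p > 55]
control_self = [outcome(p, False) for p in population if p <= 55]
naive_gap = statistics.mean(treated_self) - statistics.mean(control_self)

# Scenario 2: random assignment -- a coin flip decides group membership.
groups = [(p, random.random() < 0.5) for p in population]
treated_rand = [outcome(p, True) for p, t in groups if t]
control_rand = [outcome(p, False) for p, t in groups if not t]
random_gap = statistics.mean(treated_rand) - statistics.mean(control_rand)

print(f"Apparent effect under self-selection:    {naive_gap:.1f}")
print(f"Apparent effect under random assignment: {random_gap:.1f}")
```

Under self-selection, the pre-existing achievement gap masquerades as a program effect; under random assignment, the estimated gap hovers near the true effect of zero. This is why the “how were people assigned?” question matters so much.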

Why Some Educators Are Skeptical of Engaging in Rigorous Research — And What Can Be Done About It

In my previous post, I talked about the importance of rigorous research and the need for researchers to engage directly with education stakeholders. Yet some educators remain skeptical about the value of partnering with researchers, even if the research is relevant and rigorous. Why might education agencies fail to see the value of conducting rigorous research in their own settings?

For one thing, letting a researcher into the nitty gritty of your outcomes or practices might reveal that something isn’t working. And since it’s rare that educators/practitioners and researchers are even in the same room, education agency staff may be concerned about how findings will be framed once publicized. If they don’t even know one another, how can we expect researchers and educators to overcome their lack of trust and work together effectively?

Furthermore, engaging with researchers takes time and a shift in focus for staff in educational agencies, who are often stretched to capacity with compliance and accountability work. Additionally, education stakeholders may have strong preferences for certain programs or policies, and thus fail to see the importance of assessing whether these are truly yielding measurable improvements in outcomes. Finally, staff at educational agencies may need to devote time to help researchers translate findings, since researchers are not accustomed to creating summaries of research that are accessible to a broad audience.

Given all this, why am I still optimistic about connecting research, practice, and policy?

Why Is There a Disconnect Between Research and Practice and What Can Be Done About It?

What characteristics of teacher candidates predict whether they’ll do well in the classroom? Do elementary school students benefit from accelerated math coursework? What does educational research tell us about the effects of homework?

[Image: three interconnected cogs labeled “policy,” “practice,” and “research”]

These are questions that I’ve heard over the past few years from educators who are interested in using research to inform practice, such as the attendees of researchED conferences. These questions suggest a demand for evidence-based policies and practice among educators. And yet, while the past twenty years have witnessed an explosion in federally funded education research and research products, data indicate that many educators are not aware of federal research resources intended to support evidence use in education, such as the Regional Education Laboratories or What Works Clearinghouse.

Despite a considerable federal investment in both education research and structures to support educators’ use of evidence, educators may be unaware of evidence that could be used to improve policy and practice. What might be behind this disconnect, and what can be done about it? While the recently released Institute of Education Sciences (IES) priorities focus on increasing research dissemination and use, their focus is mainly on producing and disseminating: the supply side of research. Continue reading

Which Aspects of the Work Environment Matter Most for New Teachers?

As a member of Bellwether’s evaluation practice, there’s nothing I love more than connecting research with policy and practice. Fortunately, I’m not alone: The National Center for Analysis of Longitudinal Data in Education Research (CALDER) has launched several initiatives to succinctly describe empirical research on contemporary topics in education and encourage evidence-based policymaking.

At CALDER’s recent 12th annual conference, I had the opportunity to serve as a discussant in a session on the career trajectories of teachers. The papers in this session illustrated the potential for research to inform policy and practice, but also left me wondering about the challenges policymakers often face in doing so.

“Taking their First Steps: The Distribution of New Teachers into School and Classroom Contexts and Implications for Teacher Effectiveness and Growth” by Paul Bruno, Sarah Rabovsky, and Katharine Strunk uses data from the Los Angeles Unified School District to explore how classroom and school contexts, such as professional interactions, are related to teacher quality and teacher retention. Their work builds on prior research suggesting that school contexts are associated with the growth and retention of new teachers. As my Bellwether colleagues have noted, to ensure quality teaching at scale, we need to consider how to restructure initial employment to support new teachers in becoming effective.

In “Taking their First Steps,” the researchers developed four separate measures to understand the context in which new teachers were operating. The measure of “instructional load” combined twelve factors, including students’ prior-year performance, prior-year absences, prior-year suspensions, class size, and the proportion of students eligible for free or reduced-price lunch, eligible for special education services, or classified as English learners. “Homophily” was measured by a teacher’s similarity to students, colleagues, and administrators in terms of race and gender. “Collegial qualifications” consisted of attributes such as years of experience, National Board certification, and evaluation measures. “Professional culture” was a composite of survey responses regarding the frequency and quality of professional interactions at teachers’ school sites.
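Composite measures like these are often built by standardizing each component factor and averaging the resulting z-scores. The paper’s exact weighting isn’t described here, so the sketch below (with made-up data and factor names) shows only the general approach:

```python
import statistics

def zscores(values):
    """Standardize raw values to mean 0, standard deviation 1."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [(v - mu) / sigma for v in values]

def composite(factor_columns):
    """Average the z-scores of each factor into one index per teacher.

    factor_columns: dict mapping factor name -> list of raw values,
    one value per teacher, all lists the same length.
    """
    standardized = [zscores(col) for col in factor_columns.values()]
    n_teachers = len(next(iter(factor_columns.values())))
    return [statistics.mean(z[i] for z in standardized)
            for i in range(n_teachers)]

# Illustrative (made-up) data for four teachers.
factors = {
    "class_size":           [22, 34, 28, 40],
    "share_prior_absences": [0.05, 0.12, 0.08, 0.20],
    "share_frl_eligible":   [0.40, 0.75, 0.55, 0.90],
}
load = composite(factors)
print(load)  # teachers facing heavier loads on every factor score higher
```

By construction, the composite has mean zero across teachers, so a positive value flags an above-average instructional load and a negative value a below-average one.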

Which of these factors were related to teachers’ observation ratings and attendance? As seen in the figures below, instructional load had a significant negative relationship with teachers’ observation ratings, meaning teachers with higher instructional loads (for example, students with lower prior performance, more prior absences and suspensions, or larger class sizes) received lower ratings. Professional culture, on the other hand, had a significant positive relationship with observation ratings: in schools where teachers reported more frequent and higher-quality professional interactions, new teachers received higher ratings. Instructional load also had a strong negative relationship with attendance, meaning teachers with higher instructional loads took more personal days or used more sick leave.

Figure based on Katharine Strunk’s presentation from January 31, 2019.


All I Want for Christmas Is for People to Stop Using the Phrase “Education Reform”

In a recent opinion piece at The 74, Robin Lake casts off the label of education reformer, arguing that “to imply that they are some monolithic group of reformers is ridiculous.” I agree, not so much because education reform has lost its meaning but because it never had a single definition in the first place. At the heart of reform is an acknowledgement that the educational system isn’t serving all kids well, but agreeing that the system could be improved is far from agreeing on how to get there.

To some people, “education reform” is about holding schools and districts accountable for student outcomes, which can be viewed either as a means of ensuring that society doesn’t overlook subgroups of students, or as a top-down approach that fails to account for vast differences in school and community resources. To others, education reform is shorthand for increasing school choice, or requiring students to meet specific academic standards to be promoted or to graduate from high school, or revising school discipline practices that disproportionately impact students of color. Each of these ideas has supporters and detractors, and I suspect that many people who are comfortable with one type of reform vehemently disagree with another.

To take a specific example, consider teacher evaluation reform. One challenge in debating this particular education reform is that there are multiple ways teacher evaluation could change outcomes: one is by providing feedback and targeted support to educators; another is by identifying and removing low-performing teachers. So even when “education reform” types favor a policy, they might have very different views on the mechanisms through which that policy achieves its goals. In the case of teacher evaluation reform, the dueling mechanisms created trade-offs in evaluation design, as described by my Bellwether colleagues here. (As they note, in redesigning evaluation systems, states tended to focus on the reliability and validity of high-stakes measures and the need for professional development plans for low-performing teachers, devoting less attention to building the capacity of school leaders to provide meaningful feedback to all teachers.)

I personally agree with those who argue that teacher evaluation should be used to improve teacher practice, and I have written previously about what that might look like and about the research on evaluation’s role in developing teachers. In a more nuanced conversation, we might acknowledge that there are numerous outcomes we care about, and that even if a given policy or practice is effective at achieving one outcome — say, higher student achievement — it might have unintended consequences on other outcomes, such as school climate or teacher retention.

Instead of broadly labeling people as “education reformers,” we need to clearly define the type of reform we’re discussing, as well as the specific mechanisms through which that reform achieves its intended goals. Doing so provides the basis for laying out the pros and cons of not just the overall idea, but of the policy details that transform an idea into action. Such specificity may help us avoid the straw man arguments that have so often characterized education policy debates.