Category Archives: Evaluation

Media: “Culture-based education — a path to healing for Native youth?” in The Hechinger Report

Today, I have an op-ed at The Hechinger Report about the benefits of culture-based education for Native youth and all students. The piece was inspired by work our evaluation team did with the National Indian Education Association at Riverside Indian School, the nation’s oldest federally operated American Indian boarding school.

An excerpt from my op-ed:

Culture-based education provides a path to healing and responsible citizenship for all of us. It helps students become aware of and comfortable with other belief and value systems. It furthers the goals of democracy and leads students of all ethnicities and races to think more deeply about their own cultural identities while also broadening their understanding of the experiences and perspectives of others.

Finally, the fruits of culture-based education can help us understand this country’s moral debts and how to pay them. Native Americans have for too long lived in a country controlled by men who, for nearly 300 years, have consistently “elevated armed robbery to a governing principle.” Through forced removal, boarding schools and relocation, our government stole and erased Native Americans’ languages and cultural knowledge. An investment in recovering, restoring and revitalizing lost and stolen indigenous cultural knowledge could guide us in understanding this country’s bloody history and place us on a path toward reconciliation and equity.

Read the rest of my piece at The Hechinger Report, and read more of my writing about Native education here.

Correlation is Not Causation and Other Boring but Important Cautions for Interpreting Education Research

Journalists, as a general rule, use accessible language. Researchers, as a general rule, do not. So journalists who write about academic research and scholarship, like the reporters at Chalkbeat who cover school spending studies, can help disseminate research to education leaders since they write more plainly.

But the danger is that research can easily get lost in translation. Researchers may use language that appears to imply that some practice or policy causes an outcome. Journalists can be misled when a term like “effect size” is used to describe the strength of an association, because an “effect size” is not always a causal effect.
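To make that concrete, consider one common effect size measure, Cohen’s d (my illustration here; the studies in question may report other measures). It is simply a standardized difference between two group means:

$$ d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}} $$

where $s_{\text{pooled}}$ is the pooled standard deviation of the two groups. Nothing in the formula depends on how the groups were formed, so the same d could describe a true causal effect or a mere correlation.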

To help journalists make sense of research findings, the Education Writers Association recently put together several excellent resources for exploring education research, including 12 questions to ask about studies. For journalists (as well as practitioners) reading studies that imply that some program or policy causes the outcomes described, I would add one important consideration (a variation on question 3 from this post): if a study compares two groups, how were people assigned to those groups? This question gets at the heart of what makes it possible to say whether a program or policy caused the outcomes examined, as opposed to simply being correlated with them.

Randomly assigning people to groups creates a strong research design for examining whether a policy or program causes certain outcomes. Random assignment minimizes pre-existing differences between the groups, so that differences in outcomes can be attributed to the treatment (the program or policy) rather than to the characteristics of the people in each group. In other words, random assignment produces treatment and control groups that look similar before the treatment ever begins.
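As a minimal sketch of that logic (my own hypothetical simulation, not part of the original post; the student data are made up), the following Python snippet randomly assigns simulated students to two groups and checks whether a pre-existing characteristic is balanced:

```python
import random
import statistics

random.seed(42)  # for reproducibility

# Hypothetical students, each with a pre-existing characteristic
# (e.g., a prior test score) drawn from the same distribution.
students = [{"prior_score": random.gauss(70, 10)} for _ in range(1000)]

# Randomly assign each student to the treatment or control group.
for s in students:
    s["group"] = random.choice(["treatment", "control"])

treatment = [s["prior_score"] for s in students if s["group"] == "treatment"]
control = [s["prior_score"] for s in students if s["group"] == "control"]

# With random assignment, average prior scores should be nearly equal,
# so any later difference in outcomes can be attributed to the treatment
# rather than to pre-existing differences between the groups.
print(f"Treatment mean prior score: {statistics.mean(treatment):.2f}")
print(f"Control mean prior score:   {statistics.mean(control):.2f}")
```

With a sample this large, the two group means typically land within a point of each other, which is exactly the balance that makes causal attribution possible.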

What This Washington Post Opinion Piece Got Wrong on Charter Schools

Over the weekend, the Washington Post Outlook section ran a frustrating cover story on charter schools that offered a narrow and biased picture of the charter sector and perpetuated a number of misconceptions.

Jack Schneider’s “School’s out: Charters were supposed to save public education. Why are Americans turning against them?” argues that the charter sector as a whole isn’t living up to its promises, leading public support for the schools to shrink. Schneider is correct that the charter sector hasn’t lived up to all of its most enthusiastic boosters’ promises, but his piece flatly misrepresents data about charter quality. For example, Schneider writes that “average charter performance is roughly equivalent to that of traditional public schools.” This is simply inaccurate, as my colleagues indicated in a recent analysis of charter data and research (slide 37 here). The full body of currently available, high-quality research finds that charters outperform traditional public schools on average, with especially positive effects for historically underserved student groups (a recent Post editorial acknowledged this as well).

[Image: slide from Bellwether’s “State of the Charter Sector” resource, summarizing research on charter sector performance]

To be clear, research also shows that charter performance varies widely across schools, cities, and states — and too many schools are low-performing. Yet Schneider cherry-picks examples that illustrate low points in the sector. He cites Ohio, whose performance struggles — and the poorly designed policies that led to them — Bellwether has previously written about. He also (inexplicably, given where his piece ran) overlooks Washington, D.C., where charters not only significantly outperform comparable district-run schools but have also helped spur improvement systemwide.

Over the past decade, public schools in D.C. (including both charters and DC Public Schools, or DCPS) have improved twice as fast as those in any other state in the country, as measured by the National Assessment of Educational Progress (NAEP). DCPS posted the nation’s fastest growth in 4th grade math and was among the fastest in 4th grade reading and 8th grade math. These gains can be partially attributed to the city’s changing demographics, but they are also the result of reforms within DCPS — reforms that the growth of charters created the political will to implement. Over the same period, Washington, D.C. has also increased the number of high-performing charter schools while systematically slashing the number of students in the lowest-performing ones. When I served on the District of Columbia Public Charter School Board from 2009 to 2017, I had the chance to observe these exciting changes firsthand, so it was particularly disappointing to see a major feature in our city’s paper overlook them.

It’s frustrating that this biased and narrow picture drew prime real estate in one of the nation’s leading papers, because the charter sector does have real weaknesses and areas for improvement that would benefit from thoughtful dialogue. For example, as Schneider notes, transportation issues and a lack of good information can prevent many families from accessing high-quality schools. In cities with high concentrations of charters, such as Washington, D.C., and New Orleans, there is a real need to better support parents in navigating what can feel like a very fragmented system. And despite progress in closing down low-performing charter schools, too many remain in operation. Schneider could have referenced the real work charter leaders are undertaking to address these lingering challenges (more on this in slide 112 of our deck).

Schneider is correct that public support for charters has waned in recent years, due in part to some of the challenges he references, but also because of orchestrated political opposition from established interests threatened by charter school growth. Given the increasingly polarized political environment around charter schools, the need for nuanced, balanced, and data-informed analysis and dialogue about them is greater than ever. Bellwether’s recent report on the state of the charter sector, and our past work on charter schools more broadly, seek to provide that kind of analysis. Unfortunately, Schneider’s piece falls short on that score.

Why Some Educators Are Skeptical of Engaging in Rigorous Research — And What Can Be Done About It

In my previous post, I talked about the importance of rigorous research and the need for researchers to engage directly with education stakeholders. Yet some educators remain skeptical about the value of partnering with researchers, even if the research is relevant and rigorous. Why might education agencies fail to see the value of conducting rigorous research in their own settings?

For one thing, letting a researcher into the nitty-gritty of your outcomes or practices might reveal that something isn’t working. And since it’s rare for educators and researchers even to be in the same room, education agency staff may be concerned about how findings will be framed once they are publicized. If they don’t even know one another, how can we expect researchers and educators to overcome their lack of trust and work together effectively?

Furthermore, engaging with researchers takes time and a shift in focus for staff in education agencies, who are often stretched to capacity with compliance and accountability work. Additionally, education stakeholders may have strong preferences for certain programs or policies, and thus fail to see the importance of assessing whether those programs are truly yielding measurable improvements in outcomes. Finally, staff at education agencies may need to devote time to helping researchers translate findings, since researchers are not accustomed to creating summaries of their work that are accessible to a broad audience.

Given all this, why am I still optimistic about connecting research, practice, and policy?

Why Is There a Disconnect Between Research and Practice and What Can Be Done About It?

What characteristics of teacher candidates predict whether they’ll do well in the classroom? Do elementary school students benefit from accelerated math coursework? What does educational research tell us about the effects of homework?

[Image: three interconnected cogs labeled “policy,” “practice,” and “research”]

These are questions that I’ve heard over the past few years from educators who are interested in using research to inform practice, such as attendees of researchED conferences. These questions suggest a demand among educators for evidence-based policies and practices. And yet, while the past twenty years have witnessed an explosion in federally funded education research and research products, data indicate that many educators are unaware of the federal resources intended to support evidence use in education, such as the Regional Educational Laboratories or the What Works Clearinghouse.

Despite a considerable federal investment in both education research and structures to support educators’ use of evidence, educators may be unaware of evidence that could improve policy and practice. What might be behind this disconnect, and what can be done about it? While the recently released Institute of Education Sciences (IES) priorities aim to increase research dissemination and use, they focus mainly on the supply side: producing and disseminating research.