Category Archives: Evaluation

What This Washington Post Opinion Piece Got Wrong on Charter Schools

Over the weekend, the Washington Post Outlook section ran a frustrating cover story on charter schools that offered a narrow and biased picture of the charter sector and perpetuated a number of misconceptions.

Jack Schneider’s “School’s out: Charters were supposed to save public education. Why are Americans turning against them?” argues that the charter sector as a whole isn’t living up to its promises, leading public support for the schools to shrink. Schneider is correct that the charter sector hasn’t lived up to all of its most enthusiastic boosters’ promises, but his piece flatly misrepresents data about charter quality. For example, Schneider writes that “average charter performance is roughly equivalent to that of traditional public schools.” This is simply inaccurate, as my colleagues indicated in a recent analysis of charter data and research (slide 37 here). The full body of currently available, high-quality research finds that charters outperform traditional public schools on average, with especially positive effects for historically underserved student groups (a recent Post editorial acknowledged this as well).

[Image: slide from Bellwether's "State of the Charter Sector" resource, summarizing research on charter sector performance]

To be clear, research also shows that charter performance varies widely across schools, cities, and states — and too many schools are low-performing. Yet Schneider cherry-picks examples that illustrate low points in the sector. He cites Ohio, whose performance struggles — and the poorly designed policies that led to them — Bellwether has previously written about. He also (inexplicably, given where his piece ran) overlooks Washington, D.C., where charters not only significantly outperform comparable district-run schools, but have also helped spur improvement systemwide. Over the past decade, public schools in D.C. (including both charters and DC Public Schools, or DCPS) have improved twice as fast as those in any other state in the country, as measured by the National Assessment of Educational Progress (NAEP). DCPS was the nation’s fastest-growing district in 4th grade math and among the fastest in 4th grade reading and 8th grade math. These gains can be partially attributed to the city’s changing demographics, but are also the result of reforms within DCPS — which the growth of charters created the political will to implement. Over the past decade, Washington, D.C. has also increased the number of high-performing charter schools while systematically slashing the number of students in the lowest-performing charter schools. When I served on the District of Columbia Public Charter School Board from 2009 to 2017, I had the chance to observe these exciting changes firsthand, so it was particularly disappointing to see a major feature in our city’s paper overlook them.

It’s frustrating that this biased and narrow picture was given prime real estate in one of the nation’s leading papers, because the charter sector does have real weaknesses and areas for improvement that would benefit from thoughtful dialogue. For example, as Schneider notes, transportation issues and a lack of good information can prevent many families from accessing high-quality schools. In cities with high concentrations of charters, such as Washington, D.C., and New Orleans, there is a real need to better support parents in navigating what can feel like a very fragmented system. And despite progress in closing down low-performing charter schools, too many remain in operation. Schneider could have referenced the real work charter leaders are undertaking to address these lingering challenges (more on this in slide 112 of our deck).

Schneider is correct that public support for charters has waned in recent years, due in part to some of the challenges he references, but also because of orchestrated political opposition from established interests threatened by charter school growth. Given the increasingly polarized political environment around charter schools, the need for nuanced, balanced, and data-informed analysis and dialogue about them is greater than ever. Bellwether’s recent report on the state of the charter sector, and our past work on charter schools more broadly, seek to provide that kind of analysis. Unfortunately, Schneider’s piece falls short on that score.

Why Some Educators Are Skeptical of Engaging in Rigorous Research — And What Can Be Done About It

In my previous post, I talked about the importance of rigorous research and the need for researchers to engage directly with education stakeholders. Yet some educators remain skeptical about the value of partnering with researchers, even if the research is relevant and rigorous. Why might education agencies fail to see the value of conducting rigorous research in their own settings?

For one thing, letting a researcher into the nitty-gritty of your outcomes or practices might reveal that something isn’t working. And since it’s rare that educators and practitioners are even in the same room as researchers, education agency staff may be concerned about how findings will be framed once publicized. If they don’t even know one another, how can we expect researchers and educators to overcome their lack of trust and work together effectively?

Furthermore, engaging with researchers takes time and a shift in focus for staff in education agencies, who are often stretched to capacity with compliance and accountability work. Additionally, education stakeholders may have strong preferences for certain programs or policies, and thus fail to see the importance of assessing whether these are truly yielding measurable improvements in outcomes. Finally, staff at education agencies may need to devote time to helping researchers translate findings, since researchers are not accustomed to creating summaries of research that are accessible to a broad audience.

Given all this, why am I still optimistic about connecting research, practice, and policy?

Why Is There a Disconnect Between Research and Practice and What Can Be Done About It?

What characteristics of teacher candidates predict whether they’ll do well in the classroom? Do elementary school students benefit from accelerated math coursework? What does educational research tell us about the effects of homework?

[Image: three interconnected cogs, labeled "policy," "practice," and "research"]

These are questions that I’ve heard over the past few years from educators who are interested in using research to inform practice, such as the attendees of researchED conferences. These questions suggest a demand for evidence-based policies and practices among educators. And yet, while the past twenty years have witnessed an explosion in federally funded education research and research products, data indicate that many educators are not aware of federal research resources intended to support evidence use in education, such as the Regional Educational Laboratories or the What Works Clearinghouse.

Despite a considerable federal investment in both education research and structures to support educators’ use of evidence, educators may be unaware of evidence that could be used to improve policy and practice. What might be behind this disconnect, and what can be done about it? While the recently released Institute of Education Sciences (IES) priorities call for increasing research dissemination and use, they concentrate mainly on producing and disseminating research: the supply side.

Little Kids, Big Progress: New York Times’ Head Start Coverage

It’s not often that early childhood stories make the front page of the New York Times. But this week, the paper featured an article by Jason DeParle about Head Start, a federal early childhood program that serves nearly 900,000 low-income children, and how the quality of the program has improved over the past several years.

DeParle’s article is a great example of journalism that moves past the common (and relatively useless) question of “does Head Start work?” and goes deeper into exploring how the program has improved its practices, including changes related to coaching, teacher preparation and quality, use of data, and the Designation Renewal System (all of which Bellwether has studied and written about previously). This type of reporting contributes to a more productive conversation about how to create high-quality early learning opportunities for all children, one that can inform changes to early childhood programs beyond Head Start.


As DeParle points out and the data clearly show, while there is wide variation among individual programs, overall the quality of teaching in Head Start is improving. But while this trend is undoubtedly positive, it raises some questions: What effect will these changes ultimately have on children’s academic and life outcomes? And what changes can Head Start programs make to their content and design to serve children even better?

Next month, Bellwether will release a suite of publications that tries to answer those questions. We identified five Head Start programs with evidence of better-than-average impact on student learning outcomes and thoroughly examined these programs’ practices to understand how those practices contributed to their strong performance. I visited each program, conducted in-depth interviews with program leadership and staff, reviewed program documents and data, hosted focus groups with teachers and coaches, and observed classroom quality using the Classroom Assessment Scoring System (CLASS), the measure of teaching quality on which DeParle notes Head Start classrooms nationally have shown large improvements. By better understanding the factors that drive quality among grantees and identifying effective practices, we hope to help other programs replicate these exemplars’ results and advance an equity agenda.

As the New York Times front page recently declared, Head Start’s progress offers a ray of hope in a dysfunctional federal political landscape. But there is still room for progress. Looking at what high-performing programs do well can help extend the reach and impact of recent changes to produce even stronger outcomes for young children and their families.

All I Want for Christmas Is for People to Stop Using the Phrase “Education Reform”

In a recent opinion piece at The 74, Robin Lake casts off the label of education reformer, arguing that “to imply that they are some monolithic group of reformers is ridiculous.” I agree, not so much because education reform has lost its meaning but because it never had a single definition in the first place. At the heart of reform is an acknowledgement that the educational system isn’t serving all kids well, but agreeing that the system could be improved is far from agreeing on how to get there.

[Image: definition of “education”]

To some people, “education reform” is about holding schools and districts accountable for student outcomes, which can be viewed either as a means of ensuring that society doesn’t overlook subgroups of students, or as a top-down approach that fails to account for vast differences in school and community resources. To others, education reform is shorthand for increasing school choice, or requiring students to meet specific academic standards to be promoted or graduate from high school, or revising school discipline practices that disproportionately impact students of color. Each of these ideas has supporters and detractors, and I suspect that many people who are comfortable with one type of reform vehemently disagree with another.

To take a specific example, consider teacher evaluation reform. One challenge in debating this particular education reform is that there are multiple ways teacher evaluation could change outcomes: one is providing feedback and targeted support to educators; another is identifying and removing low-performing teachers. So even when “education reform” types favor a policy, they might have very different views on the mechanisms through which that policy achieves its goals. In the case of teacher evaluation reform, the dueling mechanisms created trade-offs in evaluation design, as described by my Bellwether colleagues here. (As they note, in redesigning evaluation systems, states tended to focus on the reliability and validity of high-stakes measures and the need for professional development plans for low-performing teachers, devoting less attention to building the capacity of school leaders to provide meaningful feedback to all teachers.)

I personally agree with those who argue that teacher evaluation should be used to improve teacher practice, and I have written previously about what that might look like and about the research on evaluation’s role in developing teachers. In a more nuanced conversation, we might acknowledge that there are numerous outcomes we care about, and that even if a given policy or practice is effective at achieving one outcome — say, higher student achievement — it might have unintended consequences on other outcomes, such as school climate or teacher retention.

Instead of broadly labeling people as “education reformers,” we need to clearly define the type of reform we’re discussing, as well as the specific mechanisms through which that reform achieves its intended goals. Doing so provides the basis for laying out the pros and cons of not just the overall idea, but of the policy details that transform an idea into action. Such specificity may help us avoid the straw man arguments that have so often characterized education policy debates.