Tag Archives: Research

Learning from a Missed Opportunity in Massachusetts

If current predictions hold, several states will either set new limits on charter school growth and expansion or stand by their current ones. These limits, called charter school caps, place a ceiling on the number of charter schools or the number of students those schools can enroll. Massachusetts faced the same choice in 2016: Voters rejected Ballot Question 2, which would have raised the cap on charter schools in the state. But research released just last week suggests that Massachusetts’ voters made a mistake. The states currently considering similar legislation should pay attention.

In the study I’m referencing, authors Sarah Cohodes, Elizabeth Setren, and Christopher R. Walters examined the effect of a policy that allowed effective charter schools in Boston to replicate their school models at new locations. They found that these new schools produced large achievement gains that were on par with those of their parent campuses. What’s more, the average effectiveness of charter middle schools in the city increased after the policy reform.

This evidence could, perhaps, be dismissed if the sector saw only a marginal increase in the number of schools; that is, if there were only a few additional charter schools that pulled this off. But that’s not the case: Boston’s charter sector produced these results despite a doubling of the charter market share in the city.

This analysis would be a big deal for any charter sector, but it is particularly meaningful for Boston. As Bellwether found in a recent analysis of the charter sector, Boston has the highest-performing urban charter sector in the country. The average child who attended Boston charter schools benefited from roughly a full year of additional learning compared to students in traditional public schools: 170 additional days of learning in reading and 233 days of learning in math. And the research suggests that Boston charter schools have strong, positive effects on the learning outcomes of students with disabilities and English-language learners as well. The implication is that Boston’s charter schools did not just replicate their impact; they replicated some of the most effective charter schools we’ve ever seen, to the benefit of the thousands of students in Boston who are on charter school waitlists.

The states that are poised to double down on charter caps — such as New York, Maine, and California — shouldn’t repeat the mistake Massachusetts made in 2016. New York, in particular, is at risk here: In our analysis earlier this year, we examined the existing evidence on New York and New York City and found that there, too, charters are more effective than traditional public schools. By committing to the cap, the state is denying thousands of students the opportunity to attend high-quality schools.

To be sure, there are reasons to question the growth of a charter sector other than whether charters can replicate effectiveness across schools. Charter critics cite, for example, concerns about the effect of charter sector growth on traditional public school enrollment. But, particularly during National Charter Schools Week, states should be skeptical of arguments used to support charter school caps that claim charter schools cannot be replicated effectively.

Why Is There a Disconnect Between Research and Practice and What Can Be Done About It?

What characteristics of teacher candidates predict whether they’ll do well in the classroom? Do elementary school students benefit from accelerated math coursework? What does educational research tell us about the effects of homework?

Figure: three interconnected cogs labeled policy, practice, and research

These are questions that I’ve heard over the past few years from educators who are interested in using research to inform practice, such as the attendees of researchED conferences. These questions suggest a demand for evidence-based policies and practice among educators. And yet, while the past twenty years have witnessed an explosion in federally funded education research and research products, data indicate that many educators are not aware of federal research resources intended to support evidence use in education, such as the Regional Education Laboratories or What Works Clearinghouse.

Despite a considerable federal investment in both education research and structures to support educators’ use of evidence, educators may be unaware of evidence that could be used to improve policy and practice. What might be behind this disconnect, and what can be done about it? While the recently released Institute of Education Sciences (IES) priorities focus on increasing research dissemination and use, their focus is mainly on producing and disseminating research: the supply side.

Expand Your Ed Policy Toolkit with Human-Centered Design

Design Methods for Education Policy Website

In February, I released a white paper making the case that policy professionals can create better education policies by using human-centered research methods because these methods are informed by the people whose lives will be most affected.

Yesterday, we released a companion website (https://designforedpolicy.org/) that curates 54 human-centered research methods well-suited to education policy into one easy-to-navigate resource. We took methods from organizations like IDEO, Stanford’s Hasso Plattner Institute of Design, and Nesta and organized them by the phases of a typical education policy project. We included brief explanations of how each method might be applied to your current work.

To be sure, you probably already use some human-centered design methods in your work, even if you don’t think of them that way. Interviews and observations are commonplace and provide highly valuable information. What the design world brings is a mindset that explicitly and deeply values the lived experiences of the people who are most impacted by problems and an array of methods to capture and analyze that information. It also adds a heavy dose of creativity to the process of identifying solutions. And despite a common misconception, when done well, human-centered design methods are very rigorous, fact-based, and structured to root out assumptions and biases.

When combined, common policy analysis methods and human-centered design methods can result in a powerful mix of quantitative and qualitative, deductive and inductive, macro and micro, rational and emotional elements.

Education Policy, Meet Human-Centered Design

In a lot of ways, the worlds of education policy and human-centered design couldn’t be more dissimilar. The former relies heavily on large-scale quantitative analysis and involves a long, complex public process. The latter is deeply qualitative, fast moving, creative, and generative. Policy professionals come up through the ranks in public agencies, campaigns, and think tanks. Deep issue expertise and sophisticated deductive reasoning are highly valued. Designers come from an array of backgrounds — the more unorthodox the better. Success for them comes from risk-taking, novel ideas, and synthesizing concepts across time, space, and sectors.

Figure from Creating More Effective, Efficient, and Equitable Education Policies with Human-Centered Design comparing policy and design methods

I’m fortunate to have spent some time in both worlds. They each appeal to different parts of my personality. Policy analysis affords me order and confidence in answers based on facts. Design lets me flex my creative muscles, fail fearlessly, and have confidence in answers based on experience.

So when a grant from the Carnegie Corporation of New York gave me the opportunity to write a paper about bringing these two worlds together, I jumped at the chance — I knew that each could benefit from the other.

Creating More Effective, Efficient, and Equitable Education Policies with Human-Centered Design makes the case that policy practitioners can use human-centered methods to create better education policies because the resulting policies are informed by the people whose lives will be most affected by them.

The underpinning hypothesis is that 1) co-designing policies with constituents can generate more accurate definitions of problems and more relevant solutions, 2) human-centered design can generate a wider variety of potential solutions leading to innovation, and 3) the process can mitigate or reverse constituent disenfranchisement with the lawmaking process.

Human-centered policy design is still a new practice, however, and there are still important questions to work out, like how to make sure the process is inclusive and where exactly human-centered design methods can enhance policy research and design.

Luckily, SXSW EDU, a huge national conference focused on innovation in education, is a perfect place to test new ideas. So I reached out to Maggie Powers, director of STEAM Innovation at Agnes Irwin School and member of IDEO’s Teachers Guild, and Matt Williams, vice president of Education at Goodwill of Central Texas, to explore what it would look like to apply human-centered design to policies that affect high school students whose education suffers because of lost credits when they transfer schools. Our session will pressure test some of the ideas that emerged in the paper. The results will inform the next phase of this work, which will help policy practitioners implement human-centered design methods. Keep an ear to the ground for that!

Disproportionate School Discipline Is Not Separate From Justice System Disparities

In December of 2017, the United States Civil Rights Commission held a public briefing addressing the school-to-prison pipeline, paying special attention to students of color and students with disabilities and the impact of school suspensions and expulsions. At the center of that briefing is an ongoing debate over whether bias is at play in school discipline. (You can watch the archived livestream here.)

As usual, the Commission then opened a window for written public comments. I wrote a memo to the Commission to help place the conversation about disproportionate school discipline into context: school discipline is just one manifestation of a larger and well-studied criminal justice phenomenon. (This blog post summarizes my comments; if you want to read my full memo, click here.)

Rates of disparate school discipline for students of color and students with disabilities parallel the disparate local and national rates of arrest, incarceration, and execution of people of color and people with disabilities. It is reasonable to infer that the identified causes of those disparities are likely to be similar to — if not the same as — the causes of the differential rates of school-based discipline.

Efforts to claim that questions about school discipline are new and mysterious ignore the wealth of available data and expertise going back as far as the 1950s. None of these questions are novel, and the feigned confusion about how we could possibly know when and where bias against students of color and students with disabilities affects the imposition of punitive discipline is disingenuous.

Within the research, it is undisputed that the juvenile and adult justice systems come into more frequent contact with people of color and people with disabilities than with their white and non-disabled counterparts. It is also undisputed that the consequences at each point of that interaction are more severe for people of color and people with disabilities. Here are some examples:

Bias is notoriously difficult to document, particularly where researchers are not recording data themselves but instead relying on the records kept by those whose behavior is under scrutiny. But a study in Cook County, Illinois, for example, found that when controlling for all other variables, judges demonstrated racial bias: “We find evidence of significant interjudge disparity in the racial gap in incarceration rates, which provides support for the model in which at least some judges treat defendants differently on the basis of their race. The magnitude of this effect is substantial.”

No credible study concludes that, because it is difficult to ascertain the degree to which bias influences disparities, further investigation would be inappropriate. In fact, those who study the issue consistently conclude that the undisputed statistical disparities point to a need for deeper investigation of specific systems, more complete data collection, and additional targeted research.

An attempt to frame the very same phenomenon when it appears in schools as the result of applying unbiased policies and practices ignores decades of relevant research. Schools are integral to, not separate from, our civic experience. Every person — child and adult — who shows up in a school building also exists outside of that building and within our larger civic context, a context that includes our law enforcement and justice systems. Discussions about when and how statistical evidence of disproportionality should trigger an investigation cannot be had in a vacuum; they should, instead, be grounded in the substantial body of research and evidence outside the schoolhouse walls.

Many of those who believe that the statistical differences in student discipline can be explained away by out-of-school factors or by objectively different student behavior have been pushing to nullify a 2014 guidance letter issued jointly by the Departments of Justice and Education. That letter made clear that significant disproportionality in the administration of suspensions and expulsions could lead to a federal investigation.

Evidence of disproportionality in the administration of punitive discipline strategies — both at school and in the justice system — is not sufficient to identify bias. It is, however, a leading indicator of where bias may be found if one were to investigate. Additionally, all of the existing research shows that a targeted inquiry is the only way to determine whether bias is, or is not, the underlying cause of the disparity.

The Commission is expected to review all of the briefing materials and public comments and release a public report, as it typically does. These reports are non-binding on government agencies but may include commentary about pending legislation or suggest new guidelines. I expect that this report will make a specific recommendation about rescinding or maintaining the 2014 joint guidance package on school discipline. Where bias does lead to differential treatment, constitutional and statutory protections against discrimination are implicated, and federal civil rights protections must be enforced.