Category Archives: Research

Education Policy, Meet Human-Centered Design

In a lot of ways, the worlds of education policy and human-centered design couldn’t be more dissimilar. The former relies heavily on large-scale quantitative analysis and involves a long, complex public process. The latter is deeply qualitative, fast moving, creative, and generative. Policy professionals come up through the ranks in public agencies, campaigns, and think tanks. Deep issue expertise and sophisticated deductive reasoning are highly valued. Designers come from an array of backgrounds — the more unorthodox the better. Success for them comes from risk-taking, novel ideas, and synthesizing concepts across time, space, and sectors.

Figure from Creating More Effective, Efficient, and Equitable Education Policies with Human-Centered Design, comparing policy and design methods

I’m fortunate to have spent some time in both worlds. They each appeal to different parts of my personality. Policy analysis affords me order and confidence in answers based on facts. Design lets me flex my creative muscles, fail fearlessly, and have confidence in answers based on experience.

So when a grant from the Carnegie Corporation of New York gave me the opportunity to write a paper about bringing these two worlds together, I jumped at the chance — I knew that each could benefit from the other.

Creating More Effective, Efficient, and Equitable Education Policies with Human-Centered Design makes the case that policy practitioners can use human-centered methods to create better education policies because they are informed by the people whose lives will be most affected by them.

The underpinning hypothesis is threefold: 1) co-designing policies with constituents can generate more accurate definitions of problems and more relevant solutions, 2) human-centered design can generate a wider variety of potential solutions, leading to innovation, and 3) the process can mitigate or reverse constituents’ disenfranchisement from the lawmaking process.

Human-centered policy design is still a new practice, however, and there are important questions to work out, like how to make sure the process is inclusive and where exactly human-centered design methods can enhance policy research and design.

Luckily, SXSW EDU, a huge national conference focused on innovation in education, is a perfect place to test new ideas. So I reached out to Maggie Powers, director of STEAM Innovation at Agnes Irwin School and member of IDEO’s Teachers Guild, and Matt Williams, vice president of Education at Goodwill of Central Texas, to explore what it would look like to apply human-centered design to policies that affect high school students whose education suffers because of lost credits when they transfer schools. Our session will pressure test some of the ideas that emerged in the paper. The results will inform the next phase of this work, which will help policy practitioners implement human-centered design methods. Keep an ear to the ground for that!

How an East Coast/West Coast Hip Hop Rivalry Helped Us Find Evaluation’s Middle Ground

Everyone loves a good rivalry. The Hatfields vs. the McCoys. Aaron Burr vs. Alexander Hamilton. Taylor Swift vs. Katy Perry.

As evaluators, we’re partial to Tupac vs. Biggie. For the better part of three decades, these rappers from opposing coasts have remained in the public eye, recently reemerging with the release of a television series about their unsolved murders. Interestingly, their conflict about artistry and record labels mirrors a conflict within evaluation’s own ranks around a controversial question:

Can advocacy be evaluated?


On the East Coast, Harvard’s Julia Coffman acknowledges that evaluating advocacy can be challenging, thanks to the unique, difficult-to-measure goals that often accompany these efforts. Nevertheless, she contends, these challenges can be mitigated by the use of structured tools. By using a logic model to map activities, strategies, and outcomes, advocates can understand their efforts more deeply, make adjustments when needed, and, overall, reflect upon the advocacy process. This logic model, she claims, can then become the basis of an evaluation, and data collected on the model’s components can be used to evaluate whether the advocacy is effectively achieving its intended impact.

In contrast to the East Coast’s structured take, West Coast academics refer to advocacy as an “elusive craft.” In the Stanford Social Innovation Review, Steven Teles and Mark Schmitt note the ambiguous pace, trajectory, and impact related to the work of changing hearts and minds. Advocacy, they claim, isn’t a linear engagement, and it can’t be pinned down. Logic models, they claim, are “at best, loose guides,” and can even hold advocates back from adapting to the constantly changing landscape of their work. Instead of evaluating an organization’s success in achieving a planned course of action, Teles and Schmitt argue that advocates themselves should be evaluated on their ability to strategize and respond to fluctuating conditions.

Unsurprisingly, the “East Coast” couldn’t stand for this disrespect when the “West Coast” published their work. In the comment section of Teles and Schmitt’s article, the “East Coast” Coffman throws down that “the essay does not cite the wealth of existing work on this topic,” clearly referring to her own work. Teles and Schmitt push back, implying that existing evaluation tools are too complex and inaccessible and “somewhat limited in their acknowledgement of politics.” Them’s fighting words: the rivalry was born.

As that rivalry has festered, organizations in the education sector have been building their advocacy efforts, and for them, evidence about impact is a practical necessity, not an academic exercise. Advocacy organizations have limited resources and rely on funders interested in evidence-based results. Organizations also want data to fuel their own momentum toward achieving large-scale impact, so they need to understand which approaches work best, and why.

A case in point: In 2015, The Collaborative for Student Success, a national nonprofit committed to high standards for all students, approached Bellwether with a hunch that the teacher advocates in its Teacher Champions fellowship were making a difference, but the Collaborative lacked the data to back it up.

Teacher Champions, with support from the Collaborative, were participating in key education policy conversations playing out in their states. For example, in states with heated debates about the value of high learning standards, several Teacher Champions created “Bring Your Legislator to School” programs, inviting local and state policymakers into their classrooms and teacher planning meetings to see how high-quality, standards-driven instruction created engaging learning opportunities and facilitated collaborative planning.

But neither the Collaborative nor the teachers knew exactly how to capture the impact of this work. With Teacher Champions tailoring their advocacy efforts across 17 states, the fellowship required flexible tools that could be adapted to the varied contexts and approaches. Essentially, they needed an East Coast/West Coast compromise inspired by Tupac and Biggie and anchored by Coffman, Teles, and Schmitt.

Two Graphs on Teacher Turnover Rates

I have a new piece up at The 74 this morning arguing that, contrary to popular perception within the education field, we do not have a generic teacher turnover crisis. Why do I say that? Two graphs help illustrate my point.

First, consider this graph from the Bureau of Labor Statistics. It shows job openings rates by industry from 2002 to 2017. I’ve added a red arrow pointing to the line for state and local government employees who work in education (this group is predominantly public school teachers). As the graph shows, public education has consistently lower job openings rates than all other industries in our economy.

As I write in my piece today, “public schools have much lower rates of job openings, hire rates, quit rates, and voluntary and involuntary separations than every industry except the federal government. Across all these measures, public schools have employee mobility rates that are roughly half the national averages.”

Instead of having some sort of generic turnover problem that applies to all teachers nationally, we actually have problems that are unique to certain schools, districts, and subject areas. To illustrate this point, take a look at the graph below from the annual “Facts and Figures” report from BEST NC. It maps teacher turnover rates by district in North Carolina. Overall, the state has a teacher turnover rate that’s lower than the national average. But some districts have turnover rates about half of the state average, while others are twice as high as the average.

For more, go read the full piece in The 74 for my thoughts on what this means for the education field.

Disproportionate School Discipline Is Not Separate From Justice System Disparities

In December 2017, the United States Commission on Civil Rights held a public briefing addressing the school-to-prison pipeline, paying special attention to students of color and students with disabilities and the impact of school suspensions and expulsions. (You can watch the archived livestream here.) The briefing touched on an ongoing debate over whether bias is at play in school discipline.

As usual, the Commission then opened a window for written public comments. I wrote a memo to the Commission to help place the conversation about disproportionate school discipline into context: school discipline is just one manifestation of a larger and well-studied criminal justice phenomenon. (This blog post summarizes my comments; if you want to read my full memo, click here.)

Rates of disparate school discipline for students of color and students with disabilities parallel the disparate local and national rates of arrest, incarceration, and executions of people of color and people with disabilities. It is reasonable to infer that the identified causes of those disparities are likely to be similar to — if not the same as — the causes of the differential rates of school-based discipline.

Efforts to claim that questions about school discipline are new and mysterious ignore the wealth of available data and expertise going back as far as the 1950s. None of these questions are novel, and the feigned confusion about how we could possibly know when and where bias against students of color and students with disabilities affects the imposition of punitive discipline is disingenuous.

Within the research, it is undisputed that the juvenile and adult justice systems come into more frequent contact with people of color and people with disabilities than their white and non-disabled counterparts. It is also undisputed that the consequences at each point of the interaction are more severe for people of color and people with disabilities. Here are some examples:

Bias is notoriously difficult to document, particularly where researchers are not recording data themselves but instead relying on the records kept by those whose behavior is under scrutiny. But a study in Cook County, Illinois, for example, found that when controlling for all other variables, judges demonstrated racial bias: “We find evidence of significant interjudge disparity in the racial gap in incarceration rates, which provides support for the model in which at least some judges treat defendants differently on the basis of their race. The magnitude of this effect is substantial.”

It is impossible to find a credible study that concludes that, because it is difficult to ascertain the degree to which bias influences disparities, no further investigation would be appropriate. In fact, those who study the issue consistently conclude that the undisputed statistical disparities point to a need for deeper investigation of specific systems, more complete data collection, and additional targeted research.

An attempt to frame the very same phenomenon when it appears in schools as the result of applying unbiased policies and practices ignores decades of relevant research. Schools are integral to, not separate from, our civic experience. Every person — child and adult — who shows up in a school building also exists outside of that building and within our larger civic context, a context that includes our law enforcement and justice systems. Discussions about when and how statistical evidence of disproportionality should trigger an investigation cannot be had in a vacuum; they should, instead, be grounded in the substantial body of research and evidence outside the schoolhouse walls.

Many of those who believe that the statistical differences in student discipline can be explained away by out-of-school factors or by objectively different student behavior have been pushing to nullify a 2014 guidance letter issued jointly by the Departments of Justice and Education. That letter made clear that significant disproportionality in the administration of suspensions and expulsions could lead to a federal investigation.

Evidence of disproportionality in the administration of punitive discipline strategies — both at school and in the justice system — is not sufficient to identify bias. It is, however, a leading indicator of where bias may be found if one were to investigate. Additionally, all of the existing research shows that a targeted inquiry is the only way to determine whether bias is, or is not, the underlying cause of the disparity.

The Commission is expected to review all of the briefing materials and public comments and release a public report, as it typically does. These reports are non-binding on government agencies but may include commentary about pending legislation or suggest new guidelines. I expect that this report will make a specific recommendation about rescinding or maintaining the 2014 joint guidance package on school discipline. Where bias does lead to differential treatment, constitutional and statutory protections against discrimination are implicated, and federal civil rights protections must be enforced.

Best in Bellwether 2017: Our Most Read Publications and Posts

Below are the most read posts from Ahead of the Heard and our most read publications in 2017! (To read the top posts from our sister site, click here.)

Top Ten Blog Posts from Ahead of the Heard in 2017

1.) Anything But Equal Pay: How American Teachers Get a Raw Deal
By Kirsten Schmitz

2.) Exciting News
By Mary K. Wells

3.) Some Exciting Hires and Promotions
By Mary K. Wells

4.) Where Are All The Female Superintendents?
By Kirsten Schmitz

5.) An Expanded Federal Role in School Choice? No Thanks.
By Juliet Squire

6.) Teacher Turnover Isn’t Always Negative – Just Look at D.C. Public Schools’ Results
By Kaitlin Pennington

7.) Georgia Addressed Its Teacher Shortages With This One Trick
By Chad Aldeman

8.) A Day in the Life: Bellwether Analyst Andrew Rayner [Andrew’s now over at Promise54!]
By Heather Buchheim & Tanya Paperny

9.) Welcoming Our New Senior Advisers
By Mary K. Wells

10.) How Will States Handle New Title I Powers with Minimal Federal Oversight?
By Bonnie O’Keefe

Top Five Publications & Releases from Bellwether in 2017

1.) An Independent Review of ESSA State Plans
Chad Aldeman, Anne Hyslop, Max Marchitello, Jennifer O’Neal Schiess, & Kaitlin Pennington

2.) Miles to Go: Bringing School Transportation into the 21st Century
Jennifer O’Neal Schiess & Phillip Burgoyne-Allen

3.) Michigan Education Landscape: A Fact Base for the DeVos Debate
Bonnie O’Keefe, Kaitlin Pennington, & Sara Mead

4.) Voices from Rural Oklahoma: Where’s Education Headed on the Plain?
Juliet Squire & Kelly Robson

5.) The Best Teachers for Our Littlest Learners? Lessons from Head Start’s Last Decade
Marnie Kaplan & Sara Mead

To hear more, you can always sign up here to get our newsletter. Thanks for following our work in 2017!