Tag Archives: Continuous Improvement

From pandemic to progress? Yes, now is the time for a national Institute for Education Improvement.


This guest post is in response to a new series of briefs from Bellwether, From Pandemic to Progress, which puts forth eight ambitious but achievable pathways that leaders and policymakers can follow to rebuild education – and student learning and well-being – as the country begins to emerge from the COVID-19 pandemic. 

As part of Bellwether’s new series of briefs, From Pandemic to Progress, Allison Crean Davis makes the case for establishing a national Institute for Education Improvement (IEI), citing the need for continuous improvement across the American education system. Davis writes, “If the U.S. education sector is to dramatically improve outcomes for students, it needs large-scale, consistent, and sustained organizational support for continuous improvement.”

Hopefully our students will soon be returning to in-person learning, but now is not the time to return to the way things were pre-pandemic. We have a new administration in Washington and, with First Lady Dr. Jill Biden, an educator in the White House. Now is the time to stop jumping from one disconnected education policy initiative to the next and to focus our national efforts on evidence-based policies and measurable indicators that actually matter for student success.

Continuous improvement is not about creating the next policy but instead focusing on improving what’s happening in the classroom and helping teachers and administrators do their work more effectively.

Continuous improvement is happening in education—look to our cities for lessons learned.

Davis notes that “continuous improvement is not new,” citing decades of continuous improvement in industry and healthcare. Continuous improvement is also happening in education at the city and district levels. Research-practice partnerships all over the country, including our own partnership between the University of Chicago and Chicago Public Schools, have led to policies and practices that build capacity for systemic school improvement.

In 1998, just over half of Chicago Public Schools students graduated from high school. By 2019, the graduation rate increased to 82 percent. A 30-percentage-point increase in graduation rates is an incredible achievement, accomplished through rigorous attention to data and dedication to continuous improvement within the district.

When we started to look at what matters most for high school graduation, there were numerous competing hypotheses, including an assumption that Chicago’s students were academically unprepared for high school. Many of those assumptions turned out to be false. What matters is supporting students through the shifting context and changing responsibilities that come with the transition to high school. By monitoring students’ grades and attendance during freshman year through the Freshman OnTrack indicator, we built a research base that educators could use in practice.
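To make the idea concrete, here is a minimal sketch of what an on-track-style indicator might look like in code. The post does not spell out the exact rule, so the field names and thresholds below are illustrative assumptions, loosely modeled on the published Freshman OnTrack definition (enough course credits earned in ninth grade, with few failures in core courses):

```python
def freshman_on_track(credits_earned, core_semester_fs,
                      min_credits=5.0, max_core_fs=1):
    """Return True if a freshman looks on track to graduate.

    credits_earned   -- full-year course credits earned in 9th grade
    core_semester_fs -- number of semester F's in core courses

    The thresholds are hypothetical defaults for illustration.
    """
    return credits_earned >= min_credits and core_semester_fs <= max_core_fs

# A school-level on-track rate is then just the share of freshmen
# who meet the rule (student records here are invented examples).
students = [
    {"id": "A", "credits": 6.0, "core_fs": 0},
    {"id": "B", "credits": 5.0, "core_fs": 1},
    {"id": "C", "credits": 4.5, "core_fs": 0},
]

on_track_rate = sum(
    freshman_on_track(s["credits"], s["core_fs"]) for s in students
) / len(students)
```

The point of such an indicator is exactly what the paragraph above describes: it is observable early (freshman year), it is built from routine administrative data, and schools can track it annually to see whether transition supports are working.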

By annually evaluating rates of Freshman OnTrack, high school graduation, college enrollment, college persistence, and college graduation, we have seen consistent progress in educational attainment. These improvements are possible because schools have access to research on what matters most for high school and college attainment, and to data that show whether their strategies are working. Supporting more students to earn college degrees must encompass their entire education experience, and we recently expanded our research-practice partnership to include City Colleges of Chicago to support Chicago’s students from pre-k through post-secondary success.

We’re asking questions and testing assumptions to learn what matters most for college success. For example, there is a misconception that GPAs are inconsistent across high schools, and that standardized test scores, like the ACT, are neutral indicators of college readiness. In fact, we found that students’ high school grades are five times stronger than their ACT scores at predicting college readiness and graduation, regardless of which high school a student attends, while ACT scores have different relationships with college graduation depending on a student’s high school.

Five Lessons We’ve Learned at the City Level

  1. Improvement comes from a back-and-forth between practice and research. Collaboration between research and practice allows practitioners to know what is working and how it’s working, and allows researchers to understand issues in nuanced ways so they can conduct studies that are useful to practice.
  2. Create metrics for things people believe are important but lack the data to measure progress. The Chicago partnership not only led to the development of Freshman OnTrack measures but also to annual data on school climate and organization, so we know whether students feel safe, challenged, and supported, and how those factors are affected by different policies. It also led to the creation of a post-secondary tracking system so schools could see whether their efforts to prepare students for college were really working.
  3. Identify where educators DON’T need to put their attention. Educators have a lot on their plates. Instead of piling on more with new policies and programs, it’s vitally important to know what they can take off their plates and what is critical. Many policies and programs do not show benefits for students, even as they increase the burden on educators—spending time preparing students for standardized tests is one example.
  4. Test assumptions to learn what’s most important for student success and what levers schools can use to effect change. Improvement requires change, and change can only occur with evidence that things are not working the way people think they should. When we began developing the Freshman OnTrack indicator, there was an assumption that students were struggling in ninth grade because they were academically unprepared. In reality, some students have difficulty transitioning to high school because it’s a new environment with increased responsibility.
  5. Pay attention to student outcomes beyond test scores—students’ work effort, engagement, and experience of school as learners are much more important for their long-term outcomes. Is this a school environment where students feel their work is meaningful? Is this school a place where students feel that they belong to a community of learners? Do students feel adults in the school believe in their ability? Do students believe they can succeed? Social-emotional factors are critical for students’ long-term outcomes.

Change the narrative we’re telling our students—and our educators.

There is a lot of emphasis right now on the risk of a “lost generation” of students resulting from remote learning during the pandemic. But what might this message inadvertently tell our students about their ability to succeed in the face of long odds? Research tells us that students are resilient, that learning loss may not be as insurmountable as we think, and that what students will need when they return to school is a safe, supportive, and challenging environment in which they believe they can succeed.

Elaine Allensworth is Director and Jenny Nagaoka is Deputy Director at the University of Chicago Consortium on School Research.

Building a School Performance Framework for System Management and Accountability? Lessons From Washington, D.C.

At its core, a school performance framework (SPF) is a data-based tool to support local decision making. An SPF designed for system management and accountability provides data and information about system-wide goals to district- or city-level leaders overseeing multiple schools, helps leaders hold schools accountable for student outcomes, allows leaders to understand which schools are performing well and which are not, and informs system-wide improvement strategies and the equitable allocation of resources. 
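As a rough illustration of what such a data-based tool does under the hood, here is a minimal sketch of an SPF-style composite rating. The indicators, weights, and tier cutoffs are hypothetical; real frameworks vary widely across districts and are the subject of the design choices discussed below:

```python
# Hypothetical indicator weights, each indicator scored 0-100.
WEIGHTS = {"proficiency": 0.4, "growth": 0.4, "attendance": 0.2}

def spf_score(indicators):
    """Weighted composite of a school's indicator values."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

def spf_tier(score):
    """Map a composite score to an accountability tier (cutoffs invented)."""
    if score >= 75:
        return "meets expectations"
    if score >= 50:
        return "approaching expectations"
    return "needs intervention"

school = {"proficiency": 60, "growth": 80, "attendance": 90}
score = spf_score(school)  # 0.4*60 + 0.4*80 + 0.2*90 = 74
tier = spf_tier(score)
```

Even this toy version surfaces the design questions the report explores: which indicators to include, how heavily to weight each one, and where to draw the tier cutoffs that trigger rewards or intervention.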

Our recent publication “School Performance Frameworks: Lessons, Cases, and Purposeful Design,” a website and report available at SchoolPerformanceFrameworks.org, identifies system management and accountability as one of three primary “use cases” that can shape SPF design decisions. A “use case” (a concept borrowed from the field of technology and design) helps designers think through their end users’ needs. Our work imagines local leaders as designers and considers how the choices they make can meet the needs of different end users, including parents, school principals, and district leaders. Among the five long-standing SPFs we looked at in detail for our project, four prioritized the use case of system management and accountability in their SPF design.

We also found that too many SPFs try to fulfill multiple uses at once, without clearly thinking through priorities and potential tradeoffs. This post is the third in a series that looks at SPFs through the lens of each use case to highlight design considerations and relevant examples.

SPFs built for system management and accountability can inform consequential decisions made at the district level about which schools should be rewarded, replicated, or expanded, and which ones require improvement, intervention, and possibly closure. These SPFs get the most attention when the data they produce result in school closures or other highly visible consequences. While closures may grab headlines and garner resentment for SPFs, a well-designed SPF can actually inject transparency, equity, and fairness into even the most challenging decisions and increase opportunities for students and families by highlighting success and supporting the expansion of quality school options. 

An SPF created for system management and accountability should include:

Continue reading

Don’t Ask if Head Start “Works” – That’s Not the Right Question

Head Start is an $8.5 billion federal program, which means everyone loves asking if it “works.” But that’s a useless question.

We know Head Start produces positive outcomes. There’s a substantial body of evidence showing that Head Start improves children’s learning at school entry. Other research shows that Head Start children are more likely to graduate from high school and have better adult outcomes than children who did not attend. And a growing body of research shows that high-quality preschool programs can produce long-lasting gains in children’s school and life outcomes.

But critics of Head Start cite the same studies I just did to make the opposite argument. They have valid points. Not every Head Start program is high quality, for example, so some programs don’t produce these positive gains for students. And the Head Start Impact Study showed that Head Start’s positive effect on test scores fades as children enter the elementary grades.

Both critics and proponents of Head Start are right – which is why the “Does it work?” question is so useless. We already know the answer, and it’s not a clean yes or no. Taken all together, the available evidence shows that Head Start is a valuable program that can get better. So instead of asking if Head Start works, we should be asking a better question: How can policymakers and practitioners make Head Start better for children and families?

That’s the question Sara Mead and I – along with Results for America, the Volcker Alliance, and the National Head Start Association – try to answer in our new report, Moneyball for Head Start. We worked with these organizations to develop a vision for improving Head Start outcomes through data, evidence, and evaluation.

Specifically, we call on local grantees, federal policymakers, the research community, and the philanthropic sector to reimagine Head Start’s continuous improvement efforts.

Local grantees: All Head Start grantees need systems of data collection and analysis that support data-informed, evidence-based continuous improvement, leading to better results for children and families.

Federal oversight: The Office of Head Start (OHS), within the Administration for Children and Families of the U.S. Department of Health and Human Services, needs a stronger accountability and performance measurement system. This would allow federal officials to identify and disseminate effective practices of high-performing grantees, identify and intervene in low-performing grantees, and support continuous improvement across Head Start as a whole.

Research and evaluation: Federal policymakers and the philanthropic sector need to support research that builds the knowledge base of what works in Head Start and informs changes in program design and policies. This will require increasing funding for Head Start research, demonstration, and evaluation from less than 0.25 percent of total federal appropriations to 1 percent, and those funds should focus on research that builds knowledge to help grantees improve their quality and outcomes.
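For a sense of scale, the recommended shift can be worked out from the $8.5 billion appropriation cited at the top of the post (treating 0.25 percent as the upper bound of current research spending):

```python
# Rough arithmetic behind the research-funding recommendation,
# using the $8.5 billion Head Start appropriation cited earlier.
appropriation = 8_500_000_000

current_research = 0.0025 * appropriation   # <0.25% of appropriations
proposed_research = 0.01 * appropriation    # the 1% target

# current_research is at most about $21 million;
# proposed_research is about $85 million, a roughly fourfold increase.
```

In other words, even the full 1 percent target amounts to a small slice of the overall program budget.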

Philanthropy and the private sector: The philanthropic sector, universities and other research institutions, and the private sector should help build grantee capacity and support the development, evaluation, and dissemination of promising practices.

Fully realizing this vision will require a multi-year commitment. There are steps, however, that Congress and the administration can take to make progress towards these goals. In the paper, we propose several recommendations for federal policy. Taken together, these actions can support Head Start grantees in using data, evidence, and evaluation to improve results for children and families.

5 Reasons Getting Rid of Annual Testing is a Dumb Idea

Senator Lamar Alexander (R-TN) and Rep. John Kline (R-MN), the incoming leaders of the Senate and House education committees, both say they are open to an ESEA rewrite that kills the requirement for states to test students annually. Or, as I called it, the “peel off the party wings” approach to reauthorization. This bipartisan coalition bonds over its hatred of statewide annual testing, but not much else. And any bill they produce would be, in essence, a giant finger to the policies of Arne Duncan and Barack Obama–and Margaret Spellings and George W. Bush before them.

Like Mike Petrilli in this Flypaper post, I hope Alexander’s and Kline’s annual testing one-eighty is all just a bluff to try to get Democrats to give in on requiring states to develop teacher evaluations. And I hope they come to their senses and reveal a more centrist reauthorization proposal–with annual statewide testing, and data reporting, and school accountability requirements with teeth.

Because getting rid of annual testing is a dumb idea. I readily acknowledge that there are very real problems with today’s tests, accountability systems, teacher evaluations, NCLB waivers, and so on. And these problems are often most acute for those most affected by them–students, families, and teachers, rather than the policymakers who wrote the law and are now responsible for updating it.

But this particular reaction–ending statewide, comparable, annual testing–is an overreaction that creates more problems than it solves. It feeds into the false narrative that testing is only able to punish, rather than inform, support, and motivate. It makes it okay that we haven’t invested nearly enough in building educator capacity to support the students that tests identify as struggling, including significant commitments to overhauling both professional development and teacher preparation. It shies away from, rather than confronts, the hard truths that tests reveal about our education system–the disparate outcomes, and disparate expectations of what students from different backgrounds, ethnicities, and socio-economic conditions can learn.

Still, given the public beating standardized tests have taken over the last decade, and the negative narrative around testing that’s solidified as a result, it remains exceedingly important for those of us who still believe in annual, statewide standardized testing to articulate–again, and again, and again–why it matters. So if the problems above weren’t sufficient to sway you, here are the top five things we lose by giving up on annual testing:

Continue reading

In Defense of Standardized Testing

According to a Gallup poll last fall, one in eight teachers thinks that the worst thing about the Common Core is testing. On the surface, that’s hardly newsworthy. We know states are changing their tests to align to the new standards, and those changes have inevitably bred uncertainty, anxiety, and even hostility, especially when results could someday carry high stakes. But the educators surveyed didn’t say they were upset that the tests were changing, or that there could be consequences tied to the results. Rather, they were upset that the tests exist. Specifically, 12 percent of U.S. public school teachers “don’t believe in standardized testing.” Much as some refuse to accept that the Earth is, indeed, getting warmer, these non-believers refuse to accept an unassailable fact: standardized testing does have positive, and predictive, value in education and in life.

More specifically, this righteous conviction—“I don’t believe in testing”—is at odds with most policy analysis. Regardless of political or ideological bent, most will admit that NCLB got one thing right: exposing achievement gaps through the disaggregation of student data. Where did that data come from? Standardized tests. Instead of ignoring longstanding disparities in schooling, NCLB’s testing regimen forced states and districts to quantify them, examine them, and most importantly, try to improve them. It gave policymakers, administrators, and educators a common language to talk about student achievement and progress, and evaluate what was working based on evidence, not perception. Sure, standardized testing needed to be refined over the last decade to enhance quality and reduce unintended consequences—and could still use upgrades and be open to further innovation. But the value of standardized testing in terms of better understanding and improving a public education system as vast and fragmented as ours is undeniable, right?

Continue reading