
Committing to Continuous Improvement in Schools: A Customizable Workbook

Figure 1: Bellwether’s refined continuous improvement cycle

Bellwether’s Academic and Program Strategy team partnered with K-12 schools in more than a dozen districts and charter networks across the country in the 2020-21 academic year to adopt continuous improvement (CI) cycles that diagnose and reverse unfinished student learning through an iterative, evidence-based approach. In this final blog post, the team provides a customizable CI workbook for use in any school context.

Last week, we unpacked the increasing value of continuous improvement (CI) cycles in education settings and included reflections from four partner schools on what Bellwether’s distinct CI process looks like in practice. 

In Bellwether’s refined CI approach, the technical and adaptive components of the cycle (Figure 1, represented by a circled “T” and “A”) are intentionally blended. This approach enables school leadership to ground CI plans in measurable goal-based data metrics within aligned, agile teams and coalitions focused on supporting seamless execution on behalf of students. 

As schools reopen this fall, leaders and educators will need to be more strategic and efficient about diagnosing and reversing unfinished student learning in their unique school settings. Bellwether’s Continuous Improvement in Schools Workbook provides a customizable way to do that.

We hope this workbook will be a useful tool as school leaders assess and respond to unfinished student learning this fall and beyond.

It’s Time for a New, Refined Commitment to Continuous Improvement in Schools

Figure 1: Bellwether’s refined continuous improvement cycle

Bellwether’s Academic and Program Strategy team partnered with K-12 schools in more than a dozen districts and charter networks across the country in the 2020-21 academic year to adopt continuous improvement (CI) cycles that diagnose and reverse unfinished student learning through an iterative, evidence-based approach. In this first of two blog posts, the team unpacks Bellwether’s comprehensive approach to CI and what each step in the process looks like in K-12 school settings. Next week, stay tuned for a customizable CI workbook for use in any school context. 

The pandemic and its disproportionate impact on students from historically marginalized communities underscore the value of continuous improvement (CI) as a framework for understanding the depth of unfinished learning and responding to it in an urgent, data-driven, and adaptive manner. In the past decade, CI has worked its way into the lexicon of educators, largely due to the Carnegie Foundation’s plan-do-study-act cycle, which has been applied to diverse education improvement efforts from implementing ESSA plans to closing achievement and opportunity gaps. This growing application of CI in education draws on more than 30 years of best practices in improving products, services, and processes through successive, rapid, evidence-based cycles across a range of sectors.

Since fall 2020, Bellwether has supported more than a dozen districts and charter networks in their CI efforts, within virtual and hybrid settings, and has developed a balanced approach to the process attuned to current realities in the field. Bellwether’s CI cycle (Figure 1) follows a familiar four-step cadence (“Envision-Execute-Examine-Enact”), but builds on prior models by adding a high-impact adaptive leadership action to what’s typically been viewed as a predominantly technical process. This modification — based on 21st century change management research from Chip and Dan Heath, Ronald Heifetz and Marty Linsky, and Dr. John Kotter — is grounded in the idea that while CI’s technical elements are critical to understand what needs to happen, the cycle ultimately doesn’t lead to sustained change without careful consideration of how that change will occur. 

In Bellwether’s refined approach to CI, the technical and adaptive components of the cycle (Figure 1, represented by a circled “T” and “A”) are intentionally blended. This approach enables school leadership to ground CI plans in measurable goal-based data metrics within aligned, agile teams and coalitions focused on supporting seamless execution on behalf of students.

What does this look like in practice?

Four of the schools Bellwether supported this year, each with its own unique context and focus, weigh in:

1. Envision

Achievers Early College Prep Charter School, a public charter middle school in Trenton, New Jersey, built and implemented a new, data-informed intervention program to accelerate the academic growth of its most vulnerable students. The technical work of the CI Envision stage consisted of AECP setting a vision to create a data-driven intervention program that would provide the right content to the right students at the right time. AECP then established a clear goal to leverage its intervention program to have 80% of its highest-need students reach 1.75 to 2 years of academic growth, as measured by the NWEA MAP assessment. Finally, AECP built a progress monitoring system to look at daily, grade-level-aligned exit tickets in intervention and core classes to measure the effectiveness of both prerequisite intervention content and grade-level-aligned content. On the adaptive side, AECP built a coalition by having a strong eighth grade teacher team pilot this approach in its first CI cycle, enabling teachers to better troubleshoot problems in real time and facilitate training for the sixth and seventh grade teams in future CI cycles.

In AECP’s words: “[This CI cycle] improved our reflection on our targeted areas for improvement. We have been more strategic on creating intervention goals and maintaining strong leadership initiatives throughout our pilot.”

2. Execute

Seguin Independent School District, a K-12 traditional public school district outside of San Antonio, Texas, centered its CI work on developing teacher instructional capacity in a virtual academy. The technical work of the CI Execute stage consisted of a team taking action on its plan by hosting biweekly, district-wide Professional Learning Communities on virtual instruction, facilitating grade level planning time aligned to those instructional moves, and conducting 1:1 observations and coaching for virtual teachers. During this process, the SISD team gathered data and monitored progress on teacher and leader attendance, engagement, and perception of transferability of new strategies to the classroom. On the adaptive side, the team remained focused on designing high-quality supports aligned to the See It, Name It, Do It framework and the National Institute for Excellence in Teaching’s Virtual Look-Fors. However, SISD also had to remain agile by adjusting programs, processes, and communications as it responded to a historic set of regional ice storms, ongoing staffing shifts related to virtual instruction, and survey feedback from teachers.

In SISD’s words: “The structures and logistics were set by the project plan and covered by the central office. This meant we had the capacity and brain space to respond to shifting circumstances and teacher needs as they arose.”

3. Examine

LEEP Dual Language Academy, a K-2 public charter school in Brooklyn, New York, focused on evaluating and coaching effective lesson planning and execution for guided reading in a hybrid setting. On the technical side of the CI Examine stage, LEEP measured impact by analyzing both process and efficacy data for its CI strategy. The team examined process data by analyzing the consistency of its strategy implementation, and dug into efficacy data to see how both teacher practice and student achievement outcomes were impacted. In this stage, the team identified the following key takeaways: (1) it was less consistent in implementing coaching and feedback on lesson execution and would need to make this shift in the second cycle of CI to drive impact, and (2) it saw less reading growth from virtual kindergarten students and identified the schedule, reading group size, and content prioritization as opportunities to address in the second cycle. The team’s adaptive work of celebrating small wins focused on noting the increased consistency of lesson plan submission and feedback to teachers in guided reading. The team also celebrated mid-year growth on the STEP assessment in second grade, with 49% of students growing two reading levels or more after one month of implementation.

In LEEP’s words: “After examining our data, I think that we have remained focused and nimble in our implementation and this has been done through careful data analysis to then inform next steps and any modifications needed to the plan.”

4. Enact

Promise Community School at Baker-Ripley, a small public charter school network in Houston, Texas, piloted a “Just In Time” (JIT) intervention model for elementary math instruction in a hybrid setting. The technical work of the team’s Enact stage centered on translating key takeaways from its first cycle of JIT intervention to make measurable shifts for a second cycle. In the first cycle of implementing the JIT intervention strategy, the team saw a 30-percentage-point increase in mastery for virtual students; however, students’ proficiency fluctuated between 50% and 70%. To increase the consistency of virtual student mastery, the Promise team shifted its data analysis to focus on remote learners by (1) analyzing remote student work and misconceptions, and (2) increasing engagement strategies during small-group virtual instruction. From an adaptive standpoint, the Promise team focused on clearly communicating adjustments for cycle 2, reinvesting the pilot team by including a rationale and updated goals for the shift, and inspiring through a reiteration of the bright spots observed in cycle 1.

In Promise’s words: “It’s never too late to reset expectations (we reset in January). We use data to help zoom in on places for focus and problem solving, and we need to be flexible and innovative with what works for our kids.”


We hope that Bellwether’s CI cycle framework and glimpses into its application in schools help educators begin to think about how this process could live in their unique school settings. For questions or comments, please feel free to email us, and stay tuned next week for a customizable CI workbook for use in any school context.

A Teacher’s Perspective on Testing in a Pandemic (and Beyond)

Photo courtesy of Allison Shelley for EDUimages.

When the Biden administration announced required state standardized testing this spring, I was angry. We’re in the middle of a pandemic. The vast majority of students at the charter school where I teach in Boston plan to stay remote for the rest of the year. What would testing look like in this context? And what could the results tell us?

Even pre-pandemic, in my five years as a high school teacher in Massachusetts and North Carolina, state tests were used for labeling schools and little else. The pressure the tests created affected my approach to teaching in ways that didn’t always serve my students. I love algebra and, in my experience, it’s much more engaging when taught experientially. But I saw a tradeoff between the time experiential lessons take and my ability to cover the total volume of material I knew my students were likely to encounter on state tests. I never received detailed student achievement reports for my current class or last year’s data, meaning that state test results didn’t help to inform differentiation in the classroom. To ensure that I provided my students with the support they needed, I relied heavily on formative quizzes and “exit ticket” assessments, but having access to student achievement reports would have exposed gaps that existed before students stepped into my classroom.

My initial anger subsided as I read more about the arguments for and against testing this spring. An opportunity exists for systemic change thanks to flexibility the Biden administration provides states in how they use test scores. Instead of labeling schools, scores can potentially inform how states and districts spend $123 billion in K-12 funds from the federal American Rescue Plan Act stimulus package. These funds can benefit all students, especially those often left out of the conversation, like students in foster care or the juvenile legal system.

Flexibility can better enable 2020-21 test data to be used productively for students and schools, creating a reset opportunity from the frustrating status quo. In order for leaders and administrators to use the data to equitably support all students, change must follow the intent in five key ways:

Tests should be scored, with reports in teachers’ and families’ inboxes, by mid-summer

Teachers begin planning for the upcoming school year over the summer, often hoping to spiral in content that needs to be remediated the very first week of school. In my experience, reports from state tests are typically not released until several weeks into the school year. Providing detailed score reports to teachers earlier, on both their incoming students and last year’s class, would allow for more practical application of test scores in instruction.

The process of sharing test data across relevant agencies should be smoother

Schools are not the only organizations that can use test data to support students. Other agencies, such as the foster care system and family support services, could use group-level data, or individual student data with appropriate privacy safeguards, to better support students outside of school. But not every state makes this kind of data sharing transparent, consistent, and student-centered. If teachers aren’t getting this data in practice, it’s doubtful that other adults in children’s lives are getting the information they need. Even the fact that a student missed testing might be useful for a social worker or another service provider to know. If other agencies are aware that a child they work with did not attend tests, they can work with the family on a back-to-school preparation plan for the fall.

School leaders and administrators should identify and support students missing from testing

The Biden administration relaxed participation requirements to account for remote schooling and ongoing COVID-19 uncertainty. Education leaders and analysts should consider which populations of students may be absent from testing this year and the potential implications for interpreting results. Some students who aren’t present for testing may need additional support and remediation. Populations with less access to remote learning include students experiencing homelessness, students living in poverty, and students living in multigenerational households. Statistically, these students are more likely to be children of color, and a lack of urgency from school administrators in providing supports may widen opportunity gaps.

Test results should inform how schools and districts spend federal stimulus funds

Districts and schools with widening opportunity gaps based on this year’s tests should shape their stimulus spending plans to address those results with research-backed interventions and improvement plans. Identifying populations most in need of support in these schools, and targeting resources accordingly, is critical. For example, if high schoolers underperformed in math, additional funding could go towards hiring in-school math tutors for students in need of additional learning support. 

Academics shouldn’t be the sole focus 

Academic performance is essential. As an algebra teacher, I want to know that my students are leaving my class ready to take on more advanced mathematical analyses. But I also want my students to get more from school than what is reflected in the state standards: I want them to feel safe, engage in deeper thinking, learn how to communicate with their classmates, and build a love of learning. Many students are struggling right now with the disruptions, trauma, and isolation the past year has brought. Remote learning has limited students’ social interaction and, for many, impeded their sense of safety and security. In addition to heeding what we can learn from state standardized tests, administrators should plan for interventions that support and serve students’ mental health and social-emotional needs. 

I’ve moved on from my initial anger at state testing this year and have embraced a wait-and-see mindset. It will be interesting to see whether this year’s results will have a meaningful positive effect on my students and/or signal larger culture shifts around state standardized tests in the long run. Regardless, the most urgent priority for educators and school administrators should be marshaling all resources and information at their disposal to support all students in recovering instructional time lost to the pandemic.


Kate Keller completed a project internship at Bellwether Education Partners this spring focused on educational continuity. She has taught high school for five years and is pursuing her Master’s degree in Human Development and Psychology at the Harvard Graduate School of Education.

How Can Educators Evaluate Virtual Student Engagement During the COVID-19 Pandemic?

Before the COVID-19 pandemic, educators commonly assessed student engagement using student self-report measures, teacher reports, and observational measures. These measures work well in in-person learning environments, where student participation and connectivity are readily visible.

Now most teaching and learning occurs in virtual spaces, where instruction is delivered through learning management systems (e.g., Google Classroom) either synchronously or asynchronously. Teachers and school leaders have to engage students — and evaluate their engagement — in a remote environment with little formal or comprehensive training and often a limited ability to even see their students.

There has been a great deal of writing recently about improving student engagement, but few resources provide guidance around measuring student engagement during remote learning. 

Based on a close read of the existing research and resources, and our own in-house expertise, Bellwether’s evaluation team is currently helping three clients to design and implement tools to evaluate virtual student engagement. While the measures themselves have not changed much during the pandemic, how those measures are used and evaluated must adapt to current realities, like many other approaches in education, to better serve and engage students and families.

Here are two key considerations for educators and school leaders to keep in mind when evaluating virtual student engagement:

There is no one-size-fits-all model for measurement

Assessment Myth-Busting During COVID-19

Assessments get a bad rap. Many educators express concerns that assessments are just an exercise in compliance, and especially with COVID-related cancellations, some seem ready to throw assessments out the window altogether. But during COVID-19, educators are likely to face classrooms with a wider-than-usual range of academic abilities due to disruptions in learning that occurred in the spring. 

Assessments are actually the best tool to help identify and narrow those gaps between students that have inevitably widened during this pandemic, as they enable us to gather evidence about where students are in the learning progression. In developing our new book, Bridging Research, Theory, and Practice to Promote Equity and Student Learning, my co-editors and I drew on the experiences of educators across the United States to illustrate how assessments can be used to identify where students are with regard to learning goals, communicate with students and families about learning goals, and support student learning. And during COVID-19, it is even more important to take these steps to ensure that students are on the right track.

Myth: Assessments just let you know if you get the content or not.
Truth: Assessments are a tool that can be used to support learning.

Myth: Assessments are just tests.
Truth: Assessments include a variety of different types of evidence of where students are with regard to learning goals.

Myth: Assessments are just used by teachers and administrators.
Truth: Assessments are useful for students and parents as well as teachers and administrators.

That said, assessment alone is not sufficient to enable all students to reach their learning goals. We need educators at all levels to use assessment data to inform next steps in instruction (in the classroom) and resource allocation (at the district, state, and federal levels) to ensure that every student has the opportunity to meet learning targets.