Category Archives: Student Assessment

Stop Saying “At Least We’re Not Mississippi”: A Q&A With Rachel Canter of Mississippi First

There’s a tired trope in Southern states: “At least we’re not Mississippi.” The implication is that while one’s state may be underperforming on some measure — poverty, rates of uninsured, education outcomes, etc. — Mississippi can always be counted on to look worse. 

Having grown up, taught school, and worked in education policy across the South my whole life (but not in Mississippi), I’ve heard this statement plenty. I heard it as recently as this fall at a conference, leveled by a national thought leader who ought to know better. 

Last spring, Bellwether released “Education in the American South,” a data-filled report which highlighted, among other things, how the national education reform conversation has largely bypassed the South — a conclusion bolstered by the persistence of this Mississippi myth.

Here’s the thing: While many of us look down our noses, Mississippi has been working hard — and it’s been paying off. In the most recent National Assessment of Educational Progress (NAEP) scores, Mississippi was the only state to see improvements in reading and had the biggest gains in fourth-grade reading and math. Mississippi’s gains have been nearly continuous over the last 16 years and mostly unmatched in the region.

To dig more deeply into what’s gone right in Mississippi, I talked to Rachel Canter, longtime Mississippian and co-founder and Executive Director of Mississippi First, an education policy, research, and advocacy nonprofit working to ensure that every Mississippi student has access to excellent schools.

This conversation has been lightly edited for length and clarity.

The most recent NAEP results highlight the progress schools and students in Mississippi have made, but 2019 isn’t the beginning of this story. When did the tide start to turn and why? Continue reading

GreatSchools Ratings Have a Lot in Common with State and Local Ratings — for Better or Worse

Last Thursday the education world was all a-twitter about an article and analysis on GreatSchools, a widely used nonprofit school rating organization whose 1-10 ratings often show up at the top of search results and on popular real estate websites. Their ratings are known to sway families’ decisions on where to live and send their kids to school.


The main thrust of Matt Barnum and Gabrielle LaMarr LeMee’s piece in Chalkbeat is that GreatSchools’ substantial reliance on test score proficiency as a measure of school quality favors schools whose students enter already performing at a higher level. Since these students are more likely to be white and high-income, they argue the GreatSchools ratings may end up exacerbating segregation by influencing families’ housing and school decisions. 

These very same criticisms often come up in debates about local or state school ratings and how best to use test scores in general. In the conversation below, the authors of Bellwether’s recent report and website on school performance frameworks (SPFs) discuss the findings of the GreatSchools report, and how the strengths and weaknesses of GreatSchools’ approach compare to state and local school ratings.

Bonnie O’Keefe:

GreatSchools’ data comes from states, and its metrics and methods aren’t too dissimilar from what we see in many local school performance frameworks, state ESSA ratings, and the No Child Left Behind ratings that came before. Much like many states and districts, GreatSchools has changed its rating system over time as more and better data became available. So the idea that ratings based even in part on proficiency disadvantage schools serving higher-need students isn’t unique to GreatSchools. In fact, a nearly identical critique sank Los Angeles’ proposed school ratings before they were even created. What is unique is how widely used, influential, and perhaps misunderstood GreatSchools’ ratings are among families.

Brandon Lewis:

The biggest difference I see between the GreatSchools rating system and the local school performance frameworks (SPFs) we profiled for our project is that they have different goals and purposes. GreatSchools is a widely viewed, public-facing tool designed to communicate that organization’s particular perspective on school quality. Unlike local SPFs, GreatSchools’ ratings are not tied to any specific goals for students or schools and cannot be used to make any district-level decisions.

Continue reading

A School Performance Framework Could Be Huge for Los Angeles. Why Is the District Backtracking?

This week, Los Angeles Unified School District (LAUSD) could miss a big opportunity for students, families, and district leaders.

Under the Every Student Succeeds Act (ESSA), states must create a report card for every single one of their schools. Unfortunately, California’s approach to reporting school data under ESSA is both overly complex and lacking in key information. That’s why the LAUSD board took the first steps last year to create its own school performance framework (SPF), which could provide families, educators, and taxpayers more and better information about how well schools are serving students. But the board now appears to be backtracking on that commitment.

An SPF is an action-oriented tool that gathers multiple metrics related to school quality and can be used by system leaders, principals, and/or parents to inform important decisions like how to intervene in a low-performing school, where to invest in improvements, and which school to choose for a child.

As my colleagues wrote in their 2017 review of ESSA plans, California’s complicated system relies on “a color-coded, 25-square performance grid for each indicator” and “lacks a method of measuring individual student growth over time.” In 2018, LAUSD board members tried to improve upon the state’s approach by passing a resolution to create their own SPF. In a statement from the board at that time, members intended that LAUSD’s SPF would serve as “an internal tool to help ensure all schools are continuously improving,” and “share key information with families as to how their schools are performing.”

A local SPF could provide a common framework for district leaders and families to understand performance trends across the district’s 1,100-plus schools in a rigorous, holistic way. Without usable information on school quality, families are left to make sense of complex state websites, third-party school ratings, and word of mouth. And unlike the state’s current report card, a local report card could include student growth data, one of the most powerful ways to understand a school’s impact on its students. Student-level growth data tells us how individual students are progressing over time, and can control for demographic changes or differences among students. Continue reading

Stop Pitting Personalized Learning Against Academic Rigor: We Need Both

TNTP recently found that in 40% of classrooms serving a majority of students of color, students never received a single grade-level assignment. How can schools accelerate learning if grade-appropriate assignments aren’t even available?

For several years, education innovators have debated which approach to take in response to this problem: technology-driven learning designed to meet students where they are — or whole-course curriculum that assumes students are already performing at grade-level. To put it more simply: personalized learning versus academic rigor. 

But instead of debating these innovations and their efficacy, the educational equity movement should advance a collective effort that meaningfully leads to equitable outcomes for Black, Latino, and Native students, and students affected by poverty. The reality is that any solution to address learning gaps will require a concerted combination of efforts, not siloed approaches.

Last spring, a team at Bellwether Education Partners deeply researched the shifts that need to occur in the field so that students with significant learning gaps access educational systems, schools, and classrooms that enable rigorous, differentiated learning. 

And in a new resource I co-authored with Lauren Schwartze and Amy Chen Kulesa, we show that there is no silver bullet. It will take time, energy, focus, innovation, and collaborative efforts across the sector that involve: Continue reading

NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is not necessary, as it is too little time to meaningfully change trend lines or evaluate the impact of new policies.

Such frequent testing also has other costs: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history, citing NAEP budget cuts as the reason. But though NAEP’s budget recovered and even increased in the years following, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable, as it tracks student achievement dating back to the early 1970s and provides another measure of academic achievement in addition to the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time-lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.


However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.