
GreatSchools Ratings Have a Lot in Common with State and Local Ratings — for Better or Worse

Last Thursday the education world was all a-twitter about an article and analysis on GreatSchools, a widely used nonprofit school rating organization whose 1-10 ratings often show up at the top of search results and on popular real estate websites. Their ratings are known to sway families’ decisions on where to live and send their kids to school.


The main thrust of Matt Barnum and Gabrielle LaMarr LeMee’s piece in Chalkbeat is that GreatSchools’ substantial reliance on test score proficiency as a measure of school quality favors schools whose students enter already performing at a higher level. Since these students are more likely to be white and high-income, they argue the GreatSchools ratings may end up exacerbating segregation by influencing families’ housing and school decisions. 

These very same criticisms often come up in debates about local or state school ratings and how best to use test scores in general. In the conversation below, the authors of Bellwether’s recent report and website on school performance frameworks (SPFs) discuss the findings of the GreatSchools report and how the strengths and weaknesses of GreatSchools’ approach compare to state and local school ratings.

Bonnie O’Keefe:

GreatSchools’ data comes from states, and their metrics and methods aren’t too dissimilar from what we see in many local school performance frameworks, state ESSA ratings, and the No Child Left Behind ratings that came before. Much like many states and districts, GreatSchools has changed their rating system over time as more and better data became available. So the idea that ratings based even in part on proficiency disadvantage schools serving higher-need students isn’t unique to GreatSchools. In fact, a nearly identical critique sank Los Angeles’ proposed school ratings before they were even created. What is unique is how widely used, influential, and perhaps misunderstood GreatSchools’ ratings are among families.

Brandon Lewis:

The biggest difference I see between the GreatSchools rating system and the local school performance frameworks (SPFs) we profiled for our project is that they have different goals and purposes. GreatSchools is a widely viewed public-facing tool designed to communicate that organization’s particular perspective on school quality. Unlike local SPFs, GreatSchools’ ratings are not tied to any specific goals for students or schools and cannot be used to make any district-level decisions.


A School Performance Framework Could Be Huge for Los Angeles. Why Is the District Backtracking?

This week, Los Angeles Unified School District (LAUSD) could miss a big opportunity for students, families, and district leaders.

Under the Every Student Succeeds Act (ESSA), states must create a report card for every single one of their schools. Unfortunately, California’s approach to reporting school data under ESSA is both overly complex and lacking in key information. That’s why the LAUSD board took the first steps last year to create its own school performance framework (SPF), which could provide families, educators, and taxpayers more and better information about how well schools are serving students. Yet the board now appears to be backtracking on that commitment.

An SPF is an action-oriented tool that gathers multiple metrics related to school quality and can be used by system leaders, principals, and/or parents to inform important decisions like how to intervene in a low-performing school, where to invest in improvements, and which school to choose for a child.

As my colleagues wrote in their 2017 review of ESSA plans, California’s complicated system relies on “a color-coded, 25-square performance grid for each indicator” and “lacks a method of measuring individual student growth over time.” In 2018, LAUSD board members tried to improve upon the state’s approach by passing a resolution to create their own SPF. In a statement from the board at that time, members intended that LAUSD’s SPF would serve as “an internal tool to help ensure all schools are continuously improving,” and “share key information with families as to how their schools are performing.”

A local SPF could provide a common framework for district leaders and families to understand performance trends across the district’s 1,100-plus schools in a rigorous, holistic way. Without usable information on school quality, families are left to make sense of complex state websites, third-party school ratings, and word of mouth. And unlike the state’s current report card, a local report card could include student growth data, one of the most powerful ways to understand a school’s impact on its students. Student-level growth data tells us how individual students are progressing over time and can control for demographic changes or differences among students.

Stop Pitting Personalized Learning Against Academic Rigor: We Need Both

TNTP recently found that in 40% of classrooms serving a majority of students of color, students never received a single grade-level assignment. How can schools accelerate learning if grade-appropriate assignments aren’t even available?

For several years, education innovators have debated which approach to take in response to this problem: technology-driven learning designed to meet students where they are — or whole-course curriculum that assumes students are already performing at grade level. To put it more simply: personalized learning versus academic rigor.

But instead of debating these innovations and their efficacy, the educational equity movement should mount a collective effort that meaningfully leads to equitable outcomes for Black, Latino, and Native students, and students affected by poverty. The reality is that any solution to address learning gaps will require a concerted combination of efforts, not siloed approaches.

Last spring, a team at Bellwether Education Partners deeply researched the shifts that need to occur in the field so that students with significant learning gaps access educational systems, schools, and classrooms that enable rigorous, differentiated learning. 

And in a new resource I co-authored with Lauren Schwartze and Amy Chen Kulesa, we show that there is no silver bullet. It will take time, energy, focus, innovation, and collaborative efforts across the sector.

NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many have acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is not necessary: two years is too little time to meaningfully change trend lines or to evaluate the impact of new policies.

Such frequent testing has other costs as well: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history, citing NAEP budget cuts as the reason. However, even though NAEP’s budget recovered and increased in the years that followed, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable, as it tracks student achievement dating back to the early 1970s and provides another measure of academic achievement in addition to the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time-lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.


However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.

Can You Name the Branches of Government? Most Americans Can’t.

Today is Constitution Day, a holiday commemorating the formation and signing of the U.S. Constitution on September 17, 1787 — 230 years ago. As “a nation of immigrants,” America’s national identity is largely tied to our founding documents, endowing the Constitution with a unique importance in American culture. However, many Americans know little about this document that we are supposed to support and defend.

Last week, the Annenberg Public Policy Center (APPC) of the University of Pennsylvania released its Constitution Day Civics Survey, with dismal results. Only one in four respondents were able to name all three branches of government, a 12-point decline since 2011. Shockingly, 33 percent could not name a single branch.

The survey also asked respondents to identify which rights are guaranteed by the First Amendment. While nearly half (48 percent) were able to name “freedom of speech,” only 15 percent could name “freedom of religion.” Even fewer respondents identified the other rights (freedom of the press, right to petition, and right of assembly). Thirty-seven percent couldn’t name any.

Kathleen Hall Jamieson, director of APPC, expressed her concern: “Protecting the rights guaranteed by the Constitution presupposes that we know what they are. The fact that many don’t is worrisome.”

Perhaps, in prior years, this warning may have seemed overblown. But in the Trump era, amid a seemingly constant slew of anti-democratic rhetoric, it feels right on the nose. For example, when asked whether those who are in the country illegally have any rights under the Constitution, 53 percent of APPC’s respondents said they do not. In this context of widespread ignorance and misinformation, the United States has seen an uptick in hate crimes associated with the rise of President Trump, beginning in 2015, persisting into 2016 and 2017, and culminating in the violence of the “Unite the Right” rally of white nationalists in Charlottesville last month.

Luckily, some states are taking action to bolster the civic knowledge of their students. For example, over the past three years, 17 states have adopted a “citizenship test” requirement for high school students. In eight of those states, students must receive a passing score on the test to receive a high school diploma. The questions are drawn from the United States Citizenship and Immigration Services (USCIS) naturalization civics test, which immigrants must pass to become naturalized U.S. citizens.

This is a good first step, but it is far from sufficient. The test is not designed to be a high school civic literacy exam. It sets a low bar, with basic multiple-choice questions that ask test-takers to identify one branch of government or state how many amendments have been made to the Constitution. The simplicity is reflected in the initial results, with very high passage rates and few students failing the test even after repeated attempts.

However, such a test is only one tool available to policymakers. They can design and administer higher quality civics assessments; implement robust standards and curricula for civics instruction; and provide real-world, project-based opportunities for students to learn about government and civic engagement. For example, New Hampshire passed legislation in 2016 requiring a civics test. But, rather than simply implementing a citizenship test for high school students, the legislation allows for the creation of locally developed assessments that can include a broader range of questions. Additionally, the state created a recognition for students who pass the required test by authorizing school districts to issue civic competency certificates.

New Hampshire Senator Lou D’Allesandro, a former civics teacher who sponsored some of the state’s legislation, summarized the issue well: “We always complain, ‘people don’t know anything about the system, they don’t get involved, they don’t vote.’ Well, they don’t vote because they don’t understand the importance of voting and how meaningful it is to participate in the process.”

If America wants to protect our constitutional rights and democratic ideals, we must ensure that the next generation of citizens is knowledgeable and engaged. That starts in the classroom.