Last Thursday the education world was all a-twitter about an article and analysis of GreatSchools, a widely used nonprofit school rating organization whose 1-10 ratings often show up at the top of search results and on popular real estate websites. Its ratings are known to sway families’ decisions on where to live and send their kids to school.
The main thrust of Matt Barnum and Gabrielle LaMarr LeMee’s piece in Chalkbeat is that GreatSchools’ substantial reliance on test score proficiency as a measure of school quality favors schools whose students enter already performing at a higher level. Since these students are more likely to be white and high-income, they argue the GreatSchools ratings may end up exacerbating segregation by influencing families’ housing and school decisions.
These very same criticisms often come up in debates about local or state school ratings and about how best to use test scores in general. In the conversation below, the authors of Bellwether’s recent report and website on school performance frameworks (SPFs) discuss the findings of the GreatSchools report, and how the strengths and weaknesses of GreatSchools’ approach compare to state and local school ratings.
GreatSchools’ data comes from states, and its metrics and methods aren’t too dissimilar from what we see in many local school performance frameworks, state ESSA ratings, and the No Child Left Behind ratings that came before. Much like many states and districts, GreatSchools has changed its rating system over time as more and better data became available. So the idea that ratings based even in part on proficiency disadvantage schools serving higher-need students isn’t unique to GreatSchools. In fact, a nearly identical critique sank Los Angeles’ proposed school ratings before they were even created. What is unique is how widely used, influential, and perhaps misunderstood GreatSchools’ ratings are among families.
The biggest difference I see between the GreatSchools rating system and the local school performance frameworks (SPFs) we profiled for our project is that they have different goals and purposes. GreatSchools is a widely viewed public-facing tool designed to communicate that organization’s particular perspective on school quality. Unlike local SPFs, GreatSchools’ ratings are not tied to any specific goals for students or schools and cannot be used to make district-level decisions.
Right, and GreatSchools’ ratings are one of several school quality indicators that communities can access. In addition to state report cards, local districts often develop their own systems, like the ones we studied. That’s at least two or possibly three different resources. One issue we raise in our report is that a school performance framework, particularly one intended to communicate with families, should clarify, not complicate, families’ understanding of school quality. When there are multiple ratings available, ideally they should be rowing in the same direction.
An example highlighted in the article illustrates the challenge with multiple ratings systems operating in the same community. Knapp Elementary School in Denver rates a 4 overall on GreatSchools, which most people would interpret as a very low score on a 10-point scale. On Denver’s local SPF, Knapp Elementary is designated as a “green” school, which is the second highest rating of five. In Denver’s system, green is a strong rating. So when these two systems say essentially the opposite thing, based largely on the same data, what messages are Denver parents supposed to take away?
Certainly, systems can be different: they can measure different things reflecting the perspectives and priorities of different audiences and creators. But when those systems are in opposition, it raises questions about credibility. That credibility question is complicated when the systems themselves lack transparency. With both GreatSchools and Denver’s SPF, it’s very difficult to get a clear understanding of how the ratings are calculated even when you know a lot about school data.
As Jenn points out, parallel rating systems can lead to confusion for parents. But is this kind of confusion unavoidable when you’re comparing governmental and non-governmental rating systems?
One takeaway here is that there is real demand from the public for information about schools and school quality. Families will likely go with the resource that’s easiest to access and understand. Perhaps local district leaders should adapt the communication and messaging strategies of GreatSchools when developing public-facing SPFs. Public school systems have a special responsibility to be transparent about data, and investment in strong communication and engagement can help build community trust.
Yep, it’s hard to argue that GreatSchools’ ratings aren’t accessible and easy to understand. As the Chalkbeat article pointed out, their ratings appear at the top of searches on school quality, and the 10-point scale is a familiar metric. There are analogous arguments in favor of A-to-F ratings in accountability systems, since people know what those ratings intend to signal.
The questions raised in the Chalkbeat article about heavy reliance on test scores, and specifically on proficiency measures, in GreatSchools’ ratings echo live conversations states and districts are having about which metrics belong in accountability systems and how they should be weighted relative to one another, particularly the tradeoffs among systems that favor grade-level proficiency, growth, or non-test metrics. All the SPFs we looked at included both proficiency and growth, and most included other metrics as well, much like GreatSchools.
Inequities in school systems and in children’s lives are part of the reason behind the correlation between proficiency, race, and income. But that’s a lot to unpack in a school rating.
Grappling with the pros and cons of proficiency and growth and balancing multiple metrics in a system came up as a real pressure point with district leaders in our research. Measuring school quality is complex, and the data are imperfect.
But one thing I’ve seen in the conversation on this issue that’s troubling is the idea that parents shouldn’t have easy access to data that could be misinterpreted or misused. (This argument does not appear in the Chalkbeat piece, but is a discussion in the ether around the article.)
It shouldn’t only be policy wonks who know where to find data on how many students in a school are performing on grade level. It should be the job of school systems to explain to parents what metrics like that do or don’t mean. That creates an opportunity for engagement.
Disclosure: Our Bellwether colleague Melissa Steel King is a member of GreatSchools’ Board of Directors. Bellwether authors maintained full editorial control of this piece.