A “Dashboard” Accountability System Isn’t Good Enough

In response to the Department of Education’s invitation for advice and recommendations on implementing the Every Student Succeeds Act (ESSA), I submitted the comment below. I chose to focus my comment on the need for states to make information meaningful to parents through some form of summative rating system:  

My comment concerns the “annual meaningful differentiation” requirement in Section 1111 of the Every Student Succeeds Act. Although some states may push to meet this requirement through a “dashboard” approach where they merely release data in a single place, that approach would violate congressional intent, fail to meet the needs of parents and taxpayers, and ignore the academic research on school accountability. Instead, the federal government should ensure that states create accountability systems that result in a final, annual, and summative rating for each school.

First, a clear rating system follows congressional intent. The law distinguishes between “report cards,” where states must compile and release a wide array of data on school performance and finances, and a “statewide accountability system” with a “state-determined methodology” to identify low-performing schools. A dashboard approach may be one way to satisfy the report card component, but it should not satisfy the requirement that states create a system for identifying low-performing schools.

Second, parents and taxpayers need a clear, easy way to differentiate among schools. Parents make high-stakes decisions about where to live and where to send their kids to school, and a data dashboard simply throws information at them rather than guiding them through those choices. Similarly, taxpayers want information about the performance of their schools, and neither parents nor the taxpaying public should have to manually sort through data on each individual school (clicking through websites and eyeballing each individual data point) to get a sense of its performance. Instead, states should make meaning of the data through simple and clear accountability systems.

Third, the research on school accountability finds positive effects of statewide systems that identify low-performing schools, especially for our nation’s most underserved children. Researchers Martin Carnoy and Susanna Loeb of Stanford University created an index to measure the strength of states’ pre-NCLB accountability systems. They measured each state’s use of high-stakes testing to reward or sanction schools, and developed a zero-to-five scale to rank each state’s system. They found that between 1996 and 2000, NAEP math scores rose faster in states with stronger accountability systems.

Other researchers have exploited the differences among states to identify the influence of various types of accountability systems. Between 1993 and 2002, 43 states adopted some form of accountability system. Fourteen required schools and districts only to report their performance information (“report-card states”). Another 29 states (“consequential” states) attached sanctions to poor performance in addition to providing public information. Researchers Eric Hanushek and Margaret Raymond used fourth- and eighth-grade NAEP math data to compare student performance growth across states by type of accountability system (none, report card, or consequential). After controlling for key variables, including parental education, race/ethnicity, poverty, and state spending on education, they found that the consequential accountability systems implemented during the 1990s had a positive impact on student math performance on NAEP. Data alone was not sufficient to prompt schools to take dramatic action to improve their results.

Researchers have also found positive effects from merely notifying schools in need of improvement that they faced potential sanctions. For example, Thomas Ahn of the University of Kentucky and Jacob Vigdor of Duke University analyzed the impact of NCLB’s accountability sanctions on school performance in North Carolina. They found that the “strongest association between failure to make AYP and subsequent test score performance occurs among those schools not yet exposed to any actual sanctions.” In this case, the failure to meet AYP and the threat of imminent sanctions were a catalyst for schools to improve. For those schools that failed to make AYP for multiple years and entered NCLB sanctions, Ahn and Vigdor found that the threat of the “ultimate penalty” — implementation of a restructuring plan — also had a strong positive impact on test scores.

Similarly, a study out of Texas found that students benefited when their school was at risk of being identified as “Low Performing.” Those benefits included short-term gains on test scores as well as higher college-going rates and higher early-career earnings. In other words, the threat of being identified as low-performing caused schools to change their practices in ways that improved long-term student outcomes.

The Every Student Succeeds Act has a lofty name and high aspirations. But the only way to live up to that name is to ensure that states actually identify schools that aren’t raising achievement and boosting the educational outcomes of all students.