Why Aren’t States Innovating in Student Assessments?

Photo courtesy of Allison Shelley/The Verbatim Agency for EDUimages

In the next few weeks, students across the country will begin taking their state’s end-of-year assessment. Despite rhetoric over the years about innovations in assessments and computer-based delivery, by and large, students’ testing experience in 2022 will parallel students’ testing experience in 2002. The monolith of one largely multiple-choice assessment at the end of the school year remains. And so does the perennial quest to improve student tests. 

On Feb. 15, 2022, the U.S. Department of Education released applications for its Competitive Grants for State Assessments program to support innovation in state assessment systems. This year’s funding priorities encourage the use of multiple measures (e.g., including curriculum-embedded performance tasks in the end-of-year assessment) and mastery of standards as part of a competency-based education model. Despite the program’s opportunity for additional funding to develop more innovative assessments, reactions to the announcement ranged from unenthusiastic to crickets. 

One reason for the tepid response is that states are in the process of rebooting their assessment systems after the lack of statewide participation during the past two years of the COVID-19 pandemic. Creating a new assessment — let alone a new, innovative system — takes time and staff resources at the state and district level that aren’t available in the immediate term. Although historic federal-level pandemic funds flowed into states, districts, and schools, political support for assessments is not high, making it difficult for states to justify spending COVID relief funding on developing and administering new statewide assessments.  

Another reason for the lackluster response is the challenge states face in developing an innovative assessment that complies with the Every Student Succeeds Act’s (ESSA) accountability requirements. Like its predecessor, No Child Left Behind, ESSA requires all students to participate in statewide testing. States must use the scores — along with other indicators — to identify schools for additional support, largely based on in-state rankings.

The challenge is that in developing any new, innovative assessment, unknowns abound. How can states feel confident administering an assessment with no demonstrated track record, and holding schools accountable for its scores?

ESSA addresses this issue by permitting states to apply for the Innovative Assessment Demonstration Authority (IADA). Under IADA, qualifying states wouldn’t need to administer the innovative or traditional assessments to all students within the state. However, states would need to demonstrate that scores from the innovative and the traditional assessments are comparable — similar enough to be interchangeable — for all students and student subgroups (e.g., students of different races/ethnicities). The regulations provide examples of methods to demonstrate comparability, such as (1) requiring all students within at least one grade level to take both assessments, (2) administering both assessments to a demographically representative sample of students, (3) embedding a significant portion of one assessment within the other assessment, or (4) using an equally rigorous alternate method.

The comparability requirement is challenging for states to meet, particularly because of the unknowns involved in administering a new assessment and because comparability must be demonstrated for all indicators in the state’s accountability system. For instance, one proposal was only partially approved, pending additional evidence that the assessment could provide data for the state’s readiness “literacy” indicator. To date, only five states have been approved for IADA.

When Congress reauthorizes ESSA, one option for expanding opportunities for innovative assessments is to waive accountability determinations for participating schools during an assessment’s pilot phase. But this approach sidesteps comparability of scores, the very problem IADA is designed to address, and that omission carries serious equity implications. Comparability of scores is a key component for states to identify districts and schools that need additional improvement support. It’s also a mechanism to identify schools serving students of color and low-income students well, so that best practices can be replicated in other schools.

In the meantime, states should bolster existing assessment infrastructure to be better positioned when resources are available to innovate. Specifically, states should:  

  • Improve score reporting to meaningfully and easily communicate results to educators and families. Score reporting has historically been an afterthought of testing. A competitive priority for the Competitive Grants for State Assessments is improving reporting, for instance by providing actionable information for parents on score reports. This gives states an opportunity to better communicate the information they already collect.
  • Increase efforts to improve teacher classroom assessment literacy. End-of-year assessments are just one piece of a larger system of assessments. It’s important that teachers understand how to properly use, interpret, and communicate those scores. And it’s even more important that teachers have additional training in developing the classroom assessments used as part of everyday instruction, which are key to a balanced approach to testing.  

Given the current need for educators and parents to understand their students’ academic progress — especially amid an ongoing pandemic that has upended education and the systematic tracking of student achievement — the benefits of comparable test scores may outweigh the advantages of innovative end-of-year assessments. By focusing on comparability, states can better direct resources to the students and schools that need them most.