
Why Aren’t States Innovating in Student Assessments?

Photo courtesy of Allison Shelley/The Verbatim Agency for EDUimages

In the next few weeks, students across the country will begin taking their state’s end-of-year assessment. Despite years of rhetoric about innovations in assessment and computer-based delivery, students’ testing experience in 2022 will, by and large, parallel their experience in 2002. The monolith of one largely multiple-choice assessment at the end of the school year remains. And so does the perennial quest to improve student tests.

On Feb. 15, 2022, the U.S. Department of Education released applications for its Competitive Grants for State Assessments program to support innovation in state assessment systems. This year’s funding priorities encourage the use of multiple measures (e.g., including curriculum-embedded performance tasks in the end-of-year assessment) and mastery of standards as part of a competency-based education model. Despite the program’s opportunity for additional funding to develop more innovative assessments, reactions to the announcement ranged from unenthusiastic to crickets. 

One reason for the tepid response is that states are in the process of rebooting their assessment systems after the lack of statewide participation during the past two years of the COVID-19 pandemic. Creating a new assessment — let alone a new, innovative system — takes time and staff resources at the state and district level that aren’t available in the immediate term. Although historic federal-level pandemic funds flowed into states, districts, and schools, political support for assessments is not high, making it difficult for states to justify spending COVID relief funding on developing and administering new statewide assessments.  

Another reason for the lackluster response is the challenges states have in developing an innovative assessment that complies with the Every Student Succeeds Act’s (ESSA) accountability requirements. Like its predecessor, No Child Left Behind, ESSA requires all students to participate in statewide testing. States must use the scores — along with other indicators — to identify schools for additional support largely based on in-state rankings. 

The challenge is that unknowns abound in developing any new, innovative assessment. How can states feel confident administering an assessment, and holding schools accountable for its scores, without a demonstrated track record of success?

ESSA addresses this issue by permitting states to apply for the Innovative Assessment Demonstration Authority (IADA). Under IADA, qualifying states wouldn’t need to administer the innovative or traditional assessments to all students within the state. However, states would need to demonstrate that scores from the innovative and the traditional assessments are comparable — similar enough to be interchangeable — for all students and student subgroups (e.g., students of different races/ethnicities). The regulations provide examples of methods to demonstrate comparability, such as (1) requiring all students within at least one grade level to take both assessments, (2) administering both assessments to a demographically representative sample of students, (3) embedding a significant portion of one assessment within the other assessment, or (4) using an equally rigorous alternative method.
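To make method (2) a bit more concrete, here is a minimal, hypothetical sketch of the kind of evidence involved: administer both assessments to a representative sample and check that paired scores track each other, overall and within each subgroup. The scores and the crude correlation-plus-mean-difference check below are illustrative assumptions, not the federal standard; real comparability studies rely on formal linking and equating methods and far larger samples.

```python
import statistics

def comparability_summary(trad: list[float], innov: list[float]) -> tuple[float, float]:
    """Crude check: Pearson correlation and mean difference between
    paired traditional and innovative assessment scores."""
    r = statistics.correlation(trad, innov)  # requires Python 3.10+
    mean_diff = statistics.mean(i - t for t, i in zip(trad, innov))
    return r, mean_diff

# Hypothetical paired scale scores for one subgroup in a pilot sample.
traditional = [480, 500, 515, 530, 550, 565]
innovative = [478, 503, 512, 533, 548, 569]

r, diff = comparability_summary(traditional, innovative)
print(f"r = {r:.3f}, mean difference = {diff:+.1f} points")
```

A high correlation with a near-zero mean difference, replicated for every student subgroup, is the flavor of evidence a state would need to assemble.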

The comparability requirement is challenging for states to meet, particularly because of the unknowns involved in administering a new assessment and because comparability must be demonstrated for every indicator in the state’s accountability system. For instance, one proposal was only partially approved, pending additional evidence that the assessment could provide data for the state’s “literacy” readiness indicator. To date, only five states have been approved for IADA.

When Congress reauthorizes ESSA, one option for expanding opportunities for innovative assessments is to waive accountability determinations for participating schools during the assessment’s pilot phase. But this approach sidesteps comparability of scores, the very issue IADA is designed to address, and that omission carries serious equity implications. Comparability of scores is a key component for states to identify districts and schools that need additional improvement support. It’s also a mechanism to identify schools serving students of color and low-income students well, so that best practices can be replicated in other schools.

In the meantime, states should bolster existing assessment infrastructure to be better positioned when resources are available to innovate. Specifically, states should:  

  • Improve score reporting to communicate results to educators and families meaningfully and easily. Score reporting has historically been an afterthought in testing. A competitive priority for the Competitive Grants for State Assessments is improving reporting, for instance by providing actionable information for parents on score reports. This is an opportunity for states to better communicate information they already collect.
  • Increase efforts to improve teacher classroom assessment literacy. End-of-year assessments are just one piece of a larger system of assessments. It’s important that teachers understand how to properly use, interpret, and communicate those scores. And it’s even more important that teachers have additional training in developing the classroom assessments used as part of everyday instruction, which are key to a balanced approach to testing.  

Given the current need for educators and parents to understand their students’ academic progress — especially amid an ongoing pandemic that has upended education and the systematic tracking of student achievement — the benefits of comparable test scores may outweigh the advantages of innovative end-of-year assessments. By focusing on comparability, states can better direct resources to the students and schools that need them most.

Confused by your child’s state assessment results? You’re not alone.

Photo courtesy of Allison Shelley for EDUimages

As trained psychometricians, my husband and I study how to design student achievement tests and interpret the scores. And if that work wasn’t complicated enough, our son took his first statewide standardized assessment last spring. We thought we were well prepared to review his results, but we were wrong. When we received an email in mid-October from our school district on how to access his results, my husband said to me, “Now I understand why people complain about standardized tests.” 

The process to get our son’s test scores was not at all user friendly, and I can’t imagine that we’re the only parents experiencing this level of confusion as families like ours receive spring 2021 student assessment results.  

First, we had to log into the school’s student information system (e.g., Infinite Campus, PowerSchool), where we could view his scores, proficiency levels (e.g., advanced, proficient, and not proficient), and the number of questions answered correctly on different portions of the test. Because our son had tested in person, there was also a claim code so we could create a separate “Parent Portal” account with the test vendor. If he had tested remotely, the only information we would have received would have been his scores in the district system. We were instructed to take the scores, open a technical manual linked in the email, and use the manual to find our son’s percentile rank. No information was provided on how to interpret any of the scores.*

Although the ongoing COVID-19 pandemic likely contributed to the confusion, our experience highlights broader problems with assessment information and transparency. Given calls to eliminate annual testing in schools, it’s increasingly important for states and districts to facilitate appropriate use of test scores so families understand what these tests do and do not tell us about student learning. The first step is providing parents with information that’s not only timely but also accessible. Here are a few common issues.

Achievement Levels 

To help with score interpretation, states are required to create at least three achievement levels. These levels provide a rough indicator of whether a student is meeting grade-level expectations. However, parents are given little information about what the levels actually mean. The descriptions within the score report often use jargon that is likely unfamiliar to parents. For instance, an advanced student in mathematics has “a thorough understanding of Operations and Algebraic Thinking.” To unpack that, parents would need to read the detailed performance level descriptors published in a separate manual or read their state’s standards. Another issue is that proficiency can vary from assessment to assessment, leaving parents trying to figure out why their child was designated “Some Risk” on one assessment and “Proficient” on another.

Raw Scores 

Raw scores are the number of items that a student answered correctly. Sometimes assessments will report raw scores as a “subscore.” However, these numbers can be misleading without more context. For instance, if there were only four items for a particular subscore and a student missed two of the four, it could look like they were particularly weak in that area when the discrepancy may be an artifact of the test length.  
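To put a number on that intuition, here’s a minimal sketch (with hypothetical item counts) of the binomial standard error of a proportion-correct score. The same 50% performance is far noisier on a four-item subscore than on a forty-item one.

```python
import math

def proportion_correct(num_correct: int, num_items: int) -> tuple[float, float]:
    """Return the proportion correct and its binomial standard error."""
    p = num_correct / num_items
    se = math.sqrt(p * (1 - p) / num_items)
    return p, se

# Hypothetical subscores: identical 50% performance, very different precision.
for correct, items in [(2, 4), (20, 40)]:
    p, se = proportion_correct(correct, items)
    print(f"{correct}/{items} correct: p = {p:.2f}, SE = {se:.2f}")
# 2/4 correct:   p = 0.50, SE = 0.25  (a quarter of the scale is within one SE)
# 20/40 correct: p = 0.50, SE = 0.08
```

A score report that shows “2 of 4” without that uncertainty invites parents to over-read a difference that may be pure noise.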

Changes in the Assessment 

Depending on the testing program, the interpretation of this year’s test scores may differ from previous years’, and it’s important to communicate the what and the why of those differences. For example, percentile ranks are typically based on the students who took the assessment during its first administration. These students are referred to as the norm group, which provides a relatively stable comparison point over time. A percentile rank essentially says that a student at the 50th percentile scored better than 50% of the students in the norm group. Changes to the norm group can make a big difference in interpretation because they change the reference point. In my state, the first administration of the test was in 2019, but the norm group was updated to students who tested in 2021.
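To see why the reference point matters, here’s a minimal sketch with made-up scale scores. The norm-group values are assumptions for illustration; actual norm tables are far larger and published by the test vendor.

```python
def percentile_rank(score: float, norm_group: list[float]) -> float:
    """Percent of norm-group scores falling below the given score."""
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

# Hypothetical norm groups: 2021 scores shifted down by pandemic disruptions.
norms_2019 = [480, 490, 500, 510, 520, 530, 540, 550]
norms_2021 = [460, 470, 480, 490, 500, 510, 520, 530]

print(percentile_rank(505, norms_2019))  # 37.5 -- below the 2019 median
print(percentile_rank(505, norms_2021))  # 62.5 -- above the 2021 median
```

The same score of 505 looks below average against 2019 peers and above average against 2021 peers — exactly the kind of shift parents need explained.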

On the surface, updating the norm group could be reasonable. Given disruptions in learning, families, teachers, and school leaders may want to know how students compare to others who have had similar disruptions to their schooling. But if a parent wants to gauge how much learning loss may have occurred and compare their child’s score to peers’ pre-pandemic scores, they’d need to either use the proficiency standards (advanced, proficient, not proficient, a fairly rough indicator given the range of scores each level spans) or break out the 2019 technical manual and look up their child’s percentile rank.

These issues may sound minor, but they’re not. And, when poorly communicated, they reinforce the narrative that test scores aren’t useful or important and contribute to increased skepticism about testing. Although some of the shifts are unique to COVID-19, states also change tests, norm groups, and cut scores in non-pandemic times.  

Moving forward, increased transparency is needed so that parents like my husband and me, along with districts and policymakers, better understand how to interpret and use scores to track student growth.

 

(*Our school district has a one-to-one device initiative and provides hotspots to families that don’t have internet access. In other districts, there may be substantial equity issues in distributing student scores through online platforms, as not all families have access to technology.)

 

What Can Spring 2021 Assessments Tell Us About Learning Loss?

Photo courtesy of Allison Shelley for EDUimages

As spring 2021 state assessment results come in across the country, the academic impacts of COVID-19 are no longer theoretical. The preponderance of data points in the same direction: student learning was significantly impacted by the pandemic. States are reporting significant decreases in math, reading, and science proficiency since 2019 — with students of color, English language learners, and students from low-income families among the most impacted.

How did we get here, and what can schools, districts, and policymakers do about it? 

Learning loss is not a new concept in education, although it goes by many names. In its simplest form, it’s the result of a significant disruption in education that can lead students to lose previously acquired knowledge or skills, or shift to a learning trajectory that takes them further from grade-level standards. Pre-pandemic studies looked at two kinds of learning loss: (1) the “summer slide” or “summer setback” that many students experience between one school year and the next, and (2) the short- and long-term academic effects of school closures due to weather and natural disasters.

In the rocky shifts to remote learning and back over the past year and a half — often without sufficient support for educators and families — it seemed very likely that students would experience some form of learning loss, perhaps in ways entirely different from those previously understood. Emerging studies throughout 2020-21 consistently showed that the negative academic effects of COVID-19 disruptions were real and most pronounced among historically marginalized student groups. But the idea of learning loss received surprising pushback, mostly from those who felt the term stigmatized students or blamed educators for circumstances outside their control. Some claim that learning loss is a “myth” and indicative of “deficit framing” because it ignores the learning students did during the pandemic outside of traditional curricula, such as resiliency, creativity, and technology skills. However, acknowledging the value of non-traditional skills doesn’t erase the importance or urgency of developing the academic skills and knowledge essential for college and career readiness.

As states across the country analyze spring 2021 assessments, the results are often startling. Some examples from 2020-21 school year data include:

  • North Carolina, where student scores decreased across all end-of-year assessments. In most cases, fewer than half of students were meeting grade level expectations.
  • Minnesota, with a 7 percentage point decrease in students reading on grade level and an 11 percentage point decrease in on-grade-level math proficiency.
  • Virginia, where the percentage of students passing state tests is down by 28 percentage points in math, 22 percentage points in science, and 9 percentage points in reading.
  • Tennessee, which experienced a drop in overall statewide proficiency of 5 percentage points — with Nashville and Memphis schools that serve the largest proportions of students of color, economically disadvantaged students, and English language learners seeing 8 and 11 percentage point decreases, respectively, in overall proficiency in math, social studies, reading, and science.

There are important caveats to these results at the student, school, and state level, and comparisons to prior years should be made with caution. Students may also have been tested under unusual pandemic conditions, and some states shortened or changed their assessments this year with permission from the U.S. Department of Education. Furthermore, some, but not all, states have reported atypically low test participation rates. Federal law ordinarily requires at least 95% test participation at the state, district, and school level. North Carolina and Tennessee reported 90% and 95% student participation, respectively, but only 75-80% of students in Virginia and 78% of students in Minnesota took those states’ assessments.

Even with these caveats, evidence is mounting that learning loss is a real challenge facing schools across the country. Some see these data as representative of “arbitrary” academic standards. While one can reasonably debate the utility of academic standards that align with age-based grade levels, the fact remains that, as education author and commentator Elliot Haspel put it, skills that students would have otherwise learned to a certain level during a normal school year were not learned during the pandemic year. 

It’s time to move beyond the semantics of what to call the problem and instead figure out what we’re going to do about it. Here are four key recommendations for states and local school districts to address learning loss in the current 2021-22 school year:

  • Continue leveraging data to provide targeted academic support by regularly administering interim assessments to monitor student progress and using the data to drive rapid cycles of improvement — where changes in strategy or approach to academic intervention can happen in real-time as needed. 
  • Adopt accelerated learning strategies in lieu of traditional remediation and train teachers on effective accelerated learning pedagogy, which has been found to be more effective than traditional remediation in helping students regain pre-pandemic skills and pick up where they left off — especially for students of color and students from low-income backgrounds. 
  • Supplement increased academic investments with robust mental health supports by providing resources for adequate numbers of trained professional counselors and social workers, wraparound services, and the high-quality delivery of evidence-based social and emotional learning curricula. 
  • Adopt approaches to intentionally teach and assess non-academic skills in a traditional school setting, recognizing that schools are responsible for teaching students essential life skills such as time management, goal setting, self-advocacy, effective communication, and resiliency.

Acknowledging learning loss does not mean that students learned nothing. It does recognize that students’ academic learning experiences were deeply affected by the pandemic in ways that need urgent action. Students of color, English language learners, and students from low-income families have been disproportionately impacted by pandemic learning conditions. 

It’s important that we name the challenge, and it’s incumbent upon states and local school districts to invest the resources to address it, or risk further exacerbating long-standing educational inequities.