
Understanding Parents Requires More Than a Single Poll Result

In statistics, it’s often said that “all models are wrong, but some are useful.” When it comes to polling parents on K-12 schooling, it’s similarly true that while no single result may be “right,” it can still be useful –– particularly when considered in the context of other polls.

It’s always important to consider how new polling data points fit into longer-term trends –– something that’s especially true in public opinion research. Bellwether’s new Parent Perception Barometer aggregates national polling data to provide a more nuanced perspective on parents’ complex opinions. It’s also a tool to temper the temptation to put too much emphasis on the most recent poll.

A recent NPR/Ipsos survey about parents’ thoughts on schools provides an excellent reminder of why context matters when considering the results of new polls. This particular survey asked parents how much they agree with the following statement: “My child has fallen behind in school due to the pandemic.” Thirty-two percent of parents agreed with the statement. 

Looking at this data point in isolation, we might infer that roughly two-thirds of parents don’t think the pandemic has negatively affected their child’s academic progress. But examining this data in the context of other polls changes its interpretation.

Recent polls tracked in the Parent Perception Barometer consistently indicate that a majority of parents have been concerned about their child’s academic progress throughout the pandemic. As of March 2022, data from National Parents Union/Echelon showed 66% of parents worry “a lot” or “some” about their child staying on track in school.

Data visualization courtesy of Bellwether’s Parent Perception Barometer.

Using the barometer, we can more easily identify key differences in the phrasing of the NPR/Ipsos poll that help inform how we interpret its data, along with the results of other polls:

  • Wording matters. A key distinction between the NPR/Ipsos poll and others is the difference between a parent’s “perception of” their child’s academic performance (NPR/Ipsos) and a parent’s “general worry or concern about” their child’s academic performance (National Parents Union/Echelon). There are multiple explanations for why these two constructs may produce different results. A parent could be concerned about their child’s academic progress while also believing that their child isn’t falling behind. Cognitive biases may also limit parents’ willingness to tell a pollster that their child has fallen behind in school. Examining the nuances in survey item phrasing can help tease out when different polls are testing similar –– or in this case, different –– phenomena.
  • Reference points are important. Survey questions often ask about abstract concepts. For example, asking parents whether their children have “fallen behind” or are “off track” may mean different things to different parents. Should “falling behind” in school be interpreted as a comparison to others in their peer group, to the state’s academic standards, or to where the child would have been academically had there been no pandemic? Some polls try to define the reference point by asking “compared to a typical school year” or “ready for the next grade,” but others (like the NPR/Ipsos poll) leave more room for interpretation by respondents, which can muddle results.
  • The timing of surveys can influence responses. In addition to what is asked in a survey, when the survey is administered can influence results. In the chart above, there’s a noticeable trend where parents report less concern about their child’s academic progress during the summer, only for those concerns to rebound during the academic year. A USC poll asked parents how “concerned” or “unconcerned” they were about the amount their child learned this year compared to a typical school year. In a survey administered in April through May 2021, 64% of parents reported being concerned, compared to only 50% in June through July 2021. National Parents Union/Echelon polls illustrate similar declines over the summer in parent worry. This is less relevant for the NPR/Ipsos poll, but is worth considering as new data are released.
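A minimal sketch of this kind of timing check, in Python with invented poll figures (the administration windows and shares below are illustrative, not the actual barometer data):

    # Illustrative poll results: (administration window, share of parents concerned).
    # The figures are invented to show the seasonal pattern described above.
    results = [
        ("2021-04", 0.64), ("2021-07", 0.50), ("2021-10", 0.62),
        ("2022-01", 0.65), ("2022-07", 0.52),
    ]

    SUMMER_MONTHS = {"06", "07", "08"}

    def average(values):
        return sum(values) / len(values)

    school_year = [share for window, share in results if window[-2:] not in SUMMER_MONTHS]
    summer = [share for window, share in results if window[-2:] in SUMMER_MONTHS]

    print(f"Average concern during the school year: {average(school_year):.0%}")
    print(f"Average concern during the summer: {average(summer):.0%}")

Grouping results by administration window this way makes the seasonal dip visible before any single new poll is compared against the longer-term trend.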

Given these considerations, which poll is “right”? The truth is, absent obvious flaws in the survey design — like biased phrasing or leading questions — most polls provide some useful information. When polls ask slightly different questions on a given topic, understanding the relationships between item phrasing and response data can help analysts derive more robust insights. 

Differing results among polls aren’t a flaw, but a feature. Tools like the Parent Perception Barometer separate the signal from the noise in assessing what parents actually think about K-12 schooling.

Tracking Parents’ Complex Perspectives on K-12 Education

Photo courtesy of Allison Shelley for EDUimages

Every policy wonk loves a good poll, and education policy wonks are no exception. Polls give added depth and dimension to an array of current (and shifting) public opinions, attitudes, and needs. But too often, wonks tend to over-index on the latest, flashiest data point as new polls are released — making it difficult to examine the broader context of other polls analyzing similar data points, or to contextualize prior administrations of the same poll.

The recency bias associated with new polling data is a persistent problem in fully understanding how parents think about K-12 education across the country. Contrary to media-driven hype, parents have diverse viewpoints that don’t fit broad narratives offered by pundits. Just as children and circumstances change over time, so do parents’ opinions on what their child needs. And to say that the COVID-19 pandemic brought change to parents and to their children’s educational needs is an understatement — one that underscores the need for a deeper examination of how parents’ views on K-12 education have (or haven’t) changed since March 2020.

Alex Spurrier, Juliet Squire, Andy Rotherham, and I launched the Parent Perception Barometer to help advocates, policymakers, and journalists navigate the nuance of parents’ opinions about K-12 education. The interactive barometer aggregates nationwide polling and other data on parents’ stated and revealed preferences regarding their children’s education. The first wave of polling data indicates that parents are largely satisfied with their child’s education and school, but many have specific concerns about their child’s academic progress as well as their mental health and well-being. Because parent opinions aren’t static, the barometer will be updated on a regular basis as new polling data are released.

There are multiple benefits of aggregating this polling data in the barometer: 

  • First, it allows us to examine emerging or persistent trends in the data. Looking at the same question asked over multiple time periods, as well as similar questions asked across different polls, separates signal from noise.
  • Second, it shapes a holistic consideration of a body of relevant data, tempering the pull of recency bias that comes with each new poll’s release. 
  • Third, by analyzing similar poll questions, we can identify data points that may be outliers. For instance, if three polls asking a similar question all indicate that parents strongly favor a particular policy, and a fourth poll indicates otherwise, we may look more closely at that poll’s wording and be more cautious about the types of statements or conclusions we make, as sketched below.
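A minimal sketch of that third kind of check, in Python with hypothetical polls and made-up shares, flagging a result that sits far from the cluster of similar polls:

    from statistics import median

    # Hypothetical shares of parents favoring the same policy, by poll.
    polls = {"Poll A": 0.71, "Poll B": 0.68, "Poll C": 0.74, "Poll D": 0.48}

    center = median(polls.values())
    # Median absolute deviation: a robust measure of how spread out the results are.
    spread = median(abs(share - center) for share in polls.values())

    for name, share in polls.items():
        # Flag results sitting far outside the cluster of similar polls.
        if spread and abs(share - center) > 3 * spread:
            print(f"{name} ({share:.0%}) is an outlier; check its question wording.")

An outlier flagged this way isn’t necessarily wrong; it’s a prompt to compare item phrasing, response options, and administration dates before drawing conclusions.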

The Parent Perception Barometer provides several ways to support a comprehensive analysis of parents’ perceptions. For those most interested in exploring data on a single topic across multiple sources, the Data Visualization tab provides a high-level summary of recent trends in parents’ stated and revealed preferences. For those looking for more technical background on the polls and data, information about specific polling questions, possible responses, and administration dates can be found within the Additional Detail tab. The barometer also allows users to view and download underlying source data. 

The Parent Perception Barometer is a valuable resource to ground policy and advocacy conversations in a nuanced, contextual understanding of parents’ opinions — bringing clarity and context to the K-12 education debate.

Why Aren’t States Innovating in Student Assessments?

Photo courtesy of Allison Shelley/The Verbatim Agency for EDUimages

In the next few weeks, students across the country will begin taking their state’s end-of-year assessment. Despite rhetoric over the years about innovations in assessments and computer-based delivery, by and large, students’ testing experience in 2022 will parallel students’ testing experience in 2002. The monolith of one largely multiple-choice assessment at the end of the school year remains. And so does the perennial quest to improve student tests. 

On Feb. 15, 2022, the U.S. Department of Education released applications for its Competitive Grants for State Assessments program to support innovation in state assessment systems. This year’s funding priorities encourage the use of multiple measures (e.g., including curriculum-embedded performance tasks in the end-of-year assessment) and mastery of standards as part of a competency-based education model. Despite the program’s opportunity for additional funding to develop more innovative assessments, reactions to the announcement ranged from unenthusiastic to crickets. 

One reason for the tepid response is that states are in the process of rebooting their assessment systems after the lack of statewide participation during the past two years of the COVID-19 pandemic. Creating a new assessment — let alone a new, innovative system — takes time and staff resources at the state and district level that aren’t available in the immediate term. Although historic federal-level pandemic funds flowed into states, districts, and schools, political support for assessments is not high, making it difficult for states to justify spending COVID relief funding on developing and administering new statewide assessments.  

Another reason for the lackluster response is the challenges states have in developing an innovative assessment that complies with the Every Student Succeeds Act’s (ESSA) accountability requirements. Like its predecessor, No Child Left Behind, ESSA requires all students to participate in statewide testing. States must use the scores — along with other indicators — to identify schools for additional support largely based on in-state rankings. 

The challenge is that unknowns abound in developing any new, innovative assessment. How can states feel confident administering an assessment with no demonstrated track record when its scores carry stakes for students and school accountability?

ESSA addresses this issue by permitting states to apply for the Innovative Assessment Demonstration Authority (IADA). Under IADA, qualifying states wouldn’t need to administer the innovative or traditional assessments to all students within the state. However, states would need to demonstrate that scores from the innovative and the traditional assessments are comparable — similar enough to be interchangeable — for all students and student subgroups (e.g., students of different races/ethnicities). The regulations provide examples of methods to demonstrate comparability, such as (1) requiring all students within at least one grade level to take both assessments, (2) administering both assessments to a demographically representative sample of students, (3) embedding a significant portion of one assessment within the other assessment, or (4) using an equally rigorous alternative method.
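As a rough illustration of what comparability evidence can look like, here is a minimal sketch in Python. It assumes a hypothetical sample of students who took both assessments (as in options 1 and 2 above), applies a simple mean/sigma linear linking, and checks whether the linking and the correlation look similar for each subgroup; an actual IADA submission would require far more extensive evidence than this.

    from statistics import mean, stdev

    # Hypothetical paired scores for students who took BOTH assessments.
    # Each record: (subgroup, traditional score, innovative score) -- all made up.
    records = [
        ("Group 1", 210, 48), ("Group 1", 230, 55), ("Group 1", 250, 61),
        ("Group 2", 200, 45), ("Group 2", 240, 58), ("Group 2", 260, 64),
    ]

    def linking_and_correlation(pairs):
        trad = [t for t, _ in pairs]
        innov = [i for _, i in pairs]
        # Mean/sigma linear linking: place innovative scores on the traditional scale.
        slope = stdev(trad) / stdev(innov)
        intercept = mean(trad) - slope * mean(innov)
        # Pearson correlation between scores on the two assessments.
        r = sum((t - mean(trad)) * (i - mean(innov)) for t, i in pairs) / (
            (len(pairs) - 1) * stdev(trad) * stdev(innov)
        )
        return slope, intercept, r

    groups = {"All students": [(t, i) for _, t, i in records]}
    for group, t, i in records:
        groups.setdefault(group, []).append((t, i))

    # Comparability evidence: the linking function and correlation should look
    # similar for every subgroup, not just for the overall sample.
    for name, pairs in groups.items():
        slope, intercept, r = linking_and_correlation(pairs)
        print(f"{name}: linked = {slope:.2f} * innovative + {intercept:.1f}, r = {r:.2f}")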

The comparability requirement is challenging for states to meet, particularly due to unknowns related to administering a new assessment and because comparability must be met for all indicators of the state’s accountability system. For instance, one proposal was partially approved pending additional evidence that the assessment could provide data for the state’s readiness “literacy” indicator. To date, only five states have been approved for IADA.  

When Congress reauthorizes ESSA, one option for expanding opportunities for innovative assessments is to waive accountability determinations for participating schools during the assessment’s pilot phase. But this approach sidesteps comparability of scores, the very issue IADA is designed to address, and that omission carries serious equity implications. Comparability of scores is a key component for states to identify districts and schools that need additional improvement support. It’s also a mechanism to identify schools serving students of color and low-income students well to ensure that best practices are replicated in other schools.

In the meantime, states should bolster existing assessment infrastructure to be better positioned when resources are available to innovate. Specifically, states should:  

  • Improve score reporting to meaningfully and easily communicate results to educators and families. Score reporting has historically been an afterthought in testing. A competitive priority for the Competitive Grants for State Assessments is improving reporting, for instance by providing actionable information for parents on score reports. This is an opportunity for states to better communicate the information they already collect.
  • Increase efforts to improve teacher classroom assessment literacy. End-of-year assessments are just one piece of a larger system of assessments. It’s important that teachers understand how to properly use, interpret, and communicate those scores. And it’s even more important that teachers have additional training in developing the classroom assessments used as part of everyday instruction, which are key to a balanced approach to testing.  

Given the current need for educators and parents to understand their student’s academic progress — especially amid an ongoing pandemic that has upended education and the systematic tracking of student achievement — comparability of test scores may outweigh the advantages of innovative end-of-year assessments. By focusing on comparability, states can better direct resources to the students and schools that need them most.  

Confused by your child’s state assessment results? You’re not alone.

Photo courtesy of Allison Shelley for EDUimages

As trained psychometricians, my husband and I study how to design student achievement tests and interpret the scores. And if that work wasn’t complicated enough, our son took his first statewide standardized assessment last spring. We thought we were well prepared to review his results, but we were wrong. When we received an email in mid-October from our school district on how to access his results, my husband said to me, “Now I understand why people complain about standardized tests.” 

The process to get our son’s test scores was not at all user friendly, and I can’t imagine that we’re the only parents experiencing this level of confusion as families like ours receive spring 2021 student assessment results.  

First, we had to log into the school’s student information system (e.g., Infinite Campus, PowerSchool) where we could view his scores, proficiency levels (e.g., advanced, proficient, and not proficient), and the number of questions answered correctly for different portions of the test. Because our son had tested in person, there was also a claim code so we could create a separate “Parent Portal” account from the test vendor. If he had tested remotely, the only information that we would have received would have been his scores in the district system. We were instructed to take the scores, open a technical manual that had been linked in the email, and use the manual to find our son’s percentile rank. There was no information provided on how to interpret any of the scores.*  

Although the ongoing COVID-19 pandemic is a likely factor causing confusion, our experience highlights problems with assessment information and transparency. Given calls to eliminate annual testing in schools, it’s increasingly important for states and districts to facilitate the appropriate use and understanding of the test scores so families can understand what these tests do and do not tell us about student learning. The first step is providing parents with information that’s not only timely, but also accessible. Here are a few common issues. 

Achievement Levels 

To help with score interpretation, states are required to create at least three achievement levels. These achievement levels provide a rough indicator of whether a student is meeting grade-level requirements. However, not much information is given to parents about what these levels actually mean. The descriptions within the score report often use jargon that is likely unfamiliar to parents. For instance, an advanced student in mathematics has “a thorough understanding of Operations and Algebraic Thinking.” To understand the meaning, parents would need to read the detailed performance level descriptors in a different manual or read their state’s standards. Another issue is that proficiency can vary from assessment to assessment, and parents are left trying to figure out why their child was designated “Some Risk” on one assessment versus “Proficient” on another.

Raw Scores 

Raw scores are the number of items that a student answered correctly. Sometimes assessments will report raw scores as a “subscore.” However, these numbers can be misleading without more context. For instance, if there were only four items for a particular subscore and a student missed two of the four, it could look like the student is particularly weak in that area when the low score may simply be an artifact of the short test length.
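To make that concrete, here is a minimal sketch in Python, under the simplifying assumption that the four items are independent and the student has a 75% chance of answering each one correctly; the numbers are purely illustrative.

    from math import comb

    items = 4          # items contributing to the subscore
    p_correct = 0.75   # assumed chance the student answers any one item correctly

    # Probability of each possible raw subscore under a simple binomial model.
    for k in range(items + 1):
        prob = comb(items, k) * p_correct**k * (1 - p_correct)**(items - k)
        print(f"{k} of {items} correct: {prob:.1%}")

    at_most_two = sum(
        comb(items, k) * p_correct**k * (1 - p_correct)**(items - k) for k in range(3)
    )
    print(f"Chance of scoring 2 or fewer of 4 despite strong mastery: {at_most_two:.1%}")

Even with strong command of the material, this hypothetical student scores two or fewer out of four roughly a quarter of the time, which is why a low raw subscore on a handful of items says very little on its own.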

Changes in the Assessment 

Depending on the testing program, the interpretation of this year’s test scores may differ from previous years’, and it’s important to communicate what changed and why. For example, percentile ranks are typically based on students who took the assessment during the first test administration. These students are referred to as the norm group, which provides a relatively stable comparison over time. A student at the 50th percentile, for instance, scored better than 50% of the students in the norm group. Changing the norm group changes the reference point, which can make a big difference in interpretation. In my state, the first administration of the test was in 2019, but the norm group was updated to students who tested in 2021.

On the surface, this could be reasonable. Given disruptions in learning, families, teachers, and school leaders may want to know how students compare to others who have had similar disruptions to their schooling. But if a parent wants to know how much learning loss may have occurred and compare their child’s score to peers’ scores pre-pandemic, they’d need to either use the proficiency standards (advanced, proficient, not proficient, which are a fairly rough indicator given the range of scores) or break out the 2019 technical manual and look up their child’s percentile rank.
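A minimal sketch of the mechanics, in Python with invented norm-group scores (real norm tables are far larger, and the exact percentile-rank definition varies by testing program):

    # Hypothetical norm-group score distributions; all numbers are invented.
    norms_2019 = [180, 195, 205, 210, 220, 230, 240, 250, 265, 280]
    norms_2021 = [170, 185, 195, 200, 210, 215, 225, 240, 255, 270]

    def percentile_rank(score, norm_group):
        # One common definition: share of the norm group scoring below the student.
        below = sum(1 for s in norm_group if s < score)
        return 100 * below / len(norm_group)

    child_score = 230
    print(f"vs. 2019 norms: {percentile_rank(child_score, norms_2019):.0f}th percentile")
    print(f"vs. 2021 norms: {percentile_rank(child_score, norms_2021):.0f}th percentile")

The same scale score lands at a noticeably higher percentile against the lower-scoring 2021 norm group, which is exactly the reference-point shift parents are left to untangle.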

These issues may sound minor, but they’re not. And, when poorly communicated, they reinforce the narrative that test scores aren’t useful or important and contribute to increased skepticism about testing. Although some of the shifts are unique to COVID-19, states also change tests, norm groups, and cut scores in non-pandemic times.  

Moving forward, increased transparency is needed to ensure that parents like my husband and me, districts, and policymakers better understand how to interpret and use the scores to track student growth. 

 

(*Our school district has a one-to-one device initiative and provides hotspots to families that don’t have internet access. In other districts, there may be substantial equity issues in distributing student scores through online platforms, as not all families have access to technology.)