NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many have acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is unnecessary: two years is too little time to meaningfully move trend lines or evaluate the impact of new policies.

Such frequent testing also has other costs: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history in order to cut costs. NAGB cited NAEP budget cuts as the reason for reducing the frequency of other assessments. Yet even though NAEP’s budget recovered and even increased in subsequent years, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable, as it tracks student achievement dating back to the early 1970s and provides another measure of academic achievement in addition to the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.

However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.

Three Reasons to Expect Little on Innovative Assessments — and Why That’s Not Such a Bad Thing

Photo by Josh Davis via Flickr

Next week is the deadline for states to submit an application for the innovative assessment pilot to the U.S. Department of Education (ED). If you missed this news, don’t worry, you haven’t missed much. The Every Student Succeeds Act (ESSA) allows ED to grant assessment flexibility to up to seven states to do something different from giving traditional end-of-year standardized tests. The best example of an innovative state assessment system is New Hampshire, which allows some districts to give locally designed performance-based assessments. These assessments look more like in-class activities than traditional standardized tests, and are developed and scored by teachers.

Two years ago, Education Week called the innovative assessment pilot “one of the most buzzed-about pieces” of ESSA because it could allow states to respond to testing pushback while still complying with the new federal law. But now only four states have announced they will apply, and expectations are subdued at best.

Why aren’t more states interested in an opportunity to get some leeway on testing? Here are three big reasons:

  1. Most states are playing it safe on ESSA and assessments are no exception

When my colleagues at Bellwether convened an independent review of ESSA state plans with 45 education policy experts, they didn’t find much ambition or innovation in state plans — few states went beyond the requirements of the law, and some didn’t even do that. Even Secretary of Education Betsy DeVos, who has approved the majority of state plans, recently criticized states for plans that “only meet the bare minimum” and don’t take full advantage of the flexibility offered in the law.

Several states responded that they were actually doing more than they had indicated in their plans. As my colleague Julie Squire pointed out last year, putting something extra in an ESSA plan could limit a state’s options and bring on more federal monitoring. If most states were fairly conservative and compliance-based with their big ESSA plans, there’s little reason to think they’ll unveil something new and surprising in a small-scale waiver application.

Additionally, the law includes several requirements for an innovative assessment that might be difficult for states to meet. For example, innovative tests have to be comparable across school districts, they have to meet the needs of special education students and English learners, and the pilot programs have to be designed to scale up statewide. If states have any doubts they can meet that bar, they probably won’t apply. Continue reading

Donald Trump’s Election Is a “Sputnik Moment” for Civics Education

Last week, the American Enterprise Institute hosted an event discussing the failings of civics education in America. The panelists referred to the dismal state of civics literacy as a “Sputnik moment” – a reference to when the Soviet Union successfully launched the world’s first satellite in 1957, stirring the United States to create the National Aeronautics and Space Administration (NASA) and dramatically increase its space exploration efforts.

Nothing illustrates this comparison better than the election of Donald Trump. As Trump has demonstrated time and time again, he knows little about governing or policy – instead relying on divisive rhetoric and petulant Twitter tantrums. His most recent gaffe: at a White House convening of the nation’s governors, Trump said that “nobody knew health care could be so complicated.” As it turns out, many people knew.

However, if Trump can name all three branches of government, that alone would put him ahead of nearly three quarters of Americans. According to a 2016 survey conducted by the Annenberg Public Policy Center, only 26 percent of respondents could name all three branches, and 31 percent could not name a single one.

Data from the National Assessment of Educational Progress (NAEP) also show poor results. In 2014 – the most recent NAEP civics assessment – only 23 percent of eighth grade students scored at or above the proficient level. The same is true of older students getting ready to vote. In 2010, when NAEP last tested high school seniors, only 24 percent scored at or above the proficient level. Neither of these results has changed significantly since 1998.

At the same time, faith in many of America’s institutions is at historic lows — a trend that predates Trump’s election. And it’s likely that his constant attacks on various institutions will only worsen these numbers. This crisis of confidence feeds into the growing level of polarization, making it nearly impossible to govern effectively. It’s no wonder that recent congresses have been arguably some of the least productive ever.

Confidence in Institutions

Despite these difficulties, the American people seem well aware of the problem at hand. According to the 2016 PDK poll of the public’s attitudes toward the public schools, 82 percent of Americans believe preparing students to be good citizens is very or extremely important. At the same time, only 33 percent think the public schools in their communities are doing that job very or extremely well.

So what is to be done? Continue reading

States Need to Get Real on Testing Tradeoffs Before Making Another Big Switch

Just a few years ago, it seemed like most of the country was heading towards common state assessments in math and reading. Two groups of states won federal grant funds to create higher-quality tests; these became the PARCC and Smarter Balanced test consortia. Now, despite the demonstrated rigor and academic quality of those tests, the testing landscape is almost as fractured as it was before, with states pursuing a variety of assessment strategies. Some states in the consortia are still waffling. Others that have left are already scrapping the tests they made on their own with no idea of what they’ll do next.

States should think carefully before going it alone or introducing a new testing overhaul without strong justification. There are some big tradeoffs at play in the testing world, and a state might spend millions on an “innovative” new test from an eager-to-please vendor only to find that it suffers the same issues as the “next generation” tests it tossed aside, or worse.

Continue reading

We Have to Improve the School Improvement Process

It’s September 1. School is back in session in many places. And yet, state test results from last spring are still trickling out. Colorado’s are out today. The District of Columbia’s results officially came out on Tuesday. California’s results came out August 24th.

These results are too late for schools to do much with. Principals are busy running their schools, and teachers are busy in their classrooms. There’s no time for schools to draft improvement plans in response to results, let alone implement those plans in time to affect students. It’s no surprise that teachers and school leaders might not value a school improvement plan that’s drafted well into the school year, yet we’ve been repeating this cycle over and over again.

Ten years ago, I was a graduate assistant for College of William & Mary professor Paul Manna. We compiled every state’s Adequate Yearly Progress (AYP) determinations for the 2005-06 school year, and we found that most states were releasing results in August or September, well past the time when they could be most helpful to school improvement planning. The graph below shows what we found. Each dot represents one state, plotted by when that state released its school results.

Graph: timing of states’ test result releases (Manna)

This was 10 years ago, and a lot has changed since then. In 2006, most states were in their first years of statewide testing programs. NCLB was in its infancy, and states were just starting up their accountability systems. They barely had processes in place to compile the results and make them public. Computers were a lot less powerful back then, and every state was testing its students using paper-and-pencil tests.

States have been doing all this for 10 years now. And most states have now moved their testing systems online. Theoretically at least, we should be able to get results back much faster than in the past. But that doesn’t seem to be happening. I’m afraid that if we created the same graph today as we did in 2006, it would look nearly identical.

These delays represent a big kink in the theory of action behind school accountability. Without timely information, states can’t identify which schools need to improve and why. We can’t dump information on teachers and principals right in the middle of back-to-school season and expect they’ll be able to do anything meaningful with it. It’s too late to design a school improvement plan, and it’s too late to tell parents and families, “Welp, that school we assigned your child to is no good. Too bad they already started 4th grade there.” If we want to help schools improve, we have to improve the school improvement process.