Tag Archives: annual testing

States Need to Get Real on Testing Tradeoffs Before Making Another Big Switch

Just a few years ago, it seemed like most of the country was heading towards common state assessments in math and reading. Two groups of states won federal grant funds to create higher-quality tests; these became the PARCC and Smarter Balanced test consortia. Now, despite the demonstrated rigor and academic quality of those tests, the testing landscape is almost as fractured as it was before, with states pursuing a variety of assessment strategies. Some states in the consortia are still waffling. Others that have left are already scrapping the tests they made on their own with no idea of what they’ll do next.

States should think carefully before going it alone or introducing a new testing overhaul without strong justification. There are some big tradeoffs at play in the testing world, and a state might spend millions on an “innovative” new test from an eager-to-please vendor only to find that it has the same issues as, or worse ones than, the “next generation” tests it tossed aside.


We Have to Improve the School Improvement Process

It’s September 1. School is back in session in many places. And yet, state test results from last spring are still trickling out. Colorado’s are out today. The District of Columbia’s results officially came out on Tuesday. California’s results came out August 24th.

These results are too late for schools to do much with. Principals are busy running their schools, and teachers are busy in their classrooms. There’s no time for schools to draft improvement plans in response to results, let alone implement those plans in time to affect students. It’s no surprise that teachers and school leaders might not value a school improvement plan that’s drafted well into the school year, yet we’ve been repeating this cycle over and over again.

Ten years ago, I was a graduate assistant for College of William & Mary professor Paul Manna. We compiled every state’s Adequate Yearly Progress (AYP) determinations for the 2005-06 school year, and we found that most states were releasing results in August or September, well past the time when they could be most helpful to school improvement planning. The graph below shows what we found. Each dot represents one state, plotted by when it released its school results.

[Graph: Timing of state test result releases, 2005-06 (Manna)]

This was 10 years ago, and a lot has changed since then. In 2006, most states were in their first years of statewide testing programs. NCLB was in its infancy, and states were just starting up their accountability systems. They barely had processes in place to compile the results and make them public. Computers were a lot less powerful back then, and every state was testing its students using paper-and-pencil tests.

States have been doing all this for 10 years now. And most states have now moved their testing systems online. Theoretically at least, we should be able to get results back much faster than we could in the past. But that doesn’t seem to be happening. I’m afraid that if we created the same graph today as we did in 2006, it would look nearly identical.

These delays represent a big kink in the theory of action behind school accountability. Without timely information, states can’t identify which schools need to improve and why. We can’t dump information on teachers and principals right in the middle of back-to-school season and expect they’ll be able to do anything meaningful with it. It’s too late to design a school improvement plan, and it’s too late to tell parents and families, “Welp, that school we assigned your child to is no good. Too bad they already started 4th grade there.” If we want to help schools improve, we have to improve the school improvement process.

Candidates Think We Can’t Handle the Complex Truth About Education

The Learning Landscape

We need a nuanced education conversation based on data, not polarizing rhetoric. That’s why we built this new resource: www.thelearninglandscape.org/

Depending on whom you ask, charter schools represent either the best of things or the worst of things in the modern education system. This binary hero-villain dialogue plays out time and again among education advocates. It’s so pervasive that it even managed to infiltrate a presidential election that has otherwise been light on K-12 education talk.

Bernie Sanders declared his support for public charter schools, but not private ones, at a CNN town hall event last March, betraying a fundamental confusion about what charter schools actually are. Last year Hillary Clinton disparaged charter schools with a blanket statement suggesting that they refuse to serve students who are the “hardest to teach.” And while decrying the federal footprint in education, Donald Trump said he wants more charter schools because “they work, and they work very well.”

The primary flaw in all of these statements is that each one lacks nuance and ignores what is true, what we know, and what we don’t know about charter schools. After all, one of the hallmarks of political campaigns is the reduction of complex issues to simple binaries. Candidates harp on divisive issues and ask voters to pick a side: for or against, good or bad. While this strategy makes for rousing stump speeches, it misleads and under-informs voters about critical policy issues.

Sanders’ confusion about whether charter schools are public or private schools is not uncommon, but it’s easy to clear up. Charter schools are public schools. They are publicly funded, and they provide education free of charge. The confusion arises because they are often operated by private organizations (a mix of non-profit and for-profit). Some of these private organizations are very good at running schools that achieve amazing outcomes with kids. Some of them are not as good.

Similarly, by painting all charter schools with the same brush, either negatively or positively, both Clinton and Trump ignore the complex reality of what we know about charter schools. (Clinton, I should note, told the NEA convention earlier this month that we should seek to learn from the many good charter schools; that common-sense statement drew boos from the crowd.)

In practice, who is served best and most often by charter schools varies significantly from state to state and city to city. And the overall quality of charter schools varies, too. In some cities, like Washington, DC, charter schools produce an average of 101 days of additional learning in math compared to the surrounding district schools. That’s a tremendous difference. But in Fort Worth, Texas, charter schools underperform district schools on average.

Attempting to define the whole notion of charter schools as either good or bad encourages us to continue to focus on the existential question of whether we should have charter schools at all. And that is simply the wrong question.

“High-Stakes” Tests Are Hard to Find

This spring, in schools across the country, standardized testing season is in full swing, and opponents are once again crying out against “high-stakes testing.” But that phrase can be misleading. In many states the stakes are much lower than you might think for students, teachers, and schools, and they’re likely to stay that way for a while.

Student consequences tied to tests are fairly low or nonexistent in most states. Graduation requirements and grade promotion policies tied to tests vary greatly between states, and most have more holes than Swiss cheese. As of 2012, half of states had some sort of exit exam as a graduation requirement, but almost all of these states had exceptions and alternate routes to a diploma if students didn’t pass the exam on the first try. Tying grade promotion to tests is less common, though some states have emulated Florida’s 3rd grade reading retention policy. Now, just as tests become more rigorous, states are rolling back the graduation and promotion requirements tied to those tests, or offering even more flexibility where requirements technically remain in effect:

  • California eliminated graduation requirements tied to its exit exam in fall 2015.
  • Arizona repealed graduation requirements tied to testing in spring 2015 prior to administering the new AzMERIT tests.
  • Georgia waived its grade promotion requirement tied to new tests in grades three, five, and eight for the 2015-16 school year.
  • Ohio created new safe harbor policies this school year, which, among other things, prevent schools from using test results in grade promotion or retention until 2017-18 (except in the case of third grade reading tests).
  • New Jersey has had exit exams since 1982, but students can now fulfill the requirement with any of several exams, including the SAT, ACT, and PARCC, and a proposed bill would pause the requirement altogether until 2021.


A Wonky But Important Argument for Annual Statewide Testing

In Saturday’s New York Times, I wrote a defense of annual statewide testing in reading and math. In the piece, I used data from the District of Columbia to illustrate that withdrawing from annual statewide testing would make it nearly impossible to hold schools accountable for the performance of specific groups of students. That’s a problem, because NCLB’s emphasis on historically disadvantaged groups forced schools to pay attention to these groups and led to real achievement gains. Today, 4th and 8th grade reading and math scores for black, Hispanic, and low-income students have never been higher.

To see how a move away from annual testing would affect subgroup accountability in other cities, I pulled data from Providence, Rhode Island, and Richmond, Virginia. The results confirm that a move away from annual testing would leave many subgroups and more than 1 million students functionally “invisible” to state accountability systems.
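
To make the arithmetic behind that “invisibility” concrete, here is a minimal, hypothetical sketch in Python. It rests on the fact that states only report a subgroup’s results when the group clears a minimum “n-size”; the 30-student threshold and the enrollment counts below are made-up illustrations, not figures from the DC, Providence, or Richmond data.

    # Hypothetical illustration (not the actual analysis): how a minimum
    # reporting "n-size" interacts with the number of tested grades.
    # The threshold (30) and the enrollment counts below are made up.

    MIN_N_SIZE = 30  # assumed reporting threshold; real state rules vary

    # Made-up counts for one subgroup (say, students with disabilities)
    # at a single school serving grades 3-8.
    subgroup_by_grade = {3: 12, 4: 9, 5: 14, 6: 11, 7: 10, 8: 13}

    def subgroup_visibility(tested_grades):
        """Sum the tested students in the subgroup and check the threshold."""
        n = sum(subgroup_by_grade[g] for g in tested_grades)
        return n, n >= MIN_N_SIZE

    # Annual testing: all of grades 3-8 count toward the subgroup.
    print(subgroup_visibility(range(3, 9)))   # (69, True)  -> reportable

    # Grade-span testing: only one grade (here, 5th) is tested.
    print(subgroup_visibility([5]))           # (14, False) -> "invisible"

In the single-grade scenario the same students are enrolled, but too few of them are tested in any one year for the subgroup to show up in the school’s results; multiplied across many schools and many subgroups, that is the kind of gap the Providence and Richmond numbers illustrate at scale.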