Category Archives: Accountability

What High School Applications and Acceptance Offers Tell Us About Chicago’s System of Schools

Before digging into the research on Chicago’s education system and talking to many of the city’s leaders for a current project at Bellwether, I categorized the district as largely traditional with a decent-sized charter sector. What I learned was that Chicago has more school types and school choice than I realized, especially at the high school level. It turns out that while most of the headlines regarding the district have been about scandals and violence, a lot of people have been focused on making sure more kids go to better schools faster.

That fact was reinforced when I looked at a recent report from the University of Chicago Consortium on School Research on the first round of applications and offers from Chicago’s brand new high school unified enrollment system. Neerav Kingsland provides a good take on the results. I just want to reiterate one point and add a few more observations.

Screenshot of GoCPS via https://go.cps.edu/

Neerav points out that Chicago Public Schools’ (CPS) school information and application website GoCPS is easy to use. I can’t emphasize that enough. It’s insanely intuitive and informative. When I was making school choice decisions for my son in San Francisco earlier this year, I had to toggle between Google Maps, a PDF with school information from the district, and school performance information that I collected and analyzed myself. GoCPS has a map-based interface that provides all the information parents need, and it would have given me everything in one place. Why don’t more cities with school choice have a similar platform?

On a different note, the Consortium report notes that CPS has approximately 13,000 surplus seats, an oversupply that might lead to more school closures and mergers. In addition to creating easier and more equitable enrollment processes for a district, unified enrollment systems provide detailed information about parent demand (and lack thereof). School closures have been painful for CPS in the past. The unified enrollment system now gives CPS CEO Janice Jackson more information to make changes that reflect the preferences of the district’s families and, hopefully, make difficult decisions a bit easier.

Neerav also points out that families rarely rank the lowest performing schools as a first choice — a fact he interprets as families making choices based on school performance. I agree but I see something troubling in the same graph:

Graph: Students’ Top-Ranked Program by School’s SQRP Rating, via https://consortium.uchicago.edu/publications/gocps-first-look-applications-and-offers

Low-income students, low-performing students, English Language Learners, students with special needs, and African American students ranked the top-performing schools lower than other subgroups did. The Consortium researchers made the same observation and call for more research. I agree. It’ll be important to know whether this difference is because of inadequate communication about school choices or quality, parents preferring lower-rated schools closer to home, or some other reason altogether. (The question is ripe for human-centered investigation.) The answer will help system administrators decide how to allocate scarce resources.

I can’t say this enough: the University of Chicago’s Consortium on School Research is a remarkable institution providing high-quality, actionable, relevant, and timely research for Chicago’s education leaders to use while making high-stakes strategic decisions. Every big city should have a similar outfit.

Preparing for Dynamic Systems of Schools

While traditional school districts are characterized by a relatively unchanging stock of schools, performance-based systems with effective parental choice mechanisms and rigorous school oversight are reshaping education in cities like New Orleans, DC, and Denver. These systems share one common denominator: dynamism, a central concept in modern economics that explains how new, superior ideas replace obsolete ones to keep a sector competitive.

The process happens through the entry and exit of firms and the expansion and contraction of jobs in a given market. As low-performing firms cease to operate, their human, financial, and physical capital are reallocated to new entrants or expanding incumbents offering better services or products.

Too little dynamism and underperformers continue to provide subpar services and consume valuable resources that could be used by better organizations. Too much dynamism creates economic instability and discourages entrepreneurs from launching new ventures and investors from funding them.

Dynamism, however, rarely comes up in discussions about education policy despite a growing number of urban education systems closing chronically underperforming schools and opening new, high-potential schools as a mechanism for continuous systemic improvement.

New Orleans’ system of schools has operated in this reality since Hurricane Katrina. And others like Denver and DC are implementing their own versions of dynamic, performance-based systems. To illustrate, below is a graph of charter school dynamism in DC between 2007 and 2018.

But it’s a novel study on Newark’s schools that provides the field’s best research on a dynamic system in action.

NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is not necessary, as it is too little time to meaningfully change trend lines or evaluate the impact of new policies.

Such frequent testing also has other costs: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history, citing NAEP budget cuts as the reason. However, though NAEP’s budget recovered and even increased in the years following, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable, as it tracks student achievement dating back to the early 1970s and provides another measure of academic achievement in addition to the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.


However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.

Three Reasons to Expect Little on Innovative Assessments — and Why That’s Not Such a Bad Thing

Photo by Josh Davis via Flickr

Next week is the deadline for states to submit an application for the innovative assessment pilot to the U.S. Department of Education (ED). If you missed this news, don’t worry, you haven’t missed much. The Every Student Succeeds Act (ESSA) allows ED to grant assessment flexibility to up to seven states to do something different from giving traditional end-of-year standardized tests. The best example of an innovative state assessment system is New Hampshire, which allows some districts to give locally designed performance-based assessments. These assessments look more like in-class activities than traditional standardized tests, and are developed and scored by teachers.

Two years ago, Education Week called the innovative assessment pilot “one of the most buzzed-about pieces” of ESSA because it could allow states to respond to testing pushback while still complying with the new federal law. But now only four states have announced they will apply, and expectations are subdued at best.

Why aren’t more states interested in an opportunity to get some leeway on testing? Here are three big reasons:

  1. Most states are playing it safe on ESSA and assessments are no exception

When my colleagues at Bellwether convened an independent review of ESSA state plans with 45 education policy experts, they didn’t find much ambition or innovation in state plans — few states went beyond the requirements of the law, and some didn’t even do that. Even Secretary of Education Betsy DeVos, who has approved the majority of state plans, recently criticized states for plans that “only meet the bare minimum” and don’t take full advantage of the flexibility offered in the law.

Several states responded that they were actually doing more than they had indicated in their plans. As my colleague Julie Squire pointed out last year, putting something extra in an ESSA plan could limit a state’s options and bring on more federal monitoring. If most states were fairly conservative and compliance-based with their big ESSA plans, there’s little reason to think they’ll unveil something new and surprising in a small-scale waiver application.

Additionally, the law includes several requirements for an innovative assessment that might be difficult for states to meet. For example, innovative tests have to be comparable across school districts, they have to meet the needs of special education students and English learners, and the pilot programs have to be designed to scale up statewide. If states have any doubts they can meet that bar, they probably won’t apply.

What Does it Take to Be a Quality Authorizer?

The autonomy-for-accountability bargain at the heart of the charter movement rests, crucially, on the effectiveness of the entities — known as authorizers — that have the ability to approve charter schools and the responsibility for holding them accountable. If authorizers are lax in their responsibilities — approving weak applications, failing to effectively monitor or assess school performance, or refusing to close low-performing schools — the accountability part of the bargain isn’t held up. But if they overstep their bounds, by limiting the kinds of schools they will approve, being overly prescriptive about requirements for school approval, or trying to micromanage schools they oversee, the autonomy part of the bargain goes missing. Getting the right balance between holding schools accountable and protecting their autonomy is a crucial question, both for authorizers and the charter movement as a whole, and since the start of the charter movement, it’s been the subject of heated debate — one that has intensified in recent years.
