Category Archives: Accountability

How Do We Incentivize Charter Authorizers to Approve More High-Quality Alternative Schools? A Q&A With Colorado’s Antonio Parés.

Antonio Parés headshot via Twitter

“Alternative education” is a catch-all term used to describe education programs for students who have not been well-served by traditional classroom environments. It can refer to computer-based rapid credit accrual opportunities, supportive programs for students who are pregnant or parenting, intensive English-language programs for students who have come to the United States with substantial education histories in another language, “second chance” placements for students expelled from traditional public schools, and everything in between. Precise definitions vary by state and school district.

While traditional public school districts have historically offered these alternative programs for their students, more and more state or local charter schools are beginning to offer similar programs. Charter statutes often allow the flexibility that makes room for innovation, which is needed to operate programs that meet the specific needs of some of our most vulnerable students. Yet ensuring appropriate accountability for alternative charter schools — crucial to fulfilling the other side of the autonomy-for-accountability bargain — has proven challenging.

Forward-thinking charter authorizers are contemplating the policies and institutional practices that create strong authorizing and accountability incentives for alternative programs. The right mix of flexibility, autonomy, rigor, and relevance can ensure not only that authorizers enable more alternative schools to exist, but also that the schools they authorize provide the highest quality programs that best meet the needs of the students they serve. Good authorizing practices can also prevent schools that provide alternative programs from simply relaxing their standards and becoming a catch basin for low-performing students.

A primary challenge for authorizers is that the accountability metrics typically used to measure the performance of charter schools — such as student achievement or growth on state standardized assessments, student attendance, and four-year graduation rates — may not apply accurately. Alternative charter schools often serve students who enter with unique educational and life challenges or who are already far below grade level because of gaps in their prior schooling. Applying these measures rigidly can create disincentives for operators to open, or authorizers to approve, alternative school models. Conversely, some states create loopholes that allow alternative schools and their authorizers to evade accountability altogether. Some intrepid authorizers have invested significant time and resources in developing fair and accurate ways to measure the performance of diverse alternative schools; however, state laws and regulations do not always align with such approaches.

Colorado has convened a cross-agency task force of leaders, experts, and policymakers to modify its authorizing system by improving the rigor and relevance of performance metrics for the state’s alternative education campuses (AECs).

Antonio Parés, a partner at the Donnell-Kay Foundation, is a board member of the Colorado Charter School Institute (CSI), which convened the AEC task force. CSI is Colorado’s only statewide charter school authorizer, and it currently authorizes 39 schools serving over 17,500 PK-12 students across the state. We recently caught up with Antonio to talk about the unique needs of AECs and what that means for authorizers and state education policy.

This interview has been edited for length and clarity.

You’ve been working with a task force in Colorado to improve the ways that the state holds charter authorizers accountable for the success of their alternative education campuses. Can you tell us about that process and the challenges you’re facing?

Every year or two, CSI works with our alternative education campuses to identify “alternative measurements” for individual schools or for all of them. Alternative measurements include student perception surveys, interim assessments such as NWEA’s MAP, or alternative post-secondary paths. CSI convened a statewide task force to review and collaborate on best practices for accountability measurements and outcomes at our alternative education campuses, schools typically serving under-credited and at-risk students. We were trying — and continue to try — to balance the unique nature of each campus and its student population with the need for consistent, longitudinal, and comparable data points. Our goal was — and continues to be — to develop the best performance metrics and frameworks for every school. Continue reading

What High School Applications and Acceptance Offers Tell Us About Chicago’s System of Schools

Before digging into the research on Chicago’s education system and talking to many of the city’s leaders for a current project at Bellwether, I categorized the district as largely traditional with a decent-sized charter sector. What I learned was that Chicago has more school types and school choice than I realized, especially at the high school level. It turns out that while most of the headlines regarding the district have been about scandals and violence, a lot of people have been focused on making sure more kids go to better schools faster.

That fact was reinforced when I looked at a recent report from the University of Chicago Consortium on School Research on the first round of applications and offers from Chicago’s brand new high school unified enrollment system. Neerav Kingsland provides a good take on the results. I just want to reiterate one point and add a few more observations.

Screenshot of GoCPS via https://go.cps.edu/

Neerav points out that Chicago Public Schools’ (CPS) school information and application website GoCPS is easy to use. I can’t emphasize that enough. It’s insanely intuitive and informative. When I was making school choice decisions for my son in San Francisco earlier this year, I had to toggle between Google Maps, a PDF with school information from the district, and school performance information that I collected and analyzed myself. GoCPS has a map-based interface that provides all the information parents need, and it would have given me everything in one place. Why don’t more cities with school choice have a similar platform?

On a different note, the Consortium report notes that CPS has approximately 13,000 surplus seats in the district, an oversupply in other words, which might lead to more school closures and mergers. In addition to creating easier and more equitable enrollment processes for a district, unified enrollment systems provide detailed information about parent demand (and lack thereof). School closures have been painful for CPS in the past. The unified enrollment system now gives CPS CEO Janice Jackson more information to make changes that reflect the preferences of the district’s families and, hopefully, make difficult decisions a bit easier.

Neerav also points out that families rarely rank the lowest performing schools as a first choice — a fact he interprets as families making choices based on school performance. I agree but I see something troubling in the same graph:

Chart: Students’ Top-Ranked Program by School’s SQRP Rating, via https://consortium.uchicago.edu/publications/gocps-first-look-applications-and-offers

Low-income students, low-performing students, English Language Learners, students with special needs, and African American students ranked the top-performing schools lower than other subgroups did. The Consortium researchers made the same observation and called for more research. I agree. It’ll be important to know whether this difference is because of inadequate communication about school choices or quality, parents preferring lower-rated schools closer to home, or some other reason altogether. (The question is ripe for human-centered investigation.) The answer will help system administrators decide how to allocate scarce resources.

I can’t say this enough: the University of Chicago’s Consortium on School Research is a remarkable institution providing high-quality, actionable, relevant, and timely research for Chicago’s education leaders to use while making high-stakes strategic decisions. Every big city should have a similar outfit.

Preparing for Dynamic Systems of Schools

While traditional school districts are characterized by a relatively unchanging stock of schools, performance-based systems with effective parental choice mechanisms and rigorous school oversight are reshaping education in cities like New Orleans, DC, and Denver. These systems share one common denominator: dynamism, a central concept in modern economics that explains how new, superior ideas replace obsolete ones to keep a sector competitive.

The process happens through the entry and exit of firms and the expansion and contraction of jobs in a given market. As low-performing firms cease to operate, their human, financial, and physical capital are reallocated to new entrants or expanding incumbents offering better services or products.

Too little dynamism and underperformers continue to provide subpar services and consume valuable resources that could be used by better organizations. Too much dynamism creates economic instability and discourages entrepreneurs from launching new ventures and investors from funding them.

Dynamism, however, rarely comes up in discussions about education policy despite a growing number of urban education systems closing chronically underperforming schools and opening new, high-potential schools as a mechanism for continuous systemic improvement.

New Orleans’ system of schools has operated in this reality since Hurricane Katrina. And others like Denver and DC are implementing their own versions of dynamic, performance-based systems. To illustrate, below is a graph of charter school dynamism in DC between 2007 and 2018.

But it’s a novel study of Newark’s schools that provides the field’s best research on a dynamic system in action. Continue reading

NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many have acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is unnecessary: two years is too little time to meaningfully change trend lines or to evaluate the impact of new policies.

Such frequent testing also has other costs: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history, citing NAEP budget cuts. However, even though NAEP’s budget recovered and then increased in the years that followed, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable, as it tracks student achievement dating back to the early 1970s and provides another measure of academic achievement in addition to the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.


However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.

Three Reasons to Expect Little on Innovative Assessments — and Why That’s Not Such a Bad Thing

Photo by Josh Davis via Flickr

Next week is the deadline for states to submit an application for the innovative assessment pilot to the U.S. Department of Education (ED). If you missed this news, don’t worry; you haven’t missed much. The Every Student Succeeds Act (ESSA) allows ED to grant assessment flexibility to up to seven states to do something different from giving traditional end-of-year standardized tests. The best example of an innovative state assessment system is New Hampshire’s, which allows some districts to give locally designed performance-based assessments. These assessments look more like in-class activities than traditional standardized tests, and they are developed and scored by teachers.

Two years ago, Education Week called the innovative assessment pilot “one of the most buzzed-about pieces” of ESSA because it could allow states to respond to testing pushback while still complying with the new federal law. But now only four states have announced they will apply, and expectations are subdued at best.

Why aren’t more states interested in an opportunity to get some leeway on testing? Here are three big reasons:

  1. Most states are playing it safe on ESSA and assessments are no exception

When my colleagues at Bellwether convened an independent review of ESSA state plans with 45 education policy experts, they didn’t find much ambition or innovation — few states went beyond the requirements of the law, and some didn’t even do that. Even Secretary of Education Betsy DeVos, who has approved the majority of state plans, recently criticized states for plans that “only meet the bare minimum” and don’t take full advantage of the flexibility offered in the law.

Several states responded that they were actually doing more than they had indicated in their plans. As my colleague Julie Squire pointed out last year, putting something extra in an ESSA plan could limit a state’s options and bring on more federal monitoring. If most states were fairly conservative and compliance-based with their big ESSA plans, there’s little reason to think they’ll unveil something new and surprising in a small-scale waiver application.

Additionally, the law includes several requirements for an innovative assessment that might be difficult for states to meet. For example, innovative tests have to be comparable across school districts, they have to meet the needs of special education students and English learners, and the pilot programs have to be designed to scale up statewide. If states have any doubts they can meet that bar, they probably won’t apply. Continue reading