Category Archives: Federal Education Policy

Do Incarcerated Youth Have Equal Access to Education? Let’s Look at the Data.

Although we regularly assess student learning and evaluate the effectiveness of teachers in traditional schools, there is almost no hard data on the quality of education in the schools that serve students held in juvenile justice facilities. These facilities tend to only collect data focused on safety and security. What kind of education do these students receive?

Based on the first year of available data from the U.S. Civil Rights Data Collection, we conducted a national analysis to answer some simple questions:

  1. How many youth are enrolled in juvenile justice schools across the U.S.?
  2. To what degree do they have access to math and science courses (the only courses on which we have data)?
  3. How often do they enroll in these courses?

What we encountered on the way – before even answering the latter two questions – was troubling.

At the start of our analysis, we needed to establish a rudimentary fact base: How many juvenile justice schools are there in each state, and how many kids are enrolled in each? Basic questions, it would seem. Thankfully, the U.S. Department of Education collects public school enrollment data nationally, and in the 2013-14 data set, the first one made available, it included juvenile justice schools in its definition of “public.” After adding up the number of students in juvenile justice schools in each state, we found the totals suspiciously low. Arkansas, for example, reported only six students enrolled in one juvenile justice school – in the entire state. South Carolina reported no juvenile justice schools at all.

We found it hard to believe that only six students were incarcerated in all of Arkansas, so we compared the enrollment data to another data set: the number of incarcerated youth in each state in 2013. If all were well in the world of data quality and educational access, we would expect the two data sets to roughly align, meaning the number of enrolled youth would account for about 100% of incarcerated youth. That, in turn, would give us a fairly accurate picture of educational opportunity for incarcerated youth in each state.

However, we found that in the majority of states, the enrollment numbers of juvenile justice schools didn’t remotely match the number of incarcerated youth over the same time frame. In only 18 states did the number of enrolled students somewhat account for the number of youth in placement (that is, account for 70%–130% of youth). In the other states, that alignment ranged from 0% (South Carolina) to 940% (Delaware). 940% means that way, way more youth were reported as enrolled in juvenile justice schools than were actually incarcerated. What seems mathematically impossible is more likely the result of schools being mislabeled as serving incarcerated youth, or of schools reporting cumulative enrollment (how many kids enrolled over a year) instead of snapshot enrollment (how many kids were attending school on one day).
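The alignment check we ran can be sketched in a few lines of Python. The state names and counts below are illustrative placeholders, not the actual CRDC or placement figures:

```python
# Sketch of the enrollment-vs.-placement alignment check described above.
# State names and counts are illustrative placeholders, not actual CRDC data.

def alignment_pct(enrolled: int, in_placement: int) -> float:
    """Enrolled youth as a percentage of youth in placement."""
    return 100 * enrolled / in_placement

# Hypothetical state figures: (enrolled in juvenile justice schools, youth in placement)
states = {
    "State A": (6, 300),      # drastic undercount
    "State B": (1100, 1200),  # roughly aligned
    "State C": (9400, 1000),  # cumulative enrollment inflates the count
}

for name, (enrolled, placed) in states.items():
    pct = alignment_pct(enrolled, placed)
    flag = "aligned" if 70 <= pct <= 130 else "mismatch"
    print(f"{name}: {pct:.0f}% ({flag})")
```

Only “State B” falls inside the 70%–130% band; the other two patterns mirror the undercounting and overcounting we saw in the real data.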

Without accurate data, it’s hard to make state-by-state comparisons about access to education in these facilities. Good data matters. Without it, we don’t know whether the thousands of kids who are reported as incarcerated, but not enrolled in a school, are actually getting an education. They deserve better.

Check out our other findings in the full slide deck, Measuring Educational Opportunity in Juvenile Justice Schools.

Alexander Brand was an intern at Bellwether in the spring of 2018.

Students Served by Multiple Systems of Care Deserve Better

At any given point in time, about 5 million kids are served by one or more of our nation’s child service agencies. These young people are living through traumatic and disruptive experiences ranging from homelessness to foster care placement to incarceration.

As I wrote in this piece nearly two years ago, these children navigate a fragmented world of adults, programs, and agencies, often serving as the only central point connecting all of those services.

In our latest publication, Continuity Counts, Hailly Korman and I offer our recommendations for addressing this fragmentation and improving cross-agency coordination. However, our project differs significantly from most other policy papers because we approached our research using human-centered design. This means that we started by talking to the very people who are impacted by agency fragmentation: the children and youth served by these agencies. We also talked to the direct-care providers working in various agencies. The goal of these interviews was to better understand the needs, wants, and constraints of both the youth and the care providers, in order to build a set of recommendations that addresses the challenges they face.

Through our human-centered design approach, we identified two key levers for change: continuity of people and continuity of information. By identifying a single adult to operate like a child’s “chief of staff,” we can mitigate the need for a child to interact with a myriad of adults. By improving data collection, sharing, and storage, we can reduce the burdens on youth and their caregivers that result from missing or incorrect information.

The silos that exist among agencies did not appear overnight and will not disappear quickly. However, just because agencies have always operated in relative isolation from one another does not mean it must always be like this. Eliminating, or at least substantially reducing, the fragmentation that exists among schools, government agencies, nonprofits, and community-based organizations is possible with deliberate and concerted effort over a long period of time. And doing so is necessary if we ever hope to provide youth with a cohesive, streamlined system of support throughout their education trajectories.

Read our full report here or our op-ed in The 74 here.

NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many have acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is unnecessary: two years is too little time to meaningfully change trend lines or to evaluate the impact of new policies.

Such frequent testing also has other costs: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history, citing NAEP budget cuts as the reason. However, even though NAEP’s budget recovered and even increased in the years that followed, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable because it tracks student achievement dating back to the early 1970s and provides a measure of academic achievement independent of the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time-lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.


However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.

Three Reasons to Expect Little on Innovative Assessments — and Why That’s Not Such a Bad Thing

Photo by Josh Davis via Flickr

Next week is the deadline for states to submit an application for the innovative assessment pilot to the U.S. Department of Education (ED). If you missed this news, don’t worry: you haven’t missed much. The Every Student Succeeds Act (ESSA) allows ED to grant assessment flexibility to up to seven states to do something different from giving traditional end-of-year standardized tests. The best example of an innovative state assessment system is New Hampshire’s, which allows some districts to give locally designed performance-based assessments. These assessments look more like in-class activities than traditional standardized tests, and are developed and scored by teachers.

Two years ago, Education Week called the innovative assessment pilot “one of the most buzzed-about pieces” of ESSA because it could allow states to respond to testing pushback while still complying with the new federal law. But now only four states have announced they will apply, and expectations are subdued at best.

Why aren’t more states interested in an opportunity to get some leeway on testing? Here are three big reasons:

  1. Most states are playing it safe on ESSA and assessments are no exception

When my colleagues at Bellwether convened an independent review of ESSA state plans with 45 education policy experts, they didn’t find much ambition or innovation in state plans — few states went beyond the requirements of the law, and some didn’t even do that. Even Secretary of Education Betsy DeVos, who has approved the majority of state plans, recently criticized states for plans that “only meet the bare minimum” and don’t take full advantage of the flexibility offered in the law.

Several states responded that they were actually doing more than they had indicated in their plans. As my colleague Julie Squire pointed out last year, putting something extra in an ESSA plan could limit a state’s options and bring on more federal monitoring. If most states were fairly conservative and compliance-based with their big ESSA plans, there’s little reason to think they’ll unveil something new and surprising in a small-scale waiver application.

Additionally, the law includes several requirements for an innovative assessment that might be difficult for states to meet. For example, innovative tests have to be comparable across school districts, they have to meet the needs of special education students and English learners, and the pilot programs have to be designed to scale up statewide. If states have any doubts they can meet that bar, they probably won’t apply.

Three Potential Risks of New Federal Weighted Student Funding Pilot

The education field widely acknowledges that some students may need additional support to thrive in school and beyond because of challenging life circumstances, specific learning needs, or other factors. And, in fact, the structure of federal funding programs like Title I and the design of many state school funding formulas recognize this principle and provide targeted support and differentiated funding based on specific student needs.

However, this idea is rarely reflected at the local district and school level, where budgets are more commonly based on inputs like staffing ratios and salary schedules that are not directly linked to the needs of students served in a given school. But a new federal pilot program authorized under the Every Student Succeeds Act of 2015 (ESSA) seeks to change that by incentivizing more districts to redesign their school funding methods around students.

School districts’ applications to participate in ESSA’s weighted student funding pilot program are due to Secretary DeVos today. And while these funding models could theoretically increase equity, the devil is in the details. The Department, advocates, and ed-watchers should be on the lookout for both the potential rewards and the risks of these district proposals.

Under a weighted student funding model (WSF), districts fund schools in whole or in part through a formula that considers the total number of students served in each school and specific student characteristics linked to higher costs. These types of formulas assign greater funding weight to students with such characteristics, sending more money to the schools serving them.
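As a concrete illustration, a simple WSF formula adds a fixed share of the base per-pupil amount for each weighted student characteristic. The base dollar figure and weights below are hypothetical, not any district’s actual formula:

```python
# Illustrative weighted student funding (WSF) calculation. The base amount
# and weights are hypothetical, not taken from any district's actual formula.
BASE = 5_000  # base per-pupil funding, in dollars (hypothetical)
WEIGHTS = {   # extra funding as a share of BASE per student in each category
    "low_income": 0.25,
    "english_learner": 0.40,
}

def school_allocation(counts: dict) -> float:
    """Total dollars for a school, given its student counts by category."""
    dollars = counts["total"] * BASE
    for category, weight in WEIGHTS.items():
        dollars += counts.get(category, 0) * BASE * weight
    return dollars

# A 400-student school with 200 low-income students and 50 English learners:
print(school_allocation({"total": 400, "low_income": 200, "english_learner": 50}))
# → 2350000.0  (2,000,000 base + 250,000 low-income + 100,000 EL weight)
```

Because the weighted dollars follow the students, a school serving more low-income students or English learners automatically receives a larger allocation than an otherwise identical school.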

Well-designed WSF systems can counter the unfortunate result of common funding distribution methods currently in practice in many districts, where input-driven funding methods often result in higher funding levels in schools that serve fewer high-need students. As such, in theory, encouraging more districts to implement funding allocations that shift resources toward student need should be a boon to equity — a potentially big “reward.”

To date, districts that have implemented WSF, such as Boston, Denver, and Indianapolis, have limited these allocation methodologies to state and local funds. Federal funds have been left out of the mix primarily because federal regulatory and reporting requirements make it complicated and burdensome to mingle federal, state, and local resources in a single, unified WSF formula.

This ESSA pilot could change that by waiving many federal requirements and permitting approved districts to combine funds and allocate them to schools under locally determined WSF formulas. In exchange, these formulas must provide “substantially more” funding to low-income students and English language learners compared with other students.