Category Archives: Federal Education Policy

Since Janus Isn’t a Simple “Win/Lose,” What Else Are the Justices Deciding?

Image: the current U.S. Supreme Court justices (via Franz Jantzen, Collection of the Supreme Court of the United States)

The Supreme Court has yet to announce its decision in Janus v. AFSCME, the case that will decide the fate of agency fees — fees paid to unions by non-members to support collective bargaining activities. So while you’re waiting (and studying up on the history of unions using our recently released slide deck), here are three things you need to know about the Court’s decision-making process:

  1. There is a range of possible rulings.

The Supreme Court’s decision is not going to be a simple “win/lose.” While Janus will, in fact, either “win” or “lose” his case, the Court’s written interpretation is what will shape future law and policy. And this written interpretation could be very narrow, quite broad, or fall somewhere in the middle. A very narrow ruling, for example, could simply affirm the lower court’s decision, in which case nothing would change. On the other end of the spectrum, the Court could go beyond the agency fee question presented in the case and find more broadly that exclusive representation is also unconstitutional.

  2. In its decision, the Court will likely reference a long history of precedents on agency fees and free speech.

The Court has been ruling on the issue of agency fees for decades. Analysts and commentators most frequently cite the 1977 Abood case, which endorsed the current agency-fee arrangement. But there are other cases that could be just as important. For example, the 1968 Pickering v. Board of Education case dealt with a teacher who was fired after writing a letter to a local newspaper that was critical of some of his school board’s financial decisions. The Court found in Pickering’s favor, ruling that his right to freedom of speech was violated when he was fired for writing this letter. In making its decision, the Court had to balance the interests of Pickering, who was a citizen speaking on matters of public concern, against those of the government (in this case, the school board) as an employer seeking to provide efficient public services. This balancing of interests has become known as the Pickering test.

The Court could apply the Pickering test to Illinois’ law, which would require it to balance the interests of Janus speaking as a citizen on a matter of public concern with those of the government as an employer. The Court could find either that the state’s interests as an employer outweigh Janus’ free speech interest (meaning that Janus would lose) or that Janus is speaking as a citizen on a matter of public concern and that this free speech interest outweighs the state’s interests as his employer (meaning that Janus would win).

Another case the Court may reference is the 1991 Lehnert v. Ferris Faculty Association case, which defined the activities for which unions can compel agency fees from non-members. These activities must 1) be “germane” to collective bargaining, 2) be justified by the government’s interest in maintaining labor peace, and 3) not significantly add to the burden on free speech.

The Court could decide that agency fees are legal but revisit the definition of the expenses for which unions can charge non-members.

  3. The Court will avoid a constitutional question whenever possible.

Canons of construction are principles that provide guidance to the courts as they interpret statutes. One of these principles is to “first ascertain whether a construction of the statute is fairly possible by which [a constitutional] question may be avoided.” In other words, if there is a reasonable interpretation of the statute that does not conflict with the Constitution, the Court will adopt this interpretation.

This could be the case for Janus: The Court could find that there is an equally reasonable interpretation of Illinois’ law that does not raise a First Amendment free speech issue. The Court would have to adopt this interpretation, and Janus would lose.

While the Court’s decision is expected in the coming days, there’s no way to predict what it will be. So in the meantime, check out our deck on the history of unions and the implications of the Janus decision here.

Do Incarcerated Youth Have Equal Access to Education? Let’s Look at the Data.

Although we regularly assess student learning and evaluate the effectiveness of teachers in traditional schools, there is almost no hard data on the quality of education in the schools that serve students held in juvenile justice facilities. These facilities tend to only collect data focused on safety and security. What kind of education do these students receive?

Based on the first year of available data from the U.S. Civil Rights Data Collection, we conducted a national analysis to answer some simple questions:

  1. How many youth are enrolled in juvenile justice schools across the U.S.?
  2. To what degree do they have access to math and science courses (the only courses on which we have data)?
  3. How often do they enroll in these courses?

What we encountered on the way – before even answering the latter two questions – was troubling.

At the start of our analysis, we needed to set up a rudimentary fact base. How many juvenile justice schools are there in each state, and how many kids are enrolled in each? Basic questions, it would seem. Thankfully, the U.S. Department of Education collects public school enrollment data nationally. In the 2013-14 data set, the first one available, the department included juvenile justice schools in its definition of “public.” After adding up the number of students in juvenile justice schools for each state, we found that the number was suspiciously low. For example, Arkansas reported only six students enrolled in one juvenile justice school – in the entire state. South Carolina reported no juvenile justice schools at all.

We found it hard to believe that only six students were incarcerated in all of Arkansas, so we compared the enrollment data to another data set – the number of incarcerated youth in each state for the year 2013. If all were well in the world of data quality and educational access, we would expect the data sets to roughly align, meaning the number of enrolled youth would account for about 100% of incarcerated youth. That, in turn, would give us a fairly accurate picture of educational opportunity for incarcerated youth in each state.

However, we found that in the majority of states, the enrollment numbers of juvenile justice schools didn’t remotely match up with the number of incarcerated youth for the same time frame. In only 18 states did the number of enrolled students somewhat account for the number of youth in placement (that is, account for 70%-130% of youth). In the other states, that alignment ranged from 0% (South Carolina) to 940% (Delaware). A figure of 940% means that far more youth were reported as enrolled in juvenile justice schools than were actually incarcerated. What seems mathematically impossible is more likely the result of schools being mislabeled as serving incarcerated youth, or of schools reporting cumulative enrollment (how many kids enrolled over the course of a year) instead of snapshot enrollment (how many kids were attending school on one day).
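To make the threshold concrete, here is a minimal sketch of that alignment check in Python. The state labels, counts, and dictionary layout are hypothetical placeholders for illustration, not the actual CRDC or placement files; only the 70%-130% band comes from the analysis described above.

```python
# Minimal sketch of the state-level alignment check described above.
# All state labels and numbers are illustrative placeholders, not real data.

enrollment = {"State A": 6, "State B": 0, "State C": 3200}       # reported juvenile justice school enrollment
in_placement = {"State A": 600, "State B": 800, "State C": 340}  # incarcerated/placed youth in the same year

for state, enrolled in enrollment.items():
    placed = in_placement[state]
    ratio = enrolled / placed if placed else float("nan")
    aligned = 0.70 <= ratio <= 1.30
    label = "plausible" if aligned else "suspect"
    print(f"{state}: enrollment covers {ratio:.0%} of placed youth ({label})")
```

A ratio far outside that band is the signal described above: either the school is mislabeled as serving incarcerated youth, or it is reporting cumulative rather than snapshot enrollment.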

Without accurate data, it’s hard to make state-by-state comparisons about access to education in these facilities. Good data matters. Without it, we don’t know whether the thousands of kids who are reported as incarcerated, but not enrolled in a school, are actually getting an education. They deserve better.

Check out our other findings in the full slide deck, Measuring Educational Opportunity in Juvenile Justice Schools.

Alexander Brand was an intern at Bellwether in the spring of 2018.

Students Served by Multiple Systems of Care Deserve Better

At any given point in time, about 5 million kids are served by one or more of our nation’s child service agencies. These young people are living through traumatic and disruptive experiences ranging from homelessness to foster care placement to incarceration.

As I wrote in this piece nearly two years ago, these children are navigating a fragmented world of adults, programs, and agencies, often operating as the only central point among all of the services.

In our latest publication, Continuity Counts, Hailly Korman and I offer our recommendations for addressing this fragmentation and improving cross-agency coordination. However, our project differs significantly from most other policy papers because we approached our research using human-centered design. This means that we started by talking to the very people who are impacted by agency fragmentation: the children and youth served by these agencies. We also talked to the direct-care providers working in various agencies. The goal of these interviews was to better understand the needs, wants, and constraints of both the youth and the care providers, in order to build a set of recommendations that addresses the challenges they face.

Through our human-centered design approach, we identified two key levers for change: continuity of people and continuity of information. By identifying a single adult to operate like a child’s “chief of staff,” we can mitigate the need for a child to interact with a myriad of adults. By improving data collection, sharing, and storage, we can reduce the burdens on youth and their caregivers that result from missing or incorrect information.

The silos that exist among agencies did not appear overnight and will not disappear quickly. However, just because agencies have always operated in relative isolation from one another does not mean it must always be like this. Eliminating, or at least substantially reducing, the fragmentation that exists among schools, government agencies, nonprofits, and community-based organizations is possible with deliberate and concerted effort over a long period of time. And doing so is necessary if we ever hope to provide youth with a cohesive, streamlined system of support throughout their education trajectories.

Read our full report here or our op-ed in The 74 here.

NAEP Results Again Show That Biennial National Tests Aren’t Worth It

Once again, new results from the National Assessment of Educational Progress (NAEP) show that administering national math and reading assessments every two years is too frequent to be useful.

The 2017 NAEP scores in math and reading were largely unchanged from 2015, when those subjects were last tested. While there was a small gain in eighth-grade reading in 2017 — a one-point increase on NAEP’s 500-point scale — it was not significantly different from eighth graders’ performance in 2013.

Many observers acknowledged that NAEP gains have plateaued in recent years after large improvements in earlier decades, and some have even described 2007-2017 as the “lost decade of educational progress.” But this sluggishness also shows that administering NAEP’s math and reading tests (referred to as the “main NAEP”) every two years is not necessary: two years is too little time to meaningfully change trend lines or to evaluate the impact of new policies.

Such frequent testing also has other costs: In recent years, the National Assessment Governing Board (NAGB), the body that sets policy for NAEP, has reduced the frequency of the Long-Term Trend (LTT) assessment and limited testing in other important subjects like civics and history in order to cut costs. NAGB cited NAEP budget cuts as the reason for reducing the frequency of other assessments. But even though NAEP’s budget recovered and even increased in the years that followed, NAGB did not undo the previously scheduled reductions. (The LTT assessment is particularly valuable, as it tracks student achievement dating back to the early 1970s and provides another measure of academic achievement in addition to the main NAEP test.)

Instead, the additional funding was used to support other NAGB priorities, namely the shift to digital assessments. Even so, the release of the 2017 data was delayed by six months due to comparability concerns, and some education leaders are disputing the results because their students are not familiar enough with using tablets.

That is not to say that digital assessments don’t have benefits. For example, the new NAEP results include time-lapse visualizations of students’ progress on certain types of questions. In future iterations of the test, these types of metadata could provide useful information about how various groups of students differ in their test-taking activity.


However, these innovative approaches should not come at the expense of other assessments that are useful in the present. Given the concerns some have with the digital transition, this is especially true of the LTT assessment. Instead, NAGB should consider administering the main NAEP test less frequently — perhaps only every four years — and use the additional capacity to support other assessment types and subjects.

Three Reasons to Expect Little on Innovative Assessments — and Why That’s Not Such a Bad Thing

Photo by Josh Davis via Flickr

Next week is the deadline for states to submit an application for the innovative assessment pilot to the U.S. Department of Education (ED). If you missed this news, don’t worry; you haven’t missed much. The Every Student Succeeds Act (ESSA) allows ED to grant assessment flexibility to up to seven states to do something different from giving traditional end-of-year standardized tests. The best example of an innovative state assessment system is New Hampshire’s, which allows some districts to give locally designed performance-based assessments. These assessments look more like in-class activities than traditional standardized tests, and are developed and scored by teachers.

Two years ago, Education Week called the innovative assessment pilot “one of the most buzzed-about pieces” of ESSA because it could allow states to respond to testing pushback while still complying with the new federal law. But now only four states have announced they will apply, and expectations are subdued at best.

Why aren’t more states interested in an opportunity to get some leeway on testing? Here are three big reasons:

  1. Most states are playing it safe on ESSA, and assessments are no exception

When my colleagues at Bellwether convened an independent review of ESSA state plans with 45 education policy experts, they didn’t find much ambition or innovation in state plans — few states went beyond the requirements of the law, and some didn’t even do that. Even Secretary of Education Betsy DeVos, who has approved the majority of state plans, recently criticized states for plans that “only meet the bare minimum” and don’t take full advantage of the flexibility offered in the law.

Several states responded that they were actually doing more than they had indicated in their plans. As my colleague Julie Squire pointed out last year, putting something extra in an ESSA plan could limit a state’s options and bring on more federal monitoring. If most states were fairly conservative and compliance-based with their big ESSA plans, there’s little reason to think they’ll unveil something new and surprising in a small-scale waiver application.

Additionally, the law includes several requirements for an innovative assessment that might be difficult for states to meet. For example, innovative tests have to be comparable across school districts, they have to meet the needs of special education students and English learners, and the pilot programs have to be designed to scale up statewide. If states have any doubts they can meet that bar, they probably won’t apply.