Don’t Hold Preparation Programs Accountable for Inputs – But Outcomes Aren’t Much Better

Chad Aldeman and I released two papers on teacher preparation this morning. Both papers look at efforts to improve the quality of educator preparation programs and, consequently, future educators.

Some background context: to date, states have tried to affect teacher preparation in one of two ways.

  • Regulating inputs. States impose rules for teacher candidates and the preparation programs they attend, because they assume inputs – like admission GPA, type of coursework, and certification requirements – serve as a proxy for teacher quality.
  • Monitoring outcomes. States that take this approach also regulate inputs, but their focus is on a teacher’s performance after she leaves the preparation program. These states measure certain outcomes of teacher performance – like impact on student learning, job placement, retention, and evaluation rating – and link those outcomes back to the preparation program.

Currently, most states regulate inputs, while a handful monitor outcomes. That could soon change. A pending federal regulation would require that all states hold programs accountable based on the outcomes of their completers by 2019.

In our first paper, Peering Around the Corner, Chad and I look at 11 states that link teachers back to the preparation programs that train them. For each state, we look at how they’re measuring and defining outcomes, how they’re sharing that information with the public, and what, if any, accountability they’re attaching to the results.

The second paper, No Guarantees, is our attempt to take a step back and look at what we know about preparing teachers and helping them improve. We discovered that every year, preparation programs produce new teachers who have collectively invested $4.85 billion and 302 million hours in their preparation — but there is little evidence that any of it matters very much. Unfortunately, we also found bad news about using outcomes for accountability. A growing body of evidence suggests that completer outcomes may not differentiate preparation programs as clearly as hoped. In fact, the differences between programs are very small — practically indistinguishable — and almost all of the variation in teacher preparation occurs within programs, not between them.

Taken together, these papers tell two sides of the same story. No Guarantees makes the case that using outcomes for accountability purposes will likely be very difficult for most programs, and describes one possible solution. Peering Around the Corner acknowledges the interest in holding preparation programs accountable for their completers’ outcomes, and creates a roadmap of the tradeoffs and decisions states will have to make in doing so.