October 20, 2016

Third Presidential Debate Recap: The American Electorate is Left Guessing on K-12 Education Policy


Illustration by VectorOpenStock.com

The third and final presidential debate is over. Viewers and the media agree that while the last square-off between Clinton and Trump had its expected off-topic and personal exchanges, it was the most substantive of the three debates. Yet, once again, the candidates did not debate education policy.

To her credit, Clinton did mention education. Like in the past debates, the topic came up when she touted her economic plan. “I feel strongly we have to have an education system that starts with preschool and goes through college,” she said. “That’s why I want more technical education in high schools and community colleges, real apprenticeships to prepare people for the real jobs of the future.”

Clinton took a page from her running mate Tim Kaine’s book when mentioning career and technical education, a policy area near and dear to his heart (though he did not mention it during the vice presidential debate). She then went on to mention her plan of making college debt free for families earning less than $125,000 — a plan she worked on with Bernie Sanders, and one of the education topics she often mentions during public speaking events.

But those hoping to hear Clinton talk about her plans for students in elementary and middle school were left disappointed. Both Clinton and Trump finished the debate cycle with negligible mentions of K-12 policy.

That leaves the education community guessing at what K-12 policy might look like under Clinton or Trump. If the candidates themselves or their running mates won’t talk about the issue, the next best place to look is their advisers and surrogates.

Go Forth and Improve, Teacher Preparation Programs. But Don’t Ask How.


Image by Kevin Dooley via Flickr

A few weeks ago, former Secretary of Education Arne Duncan wrote an open letter calling out education schools. In it, he made several blunt remarks about the quality of teacher preparation programs, including that current teacher training “lacks rigor, is out of step with the times, and […] leaves teachers unprepared and their future students at risk.”

What the former Secretary’s letter didn’t include, however, were specifics on how preparation programs should improve. He talked a lot about grades, and about holding teachers to high standards, but that’s it.

At this point, you may be thinking: “You can’t expect him to get into the nitty gritty! The letter was more an op-ed than a policy brief.”

Sure. But then last week, the Department of Education released the final version of its long-awaited teacher preparation regulations. The regulations are an effort to hold teacher preparation programs accountable for the performance of the teachers they train after those teachers enter the classroom. Using teacher performance data, the regulations require states to create a system that rates programs as effective, at-risk, or low-performing.

Like the open letter, these regulations are devoid of specifics for how programs should improve. They say that states need to provide technical assistance for low-performing programs, for example, but don’t hint at what that support should look like. When the regulations were out for public comment (comments were due in February 2015), several commenters suggested that the regulations should include specific prescriptions for what states need to do to support programs — but the Department declined, saying instead that states have “the discretion to implement technical assistance in a variety of ways.”

Why do both of these documents — representing the past and future of the highest education office — say practically nothing about how preparation programs can get better?

The answer is depressing: As a field, we don’t know how to build a better teacher preparation program.

That’s what Melissa Steel King and I found in our latest paper, A New Agenda: Research to Build a Better Teacher Preparation Program. There’s half a century of research on what makes a good teacher, but that research provides only the barest outlines of what an effective preparation program should look like. So much of teacher prep research asks “Does it work?” when really we need to be asking, “How well does it work, for whom, and under what circumstances?”

October 18, 2016

Bringing Evidence to the Early Childhood Conversation: A Timely Issue

Behavioral Science & Policy Association

Improving access to quality early childhood education is increasingly a priority for policymakers at all levels of government. But smart policies to expand early learning opportunities need to be based on research and evidence. A newly released feature in the Behavioral Science & Policy Journal seeks to provide an overview of relevant research, and includes a piece from me and my colleague Ashley LiBetti Mitchel.

The issue looks at what we’ve learned from recent policy developments and research on home visiting programs, state pre-k programs, and Head Start. Ron Haskins, who edited the series, provides an overview of the current landscape of early childhood education programs. Cynthia Osborne offers four lessons policymakers should take from research on home visiting. Dale Farran and Mark Lipsey, and Christina Weiland, offer differing takes on the potential to scale high-quality preschool. Ashley LiBetti Mitchel and I describe recent research and policy developments related to Head Start. And Ajay Chaudry and Jane Waldfogel outline a vision for a much more robust system of early care and education policies to improve results for American kids.

In our piece, Ashley and I argue that, while research demonstrates Head Start’s positive impacts on participating children, it also suggests that Head Start’s results vary widely across grantees and do not match those of the most successful early childhood programs. Given this evidence, we argue that the relevant question for policymakers is not whether Head Start works but how to increase the number of Head Start centers that work as well as the most effective Head Start centers and state-funded pre-K programs. We review the effect of recent policy initiatives that have sought to do this, and offer recommendations for future policies to further support improvements in Head Start quality and outcomes.

You can read our piece, as well as the entire issue, here.


October 17, 2016

Should a Pro Football Player Endorse For-Profit Colleges?

If you’ve watched the Arizona Cardinals play during this year’s NFL season, you may have seen a commercial for the University of Phoenix featuring Pro Bowl receiver Larry Fitzgerald, who earned his bachelor’s degree from the school earlier this year.

It’s a powerful commercial, both heartwarming and melancholy. However, should Fitzgerald — widely regarded as one of the better role models in sports today — really be supporting a for-profit college?

These types of institutions, which offer flexible course schedules and career-oriented education, are uniquely suited to professional athletes, many of whom opted out of finishing college to pursue their athletic careers.

As a result, Fitzgerald isn’t the only star athlete who’s become a University of Phoenix graduate during his career. For example, the Arizona Diamondbacks’ All-Star first baseman Paul Goldschmidt completed his degree in 2013.

However, for-profit colleges have faced criticism for misleading and defrauding students, leaving them with large amounts of debt and little to show for it. Amid the collapse of industry giants like Corinthian Colleges and ITT Technical Institutes, the U.S. Department of Education (ED) has even created a “Student Aid Enforcement Unit.”

October 14, 2016

Reactions to the U.S. Education Innovation Index

One of the main goals of creating and publishing the U.S. Education Innovation Index Prototype and Report was to stimulate evidence-based conversations about innovation in the education sector and push the field to consider more sophisticated tools, methods, and practices. Since its release three weeks ago at the Digital Promise Innovation Clusters convening in Providence, the index has been met with an overwhelmingly positive reception.

I’m grateful for the many fruitful one-on-one conversations that have pushed my thinking, raised interesting questions, and provoked new ideas.

Here are a few takeaways on the report itself:

People love radar charts. And I’m one of those people. In the case of the innovation index, radar charts were a logical choice for visualizing nine dimensions and a total score. Here they are again in all their glory.

City Comparisons
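For readers curious how a chart like this is built, a radar (spider) chart for nine dimensions can be sketched in a few lines of matplotlib. The dimension labels and scores below are placeholders for illustration, not the index’s actual data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical labels and 0-100 scores (NOT the index's real dimensions or data)
labels = [f"Dimension {i}" for i in range(1, 10)]
scores = [62, 48, 75, 55, 40, 68, 59, 71, 50]

# One angle per dimension, evenly spaced around the circle
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
# Close the polygon by repeating the first point at the end
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 100)
fig.savefig("radar.png")
```

The one non-obvious step is closing the polygon: repeating the first angle and value so the plotted outline connects back to where it started.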

Readers weren’t always clear on the intended audience or purpose. This concern came up often and hit close to home as someone who strives to produce work that is trusted, relevant, timely, and useful. One of the benefits of the prototype is that we can test the tool’s utility before expanding the scope of the project to more cities or an even more complicated theoretical framework. So far the primary audience for the index (funders, policy makers, superintendents, education leaders, and city leaders) has demonstrated interest in learning more about the thinking behind the index and how it can be applied to their work. Ultimately I hope it will influence high-stakes funding, policy, and strategic decisions.

The multidimensionality of innovation challenges assumptions. When I explain that we measured the innovativeness of education sectors in four cities (New Orleans, San Francisco, Indianapolis, and Kansas City, MO), inevitably the next question I get is “how do they rank?” Instead of answering, I ask my interlocutor for his/her rankings. I’ve had this exchange dozens of times, and in almost every case, New Orleans topped the list because of the unique charter school environment. When I then explain that the index was sector agnostic (it doesn’t give preference to charter, district, or private schools), people immediately reconsider and put San Francisco in the number one slot. What this tells me is that many people associate innovation with one approach rather than treating it as the multidimensional concept that it is. This misperception has real policy and practice implications, and I hope the index provides nuance to the thinking of decision makers.

“Dynamism” and “district deviance” are intriguing but need more research. Two of the measures that I’m most excited about are also ones that have invited scrutiny and criticism: dynamism and district deviance. Dynamism is the entry and exit of schools, nonprofits, and businesses from a city’s education landscape. Too much dynamism can destabilize communities and economies. Too little can keep underperforming organizations operating at the expense of new and potentially better ones. In the private sector, healthy turnover rates are between five and 20 percent, depending on the industry. We don’t know what that number is for education yet, but it’s likely on the low end of the range. More research is needed. Our district deviance measure assumes that districts that spend their money differently from their peers are trying new things, which is good. It’s a novel approach, but its accuracy is vulnerable if the assumptions don’t pan out. Again, more research is needed.
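As a rough illustration of the turnover idea behind dynamism, one common way to compute an annual churn rate is to average entries and exits over the starting population. This formula and the numbers below are assumptions for illustration; the index’s exact definition may differ:

```python
def turnover_rate(entries, exits, organizations_start):
    """Annual churn: average of entries and exits, divided by the starting count.

    Illustrative only -- the index's actual dynamism measure may be defined
    differently.
    """
    return (entries + exits) / 2 / organizations_start

# Hypothetical city: 200 education organizations at the start of the year,
# 14 opened and 10 closed during the year
rate = turnover_rate(14, 10, 200)
print(f"{rate:.1%}")  # 6.0% -- near the low end of the 5-20 percent range
```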

Measure more cities! Everyone wants to see more cities measured with the index for one of two reasons. The first is that they want to know how their city is doing on our nine dimensions. The second is that they want to compare cities to each other. Both make my heart sing. Knowing how a specific city measures up is the first step to improving it. Knowing how it compares to others is the first step to facilitating knowledge transfer and innovation diffusion.