It’s an unfortunately familiar story. You’re invited to a convening on a topic you’re interested in. When you get the agenda, you notice that the day starts with an 8 a.m. breakfast and keynote speaker, which wouldn’t be so bad if it weren’t 5 a.m. your time. After that, it’s back-to-back sessions, some of which are good. In the others, the topics aren’t relevant or the facilitation is shoddy. A working lunch, a reception (with speaker), and dinner are all mandatory, forcing you to choose between skipping out for a break, catching up with colleagues, or giving every minute of your day to the organizers. When you get back to your office and reflect on the experience, you want to connect what you’ve learned to your daily work, but “135 unread emails” is screaming out in bold font.
Why does this happen? The impulse to program every minute of every day surely stems from the desire to take advantage of the unique time together, but it often ends up backfiring. People check email, skip out on sessions to talk with colleagues, or are too mentally and physically fatigued to fully engage with the content.
In a recent series of convenings I organized focused on increasing multiagency coordination and effectiveness, my team and I tried to design the kind of convenings we’d like to attend. That is to say, ones where the content was timely, relevant, and rich. The agendas took into consideration human needs such as movement, rest, and nourishment. The schedules balanced deep learning, reflection, peer-to-peer sharing, and direct application to daily work.
Here are five lessons that we’ve learned creating adult learning environments where critical work can get done: Continue reading →
The word “hope” may appear on the Rhode Island state flag, but it’s in short supply in Providence Public Schools. A recent report from researchers at Johns Hopkins University reveals that students are exposed to “an exceptionally low level of academic instruction” and in some cases, they have to attend school in dangerous buildings with lead paint and asbestos. At fault are byzantine rules and convoluted governance arrangements, the authors argue. Piecemeal reform efforts have not been enough to overcome ossified institutions, leaving unsafe buildings, low-quality instruction, and sub-par teachers shuffling between schools in a “dance of the lemons.”
The situation in Providence is dire, but with the spotlight aimed at the district’s dysfunction, it’s also an important moment to make real, lasting changes. Leaders in Providence — and Rhode Island at large — must focus on systemic change to provide students with safe learning environments and high-quality, rigorous instruction. Reforming an entire school system is a tall order, but other districts with similar challenges show that change is possible. One such example is just 191 miles down I-95: Newark, New Jersey.
Newark’s school system was in serious distress in ways that mirror Providence today: high poverty, dysfunctional bureaucracy, crumbling school buildings, and abysmal student outcomes. A voluminous report detailing the crisis in Newark’s public schools ultimately led to a state takeover in 1995.
Under state management, Newark’s school system was governed by the New Jersey Continue reading →
Research tells us that, overall, Head Start has positive effects on children’s health, education, and economic outcomes. But there is wide variability in quality from program to program — and, as a field, we don’t understand why.
Earlier this year, Sara Mead and I tried to figure that out. We published an analysis, conducted over three years, of several of the highest performing Head Start programs across the country. We specifically looked at programs that produce significant learning gains for children. Our goal was to understand what made them so effective.
As part of this project, we provided detailed, tactical information about exemplars’ design and practices. We hope to serve as a resource and starting point for other Head Start programs interested in experimenting with something new and, potentially, more effective.
Here are three action steps that Head Start programs can take right now to improve their practice: Continue reading →
My colleague Chad Aldeman and I have a new opinion piece out in The 74 Million. In it, we argue that many states are simply ill-equipped to address their rising teacher pension costs and mounting unfunded liabilities. We propose that the federal government has a role to play here, by providing financial assistance in exchange for critical pension reforms:
…The federal government could offer states pension bailouts in exchange for changes that address longer-term systemic issues, such as meeting actuarially required contributions, using more conservative investment assumptions and implementing a risk-sharing pool for underfunded pension plans.
Read the full op-ed here. And check out our new report on teacher pension reform in West Virginia here.
Journalists, as a general rule, use accessible language. Researchers, as a general rule, do not. So journalists who write about academic research and scholarship, like the reporters at Chalkbeat who cover school spending studies, can help disseminate research to education leaders because they write more plainly than the researchers themselves do.
But there’s a danger: it’s easy for research to get lost in translation. Researchers may use language that appears to imply that some practice or policy causes an outcome. Journalists can be misled when terms like “effect size” are used to describe the strength of an association, even though effect sizes do not always represent causal effects.
To help journalists make sense of research findings, the Education Writers Association recently put together several excellent resources for journalists exploring education research, including 12 questions to ask about studies. For journalists (as well as practitioners) reading studies that imply that some program or policy causes the outcomes described, I would add one important consideration (a variation on question 3 from this post): if a study compares two groups, how were people assigned to those groups? This question gets at the heart of whether it’s possible to say that a program or policy caused the outcomes examined, as opposed to simply being correlated with them.
Randomly assigning people creates a strong research design for examining whether a policy or program causes certain outcomes. Random assignment minimizes pre-existing differences among the groups, so that differences in the outcomes can be attributed to the treatment (program or policy) instead of different characteristics of the people in the groups. In the image below, random assignment results in having similar-looking treatment and control groups. Continue reading →