Tag Archives: evidence-informed policy

Evaluators Bring Superpowers to Your Federal Grant Application

Yesterday, my colleague Lina Bankert wrote about three new federal grant competitions that have just been posted. Those who are new to these competitions may find the evaluation requirements and research-design options (explained below) overwhelming. Federal grant applications typically require:

  • An evidence-based rationale for the proposed project’s approach, such as a logic model
  • Citations of prior research that support key components of a project’s design and meet specific thresholds for rigor specified by the What Works Clearinghouse
  • Expected outcomes and how applicants will measure them with valid and reliable instruments
  • Explanation of how the proposed project will be studied to understand its impact

Proposals may be scored by two kinds of reviewers: those with programmatic expertise and those with evaluation expertise. Each section of the application is allocated a certain number of points, which sum to a final score that determines which proposals receive awards. The evaluation section can represent up to 25% of the total points awarded, so a strong one can make or break an application.

Image: comic-style “KAPOW” explosion. Credit: Andrew Martin, Pixabay.

Writing these sections requires a sophisticated understanding of research methodology and analytical techniques in order to tie the application together with a consistent and compelling evidence base. Our evaluation team at Bellwether has partnered with a number of organizations to help them design programmatic logic models, shore up their evidence base, and write evaluation plans that have contributed to winning applications to the tune of about $60 million. This includes three recent partnerships with Chicago International Charter School, Citizens of the World Charter Schools, and Grimmway Schools — all winners in the latest round of Charter School Program (CSP) funding for replication and expansion of successful charter networks.


“Quiet Rooms” and Other Forms of Exclusionary Discipline Are Not Evidence-Based Practices

Every time a reformer proposes a new idea in education, critics and skeptics demand evidence. Our state and federal laws favor evidence-based practices and reward the adoption of practices backed by valid and reliable research. But when it comes to defending the status quo, no one ever seems interested in the evidence.

Last week’s Chicago Tribune piece on the disturbing use of “quiet rooms” as a behavior management strategy indicated that these euphemistically named rooms are in use across the state of Illinois. Children are routinely placed into isolation when they misbehave, under the pretense of behavior management or time to reflect. These rooms are isolation masquerading as a quasi-in-school suspension, and there is, of course, no evidence to support them. In fact, the evidence runs in the opposite direction: “time-outs” actively harm children. That doesn’t seem to stop schools from using them.


A student in Utah sits alone outside his classroom. From Bellwether’s Rigged series.

Beyond the extreme example of Illinois’ “quiet rooms,” isolation and other exclusionary discipline practices are pervasive and, for many, noncontroversial. This includes suspensions and expulsions, which enjoy mainstream support from teachers and policymakers. Stories of suspension and expulsion don’t carry the same visceral horror as these examples from Illinois, but they’re all based on the same fundamentally flawed premise: that you can compel any individual to behave well by demanding obedience through force and deprivation.

The problem with our easy comfort with exclusionary discipline is that it doesn’t work. It doesn’t work in schools — and it doesn’t work in any other context either.

Correlation is Not Causation and Other Boring but Important Cautions for Interpreting Education Research

Journalists, as a general rule, use accessible language. Researchers, as a general rule, do not. So journalists who write about academic research and scholarship, like the reporters at Chalkbeat who cover school spending studies, can help disseminate research to education leaders by translating it into plainer language.

But the danger is that research can easily get lost in translation. Researchers may use language that appears to imply that some practice or policy causes an outcome. Journalists can be misled when terms like “effect size” are used to describe the strength of an association, even though such estimates are not always causal effects.
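
To make the distinction concrete, here is a minimal sketch in Python (with made-up numbers, not drawn from any study) of how a standardized effect size such as Cohen’s d is calculated. Nothing in the calculation depends on how students ended up in each group, which is why an “effect size” computed from an observational comparison describes the size of an association, not necessarily a causal effect.

    import statistics

    # Made-up outcome scores for students in a program and a comparison group.
    # Nothing here records whether students were randomly assigned or self-selected.
    program_group = [72, 75, 78, 80, 74, 77, 79, 81]
    comparison_group = [70, 73, 71, 76, 69, 74, 72, 75]

    def cohens_d(group_a, group_b):
        """Standardized mean difference: (mean_a - mean_b) / pooled standard deviation."""
        mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
        var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
        n_a, n_b = len(group_a), len(group_b)
        pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
        return (mean_a - mean_b) / pooled_sd

    # The result describes how far apart the group means are, in standard
    # deviation units; it says nothing about what caused the difference.
    print(f"Effect size (Cohen's d): {cohens_d(program_group, comparison_group):.2f}")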

To help journalists make sense of research findings, the Education Writers Association recently put together several excellent resources for journalists exploring education research, including 12 questions to ask about studies. For journalists (as well as practitioners) reading studies that imply that some program or policy causes the outcomes described, I would add one important consideration (a variation on question 3 from this post): if a study compares two groups, how were people assigned to the groups? This question gets at the heart of what makes it possible to say whether a program or policy caused the outcomes examined, as opposed to simply being correlated with those outcomes.

Randomly assigning people creates a strong research design for examining whether a policy or program causes certain outcomes. Random assignment minimizes pre-existing differences among the groups, so that differences in the outcomes can be attributed to the treatment (program or policy) rather than to differences in the characteristics of the people in the groups. In the image below, random assignment results in similar-looking treatment and control groups.
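
A quick simulation makes this concrete. The sketch below, in Python with simulated students (illustrative only, not data from any real study), randomly splits a group in half and shows that a pre-existing characteristic such as a prior test score comes out nearly identical in the two groups, which is what makes it reasonable to attribute later differences in outcomes to the treatment.

    import random
    import statistics

    random.seed(2019)

    # Simulate 1,000 hypothetical students with a pre-existing characteristic
    # (a prior test score) that plays no role in how they are assigned to groups.
    students = [{"prior_score": random.gauss(250, 40)} for _ in range(1000)]

    # Random assignment: shuffle the list, then split it in half.
    random.shuffle(students)
    treatment = students[: len(students) // 2]
    control = students[len(students) // 2 :]

    avg_treatment = statistics.mean(s["prior_score"] for s in treatment)
    avg_control = statistics.mean(s["prior_score"] for s in control)

    # Because assignment was random, the groups look alike before the program
    # begins, so a later difference in outcomes can be attributed to the program.
    print(f"Treatment group average prior score: {avg_treatment:.1f}")
    print(f"Control group average prior score:   {avg_control:.1f}")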

Which Aspects of the Work Environment Matter Most for New Teachers?

As a member of Bellwether’s evaluation practice, there’s nothing I love more than connecting research with policy and practice. Fortunately, I’m not alone: The National Center for Analysis of Longitudinal Data in Education Research (CALDER) has launched several initiatives to succinctly describe empirical research on contemporary topics in education and encourage evidence-based policymaking.

At CALDER’s recent 12th annual conference, I had the opportunity to serve as a discussant in a session on the career trajectories of teachers. The papers in this session illustrated the potential for research to inform policy and practice, but also left me wondering about the challenges policymakers often face in doing so.

“Taking their First Steps: The Distribution of New Teachers into School and Classroom Contexts and Implications for Teacher Effectiveness and Growth” by Paul Bruno, Sarah Rabovsky, and Katharine Strunk uses data from the Los Angeles Unified School District to explore how classroom and school contexts, such as professional interactions, are related to teacher quality and teacher retention. Their work builds on prior research suggesting that school contexts are associated with the growth and retention of new teachers. As my Bellwether colleagues have noted, to ensure quality teaching at scale, we need to consider how to restructure initial employment to support new teachers in becoming effective.

In “Taking their First Steps,” the researchers developed four separate measures to understand the context in which new teachers were operating:

  • “Instructional load” combined twelve factors, including students’ prior-year performance, prior-year absences, prior-year suspensions, class size, and the proportion of students eligible for free or reduced-price lunch, eligible for special education services, or classified as English learners.
  • “Homophily” was measured by a teacher’s similarity to students, colleagues, and administrators in terms of race and gender.
  • “Collegial qualifications” consisted of attributes such as years of experience, National Board certification, and evaluation measures.
  • “Professional culture” was a composite of survey responses regarding the frequency and quality of professional interactions at teachers’ school sites.
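
As a rough illustration of how a composite index like this can be built, here is a simplified sketch in Python (invented classroom data, equal weights, and only four of the twelve factors; not the authors’ actual method) that standardizes each factor and averages them into a single “instructional load” score per classroom.

    import statistics

    # Hypothetical classroom records; the paper's actual measure combines twelve factors.
    classrooms = [
        {"class_size": 32, "pct_frl": 0.85, "prior_score": 240, "prior_absences": 12},
        {"class_size": 24, "pct_frl": 0.40, "prior_score": 270, "prior_absences": 5},
        {"class_size": 28, "pct_frl": 0.65, "prior_score": 255, "prior_absences": 8},
    ]

    def z_scores(values):
        """Standardize a list of values to mean 0 and standard deviation 1."""
        mean, sd = statistics.mean(values), statistics.pstdev(values)
        return [(v - mean) / sd for v in values]

    # Higher values should indicate a heavier load, so flip the sign on prior scores.
    factors = {
        "class_size": z_scores([c["class_size"] for c in classrooms]),
        "pct_frl": z_scores([c["pct_frl"] for c in classrooms]),
        "prior_score": [-z for z in z_scores([c["prior_score"] for c in classrooms])],
        "prior_absences": z_scores([c["prior_absences"] for c in classrooms]),
    }

    # Equal-weight composite: average the standardized factors for each classroom.
    for i in range(len(classrooms)):
        load_index = statistics.mean(factors[name][i] for name in factors)
        print(f"Classroom {i + 1}: instructional load index = {load_index:+.2f}")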

Which of these factors were related to teachers’ observation ratings and attendance? As seen in the figures below, instructional load had a significant negative relationship with observation ratings, meaning teachers with higher instructional loads (for example, classes with lower prior performance, more prior absences and suspensions, or larger class sizes) received lower ratings. Professional culture, on the other hand, had a significant positive relationship with observation ratings: in schools where teachers had more and higher-quality professional interactions, new teachers received higher ratings. Instructional load also had a strong negative relationship with attendance, meaning teachers with higher instructional loads took more personal days or used more sick leave.

Figure based on Katharine Strunk’s presentation from January 31, 2019.
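
For readers curious about the mechanics, the sketch below uses synthetic Python data with an assumed negative relationship (not the study’s data or model) to fit a simple one-predictor regression of observation ratings on an instructional load index and report the slope; the paper’s actual models control for many more teacher and school characteristics.

    import random
    import statistics

    random.seed(31)

    # Synthetic, illustrative data: a standardized instructional load index and an
    # observation rating for 200 hypothetical new teachers, generated with a
    # built-in negative relationship plus noise (this is NOT the study's data).
    load = [random.gauss(0, 1) for _ in range(200)]
    rating = [3.0 - 0.25 * x + random.gauss(0, 0.4) for x in load]

    def ols_slope(x, y):
        """Ordinary least squares slope for one predictor: cov(x, y) / var(x)."""
        mean_x, mean_y = statistics.mean(x), statistics.mean(y)
        cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (len(x) - 1)
        return cov / statistics.variance(x)

    slope = ols_slope(load, rating)
    intercept = statistics.mean(rating) - slope * statistics.mean(load)

    # A negative slope mirrors the finding that heavier instructional loads go
    # with lower observation ratings.
    print(f"Estimated slope: {slope:+.2f} rating points per 1 SD of instructional load")
    print(f"Estimated intercept: {intercept:.2f}")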
