Yesterday, my colleague Lina Bankert wrote about three new federal grant competitions that have just been posted. Those new to the process may find the evaluation requirements and research-design options (explained below) overwhelming. Federal grant applications typically require:
- An evidence-based rationale for the proposed project’s approach, such as a logic model
- Citations of prior research that support key components of the project’s design and meet the rigor thresholds specified by the What Works Clearinghouse
- A description of expected outcomes and the valid, reliable instruments applicants will use to measure them
- An explanation of how the proposed project will be studied to understand its impact
Proposals may be scored by two kinds of reviewers: those with programmatic expertise and those with evaluation expertise. Each section of the application is allocated a set number of points, which sum to a final score that determines which proposals receive awards. The evaluation section can represent up to 25% of the total points, so a strong one can make or break an application.
Writing these sections requires a sophisticated understanding of research methodology and analytical techniques in order to tie the application together with a consistent, compelling evidence base. Our evaluation team at Bellwether has partnered with a number of organizations to help them design programmatic logic models, shore up their evidence base, and write evaluation plans that have contributed to winning applications to the tune of about $60 million. This includes three recent partnerships with Chicago International Charter School, Citizens of the World Charter Schools, and Grimmway Schools — all winners in the latest round of Charter School Program (CSP) funding for the replication and expansion of successful charter networks.