Debunking the 5 Myths of Impact Evaluation
There are many misconceptions that obfuscate the impact evaluation landscape, leaving many foundations feeling disempowered and paralyzed. By busting a few of these myths, we hope to clear the way for grantmakers to shake that paralysis and take the first steps on a meaningful impact evaluation journey.
Myth No. 1: The only credible impact evaluation is a randomized controlled trial (RCT).
RCTs are considered the gold standard of impact evaluation methods. They’re undoubtedly important, but they aren’t ideal for every program at every foundation. For example, they’re well suited to studies of the impact of specific drugs or health treatments, because individuals can be effectively randomized and the conditions for delivering and observing the treatment can be controlled. But in the social sector, the circumstances surrounding a program, its participants, and its context are generally far more complex and shaped by unwieldy forces such as climate, public policy, and human nature.
Myth No. 2: Only quantitative data demonstrates impact.
In many cases, collecting both qualitative and quantitative data yields the most insightful information about your impact. For instance, a purely qualitative evaluation can provide valuable information about specific factors associated with the program or individual participants, but it will not explain how common these factors are across programs. Similarly, a purely quantitative evaluation will allow you to determine whether you have a positive or negative impact, but it may not indicate how or why you received the impact you did.
Myth No. 3: We don’t have the money for impact evaluation.
While you can easily drop a pretty penny on an impact evaluation, it’s not always necessary. The cost of an impact evaluation should be determined by your program’s framework, its size, and the context in which it’s implemented. You should also consider where you are in your implementation. If you’re still in your learning stages, your costs will differ from an organization looking to scale up.
Myth No. 4: Positive results mean our program is working.
It’s difficult to attribute a positive effect to your program with an impact evaluation alone. Keep that in mind if you intend to move into impact evaluation without first conducting developmental and implementation evaluations. You may capture important information about your program within a specific setting, but without a validated framework for collecting and making sense of that information, gauging your grantmaking’s effect will be difficult and unreliable at best.
A developmental evaluation ensures a program and its process for change are feasible and appropriate.
An implementation evaluation determines whether program activities hold up “in practice” and can be carried out as intended.
Myth No. 5: Our impact evaluation is complete. Now we’re done!
You’ve trudged your way through the development and implementation evaluations, conducted a meaningful impact evaluation, achieved (hopefully!) great results, and now you just want to relax and forget about impact and evaluations for a little while.
You should give yourself a break. You deserve it! But use that break to gear up for what’s next. An impact evaluation that’s not tied to a learning plan is a missed opportunity. As you move forward through programmatic framework development and implementation, use the information you’ve gathered to guide your future decision-making and strategies.
These five myths have been stumbling blocks for countless foundations, but they don’t have to be. Now that you can identify these misconceptions, you can overcome them, move forward with an effective evaluation, and fine-tune your strategy to maximize your impact. For more detailed information on the 5 Myths of Impact Evaluation, download the complete white paper here.