Impact Evaluation Guidance Note and Webinar Series
With financial support from the Rockefeller Foundation, InterAction developed a four-part series of guidance notes and webinars on impact evaluation. The purpose of the series is to build the capacity of NGOs (and others) to demonstrate effectiveness by increasing their understanding of, and ability to conduct, high-quality impact evaluation.
Each guidance note is accompanied by two webinars. In the first webinar, the authors present an overview of their note. In the second webinar, two organizations - typically NGOs - present on their experiences with different aspects of impact evaluation. In addition, each guidance note has been translated into several languages, including Spanish and French. Webinar recordings, presentation slides and the translated versions of each note are provided on the website.
The four guidance notes in the series are:
- Introduction to Impact Evaluation, by Patricia Rogers, Professor in Public Sector Evaluation, RMIT University
- Linking Monitoring & Evaluation to Impact Evaluation, by Burt Perrin, Independent Consultant
- Introduction to Mixed Methods in Impact Evaluation, by Michael Bamberger, Independent Consultant
- Use of Impact Evaluation Results, by David Bonbright, Chief Executive, Keystone Accountability
Wednesday, May 28, 2014
Impact Evaluation Guidance for Non-profits
InterAction has a lovely guidance note and webinar series on impact evaluation available on its website.
Wednesday, May 21, 2014
Resources on Impact Evaluation
This post consolidates a list of impact evaluation resources that I usually refer to when I am asked about impact evaluations.
This cute video explains, in two minutes, the factors that distinguish impact evaluation from other kinds of evaluation. Of course, randomization isn't the only way of credibly attributing cause and effect - and this is a particularly hot evaluation methodology debate. For an example of why this debate is sometimes irrelevant, see this write-up on parachutes and Chris Lysy's cartoons on the topic.
The impact evaluation debate flared up after the report titled "When Will We Ever Learn?" was released in 2006. In the United States there was also a prominent funding mechanism (from about 2003) that required programmes to include experimental evaluation methods in their design or forgo funding.
The bone of contention was that Randomized Controlled Trials (RCTs) and experimental methods (and, to some extent, quasi-experimental designs) were held up as the "gold standard" in evaluation.
Which, in my opinion, is nonsense. So the debate about what counts as evidence started again. The World Bank and big corporate donors were perceived to push for experimental methods. Evaluation associations (with members committed to mixed methods) pushed back, saying that methods can't be determined without knowing what the questions are. And others pushed back saying that RCTs are probably applicable in only about 5% of the cases in which evaluation is necessary.
The methods debate in evaluation is really an old debate. Some really prominent evaluators decided to leave the AEA because it took a position that they equated with "the flat earth movement" in geography. Here is a nice overview article ("The 2004 Claremont Debate: Lipsey vs. Scriven. Determining Causality in Program Evaluation and Applied Research: Should Experimental Evidence Be the Gold Standard?") to summarise some of it.
The Network of Networks on Impact Evaluation (NONIE) then sought to write a guidance document, but even after it was released, there was a feeling that not enough had been said to counter the "gold standard" mentality. The document titled "Designing Impact Evaluations: Different Perspectives" provides a bit more information on the "other views".
Literature on Impact Evaluation Methods
If you are interested in literature on Evaluation Methods, look at Better Evaluation to get a quick overview.
I like Cook, Campbell and Shadish to understand experimental and quasi-experimental methods, but this online knowledge base resource is good too.
For resources on other, more mixed-methods approaches to impact evaluation, look at Realist Synthesis, the General Elimination Method, Theory-Based Evaluation, and something that I think has potential: the Collaborative Outcomes Reporting approach.
The South African Department of Performance Monitoring and Evaluation's guideline on impact evaluation is also relevant if you are interested in work in the South African context.