Monday, January 28, 2013

A new start for 2013

I recently took on a long-term development project that involves a certain baby with beautiful blue eyes, so blogging had to move to the back burner. But here is a fresh contribution for this month.

As part of the monitoring and evaluation (M&E) I do for educational programmes, I frequently suggest to clients that they not only consider the question “Did the project produce the anticipated gains?” but that they also answer the question “Was the project implemented as planned?” This is because sub-optimal implementation is, in my experience, almost always to blame for disappointing outcomes in the kinds of initiatives tried out in education.

Consider the example of a project that aims to roll out extra computer lessons in Maths and Science in order to improve learner test scores in those subjects. We not only do pre- and post-testing of learner scores in the participating schools, but we also track how many hours of exposure the kids got, what content they covered, how they reacted to the content, and so on. And we attend the project progress meetings where the project implementer reflects on how the implementation is going. Where we eventually don’t see the anticipated learning gains, we are then able to pinpoint what went “wrong” in the implementation; frequently we can even predict what the outcome will be based on what we know about the implementation. This manual on implementation research, written with the health sector in mind, outlines a more systematic approach to figuring out how best to implement an initiative.
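To make this concrete, here is a minimal, purely illustrative sketch (in Python, with made-up numbers and column names of my own choosing, not data from any actual project) of how learner-level pre- and post-test scores might be put next to simple dosage data, such as hours of computer-lesson exposure, to see whether weak gains line up with weak implementation:

    import pandas as pd

    # Hypothetical learner-level data: pre/post test scores and the hours of
    # computer-lesson exposure each learner actually received (the "dosage").
    data = pd.DataFrame({
        "learner_id": [1, 2, 3, 4, 5, 6],
        "pre_score": [42, 55, 38, 60, 47, 52],
        "post_score": [44, 70, 39, 75, 50, 68],
        "exposure_hours": [5, 30, 4, 32, 8, 28],
    })

    # Gain per learner.
    data["gain"] = data["post_score"] - data["pre_score"]

    # Flag learners who received substantially less than the planned dosage
    # (here assumed to be about 30 hours).
    PLANNED_HOURS = 30
    data["low_dosage"] = data["exposure_hours"] < 0.5 * PLANNED_HOURS

    # Compare average gains for learners who did, and did not, receive the
    # intended amount of the intervention.
    summary = data.groupby("low_dosage")["gain"].agg(["count", "mean"])
    print(summary)

In a real evaluation the dosage data would come from attendance registers or system logs, and the analysis would be done far more carefully, but the basic logic is the same: look at the outcomes alongside what was actually delivered.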

Of course, implementation success and the final outcomes of the project are only worth investigating for interventions where there is some evidence that the intended changes are possible. If there is no such evidence, we sometimes conduct a field trial with a limited number of kids, on limited content, over a short period of time, in an implementation context similar to the one designed for the bigger project. This helps us to answer the question “Under ideal circumstances, can the initiative make a difference in test scores?”

What a client chooses to include in an evaluation is always up to them, but let this be a cautionary tale: a client recently declined to include a monitoring/evaluation component that considered programme fidelity, on the basis that it would make the evaluation too expensive. When we started collecting post-test data for the evaluation, we discovered a huge discrepancy between what happened on the ground and what was initially planned, leaving the donor, the evaluation team and the implementing agency with a situation that has progressed too far to fix easily. Perhaps better internal monitoring could have prevented this situation. But involving the evaluators in some monitoring would definitely have helped too!
