As part of the M&E (monitoring and evaluation) work I do for educational programmes, I frequently suggest to clients that they not only ask “Did the project produce the anticipated gains?” but that they also answer the question “Was the project implemented as planned?” This is because sub-optimal implementation is, in my experience, almost always to blame for the negative outcomes of the kinds of initiatives tried out in education.
Consider the example of a project that rolls out extra computer lessons in Maths and Science in order to improve learner test scores in those subjects. We not only do pre- and post-testing of learner scores in the participating schools, but we also track how many hours of exposure the kids got, what content they covered, how they reacted to the content, and so on. And we attend the project progress meetings where the implementer reflects on how the implementation is going. Where we eventually don't see the kind of learning gains anticipated, we are then able to pinpoint what went “wrong” in the implementation; in fact, we can frequently predict what the outcome will be based on what we know from the implementation. This manual on implementation research, written with the health sector in mind, outlines a more systematic approach to figuring out how best to implement an initiative.
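To make the idea concrete, here is a minimal sketch of what this kind of analysis can look like. All field names and numbers are invented for illustration; in practice one would of course work with the real monitoring data rather than a hard-coded list:

```python
# A toy sketch, not actual project tooling: all field names and numbers
# here are invented for illustration.
from statistics import mean, correlation  # correlation requires Python 3.10+

# One record per participating learner: pre/post test scores (%) and the
# hours of computer-lesson exposure the learner actually received.
learners = [
    {"pre": 42, "post": 55, "hours": 30},
    {"pre": 38, "post": 41, "hours": 8},
    {"pre": 50, "post": 63, "hours": 28},
    {"pre": 45, "post": 47, "hours": 10},
]

gains = [l["post"] - l["pre"] for l in learners]
hours = [l["hours"] for l in learners]

print(f"Average gain: {mean(gains):.1f} percentage points")
# A rough implementation check: do learners who got more exposure gain more?
print(f"Exposure-gain correlation: {correlation(hours, gains):.2f}")
```

Even a crude check like this can show whether weak gains line up with weak implementation (low exposure hours), which is exactly the pattern that lets us say what went “wrong” rather than simply that the project failed.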
Of course, implementation success and the final outcomes of the project are only worth investigating for interventions where there is some evidence that the intended changes are possible. If there is no such evidence, we sometimes conduct a field trial with a limited number of kids, on limited content, over a short period of time, in an implementation context similar to the one designed for the bigger project. This helps us to answer the question “Under ideal circumstances, can the initiative make a difference in test scores?”