In my experience, the "synthesize findings across evaluations" part gets neglected. In my work as an evaluator contracted to many corporate donors, I am usually required to submit an evaluation report for use by the client. I often have to sign a confidentiality agreement that prohibits me from doing any formal synthesis or sharing, even when I am doing similar work for different clients. Informally, I do share from my experience, but that communication rests on my anecdotal retellings of evidence that has been integrated in a very patchy manner. I try to push and prod clients into talking to each other about common issues, but this rarely results in a formal synthesis.
It is not always feasible for the clients who commission evaluations to do this kind of synthesis themselves. Their in-house evaluation capacity rarely includes meta-analysis skills, and even if they contract a consultant to conduct a meta-analysis of their own evaluations, problems remain. Aggregating findings from a range of evaluations that were never designed with a future meta-analysis in mind requires a bit of a "fruit-salad approach," where apples and oranges, and even some peas and radishes, are thrown together. Another obvious problem is that donors who do not care to share the good, the bad, and the ugly of their programs with the entire world would be hesitant to make their evaluations available for a meta-analysis conducted by another donor.
Perhaps we require a “harmonization” effort among the corporate donors working in the same area?