Friday, December 21, 2007

M&E for Philanthropy

The following shows what makes it so difficult to work with charities!

2 Young Hedge-Fund Veterans Stir Up the World of Philanthropy
Published: December 20, 2007
Holden Karnofsky and Elie Hassenfeld rank charities by analyzing the numbers in much the same way they did at their investment management company
http://www.nytimes.com/2007/12/20/us/20charity.html?ex=1355893200&en=a0d9a701ad60ffd0&ei=5124&partner=permalink&exprod=permalink

In the fall of 2006, they and six colleagues created what Mr. Karnofsky calls a “charity club.” Each member was assigned to research charities working in a specific field and report back on those that achieved the best results. They were stunned by the paucity of information they could collect.

“I got lots of marketing materials from the charities, which look nice, you know, pictures of sheep looking happy and children looking happy, but otherwise are pretty useless,” said Jason Rotenberg, a former member of the club and now a $50,000 donor to the Clear Fund. “It didn’t seem like a reasonable way of deciding between one charity and another.”

GiveWell’s findings are available on the Internet, without charge, at www.givewell.net. In evaluating charities, Mr. Karnofsky and Mr. Hassenfeld press them for information, analyzing the numbers in much the same way they did at Bridgewater. The Smile Train, for instance, a charity that repairs cleft palates, was asked how much it spent in each region and each country to treat how many patients in each.

Many in the field question how long GiveWell can survive. While 34 percent of wealthy donors who responded to a survey sponsored by the Bank of America said they wanted more information on nonprofits, almost three-quarters said they would give more if charities spent less on administration. And collecting information is costly.

As a result, most philanthropic advisory services like GiveWell have a hard time raising money. The Clear Fund has raised $300,000 since its inception this year, about half of which has gone to operating GiveWell.

The point is that you want charities to measure their impact. Although the cost of measuring is sometimes prohibitive, the potential cost of not measuring should be reason enough to make sure that you do.

Monday, November 12, 2007

Probability Sampling Approaches


Probability sampling approaches allow you to generalize to the full population, since they ensure that particular characteristics are likely to be distributed evenly across the units included in, and excluded from, the sample. A probability sample is therefore likely to be less biased, and its results can be said to apply to the full population (provided an appropriate sample size was selected). Different kinds of probability sampling approaches are possible.

The figures below demonstrate the different approaches. Assume each number is a unique member of the population, that each group (shown in columns) consists of discrete, mutually exclusive members of the population, and that each cluster (delineated by a block) is a group of members in the same geographic area.

With simple random sampling, the sample is selected from the whole population using a table of random numbers. Note that this does not necessarily ensure balanced representation amongst different groups.

With stratified random sampling, a set number of participants is selected from each group. Note that this does not necessarily ensure that the most economical approach is used: in the example, cases from almost all of the geographic clusters are included.

With cluster sampling, a set number of clusters is randomly selected (in this case 4), with a set number of randomly selected units within each cluster (in this case 5). Although this is more economical in terms of fieldwork costs, because travel to different clusters has been limited, it does not necessarily guarantee equal representation of groups.

With systematic sampling, a set pattern is systematically applied to select participants. In the example above, every 11th member of the population was selected. Note that this did not require a table of random numbers, but it is still subject to the same limitations as the simple random sample.
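To make the four approaches concrete, the following is a minimal Python sketch of how each sample could be drawn. The population of 220 numbered units, the four strata and the cluster size of 11 are hypothetical values chosen only to mirror the figures described above.

```python
import random

population = list(range(1, 221))  # 220 numbered units standing in for the full population

# Simple random sampling: draw 20 units from the complete list
simple = random.sample(population, 20)

# Stratified random sampling: draw 5 units from each of 4 hypothetical strata
strata = {s: [u for u in population if u % 4 == s] for s in range(4)}
stratified = [u for members in strata.values() for u in random.sample(members, 5)]

# Cluster sampling: split the population into clusters of 11 (e.g. geographic areas),
# randomly select 4 clusters, then randomly select 5 units within each chosen cluster
clusters = [population[i:i + 11] for i in range(0, len(population), 11)]
cluster_sample = [u for c in random.sample(clusters, 4) for u in random.sample(c, 5)]

# Systematic sampling: pick a random starting point, then take every 11th unit
start = random.randrange(11)
systematic = population[start::11]
```

Note how the cluster sample concentrates fieldwork in only 4 clusters, while the stratified sample guarantees 5 units per stratum; the trade-offs summarised below follow directly from these selection rules.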

Types of Probability Samples

Simple Random Sampling (i.e. randomly select 50 schools off a list of all schools in the country)
When is it applicable: It is ideal for statistical purposes.
Drawbacks:
  • It may be difficult to achieve in practice
  • It requires a precise list of the whole population
  • It is costly to conduct, as those sampled may be spread over a wide area.

Stratified Random Sampling (i.e. randomly select 50 schools per stratum, such as province)
When is it applicable:
  • It ensures better coverage of the population than simple random sampling.
  • It is administratively more convenient to stratify a sample – interviewers can be specifically trained to manage particular strata (e.g. age, gender, ethnic or language groups).
Drawbacks:
  • It can be difficult to identify appropriate strata.
  • It is more complex to organise and to analyse the results.

Cluster Sampling (i.e. split the schools in a province into geographical clusters, select 10 clusters randomly, and then visit 20 schools within each cluster)
When is it applicable: It is more cost-effective in terms of travel, thereby reducing the overall cost of fieldwork.
Drawbacks:
  • Units in a cluster may be very similar and are therefore less likely to represent the whole population.
  • Cluster sampling has a larger sampling error than simple random sampling.

Systematic Sampling (i.e. a set pattern is applied to the data set, e.g. every 11th member is selected)
When is it applicable: It spreads the sample more uniformly over the population and is easier to conduct than simple random sampling.
Drawbacks: The system may interact with a concealed pattern in the population.

Thursday, November 08, 2007

Local monitoring of Public Service Delivery

I find the readings on the pelican listserve (Pelican Initiative: Platform for Evidence-based Learning & Communications for Social Change) always interesting. Today the message below was posted.

I find the description of Social Auditing quite interesting. At first I thought it might be similar to Social Accounting - the move by private companies to also account for the triple bottom line of environmental, social and economic impacts (costs and benefits) in order to promote corporate responsibility. We've actually done a couple of sustainability reports using this framework. The focus, however, is more on accountability than on changing things for improvement. I hope it is not the same for this kind of local government monitoring.

I have also previously posted something on the MSC technique described here. I think it is quite useful if there are no clearly defined objectives to start off with, or if the social reality is very complex and requires more of a systems look at the effects. In any case, you decide for yourself!

PS. the AEA conference is on in Baltimore this week, and although I am not able to attend, my business partner is. It sounds like it is interesting and stimulating as always!

Ciao

B

**********************************************

Last month, a number of useful documents and experiences were shared by Gilles Mersadier, who is the coordinator of the FIDAfrique network (http://www.fidafrique.net/article413.html). Among the material that he shared, he referred to a methodology for capitalisation (the process of sharing experiences among and across organisations) that is being used in the context of 25 West-African rural development projects. To date, 20 of these projects have published different types of documents that are disseminated throughout the FIDAfrique network. The production of these documents, which are available on the network's website, is supported by a methodological guide on the capitalisation process (in French and English):
http://www.fidafrique.net/article467.html?var_recherche=capitalization

In the past few weeks, different contributions have been sent in the context of the discussion around local monitoring of public service delivery. One of the questions which we posed at the beginning of this discussion focused on the types of approaches that are being used for local monitoring purposes. In this message, I would like to briefly describe two such approaches:

1: Social Auditing
2: Most Significant Change


**** 1: Social Auditing

The Social Auditing method has been developed and used for participatory monitoring of public service delivery by the organisation CIET (for more info on the organisation, which started in 1985 in Mexico and developed into an international network, please visit http://www.ciet.org/en/aboutciet/ ).

The method's primary aim is to 'increase the informed interaction between communities and public services'. The impact, coverage and costs of public services are examined through a combination of quantitative (survey) and qualitative (key informant interviews and focus groups) evidence. Civil society plays a central role in interpreting this evidence, and through this process contributes to the creation of local solutions. The use of both 'hard' and 'soft' evidence helps to provide a strong and accurate underpinning to the locally defined ideas and solutions, and as such strengthens the legitimacy of the solutions that are developed.

A report that was published in 2005 on the use of the method for assessing 'governance and delivery of public services' in Pakistan lists the following seven stages of a Social Audit cycle:
(1) Clarify the strategic focus;
(2) Design sample and instruments, pilot testing;
(3) Collect information from households on use and perception of public services;
(4) Link this with information from the public services;
(5) Analyse the findings in a way that points to action;
(6) Take findings back to the communities for their views about how to improve the situation;
(7) Bring evidence and community voice into discussions between service providers, planners and community representatives to plan and implement changes.

The CIET website features an extensive library section where the reports of previous social audits can be accessed, together with other experiences relating to the network's central focus on the 'socialisation of evidence':
http://www.ciet.org/en/browse/librarydocs/


**** 2: Most Significant Change

Central in the Most Significant Change (MSC) technique is - as the name suggests - the collection of significant change stories that emerge from the field level. Following the collection of these stories, those stories which are considered most significant are selected by panels of designated stakeholders or staff. The collection, discussion and further selection of the stories revolves around 'domains': the areas that the stakeholders collectively decide on as the focus of the monitoring. Provided that it is clear who is involved in selecting the stories at the different levels, the use of the domains makes for a transparent monitoring process. Given these characteristics, and the fact that it is relatively easy to learn how to use it, the technique can be useful in the context of local monitoring of public service delivery.

While an English guide about the technique has been available since 2005
(see: http://www.mande.co.uk/docs/MSCGuide.htm ), efforts have also been made to translate the guide into different international and local languages, including Spanish, French, Russian, Tamil and Indonesian. Rick Davies has set up a specific Web Log to make these translations available, and to allow users to share suggestions to further improve the quality of the translations. You can access this website here:

http://mscguide-translations.blogspot.com/


Our current discussion on the topic of local monitoring of public service delivery is moving towards an end, so it would be great if some of you could still share some experiences and ideas on this topic in case you did not yet have the time to do so. We will aim to send around a summary of the key points that have been contributed sometime next week.

Best wishes,
Niels

Thursday, August 16, 2007

What is an Evaluator?

Do you find it difficult to explain to people what you do? Those magical two sentences that will get people to go "Aaaaaah, now I get what you do!"? Unfortunately I have not been able to come up with something concrete yet. But I am still trying.

In a television interview to publicise the SAMEA conference, the DDG from the PSC, Mr. Mash Dipofu, tried to explain it with an example. He asked the TV presenter if he knew what the viewers thought of his programme, how it could be improved and how many people actually watch it. He explained that by answering these questions, you are doing what an evaluator would be doing and answering the underlying question: does what I am doing have value?

Which got me thinking. So much of what we do as evaluators is also done by other professionals.

  • We are a little like investigative journalists: We talk to people, ask questions and gather information to make an argument for or against something. Sometimes it is to inform readers of some wrongdoing... Sometimes we celebrate what has been achieved.
  • Then we are also a little like the weather guy. We collect numbers over a long period of time and by applying some statistical techniques we can start predicting what will happen in future.
  • Another way of looking at our job is to compare it to that of teachers. We guide people to learn from their environments - assuming that they need to be taught how to use the information at their disposal to make intelligent choices. We check with tests whether the intended result has been achieved... much like teachers check whether their students have mastered a skill or knowledge component.
  • And then of course evaluators are also a little like an auditor in the way that we try to prove to people that money has been well spent.

The difficulty in explaining to people what we do probably arises because people tend to confuse it with research and planning and implementation and all kinds of other things. I have also spent some time thinking about how being an evaluator is different from being a researcher, a planner and an implementer.

  • Because we use research techniques to collect evidence in order to evaluate, the difference between being a researcher (who asks questions in a specific way to gather evidence) and an evaluator (who asks questions in a specific way to gather evidence to then make a value judgment about the evaluand) is sometimes a little difficult to explain. But there is a difference!
  • Planning, on the other hand, comes quite naturally when you are an evaluator. After delivering an evaluation, I frequently get asked to assist in planning processes - if people value what you produced in the evaluation, they want to make sure that they plan to implement the recommendations made. Being a weather guy and an auditor makes it easier to plan, because you can draw information together to make predictions, and you know that you will have to explain to people why you chose to spend their money in a particular way.
  • I think evaluators will probably make terrible implementers. As an evaluator you are constantly asking questions: Is this the best way to do things? Will we achieve results? How would we know that we added value? How do we know that this is the best way forward? To implement, however, you sometimes have to say "Well I don't know all the answers but I am making a decision to do ABC in the following way and that is the way it is!"
Despite having useful analogies to explain what evaluators do and don't do, I think that we are at risk if we, as practitioners of a scientific metadiscipline, don't understand how the ideology underlying evaluation is different from those informing other jobs. Evaluators might share some commonalities with teachers, weather people, auditors and journalists, but we have different values and assumptions guiding our work. We cannot forget that our work probably has a deeply political nature, because at some stage we have to choose whose questions we will answer.

I found the following useful bit about evaluation approaches and the underlying philosophy, epistemology and ontology at http://www.recipeland.com/facts/Evaluation

Classification of approaches

Two classifications of evaluation approaches, by House (House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher, 7(3), 4-12) and by Stufflebeam and Webster (Stufflebeam, D. L., & Webster, W. J. (1980). An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis, 2(3), 5-19), can be combined into a manageable number of approaches in terms of their unique and important underlying principles.

House considers all major evaluation approaches to be based on a common ideology, liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual, and empirical inquiry grounded in objectivity. He also contends they all are based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which “the good” is determined by what maximizes some single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist / pluralist, in which no single interpretation of “the good” is assumed and these interpretations need not be explicitly stated nor justified.

These ethical positions have corresponding epistemologies—philosophies of obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic. In general, it is used to acquire knowledge capable of external verification (intersubjective agreement) through publicly inspectable methods and data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic. It is used to acquire new knowledge based on existing personal knowledge and experiences that are (explicit) or are not (tacit) available for public inspection.

House further divides each epistemological approach by two main political perspectives. Approaches can take an elite perspective, focusing on the interests of managers and professionals. They also can take a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups according to their orientation toward the role of values, an ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value actually might be. They call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object. They call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of some object. They call this true evaluation.

http://www.recipeland.com/facts/Evaluation



Wednesday, July 11, 2007

Rant for today*

A potential client sends out Terms of Reference requesting potential service providers to submit a quotation for a small (maybe 5 to 10 person-day) evaluation engagement. Note, they did not ask for a proposal, they asked for a quotation. Given the limited scope of the project, a quotation makes sense. So that is what I submit.

Then the potential client reads through the quotations they received and *horror* *shock* discovers that there isn’t enough information in the quotations to make a transparent decision. (Maybe there was a little problem with the Terms of Reference?) They then set up a meeting in which they expect potential service providers to present to a panel, “for the purposes of meeting with the potential evaluators and ensuring the selection process is fair and transparent”. (Have I mentioned that this is a small job?)

So I go and have a wonderful meeting with the potential client, but in the end I do not get the job. My heart isn’t broken or anything. It would’ve been nice to get the job but… ah well, I think they were looking for a content specialist rather than an evaluation specialist in the first place.

But hey, I am an evaluator, and evaluative thinking requires me to find out why I was not successful. So I write the “Thanks for the notification, we are disappointed, could you please tell us where our submission was weak… bla-di-blah” email. To which I don’t receive a response. I get my office manager to follow up, and we get a response of the kind: “We liked your presentation / Sorry, we don’t have time to provide feedback / now please just leave us alone”.

What happened to the transparency and fairness thing? Am I unreasonable to think that a two line explanatory email is not much to ask after all of the trouble they put me through?

Ahem… The UK Evaluation Society has some guidelines for persons who commission evaluations, at:

http://www.evaluation.org.uk/Pub_library/PRO2907%20-%20UKE.%20A5%20Guideline.pdf

Maybe more people should read that?

*Because this is my blog, I get to complain here every now and again. I promise that this will not turn into one of those rant-upon-rant blogs, but I really needed to get the above off my chest.

Whew! It's been like how long?

I see I haven't posted anything here since April. That is probably because I have nothing to write about when I have time to write, and when I do have something to write about, I can't find the time. So here is a list of things I would like to address at some stage:

* What people think a focus group is and what it really is
* Rapid assessment methods - differences in approaches (I need to draw up a table based on that AJE article I read yesterday)
* When people should consider appointing an evaluation specialist rather than a content specialist for certain evaluations.
* If evaluators do strategic planning, what is it that they should know about planning? AND if planners or anybody else do evaluations, what is it they need to know about evaluation?
* People say evaluation is a meta-discipline. Why do they say that?
* All this talk about accreditation is just confusing everybody. What does it mean, and what about the international body of work that has been done on this aspect?

Wednesday, April 25, 2007

Handy Publication: New Trends in Evaluation

You know how you always come back with a stack of stuff to read when you've attended an evaluation conference? In most cases the material just gets added to my ever-growing "to read" pile. It is only once I start searching for something or decide to spring-clean that I actually sit down and read some of the stuff. This morning I came across a copy of UNICEF / CEE/CIS and IPEN's New Trends in Evaluation.

What a delightfully simple, straightforward publication - yet it packs so much relevant information between its two covers. I wish I had remembered it last week when I lectured to students at UJ. Before I was able to get on with the lecture on Participatory M&E, I first had to explain how M&E is similar to and different from Social Impact Assessments (in the sense of ex-ante, Environmental Impact Assessment-type assessments). I think it would have been a very handy introductory source to have.

The table of contents looks as follows:

1. Why Evaluate?
The evolution of the evaluation function
The status of the evaluation function worldwide
The importance of Evaluation Associations and Networks
The oversight and M&E function
2. How to Evaluate?
Evaluation culture: a new approach to learning and change
Democracy and Evaluation
Democratic Approach to Evaluation
3. Programme Evaluation Development in the CEE/CIS

But what is really useful are the Annexures:
Annex 1: Internet-Based Discussion Groups Relevant to Evaluation
Annex 2: Internet Websites Relevant to Evaluation
Annex 3: Evaluation Training and Reference Sources Available Online
Annex 4-1: UNEG Standards for Evaluation in the UN System
Annex 4-2: UNEG Norms for Evaluation in the UN System
Annex 5: What goes into a Terms of Reference; UNICEF Evaluation Technical Notes, Issue No. 2

The good thing about this publication is that you can download it for free off the internet at

http://www.unicef.org/ceecis/New_trends_Dev_EValuation.pdf

An introductory blurb and a presentation are also available from the IOCE website.

http://ioce.net/news/news_articles/061023_unicef-ipen.shtml

Tuesday, April 10, 2007

Presentation Presented at the SAMEA conference 26 - 30 March 2007

Based on some of the ideas in previous blog entries, I gave the following presentation at the recent SAMEA conference.

Setting Indicators and Targets for Evaluation of Education Initiatives

Introduction

  • Good Evaluation Indicators and Targets are usually an important part of a robust Monitoring and Evaluation system.
  • Although evaluation indicators are usually considered important, not all evaluations have to make use of a set of pre-determined indicators and targets.
  • The most significant change (MSC) technique, for example, looks for stories of significant change amongst the beneficiaries of a programme, and after the fact uses a team of people to determine which of these stories represent the most significant change and real impact.
  • You have to include the story around the indicators in your evaluation reports in order to learn from the findings.
What do we mean?
  • The definition of an Indicator is: “A qualitative or quantitative reflection of a specific dimension of programme performance that is used to demonstrate performance / change”
  • It is distinguished from a Target which: “Specifies the milestones / benchmarks or extent to which the programme results must be achieved”
  • And it is also different from a Measure, which is: “The Tool / Protocol / Instrument / Gauge you use to assess performance” (see the illustrative sketch below)
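As a purely illustrative sketch of how the three terms relate, the structure below keeps the indicator, the measure and the target distinct; the field names and the maths example are my own assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class IndicatorSpec:
    """Illustrative only: keeps the indicator, the measure and the target distinct."""
    indicator: str  # reflection of a specific dimension of programme performance
    measure: str    # the tool / protocol / instrument / gauge used to assess performance
    target: str     # the milestone or extent to which the result must be achieved

maths_performance = IndicatorSpec(
    indicator="% of Grade 9 learners scoring 50% or more in mathematics",
    measure="standardised end-of-year mathematics test",
    target="60% of learners reach the 50% threshold by the end of Year 2",
)
```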
Types of Indicators
  • The reason for using indicators is to feel the pulse of a project as it moves towards meeting its objectives, or to see the extent to which those objectives have been achieved. There are different types of indicators:
  • Risk/enabling indicators – external factors that contribute to a project’s success or failure. They include socio-economic and environmental factors, the operation and functioning of institutions, the legal system and socio-cultural practices.
  • Input indicators – also called ‘resource’ indicators, they relate to the resources devoted to a project or programme. Whilst they can flag potential challenges, they cannot, on their own, determine whether a project will be a success or not.
  • Process indicators – also called ‘throughput’ or ‘activity’ indicators. They reflect delivery of resources devoted to a programme or project on an ongoing basis. They are the best indicators of implementation and are used for project monitoring.
  • Output indicators – indicate whether activities have taken place, by considering the outputs from those activities.
  • Outcome indicators – indicate whether your activities delivered a positive outcome of some kind.
  • Impact indicators – concern the effectiveness, usually long term, of a programme or project, as judged by the measurable change achieved in improving the quality of life of beneficiaries or another similar impact-level result.
Good Indicators
  • Good Performance Indicators should be
  • Direct (Does it measure Intended Result?)
  • Objective (Is it unambiguous?)
  • Adequate (Are you measuring enough?)
  • Quantitative (Numerical comparisons are less open to interpretation)
  • Disaggregated (Split up by gender, age, location etc.)
  • Practical (Can you measure it timeously and at reasonable cost?)
  • Reliable (How confidently can you make decisions based on it?) (USAID, 1996)
SMART Indicators
Most people have also heard about SMART indicators:
  • Specific
  • Measurable
  • Action Oriented
  • Realistic
  • Timed
How we use indicators
  • For many of the initiatives whose M&E systems we help to plan, we usually work with the managers to set indicators that they understand and can use.
  • Although data availability and data quality are usually big concerns, it is often the indicators and targets that are set that can make or break an evaluation.
Case Study
  • The implementers of a teacher training initiative want to know if their project is making a difference to the maths and science performance of learners.
Pitfalls
  • Alignment between Indicators & Targets (If the indicator says something about a number, then the target must also be couched in terms of a number, and not a percentage)
  • Averaging out things that do not belong together (e.g. maths and science scores) does not make sense at all.
  • Not disaggregating enough (Are you interested in all learners, or is it important to disaggregate your data by age group, gender or educator?)
  • Assuming that all targets should be about an increase: (Sometimes a trend in the opposite direction exists and it is expected that your programme will only mediate the effects)
  • Assuming that an increase from 20% to 50% is the same as an increase from 50% to 80%. (Psychometricians have used the standardised gain statistic for a very long time; it is interesting that we don’t see more of it in our programmes. See the sketch after this list.)
  • Ignoring the statistics you will use in the analysis: (In some cases you are working with a sample and averages. This means an average increase might look like an increase, but when you test for statistical significance it may turn out not to be a real increase.)
  • Setting indicators that require two measurements where one would be enough (Are you interested in an average increase, or just in the percentage of people that reach some minimum standard?)
  • Ignoring other research done on the topic (If a small effect size is generally reported for interventions of these kinds, isn’t an increase of 30% over baseline a little ambitious?)
  • If you don’t have other research on the topic, it should be allowable to adjust the indicators.
  • Setting an indicator and target that assume direct causality between the project activity and the anticipated outcome (Even if you have brilliant teachers, how can learners perform if they have nowhere to do homework, school discipline is non-existent, and they have accumulated 10 years of conceptual deficits in their education?)
  • Ignoring relevance, efficiency, sustainability and equity considerations. (Is educator training really going to solve the most pressing need? If your programme makes a difference, is it at the same cost as training an astronaut? What will happen if the trained educator leaves? Does the educator training benefit rural learners in the same way in which it would benefit urban learners?)
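As a minimal sketch of the two pitfalls about gains and significance mentioned above: the normalised-gain formula shown here is the common Hake-style version (one possible form of the standardised gain statistic), and the sample data are invented purely for demonstration.

```python
import numpy as np
from scipy import stats

def normalised_gain(pre: float, post: float, maximum: float = 100.0) -> float:
    """Share of the possible improvement that was actually achieved."""
    return (post - pre) / (maximum - pre)

# The same 30-point increase means very different things at different baselines
print(normalised_gain(20, 50))  # 0.375 of the possible gain
print(normalised_gain(50, 80))  # 0.60 of the possible gain

# With sampled data, an apparent average increase may not be statistically significant
rng = np.random.default_rng(0)
pre_scores = rng.normal(45, 15, size=30)               # hypothetical baseline sample
post_scores = pre_scores + rng.normal(3, 12, size=30)  # small, noisy improvement
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"mean gain = {np.mean(post_scores - pre_scores):.1f}, p = {p_value:.3f}")
```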
Ways to address the pitfalls
  • Do a mock data exercise to see how your indicator and target could play out.
  • This will help you think through the data sources, the statistics, and the meaning of the indicator
  • Read extensively about similar projects to determine what the usual effect size is.
  • When you do your problem analysis, be sure to include other possible contributing factors, and don’t try to attribute change if it is not justifiable.
  • Look at examples of other indicators for similar programmes
  • Keep at it and work with someone who would be able to check your proposed indicators with a fresh eye.
Where to look
Example Indicators can be found in:
  • Project / Programme Evaluation reports from multi-lateral donor agencies
  • UNESCO Education For All Indicators
  • Long term donor-funded projects such as DDSP, QIP.
  • StatsSA publications and statistical extracts about the education sector.
  • Government M&E indicators.

Wednesday, March 07, 2007

Announcement from AEA about benefits

The AEA announced that two more Journals are available to members!

Announcement: American Evaluation Association Expands Online Journal Access

AEA Members now receive electronic access to two additional journals - Evaluation and the Health Professions and Evaluation Review - in addition to continued access to AEA's own American Journal of Evaluation and New Directions for Evaluation, as part of membership benefits.

The journals' content is searchable, and archived online content goes back multiple years.

Individual subscriptions to these journals are over $100 each, making AEA membership more of a value than ever at only $80. Members also receive AJE and NDE in hardcopy, discounts on conference and training registration, regular communications about news from all corners of the evaluation community, discounts on books, and the opportunity to participate in the life of the association and the field.

Learn more about AEA and join online at: www.eval.org

Thursday, February 01, 2007

Centre for Global Development

This is the Centre for Global Development report that upset so many people again. I mean, really! Didn't we agree that mixed methods are the way to go? RCTs cannot possibly be the answer to all our impact evaluation questions.


From the CGD website at:

http://www.cgdev.org/content/publications/detail/7973


When Will We Ever Learn? Improving Lives Through Impact Evaluation

05/31/2006

Each year billions of dollars are spent on thousands of programs to improve health, education and other social sector outcomes in the developing world. But very few programs benefit from studies that could determine whether or not they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but denies poor people crucial support to improve their lives.

This report by the Evaluation Gap Working Group provides a strategic solution to this problem: addressing this gap and systematically building evidence about what works in social development would make it possible to improve the effectiveness of domestic spending and development assistance by bringing vital knowledge into the service of policymaking and program design.

In 2004 the Center for Global Development, with support from the Bill & Melinda Gates Foundation and The William and Flora Hewlett Foundation, convened the Evaluation Gap Working Group. The group was asked to investigate why rigorous impact evaluations of social development programs, whether financed directly by developing country governments or supported by international aid, are relatively rare. The Working Group was charged with developing proposals to stimulate more and better impact evaluations. This report, the final report of the working group, contains specific recommendations for addressing this urgent problem.

Wednesday, January 31, 2007

Monitoring without Indicators - Most Significant Change

On the Pelican list today, they sent through this handy reference to something that I think is infinitely useful for gathering proof and evidence when you don't have indicators and stacks of pre-developed evaluation mechanisms.

Check it out at:
http://www.mande.co.uk/docs/MSCGuide.htm and http://www.mande.co.uk/MSC.htm


The guide (Prepared by Rick Davies and Jess Dart) explains the MSC technique as follows:

"The most significant change (MSC) technique is a form of participatory monitoring and evaluation. It is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. It is a form of monitoring because it occurs throughout the program cycle and provides information to help people manage the program. It contributes to evaluation because it provides data on impact and outcomes that can be used to help assess the performance of the program as a whole.
Essentially, the process involves the collection of significant change (SC) stories emanating from the field level, and the systematic selection of the most significant of these stories by panels of designated stakeholders or staff. The designated staff and stakeholders are initially involved by ‘searching’ for project impact. Once changes have been captured, various people sit down together, read the stories aloud and have regular and often in-depth discussions about the value of these reported changes. When the technique is implemented successfully, whole teams of people begin to focus their attention on program impact."

Certainly this looks like a very promising technique!

Tuesday, January 30, 2007

Report Back: Making Evaluation Our Own

A special stream was held on making Evaluation our own at the AfrEA conference. After the conference a small committee of African volunteers worked to capture some of the key points of the discussion. Thanks to Mine Pabari from Kenya for forwarding a copy!

What do you think of this?


Making Evaluation Our Own: Strengthening the Foundations for Africa-Rooted and Africa Led M&E

Overview & Recommendations to AfrEA

Niamey, 18th January, 2007

Discussion Overview

On 18 January 2007 a special stream was held to discuss the topic

Making Evaluation our Own: Strengthening the Foundations for Africa-Rooted and Africa-Led M&E. It was designed to bring together African and other international experiences in evaluation and development evaluation, to help stimulate debate on how M&E, which has generally been imposed from outside, can become Africa-led and owned.

The introductory session aimed to set the scene for the discussion by considering i) What the African evaluation challenges are (Zenda Ofir); ii) The Trends Shaping M&E in the Developing World (Robert Picciotto); and iii) The African Mosaic and Global Interactions: The Multiple Roles of and Approaches to Evaluation (Michael Patton & Donna Mertens). The last presentations explained, among other things, the theoretical underpinnings of evaluation as it is practiced in the world today.

The next session briefly touched on some of the current evaluation methodologies used internationally, in order to highlight the variety of methods that exist. It also stimulated debate over the controversial initiative on impact evaluation launched by the Center for Global Development in Washington. The discussion then moved to consider some of the international approaches that are currently useful or likely to become prominent in finding evidence about development in Africa (Jim Rugh, Bill Savedoff, Rob van den Berg, Fred Carden, Nancy MacPherson & Ross Conner).

The final session aimed to consider some possibilities for developing an evaluation culture rooted in Africa (Bagele Chilisa). In this session, examples were given of how African culture lends itself to evaluation, as well as examples demonstrating that currently used evaluation methodologies could be enriched if they considered an African world view.

Key issues emerging from the presentations and discussion formed the basis for the motions presented below:

  • Currently much of the evaluation practice in Africa is based on external values and contexts, is donor driven, and the accountability mechanisms tend to be directed towards recipients of aid rather than towards both recipients and providers of aid.
  • For evaluation to have a greater contribution to development in Africa it needs to address challenges including those related to country ownership; the macro-micro disconnect; attribution; ethics and values; and power-relations.
  • A variety of methods and approaches are available and valuable in helping to frame our questions and our methods of collecting evidence. However, we first need to re-examine our own preconceived assumptions; underpinning values; paradigms (e.g. transformative vs. pragmatic); what is acknowledged as being evidence; and by whom, before we can select any particular methodology/approach.

The lively discussion that ensued led towards the appointment of a small group of African evaluators to note down suggested actions that AfrEA could spearhead in order to fill the gap related to Africa-Rooted and Africa-Led M&E.

The stream acknowledges and extends its gratitude to the presenters for contributing their time to share their experiences and wealth of knowledge. Also, many thanks to NORAD for its contribution to the stream; and the generous offer to support an evaluation that may be used as a test case for an African-rooted approach – an important opportunity to contribute to evaluation in Africa.

In particular, the stream also extends much gratitude to Zenda Ofir and Dr. Sully Gariba for their enormous effort and dedication to ensure that AfrEA had the opportunity to discuss this important topic with the support of highly skilled and knowledgeable evaluation professionals.


Motions

In order for evaluation to contribute more meaningfully to development in Africa, there is a need to re-examine the paradigms that guide evaluation practice on the continent. Africa rooted and Africa led M&E requires ensuring that African values and ways of constructing knowledge are considered as valid. This, in turn, implies that:

§ African evaluation standards and practices should be based on African values & world views

§ The existing body of knowledge on African values & worldviews should be central to guiding and shaping evaluation in Africa

§ There is a need to foster and develop the intellectual leadership and capacity within Africa and ensure that it plays a greater role in guiding and developing evaluation theories and practices.

We therefore recommend the following for consideration by AfrEA:

o AfrEA guides and supports the development of African guidelines to operationalize the African evaluation standards and; in doing so, ensure that both the standards and operational guidelines are based on the existing body of knowledge on African values & worldviews

o AfrEA works with its networks to support and develop institutions, such as Universities, to enable them to establish evaluation as a profession and meta discipline within Africa

o AfrEA identifies mechanisms in which African evaluation practitioners can be mentored and supported by experienced African evaluation professionals

o AfrEA engages with funding agencies to explore opportunities for developing and adopting evaluation methodologies and practices that are based on African values and worldviews and advocate for their inclusion in future evaluations

o AfrEA encourages and supports knowledge generated from evaluation practice within Africa to be published and profiled in scholarly publications. This may include:

§ Supporting the inclusion of peer reviewed publications on African evaluation in international journals on evaluation (for example, the publication of a special issue on African evaluation)

§ The development of scholarly publications specifically related to evaluation theories and practices in Africa (e.g. a journal of the AfrEA)

Contributors

§ Benita van Wyk – South Africa

§ Bagele Chilisa – Botswana

§ Abigail Abandoh-Sam – Ghana

§ Albert Eneas Gakusi – AfDB

§ Ngegne Mbao – Senegal

§ Mine Pabari - Kenya

More Evaluation Checklists

Last week I put a reference to the UFE Check-list on my blog, and today I received a very useful link with all kinds of other evaluation check-lists on the AfrEA listserv. Try it out at:

http://www.wmich.edu/evalctr/checklists/checklistmenu.htm

It has Check-lists for

* Evaluation Management

* Evaluation Models

* Evaluation Values & Criteria

* Check-lists are useful for practitioners because they help you to develop and test your methodology with a view to improving it for the future.
* They are useful for those who commission evaluations because they remind you of what should be taken into account at all stages of the evaluation process.
* I think, however, that check-lists like these can be particularly powerful if they become institutionalised in practice - if an organisation requires the check-list to be considered as part of a day-to-day business process.

Monday, January 29, 2007

Making Evaluation our Own

At the AfrEA conference, there was a special stream on 'Making Evaluation our Own'. It aimed to investigate where we are in terms of having Africa-rooted, Africa-led evaluations.

I found it particularly useful because it became patently obvious that there are African world views and African methods of knowing that are not yet exploited for Evaluation in Africa. This of course brings the whole debate about "African" Evaluation theories to bear, and asks which kinds of evaluation theories are currently influencing our practice as evaluators in Africa.

Marvin C. Alkin and Christina A. Christie developed what they call the EVALUATION THEORY TREE. It splits the prominent (North-American) evaluation theorists into three big branches: Theories that focus on the use of evaluation, theories that focus on the methods of evaluation and theories that focus on how we value when evaluating. You can find more information about this at http://www.sagepub.com/upm-data/5074_Alkin_Chapter_2.pdf




The second tree is a slightly updated version. It was interesting to note that most of my reading about evaluation has been on "Methods" and "Use".

I think that if we are serious about developing our own African evaluation theories, we might need to develop our own African tree. Bob Picciotto mentioned that the African tree might use the branches of the above tree as roots, and grow its own unique branches.


A small commission from the conference put together a call for Action that outlines some key steps that should be taken if we hope to make progress soon. Hopefully I can post this at a later stage.

Keep well!

Thursday, January 25, 2007

UFE & The difference between Evaluation and Research

At the recent AFREA conference I was again reminded of what we are supposed to be doing in evaluation. Consider the word evaluation: It is about valuing something. Valuing for the purposes of accountability and for learning and improvement.

It is not just research, and although some people have indicated that they get irritated with our attempts at distinguishing evaluation from research, I think it is critically important to distinguish between research and evaluation.

Depending on which paradigm you come from, one might argue that research can be the same as evaluation. I don’t argue with that. What I do have a problem with is people approaching evaluations like research projects where the focus is all on “How do we collect evidence?” The methodology is critically important, agreed, and there is nothing that grates on me more than seeing people use poorly designed evaluation methodologies to collect “evidence”.

But evaluation is not just about how we collect information. Evaluation is supposed to take it a step further and make evaluative judgments based on the data that was collected. Just describing your evaluation findings without saying what they mean is senseless.

It is all good and well if you find information about the level of maths capacity in rural schools interesting, but an evaluation will go further and indicate whether the project is relevant, effective and efficient, has an impact, and is sustainable or creates sustainable results. Without these additional “valuing” judgments, an evaluation is only a research project that may increase our knowledge but doesn’t help us to make decisions.

Something that may help more evaluations to be true evaluations is the Utilization-Focused Evaluation approach of Michael Quinn Patton. It is all about ensuring that an evaluation serves its intended purpose for its intended users. Go ahead – google Utilization Focused Evaluation and see how many hits come up. It is arguably the biggest thing that has hit the evaluation community in the past 30 years, yet many people are blissfully ignorant of it.

For those who commission evaluations, Patton specifically created a checklist that may be of value in making sure that evaluations are useful. www.wmich.edu/evalctr/checklists/ufe.pdf It might need to be adapted for use in your specific setting, but it definitely asks a couple of pretty critical questions about our evaluations.


Go ahead… I dare you to read up more about UFE (Utilization Focused Evaluation) and not be excited about the possibilities that evaluation has!


Have A good day!

PS. I hope to post some more of my thoughts on the AfrEA conference over the next month or so!

Thursday, January 04, 2007

IOCE

The IOCE is an international organisation for cooperation in evaluation, and it has a couple of neat resources on its website:

http://www.ioce.net/resources/reports.shtml

The World Bank's Independent Evaluation Group Finds Progress On Growth, But Stronger Actions Needed For Sustainable Poverty Reduction



The World Bank's Independent Evaluation Group (IEG) is releasing its 2006 Annual Report on Operations Evaluation (AROE)

Joint UNICEF/IPEN Evaluation Working Paper on "New trends in development evaluation"

Resources for Evaluation and Social Research Methods

What Constitutes Credible Evidence in Evaluation and Applied Research?

When Will We Ever Learn: Recommendations to Improve Social Development through Enhanced Impact Evaluation

Very very usable Evaluation Journal - AJE

I’ve just paged through the December 2006 issue of the American Journal of Evaluation, and once again I am impressed.

It is such a usable journal for practitioners like myself, whilst still balancing that with the academic requirements that a journal should have. They do this by including:

  • Articles – That deal with topics applicable to the broad field of program evaluation
  • Forum Pieces – A section where people get to present opinions and professional judgments relating to the philosophical, ethical and practical dilemmas of our profession.
  • Exemplars - Interviews with practitioners whose work demonstrates, in a specific evaluation study, the application of different models, theories and principles described in the evaluation literature.
  • Historical Record - Important turning points within the profession are analyzed, or historically significant evaluation works are discussed.
  • Method Notes – Which includes shorter papers describing methods and techniques that can improve evaluation practice.
  • Book Reviews – Recent books applicable to the broad field of program evaluation are reviewed.

I receive this journal as part of my membership of the American Evaluation Association – at a fraction of the cost of buying the publication on its own.


Go ahead – try it out – Here is a link to its archive:

http://aje.sagepub.com/archive/