Wednesday, April 25, 2007

Handy Publication: New Trends in Development Evaluation

You know how you always come back with a stack of stuff to read when you've attended an evaluation conference? In most cases the material just gets added to my ever-growing "to read" pile. It is only once I start searching for something or decide to spring-clean that I actually sit down and read some of the stuff. This morning I came across a copy of UNICEF / CEE/CIS and IPEN's New Trends in Development Evaluation.

What a delightfully simple, straightforward publication - yet it packs so much relevant information between its two covers. I wish I had remembered it last week when I lectured to students at UJ. Before I was able to get on with the lecture on Participatory M&E, I first had to explain how M&E is similar to, and different from, Social Impact Assessments (in the sense of ex-ante, Environmental Impact Assessment-type assessments). I think it would have been a very handy introductory source to have.

The table of contents looks as follows:

1. Why Evaluate?
The evolution of the evaluation function
The status of the evaluation function worldwide
The importance of Evaluation Associations and Networks
The oversight and M&E function
2. How to Evaluate?
Evaluation culture: a new approach to learning and change
Democracy and Evaluation
Democratic Approach to Evaluation
3. Programme Evaluation Development in the CEE/CIS

But what is really useful are the Annexures:
Annex 1: Internet Based Discussion Groups Relevant to Evaluation
Annex 2: Internet Websites Relevant to Evaluation
Annex 3: Evaluation Training and Reference Sources Available Online
Annex 4-1: UNEG Standards for Evaluation in the UN System
Annex 4-2: UNEG Norms for Evaluation in the UN System
Annex 5: What goes into a Terms of Reference; UNICEF Evaluation Technical Notes, Issue No. 2

The good thing about this publication is that you can download it for free off the internet at

http://www.unicef.org/ceecis/New_trends_Dev_EValuation.pdf

An introductory blurb and a presentation are also available from the IOCE website.

http://ioce.net/news/news_articles/061023_unicef-ipen.shtml

Tuesday, April 10, 2007

Presentation delivered at the SAMEA conference, 26 - 30 March 2007

Based on some of the ideas in previous blog entries, I delivered the following presentation at the recent SAMEA conference.

Setting Indicators and Targets for Evaluation of Education Initiatives

Introduction

  • Good Evaluation Indicators and Targets are usually an important part of a robust Monitoring and Evaluation system.
  • Although evaluation indicators are usually considered important, not all evaluations have to make use of a set of pre-determined indicators and targets.
  • The most significant change (MSC) technique, for example, looks for stories of significant change amongst the beneficiaries of a programme, and after the fact uses a team of people to determine which of these stories represent MSC and real impact.
  • You have to include the story around the indicators in your evaluation reports in order to learn from the findings.
What do we mean?
  • The definition of an Indicator is: “A qualitative or quantitative reflection of a specific dimension of programme performance that is used to demonstrate performance / change”
  • It is distinguished from a Target which: “Specifies the milestones / benchmarks or extent to which the programme results must be achieved”
  • And it is also different from a Measure, which is: “The Tool / Protocol / Instrument / Gauge you use to assess performance” (a small illustrative example follows below)
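To make the distinction concrete, here is a minimal Python sketch of how an indicator, its target, and its measure might be recorded side by side. The maths pass-rate example is purely hypothetical and only meant to illustrate the three definitions above.

```python
from dataclasses import dataclass


@dataclass
class IndicatorRecord:
    """Keeps the indicator, its target, and its measure distinct but together."""
    indicator: str  # what dimension of performance is reflected
    target: str     # the milestone / extent to which the result must be achieved
    measure: str    # the tool / protocol / instrument used to assess performance


# Hypothetical example for an education initiative
maths_pass_rate = IndicatorRecord(
    indicator="Percentage of Grade 9 learners passing the maths assessment",
    target="At least 60% of learners pass by the end of year two",
    measure="Standardised end-of-year maths test administered to all learners",
)

print(maths_pass_rate)
```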
Types of Indicators
  • The reason for using indicators is to feel the pulse of a project as it moves towards meeting its objectives, or to see the extent to which those objectives have been achieved. There are different types of indicators:
  • Risk/enabling indicators – external factors that contribute to a project’s success or failure. They include socio-economic and environmental factors, the operation and functioning of institutions, the legal system and socio-cultural practices.
  • Input indicators – also called ‘resource’ indicators, they relate to the resources devoted to a project or programme. Whilst they can flag potential challenges, they cannot, on their own, determine whether a project will be a success or not.
  • Process indicators – also called ‘throughput’ or ‘activity’ indicators. They reflect delivery of resources devoted to a programme or project on an ongoing basis. They are the best indicators of implementation and are used for project monitoring.
  • Output indicators – indicate whether activities have taken place by considering the outputs from those activities.
  • Outcome indicators – indicate whether your activities delivered a positive outcome of some kind.
  • Impact indicators – concern the effectiveness, usually long term, of a programme or project, as judged by measurable improvements in the quality of life of beneficiaries or other similar impact-level results. (Illustrative examples of each type follow below.)
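Purely as an illustration of the typology above, the short sketch below maps each indicator type onto a hypothetical teacher training programme (the same kind of initiative as the case study further on). The examples are invented, not taken from any real programme.

```python
# Hypothetical teacher training programme: one illustrative indicator per type.
indicator_types = {
    "risk/enabling": "Learner-to-educator ratio in participating districts",
    "input": "Number of trainers and training materials made available",
    "process": "Number of training sessions delivered per school term",
    "output": "Number of educators who completed the full training course",
    "outcome": "Share of trained educators applying the new teaching methods",
    "impact": "Change in learners' maths and science pass rates over three years",
}

for level, example in indicator_types.items():
    print(f"{level:>14}: {example}")
```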
Good Indicators
  • Good Performance Indicators should be:
  • Direct (Does it measure the intended result?)
  • Objective (Is it unambiguous?)
  • Adequate (Are you measuring enough?)
  • Quantitative (Numerical comparisons are less open to interpretation)
  • Disaggregated (Split up by gender, age, location etc.)
  • Practical (Can you measure it timeously and at reasonable cost?)
  • Reliable (How confidently can you make decisions about it?) (USAID, 1996)
SMART Indicators
Most people have also heard about SMART indicators:
  • Specific
  • Measurable
  • Action Oriented
  • Realistic
  • Timed
How we use indicators
  • For many of the evaluation initiatives that we help to plan M&E systems for, we usually work with the managers to set indicators that they understand and can use.
  • Although the issue of data availability and data quality is usually a big concern, it is often the indicators and targets that are set that could make or break an evaluation.
Case Study
  • Implementers of a teacher training initiative want to know if their project is making a difference in the maths and science performance of learners.
Pitfalls
  • Alignment between Indicators & Targets (If the indicator says something about a number, then the target must also be couched in terms of a number, and not a percentage)
  • Averaging out things that do not belong together (e.g. maths and science scores) does not make sense at all.
  • Not disaggregating enough (Are you interested in all learners, or is it important to disaggregate your data by age group, gender or educator?)
  • Assuming that all targets should be about an increase: (Sometimes a trend in the opposite direction exists and it is expected that your programme will only mediate the effects)
  • Assuming that an increase from 20% to 50% is the same as an increase from 50% to 80%. (Psychometrists have used the standardised gain statistic for a very long time; it is interesting that we don’t see more of it in our programmes. See the gain sketch after this list.)
  • Ignoring the statistics you will use in analysis: (In some cases you are working with a sample and averages. This means an apparent average increase might look like an increase, but turn out not to be statistically significant when you test it.)
  • Setting indicators that require two measurements where one would be enough (Are you interested in an average increase, or just in the % of people that meet some minimum standard?)
  • Ignoring other research done on the topic (If a small effect size is generally reported for interventions of these kinds, isn’t an increase of 30% over baseline a little ambitious?)
  • If you don’t have other research on the topic, it should be allowable to adjust the indicators.
  • Setting an indicator and target that assumes direct causality between the project activity and the anticipated outcome (Even if you have brilliant teachers, how can learners be expected to perform if they have nowhere to do homework, school discipline is non-existent, and they have accumulated 10 years of conceptual deficits in their education?)
  • Ignoring relevance, efficiency, sustainability, and equity considerations. (Is educator training really going to solve the most pressing need? If your programme makes a difference, is it at the same cost as training an astronaut? What will happen if the trained educator leaves? Does the educator training benefit rural learners in the same way in which it would benefit urban learners?)
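On the point about gains from different baselines, here is a minimal sketch of the normalised (standardised) gain idea: the gain is expressed relative to the room left for improvement, so a move from 20% to 50% is not equivalent to a move from 50% to 80%, even though both are 30 percentage points. The Hake-style formula used here is just one common choice, offered as an illustration rather than the specific statistic any particular psychometrist would prescribe.

```python
def normalised_gain(pre: float, post: float) -> float:
    """Gain expressed as a fraction of the maximum possible improvement.

    pre and post are percentage scores between 0 and 100.
    """
    if pre >= 100:
        raise ValueError("No room for improvement above the baseline")
    return (post - pre) / (100 - pre)


# Both pairs show a 30-percentage-point raw increase...
print(normalised_gain(20, 50))  # 0.375 -> 37.5% of the possible improvement
print(normalised_gain(50, 80))  # 0.6   -> 60% of the possible improvement
# ...but the second cohort closed a much larger share of the remaining gap.
```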
Ways to address the pitfalls
  • Do a mock data exercise to see how your indicator and target could play out (a rough sketch follows this list).
  • This will help you think through the data sources, the statistics, and the meaning of the indicator.
  • Read extensively about similar projects to determine what the usual effect size is.
  • When you do your problem analysis, be sure to include other possible contributing factors, and don’t try to attribute change if it is not justifiable.
  • Look at examples of other indicators for similar programmes
  • Keep at it and work with someone who would be able to check your proposed indicators with a fresh eye.
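As an example of the mock data exercise mentioned in the first bullet above, the sketch below fabricates pre- and post-test scores for a small sample and checks whether the apparent average increase is statistically distinguishable from no change. The numbers, the sample size, and the choice of a paired t-test are all assumptions made purely for illustration; your own data sources and statistics may differ.

```python
import random

from scipy import stats  # paired t-test

random.seed(1)

# Mock data: pre- and post-test maths scores for a sample of 30 learners.
pre = [random.gauss(45, 10) for _ in range(30)]
post = [score + random.gauss(3, 8) for score in pre]  # small, noisy improvement

average_increase = sum(p2 - p1 for p1, p2 in zip(pre, post)) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"Average increase: {average_increase:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# An apparent increase may still not be statistically significant with a
# sample this small and this much noise in the scores.
if p_value < 0.05:
    print("The increase is statistically significant at the 5% level.")
else:
    print("The apparent increase could plausibly be due to chance.")
```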
Where to look
Example Indicators can be found in:
  • Project / Programme Evaluation reports from multi-lateral donor agencies
  • UNESCO Education For All Indicators
  • Long term donor-funded projects such as DDSP, QIP.
  • StatsSA publications and statistical extracts about the education sector.
  • Government M&E indicators.

Wednesday, March 07, 2007

Announcement from AEA about benefits

The AEA announced that two more journals are available to members!

Announcement: American Evaluation Association Expands Online Journal Access

AEA Members now receive electronic access to two additional journals - Evaluation and the Health Professions and Evaluation Review - in addition to continued access to AEA's own American Journal of Evaluation and New Directions for Evaluation, as part of membership benefits.

The journals' content is searchable, and archived online content goes back multiple years.

Individual subscriptions to each journal are over $100 each, making AEA membership more of a value than ever at only $80. Members also receive AJE and NDE in hardcopy, discounts on conference and training registration, regular communications about news from all corners of the evaluation community, discounts on books, and the opportunity to participate in the life of the association and the field.

Learn more about AEA and join online at: www.eval.org

Thursday, February 01, 2007

Centre for Global Development

This is the Centre for Global Development report that upset so many people again. I mean, really! Didn't we agree that mixed methods are the way to go? RCTs cannot possibly be the answer for all our impact evaluation questions.


From the CGD website at:

http://www.cgdev.org/content/publications/detail/7973


When Will We Ever Learn? Improving Lives Through Impact Evaluation

05/31/2006

Each year billions of dollars are spent on thousands of programs to improve health, education and other social sector outcomes in the developing world. But very few programs benefit from studies that could determine whether or not they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but denies poor people crucial support to improve their lives.

This report by the Evaluation Gap Working Group provides a strategic solution to this problem: addressing this gap and systematically building evidence about what works in social development, proving it is possible to improve the effectiveness of domestic spending and development assistance by bringing vital knowledge into the service of policymaking and program design.

In 2004 the Center for Global Development, with support from the Bill & Melinda Gates Foundation and The William and Flora Hewlett Foundation, convened the Evaluation Gap Working Group. The group was asked to investigate why rigorous impact evaluations of social development programs, whether financed directly by developing country governments or supported by international aid, are relatively rare. The Working Group was charged with developing proposals to stimulate more and better impact evaluations. This report, the final report of the working group, contains specific recommendations for addressing this urgent problem.

Wednesday, January 31, 2007

Monitoring without Indicators - Most Significant Change

On the Pelican list today, they sent through this handy reference to something that I think is infinitely useful for gathering proof and evidence when you don't have indicators and stacks of pre-developed evaluation mechanisms.

Check it out at:
http://www.mande.co.uk/docs/MSCGuide.htm and http://www.mande.co.uk/MSC.htm


The guide (prepared by Rick Davies and Jess Dart) explains the MSC technique as follows:

"The most significant change (MSC) technique is a form of participatory monitoring and evaluation. It is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. It is a form of monitoring because it occurs throughout the program cycle and provides information to help people manage the program. It contributes to evaluation because it provides data on impact and outcomes that can be used to help assess the performance of the program as a whole.
Essentially, the process involves the collection of significant change (SC) stories emanating from the field level, and the systematic selection of the most significant of these stories by panels of designated stakeholders or staff. The designated staff and stakeholders are initially involved by ‘searching’ for project impact. Once changes have been captured, various people sit down together, read the stories aloud and have regular and often in-depth discussions about the value of these reported changes. When the technique is implemented successfully, whole teams of people begin to focus their attention on program impact."

Certainly this looks like a very promising technique!