Wednesday, October 18, 2006

My Impressions: UKES / EES Conference 2006

The first joint UKES (United Kingdom Evaluation Society) and EES (European Evaluation Society) evaluation conference was held at the beginning of October in London. It was attended by approximately 550 participants from over 50 countries, including a number of prominent M&E thinkers from North America. Approximately 15 South Africans attended, and roughly 300 papers were presented across the conference's 6 streams. The official conference website is at: http://www.profbriefings.co.uk/EISCC2006/

Although it is impossible to summarise even a representative selection of what was said at the conference, I was particularly struck by the discussions around the following topics:

How North-South and West-East evaluation relationships can be improved.
A panel discussion was held on this topic, where representatives from UKES, the IOCE (International Organisation for Cooperation in Evaluation) and IDEAS (International Development Evaluation Association) gave some input, followed by a vigorous discussion about what should be done to improve relationships. The international organisations used this as an opportunity to find out what could be done in terms of capacity building, advocacy, sharing of experiences and representation in major evaluation dialogues (e.g. the Paris Declaration on Aid Effectiveness[1]). Like the South African association, these associations run on resources made available by volunteers, so the scope of activities that can be taken on is limited. The need to find high-yield, quick gains was explored.
Filling the empty chairs around the evaluation table
Elliot Stern (past president of UKES / EES and editor of “Evaluation”) made the point that many evaluations are done by people who do not typically identify as evaluators; think specifically of economists. Not having them represented when we talk about evaluation and how it should be improved means that they miss out on the current dialogue, and that we lose an opportunity to learn from their perspectives.


Importance of developing a programme theory regarding evaluations
When we evaluate programmes and policies, we recognize that clarifying the programme theory can help to clarify what exactly we expect to happen. One of the biggest challenges in the evaluation field is making sure that evaluations are actually used in decision-making processes. Developing a programme theory about evaluations themselves can help us clarify which actions are required to ensure that change happens after the evaluation is completed. When we think of evaluation in this way, it is emphasized once again that merely delivering and presenting a report cannot reasonably be expected to change the way in which a programme is implemented. More research is required to establish exactly under which conditions a specific set of activities will lead to evaluation use.


Research about Evaluation is required so that we can have better theories on Evaluation
Stewart Donaldson & Christina Christie (Claremont Graduate University), along with a couple of other speakers, were quite adamant that if “Evaluation” wants to be taken seriously as a field, we need more research to develop theories that go beyond simply telling us how to do evaluations. Internationally, evaluation is being recognized as a meta-discipline and a profession, but as a field of science we really don’t have a lot of research about evaluation itself. Our theories are more likely to tell us how to do evaluations and what tool sets to use, but we have very little objective evidence that one way of doing evaluations is better or produces better results than another.


Theory of Evaluation might develop in some interesting ways
There was also talk about some likely future advances in evaluation theory. Melvin Mark (current AEA president) said that looking for one comprehensive theory of evaluation is probably not going to deliver results; different theories are useful under different circumstances. What we should aim for are more contingency theories that tell us when to do what. Current examples include Patton’s Utilization-Focused Evaluation approach, in which the intended use by intended users determines what kind of evaluation will be done. Theories that take into account the phase of implementation are also critically important, and more theories on specific content areas, e.g. evaluation influence and stakeholder engagement, are likely to be very useful. Bill Trochim (president-elect of AEA) presented a quite thought-provoking paper on Evolutionary Evaluation that built on the thinking of Donald Campbell and others.

Evaluation for Accountability
Baroness Onora O’Neill (President of the British Academy) explored what accountability through evaluation means by unpacking the question “Who should be held accountable, and by whom?” She indicated that evaluation is but one of a range of activities required to keep governments and their agencies accountable, yet a very critical one. The issue of evaluation for accountability was also echoed by other speakers such as Sulley Gariba (from Ghana, a previous president of IDEAS), with vivid descriptions of how the African Peer Review Mechanism could be seen as one such type of evaluation that delivers results when communicated to the critical audience.


Evidence Based Policy Making / Programming
Since the European Commission and the OECD were well represented, many of the presentations focused on or touched on topics relating to evidence-based policy making. The DAC criteria for evaluating development assistance (namely relevance, effectiveness, efficiency, impact and sustainability) seem to be quite entrenched in evaluation systems, but innovative and useful ways of measuring impact-level results were explored by quite a few speakers.

Interesting Resources
Some interesting resources that I learned of during the conference include:
www.evalsed.com: an online resource of the European Union for the evaluation of socio-economic development.
The SAGE Handbook of Evaluation, edited by Ian Shaw, Jennifer Greene and Melvin Mark. More information at: http://www.sagepub.com/booksProdDesc.nav?prodId=Book217583
Encyclopedia of Evaluation, edited by Sandra Mathison. More info at: http://www.sagepub.com/booksProdDesc.nav?prodId=Book220777
Other guidelines for good practice: http://www.evaluation.org.uk/Pub_library/Good_Practice.htm
[1] For more info about the Paris Declaration look at http://www.oecd.org/document/18/0,2340,en_2649_3236398_35401554_1_1_1_1,00.html

Social Entrepreneurship

I’ve got a bee in my bonnet, and I must admit I don’t quite know what to do with it. It probably has something to do with all of those systems-theory lectures I had at university. Here it is: we know that the world, and what happens in it, cannot necessarily be explained in a linear fashion. So why, oh why, do we plan and evaluate ALL our projects according to the logic model (where the combination of A, B and C under conditions D and E will produce F, G and H)? Then again… maybe it is just me, and other people (maybe I should ask Bob Williams… he’s a real systems guy!) already have nicely functioning alternative toolsets and methods for evaluating a non-linear world. (If you happen to be one of them, please come and save me from my ignorance and leave a comment so that I can learn from you.)

I’m not proposing that we throw that approach out entirely. But really! Given the scope of the developmental challenges we have here in SA, we must be hoping for a miracle if we think that our logically planned-out projects are going to solve all of our problems. If we have a little faith in the fact that we live in a chaotic system with the capacity for self-organisation, we might actually want to start planning our interventions in a way that empowers key agents in the system to go out and do a number of unexpected and hopefully amazing things.

A related question: why do we ONLY fund and evaluate projects and organizations, when it is people that make the difference? Let me clarify: I’m not saying projects and organizations don’t make a difference. But it is the 79-year-old lady who decides to do something for the kids of her community on one special day. It is the social worker who thinks of a way to take the extra food off our tables and distribute it to those who need it. It is the guy who drives past the men on the side of the road and suddenly thinks of a way to provide them with tools and job opportunities.

These special people – “social entrepreneurs”, I think they are called – should be funded to do what they do best: think of ideas, implement them and set up structures. Otherwise, lo and behold, they start worrying about how to put dinner on the table and abandon their potentially brilliant ideas to take a desk job somewhere! Funding such people is apparently exactly what Ashoka does. See their website for more information: http://www.ashoka.org/africa

When venture capital investors want to invest in a new and innovative idea, the majority of their pre-assessment work focuses on the individual who is pitching the idea. Some people just have the diversity of networks, skills and resources at their disposal to make things happen. Maybe there is a lesson in this for us!

Monday, October 16, 2006

Common Pitfalls in M&E

This is an outline of a presentation I recently delivered.

Common Pitfalls in Monitoring and Evaluation
Issues to Consider when you are the implementer / commissioner of evaluations

Introduction: What people Think of Evaluations
People are often very scared of evaluations because of previous experiences, a lack of experience, or general misconceptions about evaluation.

Introduction: Why must we measure?
Although there is growing consensus that we need to measure the results (outputs, outcomes and impacts) of our projects / programmes / policies, there is still much confusion about exactly why we are doing it.
Two main purposes of evaluations:
-- Accountability to various stakeholders
-- Learning to improve the projects / programmes / policies
The projects / programmes / policies we implement affect thousands of people, and if we get it wrong, thousands will be affected negatively (or not affected at all).
We often complain about the cost of measuring our impact, but have we considered the costs of not measuring our impact?


Introduction: We want to evaluate BUT…
Once we are convinced that we should be measuring our impacts, a range of other questions come up:
--How should it be evaluated?
--When should it be evaluated?
--How will we know that the impact is the best possible?
--How do we know if it is our programme that made those differences?
--Can we do our own evaluation or should we get some specialist to do it?
If there were simple, one-size-fits-all answers to these questions, evaluation would probably be much more appealing than it is today.

Common Pitfalls in Evaluation 1
Failing to clarify the intended use or the intended users of the evaluation – Producing "Door Stops".
Thinking you can evaluate your impact after year one of an intervention in a complex system – Expecting too much.
Thinking your impact evaluation is only something you need to worry about at the end of the project – Waiting too long.
Measuring every detail of a programme thinking that it will allow you to get to the big picture "impact" – Measuring too much.
Doing the wrong type of evaluation for the phase the project is in – Method / timing mismatch.


Common Pitfalls in Evaluation 2
Allocating too little time and resources to the evaluation – More is better.
Allocating too much time and resources to the evaluation - Less is more.
Sticking to your or someone else’s "template" only – One size does not fit all.
Thinking that an online M&E system will solve all of your problems – Computers don’t solve everything.
Not planning for how the evaluation findings will be used – Findings don’t speak for themselves.

Common Pitfalls in Evaluation 3
Running a lottery when you are supposed to receive tenders for doing the evaluation – Lottery evaluations
Sending the evaluation team in to open Pandora’s box – Don’t do evaluation if you need Organisational Development.
Doing an impact evaluation without taking into consideration the possible influence of other initiatives / factors in the environment – Attribution error (see the small sketch after this list).
Doing an impact evaluation without looking at what the unintended consequences of the project were – Tunnel vision
Ignoring the voices of the "evaluated" – Disempowering people
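
To make the attribution pitfall concrete, here is a minimal sketch in Python (the indicator, groups and numbers are entirely made up for illustration): a naive before/after comparison credits the programme with all of the observed change, while comparing against a similar group that did not receive the programme (a simple difference-in-differences) strips out the change that other factors would probably have produced anyway.

```python
# Illustrative only: made-up numbers for a hypothetical pass-rate indicator.
# A naive before/after comparison attributes all change to the programme;
# a comparison group (difference-in-differences) is one common way to
# account for other initiatives and factors in the environment.

programme_before, programme_after = 40.0, 55.0    # programme sites
comparison_before, comparison_after = 42.0, 50.0  # similar sites, no programme

naive_estimate = programme_after - programme_before          # 15 points
background_change = comparison_after - comparison_before     # 8 points
diff_in_diff_estimate = naive_estimate - background_change   # 7 points

print(f"Naive before/after estimate:        {naive_estimate:.1f} points")
print(f"Change in the comparison group:     {background_change:.1f} points")
print(f"Difference-in-differences estimate: {diff_in_diff_estimate:.1f} points")
```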

Common Pitfalls in Evaluation 4
Expecting your content specialist to also be an evaluation specialist, and vice versa – Pseudo-specialists lead to pseudo-knowledge
Doing evaluations, creating expectations and then ignoring the results
Not reporting statistics like significance levels and effect sizes when your evaluation has a quantitative component – Being afraid of the "hard stuff" (see the short sketch after this list)
Not acknowledging the lenses you use to analyse your qualitative data – Being colour blind
Getting hung up on the debate about whether quantitative / qualitative methods are better – Method Madness
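
As a companion to the "hard stuff" pitfall, here is a minimal Python sketch (the post-test scores and the two groups are made up for illustration) showing how an effect size can be reported alongside a significance test: the p-value only says how confident we can be that the groups differ, while Cohen's d says how large that difference actually is.

```python
# Illustrative only: made-up post-test scores for a hypothetical evaluation.
# Reporting an effect size (Cohen's d) alongside the significance test tells
# readers how big the difference is, not just whether it is "significant".
from statistics import mean, stdev
from scipy import stats

treatment = [68, 72, 75, 71, 80, 77, 74, 69, 76, 73]  # programme participants
control = [65, 70, 66, 72, 68, 71, 67, 69, 70, 66]    # comparison group

t_stat, p_value = stats.ttest_ind(treatment, control)  # two-sample t-test

# Cohen's d using a pooled standard deviation
n1, n2 = len(treatment), len(control)
pooled_sd = (((n1 - 1) * stdev(treatment) ** 2 +
              (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (mean(treatment) - mean(control)) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
```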

How to address the pitfalls
Given that until very recently there were no academic programmes focusing on training people in evaluation, it is important that we find ways of improving our understanding of the field.
You need not be an evaluation specialist to be involved with evaluation.
Make sure that the evaluators you work with have development as an ultimate goal.


How to address the pitfalls
Resources for helping you to do / commission better evaluations
Join an association: For example the South African Monitoring and Evaluation Association (http://www.samea.org.za/) or the African Evaluation Association (http://www.afrea.org/)
Take cognisance of the guidelines and standards produced by these organisations
Make use of the many online resources available on the topic of evaluation (Check out Resources on the SAMEA web page)