Wednesday, October 18, 2006

My Impressions: UKES / EES Conference 2006

The first joint UKES (United Kingdom Evaluation Society) and EES (European Evaluation Society) evaluation conference was held at the beginning of October in London. It was attended by approximately 550 participants from over 50 countries, including a number of prominent thinkers in M&E from North America. Approximately 15 South Africans attended, and approximately 300 papers were presented across the six streams of the conference. The official conference website is at: http://www.profbriefings.co.uk/EISCC2006/

Although it is impossible to summarise even a representative selection of what was said at the conference, I was particularly struck by discussions around the following:

How North-South and West-East evaluation relationships can be improved.
A panel discussion was held on this topic where representatives from the UKES, IOCE (International Organisation for Cooperation in Evaluation), and IDEAS (International Development Evaluation Association) gave some input, followed by a vigorous discussion about what should be done to improve relationships. The international organizations used this as an opportunity to find out what could be done in terms of capacity building, advocacy, sharing of experiences, and representation in major evaluation dialogues (e.g. the Paris Declaration on Aid Effectiveness[1]). Like the South African association, these associations run on the resources made available by volunteers, so the scope of activities that can be started is limited. The need to find high-yield, quick gains was explored.
Filling the empty chairs around the evaluation table
Elliott Stern (past president of UKES / EES and editor of “Evaluation”) made the point that many evaluations are not done by people who typically identify themselves as evaluators – think specifically of economists. Not having them represented when we talk about evaluation and how it should be improved means that they miss out on the current dialogue, and we lose the opportunity to learn from their perspectives.


Importance of developing a programme theory regarding evaluations
When we evaluate programmes and policies we recognize that clarifying the programme theory can help to make explicit what exactly we expect to happen. One of the biggest challenges in the evaluation field is making sure that evaluations are used in decision-making processes. Developing a programme theory regarding evaluations can help us to clarify which actions are required to ensure that change happens after the evaluation is completed. When we think of evaluation in this way, it is emphasized once again that merely delivering and presenting a report cannot reasonably be expected to change the way in which a programme is implemented. More research is required to establish exactly under which conditions a specific set of activities will lead to evaluation use.


Research about Evaluation is required so that we can have better theories on Evaluation
Steward Donaldson & Christina Christie (Claremont Graduate University) and a couple of other speakers were quite adamant that if “Evaluation” wants to be taken seriously as a field, we need more research to develop theories that go beyond only telling us how to do evaluations. Internationally evaluation is being recognized as a Meta-Discipline and a Profession, but as a field of science we really don’t have a lot of research about evaluation. Our theories are more likely to tell us how to do evaluations and what tool sets to use, but we have very little objective evidence that one way of doing evaluations is better or produce better results than another.


Theory of Evaluation might develop in some interesting ways
There was also talk about some likely future advances in evaluation theory. Melvin Mark (current AEA President) said that looking for one comprehensive theory of evaluation is probably not going to deliver results. Different theories are useful under different circumstances. What we should aim for are more contingency theories that tell us when to do what. A current example of a contingency theory is Patton’s Utilization-Focused Evaluation approach – the intended use by intended users determines what kind of evaluation will be done. Theories that take into account the phase of implementation are also critically important. More theories on specific content areas, e.g. evaluation influence and stakeholder engagement, are likely to be very useful. Bill Trochim (President-Elect of AEA) presented a quite thought-provoking paper on Evolutionary Evaluation that built on the thinking of Donald Campbell and others.

Evaluation for Accountability
Baroness Onora O’Neill (President of the British Academy) explored what accountability through evaluation means by expanding on the question “Who should be held accountable, and by whom?” She indicated that evaluation is but one of a range of activities that are required to keep governments and their agencies accountable, yet a very critical one. The issue of evaluation for accountability was also echoed by other speakers such as Sulley Gariba (from Ghana, past president of IDEAS), with vivid descriptions of how the African Peer Review Mechanism could be seen as one such type of evaluation that delivers results when communicated to the critical audience.


Evidence Based Policy Making / Programming
Since the European Commission and the OECD were well represented, many of the presentations focused on or touched on topics relating to evidence-based policy making. The DAC principles of evaluation for development assistance (namely Relevance, Effectiveness, Efficiency, Impact, and Sustainability) seem to be quite entrenched in evaluation systems, but innovative and useful ways of measuring impact-level results were explored by quite a few speakers.

Interesting Resources
Some interesting resources that I learned of during the conference include:
www.evalsed.com – An online resource of the European Union for the evaluation of socio-economic development.
The SAGE Handbook of Evaluation, edited by Ian Shaw, Jennifer Greene, and Melvin Mark. More information at: http://www.sagepub.com/booksProdDesc.nav?prodId=Book217583
Encyclopedia of Evaluation, edited by Sandra Mathison. More info at: http://www.sagepub.com/booksProdDesc.nav?prodId=Book220777
Other guidelines for good practice: http://www.evaluation.org.uk/Pub_library/Good_Practice.htm
[1] For more info about the Paris Declaration look at http://www.oecd.org/document/18/0,2340,en_2649_3236398_35401554_1_1_1_1,00.html
