Showing posts with label Evaluation Events.

Wednesday, February 26, 2014

My plans for AfrEA 2014 conference

I'm off to Cameroon on Sunday for a week of networking, learning and sharing at the 2014 AfrEA conference in Yaoundé. I love seeing bits of my continent. If internet access is available, I'll try to tweet from @benitaW.

I am facilitating a workshop on Tuesday together with the wise Jim Rugh and the efficient Marie Gervais to share a bit about a VOPE toolkit EvalPartners is developing. (A VOPE is an evaluation association or society... a voluntary organization for professional evaluation.)

Workshop title: Establishing and strengthening VOPEs: testing and applying the EvalPartners Institutional Capacity Toolkit

Abstract: One of the EvalPartners initiatives, responding to requests received from leaders of many VOPEs (Voluntary Organizations for Professional Evaluation), is to develop a toolkit that provides guidance both to those who wish to form VOPEs, even informal ones, and to leaders of existing VOPEs who seek guidance on strengthening their organization’s capacities. During this workshop participants will be introduced to the many subjects addressed in the VOPE Institutional Capacity Toolkit, and asked to test the tools to determine how such resources could help them strengthen their own VOPEs.

The workshop will be very interactive with lots of exploring, engaging, and evaluating of the toolkit resources. Participants should not come to this workshop expecting that they will sit still for more than 30 minutes at a time. We'll use a combination of learning stations and fishbowls as the workshop methodology.  I'm really looking forward to it!

Eventually the toolkit will be made available online. Follow @vopetoolkit on twitter for more news about developments.

I served on the boards of both AfrEA and SAMEA, so I hope that the resources the Toolkit task force and their marvellous team of collaborators put together will be of use to colleagues across the continent who are still founding or strengthening their VOPEs. It is hard and sometimes thankless work to serve on a VOPE board, and if this toolkit can make someone's life a little easier with examples, tools and advice, I would count it as a worthy effort.

I expect that the workshop will be a good opportunity to get some feedback to guide us in completing the work.

Monday, May 16, 2011

Better Evaluation Virtual Writeshop

Irene Guijt posted this on the Pelican listserv today:

Perhaps you have undertaken an evaluation on a program to mitigate climate change effects on rural people living in poverty, or one on capacity development in value chains. Or worked on participatory ways to make sense of evaluation data, or developed simple ways to integrate numbers and stories. We’d like to bring unknown experiences to the global stage for wider use.

Do you have an experience that covers many different aspects of evaluation – design, collection, sensemaking, and reporting? Did you look at different options to develop a context-sensitive approach? And has your evaluation process not yet been shared widely? If your answer is yes to these questions, then our virtual writeshop on evaluation may be of interest.

We will facilitate a virtual writeshop between May and September 2011 that will lead to around 10 focused documents to be shared globally. Participating in the writeshop will give you structured editorial support and peer review to develop a publication for the BetterEvaluation site.

For more information, including how to submit a proposal, go here.

Monday, October 12, 2009

Get involved - SAMEA is preparing a submission

Media statement by the Minister in the Presidency for National Planning, T Manuel, on the release of the Green Paper on National Strategic Planning
4 September 2009

Today government is releasing two discussion documents, one a Green Paper on National Strategic Planning and the other a Policy Document on Performance Monitoring and Evaluation. The decision by President Zuma to appoint Ministers in the Presidency responsible for National Planning and Performance Monitoring and Evaluation is designed to improve the overall effectiveness of government, enabling government to better meet its development objectives in both the short and longer term. These two discussion documents must be seen in the context of wider efforts led by the President to improve the performance of government through enhancing coherence and co-ordination in government, managing the performance of the state and communicating better with the public.
The Green Paper on National Strategic Planning is a discussion document that outlines the tasks of the national planning function, broadly defined. It deals with the concept of national strategic planning, as well as processes and structures. Once consultations on these issues have been completed, the process to set up the high-level structures will commence; and this will be followed by intense work to develop South Africa's long-term vision and other outputs. In other words, the Green Paper does not deal with these substantive issues of content.
The rationale for planning is that government (and indeed the nation at large) requires a longer-term perspective to enhance policy coherence and to help guide shorter term policy trade-offs. The development of a long-term plan for the country will help government departments and entities across all the spheres of government to develop programmes and operational plans to meet society’s broader developmental objectives. Such a plan must articulate the type of society we seek to create and outline the path towards a more inclusive society where the fruits of development benefit all South Africans, particularly the poor.
The planning function is to be coordinated by the Minister in The Presidency for National Planning. There are four key outputs of the planning function. Firstly, to develop a long-term vision for South Africa, Vision 2025, which would be an articulation of our national aspirations regarding the society we seek and which would help us confront the key challenges and trade-offs required to achieve those goals. A National Planning Commission comprising external commissioners who are experts in relevant fields would play a key role in developing this plan. The development of a National Plan would require broader societal consultation, and existing forums would be used for this purpose. The Minister in The Presidency will co-ordinate these engagements. A National Plan has to be adopted by Cabinet for it to have the force of a government plan. The Minister would serve as a link between the Commission and Government, feeding the work of the Commission into government.
The next set of outputs cover the five-yearly Medium Term Strategic Framework (MTSF) and the National Programme of Action. These are documents of national government, adopted by Cabinet, drawing on the electoral mandate of the government of the day. The Minister in The Presidency for National Planning, supported by a Ministerial Committee on Planning, would coordinate the development of these documents with input from Ministers, departments, provinces, organised local government, public entities and coordinating clusters.
Further, it is envisaged that the planning function in The Presidency will undertake research and release discussion papers on a range of topics that impact on long-term development. These include topics such as demographic trends, global climate change, human resource development, future energy mix and food security. The Presidency would also release and process baseline data on critical areas such as demographics, biodiversity, and migratory and economic trends. This work will be undertaken by the Minister, working with the National Planning Commission (NPC). The Minister, working with the NPC, would from time to time advise government on progress in implementing the national plan, including the identification of institutional and other blockages to its implementation.
One of the functions of The Presidency in respect of national planning is to develop frameworks for spatial planning that seek to undo the damage that apartheid's spatial development patterns have wrought on our society. This includes the development of high level frameworks to guide regional planning and infrastructure investment.
The national planning function will provide guidance on the allocation of resources and in the development of departmental, sectoral, provincial and municipal plans.
The Minister in The Presidency responsible for national planning will be supported by a Planning Secretariat, which will also provide administrative, research and other support to the National Planning Commission. National Strategic Planning is an iterative process involving extensive consultation and engagement within government and with broader society.
It is envisaged that Parliament will play a key role in guiding the planning function through its oversight role but also through facilitating broader stakeholder input into the planning process. For this reason, it is appropriate that Parliament should lead the discussion process on the Green Paper.
This Green Paper is a discussion document. Government welcomes comment, advice, criticisms and suggestions from all in society.
Please address all comments on the Green Paper on National Strategic Planning to the Minister in the Presidency for National Planning c/o:
Hassen Mohamed
E-mail: hassen@po.gov.za
Tel: 012 300 5455
Fax: 086 683 5455
Issued by: The Presidency
4 September 2009

Please see http://www.info.gov.za/speeches/2009/09090414151003.htm for the actual Green Paper and the Policy Document on Performance Monitoring and Evaluation.

Friday, November 28, 2008

GDE Colloquium on their M&E Framework

Recently the Gauteng Department of Education held a colloquium on their Monitoring and Evaluation Framework. As one of the speakers, I reflected on the fact that M&E frameworks often erroneously assume that the evaluand is a stable system. I argued that there are multiple triggers that lead to the evolution of the evaluand, and that this has implications for M&E.

Triggers for evolving systems, organizations, policies, programmes & interventions
(Morell, J.A. (2005). Why are there unintended consequences of program action, and what are the implications for doing evaluation? American Journal of Evaluation, 26, 444–463.)
• Unforeseen consequences
– Weak application of analytical frameworks, failure to capture experience of past research
• Unforeseeable consequences
– Changing environments
• Overlooked consequences
– Known consequences are ignored for practical, political or ideological reasons
• Learning & Adapting
– As implementation happens, the learning is used to adapt
• Selection Effects
– If different approaches are tried, those that are successful are likely to be replicated and those that are unsuccessful are unlikely to be replicated.

Implications for M&E
• M&E needs to work in aid of evolution (not just change)
– The M&E framework should be key in allowing the GDE to adapt, learn and respond to changes
• Not just by ensuring that the right information is tracked, but also by ensuring that the right people have access to it at the right time.
• M&E needs to respond to evolution
– As the Evaluand changes, some indicators will be incorrectly focused or missing, so the framework will have to be updated periodically
– It might be necessary to implement measures that go beyond checking “whether the Dept makes progress towards reaching its goals and objectives”
• Diversity of input into the design of the framework
• Using appropriate evaluation methods
– Consider expected impacts of change in planning for roll-out of M&E

Critical analysis of an M&E framework
• Does it ask the right questions in order for us to judge the “merit, worth or value” of that which we are monitoring / evaluating?
• Does it allow for credible & reliable evidence to be used?

Types of Questions to ask
(Chelimsky, E. (2007). Factors Influencing the Choice of Methods in Federal Evaluation Practice. New Directions for Evaluation, 113, 13–33.)

• Descriptive questions: Questions that focus on determining how many, what proportion, etc. for the purposes of describing some aspect of the education context. (e.g. if you were interested in finding out what the drop-out rate for no-fee schools is)
• Normative questions: Questions that compare outcomes of an intervention (such as the implementation of new policies) against a pre-existing standard or norm. Norm-referenced questions can use various standards to compare against:
– Previous measures for the group that’s exposed to the policy intervention (e.g. if you compare the current drop-out rate to the previous drop-out rate for a specific set of schools affected by the policy)
– A widely negotiated and accepted standard (e.g. if a 5% drop-out rate is accepted as the standard, you can check whether the schools currently meet it or not)
– Measure from another similar group (e.g. if you compare the drop-out rate for different types of schools)
• Attributive questions: Questions that attempt to attribute outcomes directly to an intervention like a policy change or a programme (e.g. is the change in the drop-out rate in no-fee schools due to the implementation of the no-fee school policy?)
• Analytic-Interpretive questions that build our knowledge base: Questions that ask about the state of the debate on issues important for decision making about specific policies. (e.g. What is known about the relationship between the drop-out rate and the per-learner education spend of the Department of Education?)

Questions at different Time Periods
• Prior to implementation:
– Q1.1: What does available baseline data tell us about the current situation in the entities that will be affected? (Descriptive)
– Q1.2: Given what we know about existing circumstances and the changes proposed when the new policy / programme is implemented, what are the impacts / effects likely to be? (Analytic-Interpretive, Normative)
• Evidence-based policy making requires some sort of ex-ante assessment of the likely changes. This assessment can be revisited when the final impact evaluation is conducted.

• Directly after implementation, and continued until full compliance is reached:
– Q2.1: To what degree is there compliance with the policy / fidelity to the programme design? (Descriptive)
– Q2.2: What are the short term positive and negative effects of the policy change / programme? (Descriptive, Normative and Attributive)
– Q2.3: How can the implementation and compliance be improved? (Analytic-Interpretive)
– Q2.4: How can the negative short term effects be mitigated? (Analytic-Interpretive)
– Q2.5: How can the positive short term effects be bolstered? (Analytic-Interpretive)
• This is important because no impact assessment can be done if the policy / programme has not been implemented properly. If there are significant barriers to the implementation of the policy / programme, an intervention to remove these barriers would be necessary, or the policy / programme should be changed.

• After compliance has been reached and the longer-term effects of the policy can be discerned:
– Q3.1: To what degree did the policy achieve what it set out to do? (Normative)
– Q3.2: What have been the longer-term and systemic effects attributable to the policy change? (Descriptive, Normative, Attributive)
– Q3.3: How can the implementation be improved / negative effects be mitigated / positive effects be bolstered? (Analytic-Interpretive)
• This is important to demonstrate that the policy change was effective in addressing the underlying issues that initially required it, and to check that no unintended distortions of the policy crept in during implementation.

• Designs appropriate to Descriptive questions:
– CASE STUDY DESIGNS
– RAPID APPRAISAL DESIGNS
– GROUNDED THEORY DESIGNS
• Designs Appropriate to Analytic-Interpretive questions
– LITERATURE REVIEW
– MIXED METHOD DESIGNS
• Designs Appropriate to Normative questions
– TIME SERIES RESEARCH DESIGNS
• Designs Appropriate to Attributive questions
– EXPERIMENTAL DESIGNS
– QUASI-EXPERIMENTAL DESIGNS

Principles for Evidence Collection
• Independence: You cannot ask the person whose compliance you are interested in whether they are complying. The incentive to provide false information might be very high.
• Relevance: Appropriate questions must be asked of the right persons.
• Consider Systemic Impacts. Look more broadly than just the cases directly affected.
• Appropriate Samples need to be selected. The sampling approach and sample size are related to the question that needs to be answered (a rough sample-size sketch follows this list).
• Appropriate methods need to be selected. Although certain designs are likely to result in easy answers, they might not be appropriate.
• Implementation Phase: Take into account the level of implementation when you do the assessment. It is well known that after initial implementation an implementation dip might occur. Do not try to do an impact assessment when the level of implementation has not yet stabilised in the system.
• Fidelity: Take into account the fidelity of implementation, i.e. to what degree the policy was implemented as it was intended.
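
As a rough illustration of the sampling principle above, here is a minimal sketch in Python of the standard normal-approximation calculation of the sample size needed to detect a given standardised effect in a two-group comparison. The effect sizes, significance level and power used here are illustrative assumptions, not recommendations for any particular assessment.

```python
# Rough sample-size check for a two-group comparison of means, using the
# standard normal-approximation formula. The effect sizes, alpha and power
# below are illustrative assumptions only.
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of learners needed per group."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = norm.ppf(power)            # value corresponding to the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return int(round(n))

# Smaller effects require much larger samples.
for d in (0.2, 0.5, 0.8):
    print(f"standardised effect size d = {d}: ~{n_per_group(d)} per group")
```

The point is simply that the smaller the effect you realistically expect, the larger the sample your evidence-collection design has to budget for.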

Tuesday, April 10, 2007

Presentation given at the SAMEA conference, 26 – 30 March 2007

Based on some of the ideas in previous blog entries, I gave the following presentation at the recent SAMEA conference.

Setting Indicators and Targets for Evaluation of Education Initiatives

Introduction

  • Good Evaluation Indicators and Targets are usually an important part of a robust Monitoring and Evaluation system.
  • Although evaluation indicators are usually considered important, not all evaluations have to make use of a set of pre-determined indicators and targets.
  • The most significant change (MSC) technique, for example, looks for stories of significant change amongst the beneficiaries of a programme, and after the fact uses a team of people to determine which of these stories represent MSC and real impact.
  • You have to include the story around the indicators in your evaluation reports in order to learn from the findings.
What do we mean?
  • The definition of an Indicator is: “A qualitative or quantitative reflection of a specific dimension of programme performance that is used to demonstrate performance / change”
  • It is distinguished from a Target which: “Specifies the milestones / benchmarks or extent to which the programme results must be achieved”
  • And it is also different from a Measure, which is: “The Tool / Protocol / Instrument / Gauge you use to assess performance”
Types of Indicators
  • The reason for using indicators is to feel the pulse of a project as it moves towards meeting its objectives, or to see the extent to which they have been achieved. There are different types of indicators:
  • Risk/enabling indicators – external factors that contribute to a project’s success or failure. They include socio-economic and environmental factors, the operation and functioning of institutions, the legal system and socio-cultural practices.
  • Input indicators – also called ‘resource’ indicators, they relate to the resources devoted to a project or programme. Whilst they can flag potential challenges, they cannot, on their own, determine whether a project will be a success or not.
  • Process indicators – also called ‘throughput’ or ‘activity’ indicators. They reflect delivery of resources devoted to a programme or project on an ongoing basis. They are the best indicators of implementation and are used for project monitoring.
  • Output indicators – indicate whether activities have taken place by considering the outputs from the activities.
  • Outcome indicators – indicate whether your activities delivered a positive outcome of some kind.
  • Impact indicators – concern the effectiveness, usually long term, of a programme or project, as judged by the measurable change achieved in improving the quality of life of beneficiaries or another similar impact-level result.
Good Indicators
  • Good Performance Indicators should be
  • Direct (Does it measure Intended Result?)
  • Objective (Is it unambiguous?)
  • Adequate (Are you measuring enough?)
  • Quantitative (Numerical comparisons are less open to interpretation)
  • Disaggregated (Split up by gender, age, location etc.)
  • Practical (Can you measure it timeously and at reasonable cost?)
  • Reliable (How confidently can you make decisions about it?) (USAID, 1996)
SMART Indicators
Most people have also heard about SMART indicators:
  • Specific
  • Measurable
  • Action Oriented
  • Realistic
  • Timed
How we use indicators
  • For many of the initiatives that we help plan M&E systems for, we usually work with the managers to set indicators that they understand and can use.
  • Although data availability and data quality are usually big concerns, it is often the indicators and targets themselves that make or break an evaluation.
Case Study
  • Implementers of a teacher training initiative want to know if their project is making a difference in the maths and science performance of learners.
Pitfalls
  • Alignment between Indicators & Targets (If the indicator says something about a number, then the target must also be couched in terms of a number, and not a percentage)
  • Averaging out things that do not belong together (e.g. maths and science) does not make sense at all.
  • Not disaggregating enough (Are you interested in all learners, or is it important to disaggregate your data by age group, gender or educator?)
  • Assuming that all targets should be about an increase: (Sometimes a trend in the opposite direction exists and it is expected that your programme will only mediate the effects)
  • Assuming that an increase from 20% to 50% is the same as an increase from 50% to 80%. (Psychometrists have used the standardised gain statistic for a very long time. It is interesting that we don’t see more of it in our programmes. A short sketch follows this list.)
  • Ignoring the statistics you will use in analysis: (In some cases you are working with a sample and averages. An apparent increase in the average might not hold up once you test for statistical significance.)
  • Setting indicators that require two measurements where one would be enough (Are you interested in an average increase, or just in the % of people that meet some minimum standard?)
  • Ignoring other research done on the topic (If a small effect size is generally reported for interventions of these kinds, isn’t an increase of 30% over baseline a little ambitious?)
  • If you don’t have other research on the topic, it should be allowable to adjust the indicators.
  • Setting an indicator and target that assumes direct causality between the project activity and the anticipated outcome (Even if you have brilliant teachers, how can learners perform if they have nowhere to do homework, school discipline is non-existent, and they have accumulated 10 years of conceptual deficits in their education?)
  • Ignoring relevance, efficiency, sustainability, and equity considerations. (Is educator training really going to solve the most pressing need? If your programme makes a difference, is it at the same cost as training an astronaut? What will happen if the trained educator leaves? Does the educator training benefit rural learners in the same way in which it would benefit urban learners?)
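
To unpack the gain-score pitfall flagged above, here is a minimal sketch in Python (with invented numbers) of the standardised (normalised) gain that psychometrists use: the observed improvement expressed as a fraction of the improvement that was still possible.

```python
def normalised_gain(pre: float, post: float, maximum: float = 100.0) -> float:
    """Gain expressed as a fraction of the gain that was still possible.

    pre, post and maximum are on the same scale (here percentages).
    """
    return (post - pre) / (maximum - pre)

# Two hypothetical programmes with the same 30-percentage-point raw gain:
print(normalised_gain(20, 50))  # 0.375 -> 37.5% of the available gain realised
print(normalised_gain(50, 80))  # 0.6   -> 60% of the available gain realised
```

Both hypothetical programmes gained 30 percentage points, but the second realised a much larger share of the gain that was still available, which is exactly the difference a raw "increase of X points" target hides.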
Ways to address the pitfalls
  • Do a mock data exercise to see how your indicator and target could play out (a minimal mock-data sketch follows this list).
  • This will help you think through the data sources, the statistics, and the meaning of the indicator.
  • Read extensively about similar projects to determine what the usual effect size is.
  • When you do your problem analysis, be sure to include other possible contributing factors, and don’t try to attribute change if it is not justifiable.
  • Look at examples of other indicators for similar programmes
  • Keep at it and work with someone who would be able to check your proposed indicators with a fresh eye.
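
And as one way of doing the mock data exercise suggested in the list above, here is a minimal sketch in Python. The baseline and follow-up samples are invented stand-ins for real learner scores, used only to show how an apparent increase in an average can be checked against a simple significance test before the indicator and target are finalised.

```python
# A minimal mock-data exercise: simulate baseline and follow-up samples for an
# "average maths score" indicator and check whether the apparent increase
# would survive a simple test of statistical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical learner scores (percentages), 40 learners sampled per round.
baseline = rng.normal(loc=42, scale=15, size=40).clip(0, 100)
follow_up = rng.normal(loc=46, scale=15, size=40).clip(0, 100)

print(f"baseline mean:  {baseline.mean():.1f}")
print(f"follow-up mean: {follow_up.mean():.1f}")

# Two-sample t-test: is the difference in averages more than sampling noise?
t_stat, p_value = stats.ttest_ind(follow_up, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The increase is statistically significant at the 5% level.")
else:
    print("The apparent increase is not statistically significant at the 5% level.")
```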
Where to look
Example Indicators can be found in:
  • Project / Programme Evaluation reports from multi-lateral donor agencies
  • UNESCO Education For All Indicators
  • Long term donor-funded projects such as DDSP, QIP.
  • StatsSA publications and statistical extracts about the education sector.
  • Government M&E indicators.

Tuesday, January 30, 2007

Report Back: Making Evaluation Our Own

A special stream on Making Evaluation Our Own was held at the AfrEA conference. After the conference a small committee of African volunteers worked to capture some of the key points of the discussion. Thanks to Mine Pabari from Kenya for forwarding a copy!

What do you think of this?


Making Evaluation Our Own: Strengthening the Foundations for Africa-Rooted and Africa Led M&E

Overview & Recommendations to AfrEA

Niamey, 18th January, 2007

Discussion Overview

On 18 January 2007 a special stream was held to discuss the topic

Making Evaluation Our Own: Strengthening the Foundations for Africa-Rooted and Africa-Led M&E. It was designed to bring together African and other international experiences in evaluation and development evaluation to help stimulate debate on how M&E, which has generally been imposed from outside, can become Africa-led and Africa-owned.

The introductory session aimed to set the scene for the discussion by considering i) What the African Evaluation Challenges Are (Zenda Ofir), ii) The Trends Shaping M&E in the Developing World (Robert Picciotto), and iii) The African Mosaic and Global Interactions: The Multiple Roles of and Approaches to Evaluation (Michael Patton & Donna Mertens). The last presentations explained, among other things, the theoretical underpinnings of evaluation as it is practiced in the world today.

The next session briefly touched on some of the current evaluation methodologies used internationally in order to highlight the variety of methods that exist. It also stimulated debate over the controversial initiative on impact evaluation launched by the Center for Global Development in Washington. The discussion then moved to consider some of the international approaches that are currently useful or likely to become prominent in finding evidence about development in Africa (Jim Rugh, Bill Savedoff, Rob van den Berg, Fred Carden, Nancy MacPherson & Ross Conner).

The final session aimed to consider some possibilities for developing an evaluation culture rooted in Africa (Bagele Chilisa). In this session, examples were given of how African culture lends itself to evaluation, as well as examples demonstrating that currently used evaluation methodologies could be enriched by considering an African world view.

Key issues emerging from the presentations and discussion formed the basis for the motions presented below:

  • Currently much of the evaluation practice in Africa is based on external values and contexts, is donor driven, and its accountability mechanisms tend to be directed towards recipients of aid rather than both recipients and providers of aid.
  • For evaluation to have a greater contribution to development in Africa it needs to address challenges including those related to country ownership; the macro-micro disconnect; attribution; ethics and values; and power-relations.
  • A variety of methods and approaches are available and valuable in helping to frame our questions and methods of collecting evidence. However, we first need to re-examine our own preconceived assumptions, underpinning values, paradigms (e.g. transformative vs. pragmatic), what is acknowledged as being evidence, and by whom, before we can select any particular methodology/approach.

The lively discussion that ensued led towards the appointment of a small group of African evaluators to note down suggested actions that AfrEA could spearhead in order to fill the gap related to Africa-Rooted and Africa-Led M&E.

The stream acknowledges and extends its gratitude to the presenters for contributing their time to share their experiences and wealth of knowledge. Also, many thanks to NORAD for its contribution to the stream; and the generous offer to support an evaluation that may be used as a test case for an African-rooted approach – an important opportunity to contribute to evaluation in Africa.

In particular, the stream also extends much gratitude to Zenda Ofir and Dr. Sulley Gariba for their enormous effort and dedication to ensure that AfrEA had the opportunity to discuss this important topic with the support of highly skilled and knowledgeable evaluation professionals.


Motions

In order for evaluation to contribute more meaningfully to development in Africa, there is a need to re-examine the paradigms that guide evaluation practice on the continent. Africa-rooted and Africa-led M&E requires ensuring that African values and ways of constructing knowledge are considered valid. This, in turn, implies that:

§ African evaluation standards and practices should be based on African values & world views

§ The existing body of knowledge on African values & worldviews should be central to guiding and shaping evaluation in Africa

§ There is a need to foster and develop the intellectual leadership and capacity within Africa and ensure that it plays a greater role in guiding and developing evaluation theories and practices.

We therefore recommend the following for consideration by AfrEA:

o AfrEA guides and supports the development of African guidelines to operationalize the African evaluation standards and, in doing so, ensures that both the standards and operational guidelines are based on the existing body of knowledge on African values & worldviews

o AfrEA works with its networks to support and develop institutions, such as universities, to enable them to establish evaluation as a profession and meta-discipline within Africa

o AfrEA identifies mechanisms in which African evaluation practitioners can be mentored and supported by experienced African evaluation professionals

o AfrEA engages with funding agencies to explore opportunities for developing and adopting evaluation methodologies and practices that are based on African values and worldviews and advocate for their inclusion in future evaluations

o AfrEA encourages and supports knowledge generated from evaluation practice within Africa to be published and profiled in scholarly publications. This may include:

§ Supporting the inclusion of peer reviewed publications on African evaluation in international journals on evaluation (for example, the publication of a special issue on African evaluation)

§ The development of scholarly publications specifically related to evaluation theories and practices in Africa (e.g. a journal of the AfrEA)

Contributors

§ Benita van Wyk – South Africa

§ Bagele Chilisa – Botswana

§ Abigail Abandoh-Sam – Ghana

§ Albert Eneas Gakusi – AfDB

§ Ngegne Mbao – Senegal

§ Mine Pabari - Kenya

Wednesday, October 18, 2006

My Impressions: UKES / EES Conference 2006

The first joint UKES (United Kingdom Evaluation Society) and EES (European Evaluation Society) evaluation conference was held at the beginning of October in London. It was attended by approximately 550 participants from over 50 countries, including a number of the prominent thinkers in M&E from North America. Approximately 15 South Africans attended the conference and approximately 300 papers were presented in the 6 streams of the conference. The official conference website is at: http://www.profbriefings.co.uk/EISCC2006/

Although it is impossible to summarise even a representative selection of what was said at the conference, I was particularly struck by discussions around the following:

How North-South and West-East evaluation relationships can be improved.
A panel discussion was held on this topic in which representatives from the UKES, IOCE (International Organisation for Cooperation in Evaluation) and IDEAS (International Development Evaluation Association) gave some input, followed by a vigorous discussion about what should be done to improve relationships. The international organizations used this as an opportunity to find out what could be done in terms of capacity building, advocacy, sharing of experiences, representation in major evaluation dialogues (e.g. the Paris Declaration on Aid Effectiveness[1]), etc. Like the South African association, these associations also run on the resources made available by volunteers, so the scope of activities that can be started is limited. The need for finding high-yield, quick gains was explored.
Filling the empty chairs around the evaluation table
Elliott Stern (past president of UKES / EES and editor of “Evaluation”) made the point that many evaluations are not done by people who typically identify as evaluators – think specifically of economists. Not having them represented when we talk about evaluation and how it should be improved means that they miss out on the current dialogue, and we don’t get an opportunity to learn from their perspectives.


Importance of developing a programme theory regarding evaluations
When we evaluate programmes and policies we recognize that clarifying the programme theory can help to clarify what exactly we expect to happen. One of the biggest challenges in the evaluation field is making sure that evaluations are used in decision-making processes. Developing a programme theory about evaluations themselves can help us clarify what actions are required to ensure that change happens after the evaluation is completed. When we think of evaluation in this way, it is emphasized once again that merely delivering and presenting a report cannot reasonably be expected to affect the way in which a programme is implemented. More research is required to establish exactly under which conditions a set of specific activities will lead to evaluation use.


Research about Evaluation is required so that we can have better theories on Evaluation
Stewart Donaldson & Christina Christie (Claremont Graduate University) and a couple of other speakers were quite adamant that if “Evaluation” wants to be taken seriously as a field, we need more research to develop theories that go beyond only telling us how to do evaluations. Internationally evaluation is being recognized as a meta-discipline and a profession, but as a field of science we really don’t have a lot of research about evaluation. Our theories are more likely to tell us how to do evaluations and what tool sets to use, but we have very little objective evidence that one way of doing evaluations is better or produces better results than another.


Theory of Evaluation might develop in some interesting ways
There was also talk about some likely future advances in evaluation theory. Melvin Mark (current AEA president) said that looking for one comprehensive theory of evaluation is probably not going to deliver results. Different theories are useful under different circumstances. What we should aim for are more contingency theories that tell us when to do what. Current examples of contingency theories include Patton’s Utilization-Focused Evaluation approach – the intended use by intended users determines what kind of evaluation will be done. Theories that take into account the phase of implementation are also critically important. More theories on specific content areas are likely to be very useful, e.g. evaluation influence, stakeholder engagement etc. Bill Trochim (president-elect of AEA) presented a paper on Evolutionary Evaluation that was quite thought-provoking and built on the thinking of Donald Campbell and others.

Evaluation for Accountability
Baroness Onora O’Neill (President of the British Academy) explored what accountability through evaluation means by expanding on the question “Who should be held accountable and by whom?” She indicated that evaluation is but one of a range of activities required to keep governments and their agencies accountable, yet a very critical one. The issue of evaluation for accountability was also echoed by other speakers like Sulley Gariba (from Ghana, previous president of IDEAS), with vivid descriptions of how the African Peer Review Mechanism could be seen as one such type of evaluation that delivers results when communicated to the critical audience.


Evidence Based Policy Making / Programming
Since the European Commission and the OECD were well represented, many of the presentations focused on or touched on topics relating to evidence-based policy making. The DAC principles of evaluation for development assistance (namely relevance, effectiveness, efficiency, impact, sustainability) seem to be quite entrenched in evaluation systems, but innovative and useful ways of measuring impact-level results were explored by quite a few speakers.

Interesting Resources
Some interesting resources that I learned of during the conference include:
www.evalsed.com – an online resource of the European Union for the evaluation of socio-economic development.
The SAGE Handbook of Evaluation, edited by Ian Shaw, Jennifer Greene and Melvin Mark. More information at: http://www.sagepub.com/booksProdDesc.nav?prodId=Book217583
The Encyclopedia of Evaluation, edited by Sandra Mathison. More info at: http://www.sagepub.com/booksProdDesc.nav?prodId=Book220777
Other guidelines for good practice: http://www.evaluation.org.uk/Pub_library/Good_Practice.htm
[1] For more info about the Paris Declaration look at http://www.oecd.org/document/18/0,2340,en_2649_3236398_35401554_1_1_1_1,00.html