Thursday, February 14, 2008

The Global Classroom - Direct from Claremont Graduate University

This very useful resource was brought to my attention via the AEA and SAMEA listservs.

The panel discussions from Claremont Graduate University's recent "What Works?" workshops are available for viewing online as part of our video library. Evaluators, foundation directors, academics, and leaders of successful for-profits and non-profits came together to discuss what really works when tackling important social problems.

To view this footage, visit us at:

To view many other talks on evaluation and topics in applied psychology, visit our full video library at

Wednesday, February 06, 2008

Taxonomy of Evaluation

I found the Evaluation Webring's Taxonomy of Types, Approaches and Fields of Evaluation quite useful.

Category 1: Types of evaluation
Internal evaluation or self-evaluation
An evaluation carried out by members of the organisation(s) who are associated with the programme, intervention or activity to be evaluated.

Ex-ante evaluation or impact assessment
An assessment which seeks to predict the likelihood of achieving the intended results of a programme or intervention or to forecast its unintended effects. This is conducted before the programme or intervention is formally adopted or started. Common examples of ex-ante evaluation are environmental and/or social impact assessments and feasibility studies.

Mid-term or interim evaluation
An evaluation conducted half-way through the lifecycle of the programme or intervention to be evaluated.

Monitoring
An ongoing activity aimed at assessing whether the programme or intervention is implemented in a way that is consistent with its design and plan and is achieving its intended results.

Ex-post or summative evaluation
An evaluation which is usually conducted some time after the programme or intervention has been completed or fully implemented. Generally its purpose is to study how well the intervention served its aims, and to draw lessons for similar interventions in the future.

Meta-evaluation
Two processes are often referred to as meta-evaluation: (1) the assessment by a third evaluator of evaluation reports prepared by other evaluators; and (2) the assessment of the performance of systems and processes of evaluation.

Formative evaluation
An evaluation which is designed to provide some early insights into a programme or intervention to inform management and staff about the components that are working and those that need to be changed in order to achieve the intended objectives.

Category 2: Evaluative approaches
Outcome evaluation
An evaluation which is focused on the change brought about by the programme or intervention to be evaluated or its results regarding the intended beneficiaries.
Impact evaluation
An evaluation that focuses on the broad, longer-term impact or effects, whether intended or unintended, of a programme or intervention. It is usually done some time after the programme or intervention has been completed.

Performance evaluation
An analysis undertaken at a given point in time to compare actual performance with that planned in terms of both resource utilization and achievement of objectives. This is generally used to redirect efforts and resources and to redesign structures.

Participatory evaluation
An evaluation that actively involves all or selected stakeholders in the evaluation process. Different approaches involve varying degrees of participation, inclusion, capacity-building, ownership, etc.

Empowerment evaluation
An approach that aims to improve programs through using specific tools for assessing the planning, implementation and self-evaluation of programs, and by incorporating evaluation into a program or organization’s planning and management. It involves a high level of participation by stakeholders in the evaluation process and is guided by ten key principles.
Collaborative evaluation
An evaluation which aims for a significant degree of collaboration or cooperation between evaluators and stakeholders.

Utilization-focused evaluation
A process that assists the primary intended users of an evaluation to select the most appropriate content, model, methods, and theory for the evaluation, focusing on their intended use of the evaluation. Use refers to how people apply evaluation findings and experience the evaluation process.

Feminist evaluation
An evaluation that commonly involves adapting or redesigning relevant evaluation theories and methodologies so that they are compatible with feminist theories and methodologies. Feminist evaluations aim to be inclusive and empowering for women in particular.

Theory-based evaluation
An evaluation based on the theories of change that underlie a given programme or intervention. Its major aim is to examine the extent to which these theories hold and to validate their underlying assumptions.

Most Significant Change
A form of participatory monitoring and evaluation which involves the collection and systematic review and analysis of change stories by panels of designated stakeholders or staff. It is mainly used to assess intermediate program impacts and outcomes.

Category 3: Fields of evaluation

Programme or project evaluation
The evaluation of a programme or project.

Policy evaluation
The evaluation of policies and procedures.

Evaluation of legislation
The evaluation of a piece of legislation.

Evaluation of technical assistance
The evaluation of technical assistance provided by international, bilateral or multilateral donors.

Organisation or institutional evaluation
An evaluation of an organization’s or other institution’s capacity for innovation and change. It involves examining its decision-making processes and organisational structures.

Proposal assessment
The assessment of bids presented by tenderers following a specific call for tenders/bids.
Financial audit The scrutiny of accounts of an organization or other institution against a set of standards.

Personnel evaluation
A systematic method of evaluating an employee's or staff member's performance. This involves tracking, evaluating and providing feedback in relation to specific predetermined standards which are consistent with the organization's overall objectives.

Tuesday, February 05, 2008

Surveys - Should we believe them?

Much has been written about surveys as a tool in evaluation, but despite the easy and neat statistics they deliver, it seems one should regard them with a degree of skepticism.

Two stories to demonstrate the point:

According to a speaker I heard on 702 talk radio earlier this week, Volkskas bank still receives votes as one of the best brands in South Africa (in the annual Markinor survey), despite the fact that it ceased to exist more than just a couple of years ago. At least in this survey, problematic answers can be identified because respondents had the option of giving an open-ended answer. I shudder to think what people actually do when they get one of those tick-box multiple choice surveys...

The next example makes it very clear that one should question even the most basic assumptions people make when they complete a survey.
LONDON (AFP) - Britons are losing their grip on reality, according to a poll out Monday which showed that nearly a quarter think Winston Churchill was a myth while the majority reckon Sherlock Holmes was real. The survey found that 47 percent thought the 12th century English king Richard the Lionheart was a myth. And 23 percent thought World War II prime minister Churchill was made up. The same percentage thought Crimean War nurse Florence Nightingale did not actually exist. Three percent thought Charles Dickens, one of Britain's most famous writers, is a work of fiction himself. Indian political leader Mahatma Gandhi and Battle of Waterloo victor the Duke of Wellington also appeared in the top 10 of people thought to be myths. Meanwhile, 58 percent thought Sir Arthur Conan Doyle's fictional detective Holmes actually existed; 33 percent thought the same of W. E. Johns' fictional pilot and adventurer Biggles.