Friday, November 28, 2008

GDE Colloquium on their M&E Framework

Recently the Gauteng Department of Education held a colloquium on their Monitoring and Evaluation Framework. As one of the speakers, I reflected on the fact that M&E frameworks often erroneously assume that the evaluand is a stable system. I argued that there are multiple triggers that lead to the evolution of the evaluand, and that this has implications for M&E.

Triggers for evolving systems, organizations, policies, programmes & interventions
(Morell, J. A. (2005). Why are there unintended consequences of program action, and what are the implications for doing evaluation? American Journal of Evaluation, 26(4), 444-463.)
• Unforeseen consequences
– Weak application of analytical frameworks, failure to capture experience of past research
• Unforeseeable consequences
– Changing environments
• Overlooked consequences
– Known consequences are ignored for practical, political or ideological reasons
• Learning & Adapting
– As implementation happens, the learning is used to adapt
• Selection Effects
– If different approaches are tried, those that are successful are likely to be replicated and those that are unsuccessful are unlikely to be replicated.

Implications for M&E
• M&E needs to work in aid of evolution (not just change)
– The M&E framework should be key in allowing the GDE to adapt, learn, respond to changes
• Not just by ensuring that the right information is tracked, but to ensure that the right people have access to it at the right time.
• M&E needs to respond to evolution
– As the Evaluand changes, some indicators will be incorrectly focused or missing, so the framework will have to be updated periodically
– It might be necessary to implement measures that go beyond checking “whether the Dept makes progress towards reaching its goals and objectives”
• Diversity of input into the design of the framework
• Using appropriate evaluation methods
– Consider expected impacts of change in planning for roll-out of M&E

Critical analysis of an M&E framework
• Does it ask the right questions in order for us to judge the "merit, worth or value" of that which we are monitoring / evaluating?
• Does it allow for credible & reliable evidence to be used?

Types of Questions to ask
(Chelimsky, E. (2007). Factors Influencing the Choice of Methods in Federal Evaluation Practice. New Directions for Evaluation, 113, 13-33.)

• Descriptive questions: Questions that focus on determining how many, what proportion, etc., for the purposes of describing some aspect of the education context (e.g. if you were interested in finding out what the drop-out rate for no-fee schools is)
• Normative questions: Questions that compare outcomes of an intervention (such as the implementation of new policies) against a pre-existing standard or norm. Norm referenced questions can use various standards to compare against:
– Previous measures for the group that’s exposed to the policy intervention (e.g. if you compare the current drop-out rate to the previous drop-out rate for a specific set of schools affected by the policy)
– A widely negotiated and accepted standard (e.g. if it was agreed that a 5% drop-out rate is acceptable, you can check whether the schools currently meet that standard or not)
– Measure from another similar group (e.g. if you compare the drop-out rate for different types of schools)
• Attributive questions: Questions that attempt to attribute outcomes directly to an intervention like a policy change or a programme (Is the change in the drop-out rate in no-fee schools due to the implementation of the no-fee school policy?)
• Analytic-Interpretive questions that build our knowledge base: Questions that ask about the state of the debate on issues important for decision making about specific policies (e.g. What is known about the relationship between the drop-out rate and the per-learner education spend of the Department of Education?)
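The first three question types above can be illustrated with a few lines of arithmetic. This is a minimal sketch using entirely made-up enrolment figures; the function name and all numbers are assumptions for illustration, not GDE data.

```python
# Illustrative sketch (hypothetical data): a descriptive question answered by
# computing a drop-out rate, followed by two kinds of normative comparison.

def drop_out_rate(enrolled: int, dropped_out: int) -> float:
    """Drop-out rate as a percentage of enrolled learners."""
    return 100.0 * dropped_out / enrolled

# Descriptive: what is the current drop-out rate in no-fee schools?
no_fee_rate = drop_out_rate(enrolled=12_000, dropped_out=720)      # 6.0%

# Normative (accepted standard): is it within an agreed 5% threshold?
within_standard = no_fee_rate <= 5.0                               # False

# Normative (similar group): how does it compare with fee-paying schools?
fee_paying_rate = drop_out_rate(enrolled=9_000, dropped_out=360)   # 4.0%
difference = no_fee_rate - fee_paying_rate                         # 2.0 points
```

An attributive question, by contrast, cannot be answered by this arithmetic alone: it needs a design that rules out other explanations for the 2-point gap.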

Questions at different Time Periods
• Prior to implementation:
– Q1.1: What does available baseline data tell us about the current situation in the entities that will be affected? (Descriptive)
– Q1.2: Given what we know about existing circumstances and the changes proposed when the new policy / programme is implemented, what are the impacts / effects likely to be? (Analytic-Interpretive, Normative)
• Evidence-based policy making requires some sort of ex-ante assessment of the likely changes. This assessment can then be referred to again when the final impact evaluation is conducted.

• Directly after implementation, and continued until full compliance is reached:
– Q2.1: To what degree is there compliance with the policy / fidelity to the programme design? (Descriptive)
– Q2.2: What are the short term positive and negative effects of the policy change / programme? (Descriptive, Normative and Attributive)
– Q2.3: How can the implementation and compliance be improved? (Analytic-Interpretive)
– Q2.4: How can the negative short term effects be mitigated? (Analytic-Interpretive)
– Q2.5: How can the positive short term effects be bolstered? (Analytic-Interpretive)
• This is important because no impact assessment can be done if the policy / programme has not been implemented properly. If there are significant barriers to the implementation of the policy / programme, either an intervention to remove these barriers is necessary or the policy / programme should be changed.

• After compliance has been reached and the longer-term effects of the policy can be discerned:
– Q3.1: To what degree did the policy achieve what it set out to do? (Normative)
– Q3.2: What have been the longer-term and systemic effects attributable to the policy change? (Descriptive, Normative, Attributive)
– Q3.3: How can the implementation be improved / negative effects be mitigated / positive effects be bolstered? (Analytic-Interpretive)
• This is important to demonstrate that the policy change was effective in addressing the underlying issues that initially required it, and to check that no unintended distortions of the policy crept in during implementation.

• Designs appropriate to Descriptive questions
• Designs appropriate to Analytic-Interpretive questions
• Designs appropriate to Normative questions
• Designs appropriate to Attributive questions

Principles for Evidence Collection
• Independence: You cannot ask the very person whose compliance you are interested in whether they are complying. The incentive to provide false information might be very high.
• Relevance: Appropriate questions must be asked of the right persons.
• Consider Systemic Impacts: Look broader than just the cases directly affected.
• Appropriate Samples need to be selected: The sampling approach and sample size are both determined by the question that needs to be answered.
• Appropriate methods need to be selected: Although certain designs are likely to result in easy answers, they might not be appropriate.
• Implementation Phase: Take into account the level of implementation when you do the assessment. It is well known that an implementation dip might occur after initial implementation. Do not attempt an impact assessment before the level of implementation has stabilised in the system.
• Fidelity: Take into account the fidelity of implementation, i.e. the degree to which the policy was implemented as intended.
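The point that the sample size follows from the question can be made concrete with the standard formula for estimating a proportion, n = z²p(1-p)/e². This is a sketch under stated assumptions (simple random sampling, a guessed prior rate); the function name and figures are illustrative, not from the GDE framework.

```python
# Illustrative sketch: minimum simple-random-sample size to estimate a
# proportion within a given margin of error. The figures are hypothetical.
import math

def sample_size_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    """Sample size to estimate a proportion assumed near p, to within
    +/- margin, at the confidence level implied by z (1.96 ~ 95%)."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

# A precise question (a ~6% drop-out rate, to within 1 percentage point)
# needs a far larger sample than a vague one (to within 5 points):
precise = sample_size_for_proportion(p=0.06, margin=0.01)
vague = sample_size_for_proportion(p=0.06, margin=0.05)
```

The same evaluand, the same indicator, but two questions of different precision can differ in required sample size by more than an order of magnitude, which is exactly why sampling decisions cannot precede the evaluation questions.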

Thursday, May 08, 2008

Metrics for Social Entrepreneurs

I found this website aimed at social entrepreneurs quite useful.

It lists some approaches for measurement:
"Social entrepreneurs now have a smorgasbord of measurement methodologies to choose from in addition to developing project-specific metrics (i.e., families served, reduction in arrests, units built, jobs created). They include:

• Balanced Scorecard Methodology (New Profit Inc.)
• The Acumen-McKinsey Scorecard (Acumen Fund)
• Social Return Assessment Scorecard (Pacific Community Ventures)
• AtKisson Compass Assessment for Investors (AtKisson)
• Poverty and Social Impact Analysis (World Bank)
• OASIS: Ongoing Assessment of Social Impacts (REDF)"

They also extract five principles of metrics that are often mentioned in discussions:

1. Do have a set of success metrics
Funders and investors want to know that you have a way of measuring your success.
2. Tailor your metrics to your mission
If you are running a non-profit, then focus on social impact; if you are running a for-profit, you need the third bottom line - ROI.
3. Measure what you can in real time, but understand that social change is often measurable only over a longer period.
Try to find polling and survey organizations that are measuring the long-term trends and use their free published data.
4. Learn about established methodologies for social measurements
Applying them will save you work, get better results, and signal investors that you are serious about metrics.
5. Look at the cost-benefit of your metrics
Determine what percentage of your operations should be reasonably dedicated to success measurement and set it aside in your proposal and operating budgets.

Personally, I think that the issue of return on investment is crucial for any social entrepreneur. You need to be able to prove to your donors that they are getting value for money - too many times, teachers are trained at the cost of training an astronaut.
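The "astronaut" point is easy to operationalise as a crude cost-per-outcome check. This is a minimal sketch with hypothetical figures; the function name and both programmes are invented for illustration only.

```python
# Illustrative sketch (hypothetical figures): a crude value-for-money
# comparison of two training programmes with the same budget.

def cost_per_outcome(total_cost: float, outcomes: int) -> float:
    """Cost per unit outcome, e.g. rand spent per teacher trained."""
    return total_cost / outcomes

programme_a = cost_per_outcome(total_cost=500_000, outcomes=250)  # 2,000 each
programme_b = cost_per_outcome(total_cost=500_000, outcomes=50)   # 10,000 each
```

A ratio like this says nothing about the quality of the training, so it is a screening question for donors, not a substitute for an outcome evaluation.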

Thursday, February 14, 2008

The Global Classroom - Direct from Claremont Graduate University

This very useful resource was brought to my attention via the AEA and SAMEA listservs.

The panel discussions from Claremont Graduate University's recent "What Works?" workshops are available for viewing online as part of our video library. Evaluators, foundation directors, academics, and leaders of successful for-profits and non-profits came together to discuss what really works when tackling important social problems.

To view this footage, visit us at:

To view many other talks on evaluation and topics in applied psychology, visit our full video library at

Wednesday, February 06, 2008

Taxonomy of Evaluation

I found the Evaluation Webring's Taxonomy of Types, Approaches and Fields of Evaluation quite useful.

Category 1: Types of evaluation
Internal evaluation or self-evaluation
An evaluation carried out by members of the organisation(s) who are associated with the programme, intervention or activity to be evaluated.

Ex-ante evaluation or impact assessment
An assessment which seeks to predict the likelihood of achieving the intended results of a programme or intervention or to forecast its unintended effects. This is conducted before the programme or intervention is formally adopted or started. Common examples of ex-ante evaluation are environmental and/or social impact assessments and feasibility studies.

Mid-term or interim evaluation
An evaluation conducted half-way through the lifecycle of the programme or intervention to be evaluated.

Monitoring
An ongoing activity aimed at assessing whether the programme or intervention is implemented in a way that is consistent with its design and plan and is achieving its intended results.

Ex-post or summative evaluation
An evaluation which usually is conducted some time after the programme or intervention has been completed or fully implemented. Generally its purpose is to study how well the intervention served its aims, and to draw lessons for similar interventions in the future.

Meta-evaluation
Two processes are often referred to as meta-evaluation: (1) the assessment by a third evaluator of evaluation reports prepared by other evaluators; and (2) the assessment of the performance of systems and processes of evaluation.

Formative evaluation
An evaluation which is designed to provide some early insights into a programme or intervention to inform management and staff about the components that are working and those that need to be changed in order to achieve the intended objectives.

Category 2: Evaluative approaches
Outcome evaluation
An evaluation which is focused on the change brought about by the programme or intervention to be evaluated, or its results regarding the intended beneficiaries.

Impact evaluation
An evaluation that focuses on the broad, longer-term impact or effects, whether intended or unintended, of a programme or intervention. It is usually done some time after the programme or intervention has been completed.

Performance evaluation
An analysis undertaken at a given point in time to compare actual performance with that planned in terms of both resource utilization and achievement of objectives. This is generally used to redirect efforts and resources and to redesign structures.

Participatory evaluation
An evaluation that actively involves all or selected stakeholders in the evaluation process. Different approaches involve varying degrees of participation, inclusion, capacity-building, ownership, etc.

Empowerment evaluation
An approach that aims to improve programs through using specific tools for assessing the planning, implementation and self-evaluation of programs, and by incorporating evaluation into a program or organization’s planning and management. It involves a high level of participation by stakeholders in the evaluation process and is guided by ten key principles.
Collaborative evaluation
An evaluation which aims for a significant degree of collaboration or cooperation between evaluators and stakeholders.

Utilization-focused evaluation
A process that assists the primary intended users of an evaluation to select the most appropriate content, model, methods, and theory for the evaluation, focusing on their intended use of the evaluation. Use refers to how people apply evaluation findings and experience the evaluation process.

Feminist evaluation
An evaluation that commonly involves adapting or redesigning relevant evaluation theories and methodologies so that they are compatible with feminist theories and methodologies. Feminist evaluations aim to be inclusive and empowering for women in particular.

Theory-based evaluation
An evaluation based on the theories of change that underlie a given programme or intervention. Its major aim is to examine the extent to which these theories hold and to validate their underlying assumptions.

Most Significant Change
A form of participatory monitoring and evaluation which involves the collection and systematic review and analysis of change stories by panels of designated stakeholders or staff. It is mainly used to assess intermediate program impacts and outcomes.

Category 3: Fields of evaluation

Programme or project evaluation
The evaluation of a programme or project.

Policy evaluation
The evaluation of policies and procedures.

Evaluation of legislation
The evaluation of a piece of legislation.

Evaluation of technical assistance
The evaluation of technical assistance provided by international, bilateral or multilateral donors.

Organisation or institutional evaluation
An evaluation of an organization’s or other institution’s capacity for innovation and change. It involves examining its decision-making processes and organisational structures.

Proposal assessment
The assessment of bids presented by tenderers following a specific call for tenders/bids.
Financial audit
The scrutiny of the accounts of an organization or other institution against a set of standards.

Personnel evaluation
A systematic method of evaluating an employee’s or staff member’s performance. This involves tracking, evaluating and providing feedback in relation to specific predetermined standards which are consistent with the organization’s overall

Tuesday, February 05, 2008

Surveys - Should we believe them?

There is a lot written about survey methodology as a tool in evaluation, but despite the easy and neat stats that surveys deliver, it seems one should regard them with a degree of skepticism.

Two stories to demonstrate the point:

According to a speaker I heard on 702 talk radio earlier this week, Volkskas bank still receives votes as one of the best brands in South Africa (in the annual Markinor survey), despite the fact that it ceased to exist more than just a couple of years ago. At least in this survey you can identify problematic answers, because respondents had the option of giving an open-ended answer. I shudder to think what people actually do when they get one of those tick-box multiple-choice surveys...

In the next example, it is just so clear that one should question even the most basic assumptions people make when they complete a survey.
LONDON (AFP) - Britons are losing their grip on reality, according to a poll out Monday which showed that nearly a quarter think Winston Churchill was a myth while the majority reckon Sherlock Holmes was real. The survey found that 47 percent thought the 12th century English king Richard the Lionheart was a myth. And 23 percent thought World War II prime minister Churchill was made up. The same percentage thought Crimean War nurse Florence Nightingale did not actually exist. Three percent thought Charles Dickens, one of Britain's most famous writers, is a work of fiction himself. Indian political leader Mahatma Gandhi and Battle of Waterloo victor the Duke of Wellington also appeared in the top 10 of people thought to be myths. Meanwhile, 58 percent thought Sir Arthur Conan Doyle's fictional detective Holmes actually existed; 33 percent thought the same of W. E. Johns' fictional pilot and adventurer Biggles.

Wednesday, January 09, 2008

competencies/capabilities of an evaluator

Q: Do you know of a document that articulates competencies/capabilities of an evaluator? If you do please send me a reference or copy.

A: Lots of work has been done on this topic by various Evaluation Associations across the world. Some useful references:

King, Jean, Stevahn, Laurie, Ghere, Gail, & Minnema, Jane (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22, 229-247.

Mertens, Donna M. (1994). Training evaluators: Unique skills and knowledge. New Directions for Program Evaluation, 62, 17-27.

Treasury Board of Canada, Competency Profile for Federal Public Service Evaluation Professionals

Here is the list:

Essential Competencies for Program Evaluators (ECPE)

(Stevahn, King, Ghere, & Minnema, American Journal of Evaluation, March 2005)


1.0 Professional Practice

1.1 Applies professional evaluation standards

1.2 Acts ethically and strives for integrity and honesty in conducting evaluations

1.3 Conveys personal evaluation approaches and skills to potential clients

1.4 Respects clients, respondents, program participants, and other stakeholders

1.5 Considers the general and public welfare in evaluation practice

1.6 Contributes to the knowledge base of evaluation

2.0 Systematic Inquiry

2.1 Understands the knowledge base of evaluation (terms, concepts, theories, assumptions)

2.2 Knowledgeable about quantitative methods

2.3 Knowledgeable about qualitative methods

2.4 Knowledgeable about mixed methods

2.5 Conducts literature reviews

2.6 Specifies program theory

2.7 Frames evaluation questions

2.8 Develops evaluation designs

2.9 Identifies data sources

2.10 Collects data

2.11 Assesses validity of data

2.12 Assesses reliability of data

2.13 Analyzes data

2.14 Interprets data

2.15 Makes judgments

2.16 Develops recommendations

2.17 Provides rationales for decisions throughout the evaluation

2.18 Reports evaluation procedures and results

2.19 Notes strengths and limitations of the evaluation

2.20 Conducts meta-evaluations

3.0 Situational Analysis

3.1 Describes the program

3.2 Determines program evaluability

3.3 Identifies the interests of relevant stakeholders

3.4 Serves the information needs of intended users

3.5 Addresses conflicts

3.6 Examines the organizational context of the evaluation

3.7 Analyzes the political considerations relevant to the evaluation

3.8 Attends to issues of evaluation use

3.9 Attends to issues of organizational change

3.10 Respects the uniqueness of the evaluation site and client

3.11 Remains open to input from others

3.12 Modifies the study as needed

4.0 Project Management

4.1 Responds to requests for proposals

4.2 Negotiates with clients before the evaluation begins

4.3 Writes formal agreements

4.4 Communicates with clients throughout the evaluation process

4.5 Budgets an evaluation

4.6 Justifies cost given information needs

4.7 Identifies needed resources for evaluation, such as information, expertise, personnel, instruments

4.8 Uses appropriate technology

4.9 Supervises others involved in conducting the evaluation

4.10 Trains others involved in conducting the evaluation

4.11 Conducts the evaluation in a nondisruptive manner

4.12 Presents work in a timely manner

5.0 Reflective Practice

5.1 Aware of self as an evaluator (knowledge, skills, dispositions)

5.2 Reflects on personal evaluation practice (competencies and areas for growth)

5.3 Pursues professional development in evaluation

5.4 Pursues professional development in relevant content areas

5.5 Builds professional relationships to enhance evaluation practice

6.0 Interpersonal Competence

6.1 Uses written communication skills

6.2 Uses verbal/listening communication skills

6.3 Uses negotiation skills

6.4 Uses conflict resolution skills

6.5 Facilitates constructive interpersonal interaction (teamwork, group facilitation, processing)

6.6 Demonstrates cross-cultural competence