Recently the Gauteng Department of Education held a colloquium on their Monitoring and Evaluation Framework. As one of the speakers, I reflected on the fact that M&E frameworks often erroneously assume that the evaluand is a stable system. I argued that there are multiple triggers that lead to the evolution of the evaluand, and that this has implications for M&E.
Triggers for evolving systems, organizations, policies, programmes & interventions
(Morell, J.A. (2005). Why are there unintended consequences of program action, and what are the implications for doing evaluation? American Journal of Evaluation, 26(4), 444–463.)
• Unforeseen consequences
– Weak application of analytical frameworks, failure to capture experience of past research
• Unforeseeable consequences
– Changing environments
• Overlooked consequences
– Known consequences are ignored for practical, political or ideological reasons
• Learning & Adapting
– As implementation proceeds, what is learned is used to adapt the intervention
• Selection Effects
– If different approaches are tried, those that are successful are likely to be replicated and those that are unsuccessful are unlikely to be replicated.
Implications for M&E
• M&E needs to work in aid of evolution (not just change)
– The M&E framework should be key in allowing the GDE to adapt, learn and respond to changes
• Not just by ensuring that the right information is tracked, but also by ensuring that the right people have access to it at the right time.
• M&E needs to respond to evolution
– As the evaluand changes, some indicators will become incorrectly focused or will be missing, so the framework will have to be updated periodically
– It might be necessary to implement measures that go beyond checking “whether the Dept makes progress towards reaching its goals and objectives”
• Diversity of input into the design of the framework
• Using appropriate evaluation methods
– Consider expected impacts of change in planning for roll-out of M&E
Critical analysis of an M&E framework
• Does it ask the right questions in order for us to judge the “merit, worth or value” of that which we are monitoring / evaluating?
• Does it allow for credible & reliable evidence to be used?
Types of Questions to ask
(Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. New Directions for Evaluation, 113, 13–33.)
• Descriptive questions: Questions that focus on determining how many, what proportion, etc. for the purpose of describing some aspect of the education context (e.g. if you were interested in finding out what the drop-out rate for no-fee schools is; a worked sketch of a descriptive and a normative question follows this list)
• Normative questions: Questions that compare the outcomes of an intervention (such as the implementation of new policies) against a pre-existing standard or norm. Norm-referenced questions can use various standards to compare against:
– Previous measures for the group that’s exposed to the policy intervention (e.g. if you compare the current drop-out rate to the previous drop-out rate for a specific set of schools affected by the policy)
– A widely negotiated and accepted standard (e.g. if it was accepted that a 5% drop out rate is acceptable, you can check whether the schools currently have that drop-out rate or not)
– Measure from another similar group (e.g. if you compare the drop-out rate for different types of schools)
• Attributive questions: Questions that attempt to attribute outcomes directly to an intervention like a policy change or a programme (e.g. Is the change in the drop-out rate in no-fee schools due to the implementation of the no-fee school policy?)
• Analytic-Interpretive questions that build our knowledge base: Questions that ask about the state of the debate on issues important for decision-making about specific policies (e.g. What is known about the relationship between the drop-out rate and the per-learner education spend of the Department of Education?)
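To make the first two question types concrete, here is a minimal sketch in Python (using pandas) that answers a descriptive question about the drop-out rate in no-fee schools and then turns it into a normative comparison. The figures, column names and the 5% norm are illustrative assumptions, not Departmental data.

import pandas as pd

# Hypothetical school-level figures; column names and values are illustrative only.
schools = pd.DataFrame({
    "school_id":   [1, 2, 3, 4],
    "school_type": ["no-fee", "no-fee", "fee-paying", "fee-paying"],
    "enrolled":    [800, 650, 900, 700],
    "dropped_out": [56, 39, 27, 21],
})

# Descriptive question: what is the drop-out rate in no-fee schools?
no_fee = schools[schools["school_type"] == "no-fee"]
no_fee_rate = no_fee["dropped_out"].sum() / no_fee["enrolled"].sum()
print(f"No-fee drop-out rate: {no_fee_rate:.1%}")

# Normative question: compare against an accepted standard (assumed 5% here)
# and against a measure from another, similar group of schools.
NORM = 0.05
fee = schools[schools["school_type"] == "fee-paying"]
fee_rate = fee["dropped_out"].sum() / fee["enrolled"].sum()
print(f"Meets the 5% norm: {no_fee_rate <= NORM}")
print(f"Fee-paying comparison rate: {fee_rate:.1%}")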
Questions at different Time Periods
• Prior to implementation:
– Q1.1: What does available baseline data tell us about the current situation in the entities that will be affected? (Descriptive)
– Q1.2: Given what we know about existing circumstances and the changes proposed when the new policy / programme is implemented, what are the impacts / effects likely to be? (Analytic-Interpretive, Normative)
• Evidence-based policy making requires some sort of ex-ante assessment of the likely changes. This assessment can then be referred to again when the final impact evaluation is conducted.
• Directly after implementation, continuing until full compliance is reached:
– Q2.1: To what degree is there compliance with the policy / fidelity to the programme design? (Descriptive)
– Q2.2: What are the short-term positive and negative effects of the policy change / programme? (Descriptive, Normative and Attributive)
– Q2.3: How can the implementation and compliance be improved? (Analytic-Interpretive)
– Q2.4: How can the negative short-term effects be mitigated? (Analytic-Interpretive)
– Q2.5: How can the positive short-term effects be bolstered? (Analytic-Interpretive)
• This is important because no impact assessment can be done if the policy / programme has not been implemented properly. If there are significant barriers to implementation, an intervention to remove these barriers will be necessary, or the policy / programme should be changed. (A minimal sketch of tracking compliance until implementation stabilises follows this section.)
• After compliance has been reached and the longer-term effects of the policy can be discerned:
– Q3.1: To what degree did the policy achieve what it set out to do? (Normative)
– Q3.2: What have been the longer-term and systemic effects attributable to the policy change? (Descriptive, Normative, Attributive)
– Q3.3: How can the implementation be improved / negative effects be mitigated / positive effects be bolstered? (Analytic-Interpretive)
• This is important to demonstrate that the policy change was effective in addressing the underlying issues that initially required it, and to check that no unintended distortions of the policy have crept into implementation.
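As a minimal sketch of the implementation point above: the compliance question (Q2.1) can be tracked over time, and the impact-oriented questions (Q3.1–Q3.2) only scheduled once compliance has stabilised. The quarterly figures, the 85% threshold and the two-quarter stability rule are all illustrative assumptions.

import pandas as pd

# Hypothetical quarterly monitoring data: share of schools complying with the policy.
compliance = pd.Series(
    [0.35, 0.52, 0.48, 0.71, 0.83, 0.86, 0.87],
    index=pd.period_range("2010Q1", periods=7, freq="Q"),
)

THRESHOLD = 0.85       # illustrative definition of "full compliance"
STABLE_QUARTERS = 2    # how long compliance must stay above the threshold

above = compliance >= THRESHOLD
stable = above.rolling(STABLE_QUARTERS).sum() == STABLE_QUARTERS

if stable.any():
    print("Implementation has stabilised; impact questions can be scheduled from", stable.idxmax())
else:
    print("Compliance has not yet stabilised; keep answering Q2.1-Q2.5.")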
• Designs appropriate to Descriptive questions:
– Case study designs
– Rapid appraisal designs
– Grounded theory designs
• Designs appropriate to Analytic-Interpretive questions:
– Literature reviews
– Mixed-method designs
• Designs appropriate to Normative questions:
– Time-series research designs
• Designs appropriate to Attributive questions:
– Experimental designs
– Quasi-experimental designs (see the sketch after this list)
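Because randomised experiments are seldom feasible in a schooling system, attributive questions often fall to quasi-experimental designs. A minimal difference-in-differences sketch, assuming hypothetical average drop-out rates for no-fee schools and a comparison group before and after the policy change:

# Hypothetical average drop-out rates; all figures are illustrative only.
before = {"no_fee": 0.070, "comparison": 0.060}
after  = {"no_fee": 0.055, "comparison": 0.058}

# Difference-in-differences: the change in the affected group minus the change
# that also occurred in the comparison group (shared trends, other reforms, etc.).
change_no_fee     = after["no_fee"] - before["no_fee"]            # -0.015
change_comparison = after["comparison"] - before["comparison"]    # -0.002
did = change_no_fee - change_comparison                           # -0.013

print(f"Estimated effect attributable to the policy: {did:+.3f}")
# The estimate is only credible if the two groups would have followed parallel
# trends in the absence of the policy, the key quasi-experimental assumption.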
Principles for Evidence Collection
• Independence: You cannot ask the person whose compliance you are interested in whether they are complying; the incentive to provide false information might be very high.
• Relevance: Appropriate questions must be asked of the right persons.
• Consider systemic impacts: look more broadly than just the cases directly affected.
• Appropriate samples need to be selected. The sampling approach and sample size both depend on the question that needs to be answered (a minimal sample-size sketch follows this list).
• Appropriate methods need to be selected. Although certain designs are likely to result in easy answers, they might not be appropriate.
• Implementation Phase: Take into account the level of implementation when you do the assessment. It is well known that an implementation dip can occur after initial implementation. Do not attempt an impact assessment before the level of implementation has stabilised in the system.
• Fidelity: Take into account the fidelity of implementation, i.e. the degree to which the policy was implemented as intended.
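As a minimal sketch of the sampling principle above: the standard formula for estimating a proportion (such as a compliance or drop-out rate) to within a chosen margin of error. The 95% confidence level, 5% margin and the population of 2,200 schools are illustrative assumptions.

import math

def sample_size_for_proportion(p_expected=0.5, margin=0.05, z=1.96, population=None):
    """Number of schools/learners needed to estimate a proportion to within
    +/- margin at roughly 95% confidence (z = 1.96), with an optional
    finite-population correction."""
    n = (z ** 2) * p_expected * (1 - p_expected) / margin ** 2
    if population:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

# e.g. estimating a compliance rate across 2,200 schools (illustrative figure)
print(sample_size_for_proportion(p_expected=0.5, margin=0.05, population=2200))  # -> 328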