M&E Blog
This blog is intended as a home for some musings about M&E, the challenges that I face as an evaluator, and the work that I do in the field of M&E. Often, what I post here is in response to a particularly thought-provoking conversation or piece of reading. This is my space to "Pause and Reflect".
Thursday, March 21, 2024
A visualization on statistics choices
Wednesday, March 20, 2024
A visualisation of data visualization choices!
I always love it when people can simplify the complicated decision-making processes that we carry in our heads into a simple decision tree. A visualisation of data visualization choices!
Tuesday, March 05, 2024
Applying systems theoretical concepts to understand sustainability of education intervention outcomes
This Master’s dissertation addresses the research question: To what extent can the systems concept ‘extended dynamic sustainability’ be used to explain why some results of a donor-funded education development intervention were sustained ten years after its conclusion?
EvalEdge Podcast about Storytelling in Evaluation
In this episode of EvalEdge, Asgar Bhikoo and I talk about Storytelling in Evaluation Practice. The episode explores current lessons related to the use of storytelling as an innovation in evaluation practice in Africa. For more information, check out Digital Stories for Impact and Social Impact Storytelling: Using Impact Data to Drive Change.
Here is the impact story tool referenced in the podcast, which is also available on the Civicus repository.
Monday, July 29, 2019
There are alternatives to Experimental and Quasi-Experimental Impact Evaluation Methods.
This DfID working paper says:
Most development interventions are ‘contributory causes’. They ‘work’ as part of a causal package in combination with other ‘helping factors’ such as stakeholder behaviour, related programmes and policies, institutional capacities, cultural factors or socio-economic trends. Designs and methods for IE need to be able to unpick these causal packages.
Demonstrating that interventions cause development effects depends on theories and rules of causal inference that can support causal claims. Some of the most potentially useful approaches to causal inference are not generally known or applied in the evaluation of international development and aid. Multiple causality and configurations; and theory-based evaluation that can analyse causal mechanisms are particularly weak. There is greater understanding of counterfactual logics, the approach to causal inference that underpins experimental approaches to IE.
Methods that I am currently interested in include:
Qualitative Impact Assessment Protocol
The QuIP gathers evidence of a project's impact through narrative causal statements collected directly from intended project beneficiaries. Respondents are asked to talk about the main changes in their lives over a pre-defined recall period and prompted to share what they perceive to be the main drivers of these changes, and to whom or what they attribute any change - which may well be from multiple sources.
Typically, a QuIP study involves 24 semi-structured interviews and four focus groups, conducted in the native language by highly skilled local researchers. However, this number is not fixed and will depend on the sampling approach used. The researchers conducting the interviews are independent and, where appropriate, blindfolded: they are not aware of who has commissioned the research or which project is being assessed. This helps to mitigate pro-project and confirmation bias, and enables a broader, more open discussion with respondents about all outcomes and drivers of change.
Qualitative Comparative Analysis
Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g. aspects of an intervention and the wider context) to an outcome of interest. QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome. These are then subject to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence. The results are typically expressed as statements in ordinary language or as Boolean algebra. QCA is able to use relatively small and simple data sets. There is no requirement to have enough cases to achieve statistical significance, although ideally there should be enough cases to potentially exhibit all the possible configurations.
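To make the minimisation idea more concrete, here is a minimal, illustrative Python sketch of the logic described above. The conditions (training, local_ownership, follow_up_funding), the cases, and the outcome codings are all invented for illustration; a real QCA study would use dedicated QCA software and would also assess the consistency and coverage of each solution term. The sketch codes hypothetical cases as binary configurations, keeps the configurations linked to the outcome, and merges pairs that differ on only one condition - the core step of Boolean minimisation.

from itertools import combinations

# Hypothetical crisp-set data: 1 = condition present, 0 = condition absent.
CONDITIONS = ["training", "local_ownership", "follow_up_funding"]

cases = [
    # (configuration of conditions, outcome sustained? 1 = yes, 0 = no)
    ({"training": 1, "local_ownership": 1, "follow_up_funding": 1}, 1),
    ({"training": 1, "local_ownership": 1, "follow_up_funding": 0}, 1),
    ({"training": 1, "local_ownership": 0, "follow_up_funding": 1}, 0),
    ({"training": 0, "local_ownership": 1, "follow_up_funding": 1}, 1),
    ({"training": 0, "local_ownership": 0, "follow_up_funding": 1}, 0),
]

def to_term(config):
    # Represent a configuration as an ordered tuple of 0/1 values.
    return tuple(config[c] for c in CONDITIONS)

# Step 1: the truth-table rows associated with a positive outcome.
positive_terms = {to_term(cfg) for cfg, outcome in cases if outcome == 1}

def merge(a, b):
    # Merge two terms that differ on exactly one condition; '-' marks "don't care".
    diffs = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diffs) == 1:
        merged = list(a)
        merged[diffs[0]] = "-"
        return tuple(merged)
    return None

# Step 2: one pass of pairwise minimisation. A full QCA would iterate to a fixed
# point and also report the consistency and coverage of each solution term.
minimised, used = set(), set()
for a, b in combinations(positive_terms, 2):
    m = merge(a, b)
    if m is not None:
        minimised.add(m)
        used.update({a, b})
minimised |= positive_terms - used  # keep any terms that could not be merged

def readable(term):
    # Turn a minimised term back into an ordinary-language statement.
    parts = []
    for name, value in zip(CONDITIONS, term):
        if value == 1:
            parts.append(name.upper())
        elif value == 0:
            parts.append("not " + name)
        # '-' means the condition is redundant for this solution term.
    return " AND ".join(parts)

print("Simplified configurations linked to a sustained outcome:")
for term in sorted(minimised, key=str):
    print("  " + readable(term))

Run on these made-up cases, the single pass reduces three positive configurations to two simpler statements (local ownership combined with follow-up funding, or training combined with local ownership) - exactly the kind of ordinary-language or Boolean result described above.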