Monday, July 29, 2019

There are alternatives to Experimental and Quasi-Experimental Impact Evaluation Methods.


Some of my clients are really interested in measuring their impact. RCTs and other quasi-experiments are first on their list of suggested designs. But our repertoire of impact evaluation (IE) designs and methods has grown.

This DfID working paper says:


Most development interventions are ‘contributory causes’. They ‘work’ as part of a causal package in combination with other ‘helping factors’ such as stakeholder behaviour, related programmes and policies, institutional capacities, cultural factors or socio-economic trends. Designs and methods for IE need to be able to unpick these causal packages. 
Demonstrating that interventions cause development effects depends on theories and rules of causal inference that can support causal claims. Some of the most potentially useful approaches to causal inference are not generally known or applied in the evaluation of international development and aid. Multiple causality and configurations; and theory-based evaluation that can analyse causal mechanisms are particularly weak. There is greater understanding of counterfactual logics, the approach to causal inference that underpins experimental approaches to IE. 

Methods that I am currently interested in include:

Qualitative Impact Assessment Protocol (QuIP)
The QuIP gathers evidence of a project’s impact through narrative causal statements collected directly from intended project beneficiaries. Respondents are asked to talk about the main changes in their lives over a pre-defined recall period and prompted to share what they perceive to be the main drivers of these changes, and to whom or what they attribute any change - which may well be from multiple sources.
Typically, a QuIP study involves 24 semi-structured interviews and four focus groups, conducted in the native language by highly skilled, local researchers. However, this number is not fixed and depends on the sampling approach used. The research team conducting the interviews is independent and, where appropriate, 'blindfolded': they are not aware of who has commissioned the research or which project is being assessed. This helps to mitigate pro-project and confirmation bias, and enables a broader, more open discussion with respondents about all outcomes and drivers of change.
Qualitative Comparative Analysis 
Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g. aspects of an intervention and the wider context) to an outcome of interest. QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome. These are then subjected to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence. The results are typically expressed as statements in ordinary language or as Boolean algebra. QCA is able to use relatively small and simple data sets. There is no requirement to have enough cases to achieve statistical significance, although ideally there should be enough cases to potentially exhibit all the possible configurations.
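To make the minimisation step a bit more concrete, here is a minimal crisp-set sketch in Python. Everything in it (the conditions, the cases, the single-pass reduction) is invented for illustration; real QCA analyses normally rely on dedicated tools such as the R QCA package or the fsQCA software, which handle contradictory configurations and limited diversity far more carefully than this toy does.

# A minimal, illustrative sketch of crisp-set QCA-style minimisation.
# The conditions, cases and data below are invented for illustration only.

from itertools import combinations

CONDITIONS = ["training", "funding", "local_buyin"]  # hypothetical conditions

# Each case: a configuration of conditions (1 = present, 0 = absent)
# plus the observed outcome.
cases = [
    {"training": 1, "funding": 1, "local_buyin": 1, "outcome": 1},
    {"training": 1, "funding": 0, "local_buyin": 1, "outcome": 1},
    {"training": 0, "funding": 1, "local_buyin": 1, "outcome": 1},
    {"training": 0, "funding": 0, "local_buyin": 1, "outcome": 0},
    {"training": 1, "funding": 1, "local_buyin": 0, "outcome": 0},
]

def positive_configs(cases):
    """Configurations (as tuples) associated with the outcome being present."""
    return {tuple(c[k] for k in CONDITIONS) for c in cases if c["outcome"] == 1}

def minimise_once(configs):
    """One pass of pairwise reduction: merge configurations that differ in
    exactly one condition, marking that condition as irrelevant ('-')."""
    merged, used = set(), set()
    for a, b in combinations(configs, 2):
        diffs = [i for i in range(len(CONDITIONS)) if a[i] != b[i]]
        if len(diffs) == 1:
            reduced = tuple("-" if i == diffs[0] else a[i] for i in range(len(CONDITIONS)))
            merged.add(reduced)
            used.update({a, b})
    return merged | (configs - used)

def as_boolean(config):
    """Render a configuration as a Boolean term (UPPER = present, lower = absent)."""
    terms = []
    for name, value in zip(CONDITIONS, config):
        if value == 1:
            terms.append(name.upper())
        elif value == 0:
            terms.append(name.lower())
    return "*".join(terms)

if __name__ == "__main__":
    reduced = minimise_once(positive_configs(cases))
    print(" + ".join(sorted(as_boolean(c) for c in reduced)))

With these invented data the sketch prints FUNDING*LOCAL_BUYIN + TRAINING*LOCAL_BUYIN, which reads as: the outcome occurred where local buy-in was combined with either funding or training.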

Friday, July 19, 2019

Picture this: Complexity

This handy poster made by Johanna Boehnert explains 16 terms that often pop up in thinking about complex systems. It's a bit like a gateway drug to reading more on Complex Systems.

I found it in a tweet by @Heinomatti, which refers to the website of CECAN.
But Better Evaluation also has a really nice summary of it.

Wednesday, July 17, 2019

Systems Science and Complexity Science - related but not the same

I'm studying again and for that, I'm reading. A lot. I'm reading about systems thinking and factors that support sustained outcomes of development interventions. Often I stumble on things that make me go: "Ooh - I should remember this next time I do ABC." So this blog is being revived a bit to help keep track of these random thoughts.

I read about the history of systems thinking and complexity science and how both fields have similar challenges. Two great resources:

Midgley and Richardson's comparison of paradigms in the Systems Field and the Complexity Field.



Midgley's reflection on the history of paradigm wars among systems scientists, and among complexity scientists. He says:

Systems scientists were embroiled in a paradigm war, which threatened to fragment the systems research community. This is relevant... because the same paradigms are evident in the complexity science community, and therefore it potentially faces the same risk of fragmentation.

My interest in reading about the relationship between systems science and complexity science was sparked when I looked for examples of emergence, feedback and self-organization in my data and couldn't figure out what those would look like. A colleague suggested that while the concept "feedback" definitely occurs in multiple branches of the systems field (oh, and there are so very, very many), the concepts "emergence" and "self-organization" come from complexity science.

One may argue that it probably doesn't matter into which categories these concepts fall, but actually it does, because the ontological and epistemological assumptions that underlie these paradigms may or may not be similar and should be questioned.

So to get my thinking about the concepts straight, I need to get my thinking about the paradigms straight. It's a work in progress...

Friday, January 27, 2017

Can you tell me "What works in..."

Although we reportedly now live in a post-evidence era, I still choose to cling to the minority view that programmes should be informed by research about what works. But where do you find the evidence?

About two years ago I attended a training course presented by Phil Davies from 3ie. He had many interesting insights, but today I was reminded of this excellent list of synthesised evidence that he shared.


One of my recent favourite systematic reviews, conducted by 3ie, is The impact of education programmes on learning and school participation in low- and middle-income countries by Snilstveit et al. It has evidence about supplementary education programmes, feeding programmes, ICT in education programmes and a wide range of others.


Happy Reading!  

Wednesday, July 13, 2016

Evaluative Rubrics - Helping you to make sense of your evaluation data

Three times in one week I've now found myself explaining the use of evaluation rubrics to potential evaluation users. I usually start with an example that people can relate to:
When your high school creative writing paper was graded, your teacher most likely gave you an evaluative rubric which specified that you would do well if you 1) used good grammar and spelling, 2) structured your arguments well, and 3) found an innovative and interesting angle on your topic. In essence, this rubric helped you to know what was "good" and what was "not good".
In an evaluation, a rubric does exactly the same. What is a good outcome if you judge a post-school science and maths bridging programme? How do the outcomes "being employed" or "busy with a third-year BSc degree at university" compare to an outcome like "being a self-employed university drop-out with three registered patents", or to an outcome like "being unemployed and not sure what to do about the future"? A rubric can help you to figure this out.
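If it helps to picture what a rubric looks like once it leaves the workshop flipchart, here is a minimal sketch in Python of one outcome dimension as a simple data structure, with a helper that tallies the judgements evaluators record. The criterion, levels, descriptors and participant judgements are all invented; the evaluative judgement itself is still made by a person reading the evidence against the descriptors, not by the code.

# A minimal sketch of an evaluative rubric as a simple data structure,
# plus a helper that summarises judgements across participants.
# The criterion, levels and descriptors are invented for illustration;
# a real rubric would be developed and agreed with stakeholders.

from collections import Counter

# Hypothetical rubric for one outcome dimension of a post-school
# science and maths bridging programme.
RUBRIC = {
    "post_school_destination": {
        "excellent": "Employed, enrolled in further study (e.g. a third-year BSc), "
                     "or productively self-employed (e.g. holding registered patents).",
        "adequate":  "In some form of work or study, even if unrelated to science.",
        "poor":      "Unemployed and unsure what to do about the future.",
    }
}

def summarise(judgements: dict, criterion: str) -> Counter:
    """Count how many participants were judged at each level for a criterion.
    `judgements` maps participant id -> level chosen by the evaluator after
    comparing the evidence against the rubric descriptors."""
    valid_levels = RUBRIC[criterion].keys()
    return Counter(level for level in judgements.values() if level in valid_levels)

if __name__ == "__main__":
    # Invented judgements for three participants.
    judgements = {"p01": "excellent", "p02": "adequate", "p03": "poor"}
    print(summarise(judgements, "post_school_destination"))

The point is simply that once the levels and descriptors are explicit, the synthesis step becomes transparent and repeatable rather than a matter of divine judgement.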

E. Jane Davidson has some excellent resources on rubrics here and here. If you need a rubric on evaluating value for investment, Julian King has a good resource here. And of course, there is the usual great content on Better Evaluation here.

I love how Jane describes why we need evaluation rubrics:
Evaluative rubrics make transparent how quality and value are defined and applied. I sometimes refer to rubrics as the antidote to both ‘Rorschach inkblot’ (“You work it out”) and ‘divine judgment’ (“I looked upon it and saw that it was good”)-type evaluations.