
Wednesday, November 20, 2013

ICT4RED Learning Workshop

I'm contributing to the evaluation of the ICT4RED (Information Communication Technology for Rural Education Development) initiative - a very ambitious project that rolls out teacher professional development to enable teachers and learners to use 21st-century methods and tablet computers in rural schools in Cofimvaba in the Eastern Cape. More information about the project is available here and here.

We decided to use a developmental evaluation approach - I'm practically embedded in the organization that's responsible for implementing the project. I'm finding that this is a wonderful opportunity to influence what happens... But because this project is so different from the many failed technology projects that I've evaluated before, I sometimes wonder whether I am "objective" enough to add actual value.

We organised the M&E team's work into four categories:
  • Monitoring - Measuring progress made on outputs, facilitating occasional debrief meetings with team members, reflecting on the abundant data from various social media streams, and participating in weekly project management meetings
  • Evaluation - Measuring success after the various implementation phases, incorporating some self-evaluation workshops and other more standard evaluation measures, including ethnographic descriptions, baseline and follow-up surveys, and a small-scale RCT and tracer study
  • Learning - Asking team members to occasionally reflect on what they've learnt, whether from successes or failures
  • Model Development - Developing a modular theory of action, underpinned by a theory of change, that can be used to support scale-up and replication.

Yesterday we held a "learning workshop" where team members had to reflect on what they've learnt.
Our first template for the learning briefs asked for the following information:

Project Name
    Give the project name here
Submitted by
    Give your name and the component name
Date
    Give the date on which you submit the learning brief
What was the learning?
    Please describe the learning that occurred
Learning brief type
    Indicate which of the following three is applicable and provide a short description
    •    Learning from failure during implementation
    •    Learning from implementation success
    •    Learning from review of previous research and practice (i.e. not practically tested yet)
The Context
    Say something about the context of the learning / the project context; add relevant pictures if they are available
Why is this learning important?
    Please describe why this learning is important
Evidence Base
    Please indicate what the evidence base is for this learning brief and, if possible, provide references that may help the reader track it down.
Recommendations for future similar projects
    Please provide your recommendations as a list. If possible, add pictures, graphs or diagrams
Recommendations that should be taken into account by the current project
    Please indicate which of the above recommendations should be taken into account for the current project
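
If a team wanted to capture these briefs in a structured, machine-readable form, something like the sketch below could work. This is purely my own illustration, not part of the project's actual tooling: the field names, the BriefType enum and the example values are assumptions derived from the template headings above.

```python
# Hypothetical sketch only: one way to represent the learning brief fields as a
# structured record. Names are illustrative, not the project's actual schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class BriefType(Enum):
    """The three learning brief types named in the template."""
    FAILURE_DURING_IMPLEMENTATION = "Learning from failure during implementation"
    IMPLEMENTATION_SUCCESS = "Learning from implementation success"
    REVIEW_OF_RESEARCH = "Learning from review of previous research and practice"


@dataclass
class LearningBrief:
    project_name: str
    submitted_by: str                       # name and component name
    submission_date: date
    learning: str                           # what was the learning?
    brief_type: BriefType
    context: str                            # context of the learning / project
    importance: str                         # why this learning is important
    evidence_base: str                      # references where possible
    recommendations_future: list[str] = field(default_factory=list)
    recommendations_current: list[str] = field(default_factory=list)


# Example usage (the content is entirely made up):
brief = LearningBrief(
    project_name="ICT4RED",
    submitted_by="A. Evaluator, Teacher Professional Development component",
    submission_date=date(2013, 11, 19),
    learning="Short description of the learning goes here.",
    brief_type=BriefType.IMPLEMENTATION_SUCCESS,
    context="Rural schools in Cofimvaba, Eastern Cape.",
    importance="Why this learning matters.",
    evidence_base="Workshop notes; baseline survey.",
    recommendations_future=["Recommendation one", "Recommendation two"],
    recommendations_current=["Recommendation one applies now"],
)
```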

My colleagues, who are great at packaging information, asked that we focus the learning presentations on:
  • What we designed
  • What we learnt
  • What we're doing now
  • Advice for policy and practice



This made for quite nice presentations. We'll probably adapt our learning brief templates accordingly.

Friday, July 08, 2011

Evaluation Basics 101 - Involve the users in the design of your instruments

Earlier this week I got back from work-related travel to Kenya and ran straight into two full days of training. We had planned to train a client's staff on a new observation protocol we had developed for them. The new tool was based on one they had used before. Before finalising it, we took time to discuss the tool with a small group of the staff and checked that they thought it could work. We thought the training would go well.

Drum roll... It didn't. On a scale of 0 to going well, we scored a minus 10. It felt like I had a little riot on my hands when I started with "This is the new tool that we would like you to use".

Thinking about it - I should have crashed and burned in the most spectacular way. Instead, I took a moment with myself, planted a slap on my forehead, uttered a very guttural "Duh!" and mentally paged through "Evaluation Basics 101 - kindergarten version". Then I smiled, sighed, and cancelled the afternoon's training agenda. I replaced it with an activity that I introduced as: "This is the tool that we would like to workshop with you so that we can make sure that you are happy with it before you start to use it".

Some tips if ever you plan to implement a new tool (even if it is just slightly adjusted) in an organization:
1) Get everybody who will use the tool to participate in designing it
2) Do not assume that an adjustment to an already existing tool exempts you from facilitating the participatory process
3) Do not discuss the tool with only a small group from the eventual user base. Not only will the users who weren't consulted riot; even the ones who had their say in the small group are likely to voice their unhappiness.

When we were done, the tool looked about 80% the same as it did at the start, and they did not complain about its length, its choice of rating scale or the underlying philosophy again.

Lesson learnt. (For the second time!)