Thursday, February 28, 2013
This five-minute presentation shares Michael Patton's view on real-time evaluation. It looks like evaluators will soon need tactical training to make sure they are ready to execute like a SWAT team!
Thursday, February 14, 2013
Simple Evaluation Tools
I'm starting a project soon in which I will have to develop and compile really simple evaluation materials for organizations that may not have much M&E expertise. Here is one of the really simple but striking tools that I came across in the Community Sustainability Engagement Evaluation Toolbox.
Getting the right tools into people's hands is, of course, only part of the solution to making sure evaluation at the grass roots improves. Sometimes it is less that people don't know how to do M&E, and more that they are spread too thin to also do M&E.
Monday, February 11, 2013
WEF Global Competitiveness Report
Sadly, but not surprisingly, South Africa still hovers near the bottom on most of the education-related indicators measured by the World Economic Forum's Global Competitiveness Report.
This just means that there is a lot of scope for making a difference here! To play with the data, go here:
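For anyone who wants to go a step beyond the interactive site, here is a minimal sketch of how the education-related indicators could be explored in Python once they have been exported. The file name and the column names (indicator, country, rank) are assumptions for illustration, not the actual structure of the WEF download.

```python
import pandas as pd

# Illustrative sketch only: assumes a CSV export with columns
# "indicator", "country" and "rank" -- hypothetical names, adjust to the real file.
gcr = pd.read_csv("gcr_education_indicators.csv")  # hypothetical export

# South Africa's rank on each education-related indicator, weakest first.
za = gcr.loc[gcr["country"] == "South Africa", ["indicator", "rank"]]
print(za.sort_values("rank", ascending=False).to_string(index=False))
```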
Thursday, February 07, 2013
MOOCs that Evaluators might consider
In a previous post I shared some ideas about Massive Open Online Courses (MOOCs). I came across a listing of free courses offered by some prominent US universities via online platforms. The full list, with more than 200 courses, is here:
The site uses the following key to provide information on the certification offered through these courses.
Free Courses Credential Key:
CC = Certificate of Completion
SA = Statement of Accomplishment
CM = Certificate of Mastery
C-VA = Certificate, with Varied Levels of Accomplishment
NI = No Information About Certificate Available
NC = No Certificate
What caught my eye is that quite a few of the courses listed might interest evaluators looking to improve their stats capacity.
Introduction to Statistics (NI) – UC Berkeley on edX – January 30 (TBD weeks)
Introduction to Statistics: Making Decisions Based on Data (C-VA) – Udacity
Probability and Statistics (NC) – Carnegie Mellon
Statistical Reasoning (NC) – Carnegie Mellon
A few of the courses that started recently and also look interesting include:
Data Analysis (NI) – Johns Hopkins on Coursera – January 22 (8 weeks)
Introduction to Databases (SA) – Stanford on Class2Go – January 15 (9 weeks)
Introduction to Infographics and Data Visualization (CC) – Knight Center at UT-Austin – January 12 (6 weeks)
Social Network Analysis (CC) – University of Michigan on Coursera – January 28 (9 weeks)
Looks like we will have to keep a closer eye on this type of information!
Monday, February 04, 2013
Reflections from various Evaluations of ICT projects
After doing a few evaluations of ICT projects implemented in schools, I reflected on some of the lessons we've learnt along the way. It's not an exhaustive list, and certainly a lot of it is common sense, but somehow it is the common-sense things that people do not always plan for.
Some of the key questions that I would like to see answered in evaluations of these types of initiatives include:
Is the content relevant? (Content review)
Is the content user-friendly for the intended users? (Heuristic evaluation)
Was it implemented at the requisite “dosage” level for it to possibly work? (Fidelity monitoring)
Can it effect change? (Experimental design)
At what cost, to participants and to donors? (Cost analysis)
Only then can you start to answer: did it work? (Quasi-experimental design; see the sketch after this list)
Does it work better than “something else” (comparative analysis), or how does it work with “something else”?
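To make the "did it work" question a little more concrete, below is a minimal sketch of the kind of treatment-versus-comparison calculation a quasi-experimental design often boils down to. The data file, the column names and the simple difference-in-differences estimate are all illustrative assumptions, not a prescription for how such an evaluation should actually be analysed.

```python
import pandas as pd

# Illustrative sketch only: assumes a CSV with one row per school and the
# hypothetical columns school_id, group ("project" or "comparison"),
# score_before and score_after.

def did_estimate(df: pd.DataFrame) -> float:
    """Difference-in-differences: gain in project schools minus gain in comparison schools."""
    gain = df["score_after"] - df["score_before"]
    project_gain = gain[df["group"] == "project"].mean()
    comparison_gain = gain[df["group"] == "comparison"].mean()
    return project_gain - comparison_gain

if __name__ == "__main__":
    data = pd.read_csv("school_scores.csv")  # hypothetical file
    print(f"Estimated project effect: {did_estimate(data):.2f} score points")
```

Even a rough comparison like this only makes sense once the relevance, usability, fidelity and cost questions above have been answered.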