Tuesday, January 29, 2013

Online Tertiary Education

Thomas Friedman wrote an article in the NYTimes about the "revolution" in universities:
Revolution Hits the Universities
Nothing has more potential to let us reimagine higher education than massive open online course, or MOOC, platforms.
I think this is a wonderful development and one that I have eagerly awaited. Having access to great educational opportunities without having to travel will help me become a better evaluator. Already I visit www.betterevaluation.org, www.mymande.org and www.statistics.com for some of my personal capacity development needs. I might pursue formal credentialing via this route sometime in the future.

I acknowledge that this move to online training is a juggernaut that will not be stopped. I just wonder what the systemic effects will be. How much "blood" will be shed in this "revolution" before the necessary checks and balances are implemented? As with all revolutions, it's not going to have good effects for everybody!

One category of "deaths" that I foresee is that of the average university professor as a teacher.

If everyone does a course with the "best" prof in the world, the second- and third-best profs won't have teaching jobs anymore. The effect might be that we end up with a dangerously monolithic way of thinking, with all kinds of implications for how we define problems, seek answers and develop the body of scientific knowledge. On the other hand, a common language may finally emerge, allowing more people to stand on the shoulders of giants to reach for diverse solutions in their diverse contexts.

Back when TV was introduced we had no idea what impact it would eventually have. I think we are standing in that exact same spot again...

Monday, January 28, 2013

A new start for 2013

I recently took on a long-term development project that involves a certain baby with beautiful blue eyes, so the blogging had to move to the back burner. But here is a fresh contribution for this month.



As part of the M&E I do for educational programmes, I frequently suggest to clients that they not only consider the question "Did the project produce the anticipated gains?" but that they also answer the question "Was the project implemented as planned?" This is because sub-optimal implementation is, in my experience, almost always to blame for negative outcomes of the type of initiatives tried out in education.

Consider the example of a project which aims to roll out extra computer lessons in Maths and Science in order to improve learner test scores in those subjects. We not only do pre- and post-testing of learner test scores in the participating schools, but we also track how many hours of exposure the kids got, what content they covered, how they reacted to the content, etc. We also attend the project progress meetings where the project implementer reflects on the implementation of the project. Where we eventually don't see the kind of learning gains anticipated, we are then able to pinpoint what went "wrong" with the implementation; frequently we can predict what the outcome will be based on what we know from the implementation. This manual on implementation research outlines a more systematic approach to figuring out how best to implement an initiative, written with the health sector in mind.
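For readers who like to see this logic made concrete, here is a minimal sketch in Python with entirely made-up numbers: it simply reads gain scores alongside the implementation data (hours of exposure), so that weak gains can be traced back to weak delivery. The planned 40 hours and the 80% fidelity cutoff are illustrative assumptions, not figures from any real project.

```python
# Minimal sketch: read learning gains alongside implementation data.
# All school names, scores and hours below are hypothetical.

PLANNED_HOURS = 40      # assumed dosage planned for the project
FIDELITY_CUTOFF = 0.8   # assumed threshold: 80% of planned hours delivered

# Per participating school: pre-test mean, post-test mean,
# and hours of computer lessons actually delivered.
schools = {
    "School A": {"pre": 41.0, "post": 49.5, "hours": 38},
    "School B": {"pre": 43.5, "post": 44.0, "hours": 12},
    "School C": {"pre": 39.0, "post": 47.0, "hours": 35},
}

for name, r in schools.items():
    gain = r["post"] - r["pre"]
    fidelity = "high" if r["hours"] >= FIDELITY_CUTOFF * PLANNED_HOURS else "low"
    print(f"{name}: gain {gain:+.1f} points, {r['hours']}h delivered ({fidelity} fidelity)")
```

Even something this simple makes the point: a school with low exposure and no gain is telling a story about implementation, not about whether the intervention can work.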

Of course, implementation success and the final outcomes of the project are only worth investigating for interventions where there is some evidence that the intended kind of changes are possible. If there is no evidence of this kind, we sometimes conduct a field trial with a limited number of kids, on limited content, over a short period of time, in an implementation context similar to the one designed for the bigger project. This helps us to answer the question "Under ideal circumstances, can the initiative make a difference in test scores?"

What a client chooses to include in an evaluation is always up to them, but let this be a cautionary tale: a client recently declined to include a monitoring / evaluation component that considered programme fidelity, on the basis that it would make the evaluation too expensive. When we started collecting post-test data for the evaluation, we discovered a huge discrepancy between what happened on the ground and what was initially planned, leaving the donor, the evaluation team and the implementing agency with a situation that had progressed too far to fix easily. Perhaps better internal monitoring could have prevented this situation, but involving the evaluators in some monitoring would definitely have helped too!