Thursday, November 26, 2009

A lonely brainstorm... Or many minds?

A grantmaking organization (our client) is interested in evaluating the quality of their service delivery and relationship management – as perceived by the grantees to whom they disburse funds. So here is the question: what evaluation standards should we use?

Grantee perceptions?
The terms of reference indicate that the client expects the evaluators to interact with the grantees to answer these questions. But if we ask grantees what they think of the grantmaker’s processes, approach, involvement, communication, etc., we might get meaningless data, because the wide range of grantees will have very different expectations about what qualifies as good service delivery and relationship management. It will probably be easy to collect data about their perceptions, but that won’t be very useful. And then there is also the issue of possible bias: grantees that experienced difficulty in submitting reports and other monitoring requirements might be slightly more negative than the rest, who would probably be eager to compliment the people that will dish out their next pay cheque.

The Grantmakers’ own standards?
It might make sense to determine whether the grantmaker has any implicit or explicit service delivery standards, or contracted agreements, that could serve as the standard against which to evaluate their performance. But suppose the grantmaker has a standard that says: “All applications must be acknowledged in writing within 6 months from the date of receipt.” That would be easy to check, but the standard itself seems a little odd – does it really take six months to respond to a submission?

Industry standards and benchmarks?
The alternative would be to look at service delivery standards and benchmarks set by other industry players. There is plenty of literature about grantmaking internationally, but information about South African grantmakers is limited. There is the CSI handbook, but it doesn’t contain the level of detail that may be required to develop an extensive set of evaluation standards and benchmarks. And grantmakers are notoriously secretive about their approach, systems and quality standards, so we will probably not be able to get detailed information from more than a handful of players in the field with whom we have established past relationships.

Room for a participatory agreement on what exactly should be measured?
It is possible that a rigorous engagement with grantees and grantmakers at the outset of the evaluation could provide the most satisfactory answer to the “which standards should we use?” question. And that is probably just what we will do! Background research on all of the above should provide a good basis for the workshop, but it will be interesting to see what the final consensus dictates!