Thursday, November 26, 2009

A lonely brainstorm... Or many minds?

A grantmaking organization (our client) is interested in evaluating the quality of its service delivery and relationship management – as perceived by the grantees to whom it disburses funds. So here is the question: what evaluation standards should we use?

Grantee perceptions?
The terms of reference indicate that the client expects the evaluators to interact with the grantees to answer their questions. But if we ask grantees what they think of the grantmaker's processes, approach, involvement, communication etc., we might get meaningless data, because the wide range of grantees will have very different expectations about what qualifies as good service delivery / relationship management. It will probably be easy to collect data about their perceptions, but that won't be very useful. And then there is also the issue of possible bias: grantees that experienced difficulty in submitting reports etc. for monitoring purposes might be somewhat more negative than the rest, who would probably be eager to compliment the people who will dish out their next pay cheque.

The grantmaker's own standards?
It might make sense to determine whether the grantmaker has any implicit or explicit service delivery standards or contracted agreements that could be used as the standard to evaluate its performance against. If the grantmaker has a standard that says: "All applications must be acknowledged in writing within 6 months from the date of receipt", that would be easy to check, but surely such a service standard is a little odd? Does it really take six months to respond to a submission?

Industry standards and benchmarks?
The alternative would be to look at service delivery standards and benchmarks set by other industry players. There's lots of literature about grantmaking internationally, but information about South African grantmakers is limited – there is the CSI Handbook, but it doesn't contain the level of detail that may be required to develop an extensive set of evaluation standards and benchmarks. And grantmakers are notoriously secretive about their approach, systems and quality standards, so we will probably not be able to get detailed information from more than a handful of players in the field with whom we have established past relationships.

Room for a participatory agreement on what exactly should be measured?
It is possible that a rigorous engagement with grantees and grantmakers at the outset of the evaluation could provide the most satisfactory answer to the "which standards should we use?" question. And that is probably just what we will do! Background research on all of the above should provide a good basis for the workshop, but it will be interesting to see what the final consensus dictates!

Monday, October 26, 2009

Visualization Methods - Really really interesting

Previously I wrote about Edward Tufte's book on presenting graphs. Well, it seems that data visualization has been taken to a whole new level.

Ralph Lengler & Martin J. Eppler from the Institute of Corporate Communication compiled a "Periodic Table" of visualization methods that categorizes about 100 visualization methods and shows an example of each.

The table can be downloaded in PDF format at:
http://www.visual-literacy.org/periodic_table/periodic_table_as_pdf.pdf

But try the online version - as you mouse over the various "elements", an example pops up to demonstrate what each method looks like.
http://ow.ly/v9RI

The full article explaining the table can be found at
http://ow.ly/wk7d

Wow - It takes people specializing in visualization methods to think of such an innovative way to present their concept.

PS. I heard about this on the American Evaluation Association's LinkedIn group. This and other useful information gets shared there from time to time.

Monday, October 12, 2009

Get involved - SAMEA is preparing a submission

Media statement by the Minister in the Presidency for National Planning, T Manuel, on the release of the Green Paper on National Strategic Planning
4 September 2009

Today government is releasing two discussion documents, one a Green Paper on National Strategic Planning and the other a Policy Document on Performance Monitoring and Evaluation. The decision by President Zuma to appoint Ministers in the Presidency responsible for National Planning and Performance Monitoring and Evaluation is designed to improve the overall effectiveness of government, enabling government to better meet its development objectives in both the short- and longer-term. These two discussion documents must be seen in the context of wider efforts led by the President to improve the performance of government through enhancing coherence and co-ordination in government, managing the performance of the state and communicating better with the public.
The Green Paper on National Strategic Planning is a discussion document that outlines the tasks of the national planning function, broadly defined. It deals with the concept of national strategic planning, as well as processes and structures. Once consultations on these issues have been completed, the process to set up the high-level structures will commence; and this will be followed by intense work to develop South Africa's long-term vision and other outputs. In other words, the Green Paper does not deal with these substantive issues of content.
The rationale for planning is that government (and indeed the nation at large) requires a longer-term perspective to enhance policy coherence and to help guide shorter term policy trade-offs. The development of a long-term plan for the country will help government departments and entities across all the spheres of government to develop programmes and operational plans to meet society’s broader developmental objectives. Such a plan must articulate the type of society we seek to create and outline the path towards a more inclusive society where the fruits of development benefit all South Africans, particularly the poor.
The planning function is to be coordinated by the Minister in The Presidency for National Planning. There are four key outputs of the planning function. Firstly, to develop a long-term vision for South Africa, Vision 2025, which would be an articulation of our national aspirations regarding the society we seek and which would help us confront the key challenges and trade-offs required to achieve those goals. A National Planning Commission comprising external commissioners who are experts in relevant fields would play a key role in developing this plan. The development of a National Plan would require broader societal consultation and existing forums would be used for this purpose. The Minister in The Presidency will co-ordinate these engagements. A National Plan has to be adopted by Cabinet for it to have the force of a government plan. The Minister would serve as a link between the Commission and Government, feeding the work of the Commission into government.
The next set of outputs covers the five-yearly Medium Term Strategic Framework (MTSF) and the National Programme of Action. These are documents of national government, adopted by Cabinet, drawing on the electoral mandate of the government of the day. The Minister in The Presidency for National Planning, supported by a Ministerial Committee on Planning, would coordinate the development of these documents with input from Ministers, departments, provinces, organised local government, public entities and coordinating clusters.
Further, it is envisaged that the planning function in The Presidency will undertake research and release discussion papers on a range of topics that impact on long-term development. These include topics such as demographic trends, global climate change, human resource development, future energy mix and food security. The Presidency would also release and process baseline data on critical areas such as demographics, biodiversity, and migratory and economic trends. This work will be undertaken by the Minister, working with the National Planning Commission (NPC). The Minister, working with the NPC, would from time to time advise government on progress in implementing the national plan, including the identification of institutional and other blockages to its implementation.
One of the functions of The Presidency in respect of national planning is to develop frameworks for spatial planning that seek to undo the damage that apartheid's spatial development patterns have wrought on our society. This includes the development of high level frameworks to guide regional planning and infrastructure investment.
The national planning function will provide guidance on the allocation of resources and in the development of departmental, sectoral, provincial and municipal plans.
The Minister in The Presidency responsible for national planning will be supported by a Planning Secretariat, which will also provide administrative, research and other support to the National Planning Commission. National Strategic Planning is an iterative process involving extensive consultation and engagement within government and with broader society.
It is envisaged that Parliament will play a key role in guiding the planning function, not only through its oversight role but also by facilitating broader stakeholder input into the planning process. For this reason, it is appropriate that Parliament should lead the discussion process on the Green Paper.
This Green Paper is a discussion document. Government welcomes comment, advice, criticisms and suggestions from all in society.
Please address all comments on the Green Paper on National Strategic Planning to the Minister in the Presidency for National Planning c/o:
Hassen Mohamed
E-mail: hassen@po.gov.za
Tel: 012 300 5455
Fax: 086 683 5455
Issued by: The Presidency
4 September 2009

Please see http://www.info.gov.za/speeches/2009/09090414151003.htm for the actual Green Paper and the Policy Document on Performance Monitoring and Evaluation.

Monday, October 05, 2009

Check out the GOODs

A colleague referred me to a refreshing website that might be interesting to do-gooders the world over. It is called GOOD.

Maybe they called it GOOD because it can be found at the following URL: http://www.good.is/. Apparently "GOOD is a collaboration of individuals, businesses, and nonprofits pushing the world forward".

Maybe they called it GOOD because it is good. I remind you, dear reader, that I am an evaluator so I should - according to the Scrivenian* wisdom I sometimes subscribe to - be particularly well placed to pass judgements about merit and worth. However, I will reserve judgement about the Goodness of GOOD for now. Except for saying what I have already said about it.

There is an interesting blog about Innovation and Evaluation in philanthropy. See

http://www.good.is/post/innovation-and-evaluation-are-inseparable/


*OK, that only sounded GOOD in my head, but the meaning I'm hoping to convey is that Michael Scriven's writings are relevant here.

Tuesday, September 29, 2009

ELDIS RESOURCE

http://www.eldis.org/go/topics/resource-guides/manuals-and-toolkits/monitoring-and-evaluation

The Eldis Community site enables development professionals across the world to debate, discuss and exchange ideas and information. This community group, focusing on results-based M&E, is composed of development evaluation practitioners committed to evaluation capacity building at all levels of human development activities - global, country or community level; policy, programme or project level - with the aim of bringing about an equitable, accountable and progressive society for everyone.

Wednesday, March 18, 2009

Social Capital is a fundamental requirement for associations to work.

The IOCE has an EvaLeaders listserv which aims to connect key people across the world's evaluation associations. I took up the task of trying to think of something to do to get the discussion going. We settled on a "monthly discussion question", and after posting the first of the questions, we were met with a resounding silence.

A variety of hypotheses were shared to explain the silence; the most interesting one:

"Our first question assumed that those on the EvaLeaders list share a sense of community with leaders of other IOCE member evaluation associations, and thus would be willing to take the time to write something about what their group is up to... the reality check is that there is a long-term process involved".

Concepts like "Evaluation Community" and "Community of Practice" are frequently used when speaking about evaluation associations, but I certainly have not sat down to think about what this actually means in practice. I have not really come to terms with the fact that social capital is inherent in working networks... capital in all shapes and sizes is required for a network to work. In a working network, more social capital is also easily created.

Evaluation associations are social networks, and although we typically evaluate an association's effectiveness by the number of activities it presents and by the size of its membership, the true value of an association actually lies in the strength of the links between members. It's these links that make shared values and common activities possible. If something as abstract as "happiness" can dynamically spread through social networks*, then surely values, knowledge and a whole host of other fuzzy, yet potentially important evaluation-aligned attributes can be transferred too.

The question is: How do you get the minimum social capital together to start a vibrant network? Are there social-capital loans available from the World Bank? How many in-kind donations would be required? :)


I'm afraid I have more questions than answers to ponder...




*"Dynamic spread of happiness in a large social network: longitudinal analysis ver 20 years in the Framingham Heart Study" written by James Fowled and Nicholas Christakis. (BMJ 2008;337:a2338 doi:10.1136/bmj.a2338)

Monday, March 09, 2009

Participatory Evaluation Design

I'm planning an evaluation planning meeting during which the intended evaluation users will design an organizational capacity evaluation. The organizations under scrutiny deliver services to the disabled (or is the correct term "differently abled"?). We will start with "drawing the road" (Ross Conner recently did a presentation on this at the Lisbon EES Conference), followed by the development of a stakeholder map, clarification of evaluation questions and the development of an evaluation matrix.

The evaluation matrix will outline the final evaluation questions, indicate which stakeholder need each question addresses, and identify the data collection method and source for each. As a quality control exercise I'm planning to give the team a checklist that asks the members whether the planned data collection meets some basic evaluation principles.

Some of the principles that I will try to incorporate (a rough sketch of how a matrix row might be checked against them follows the list):
• Independence: You cannot ask the person whose compliance you are interested in whether they are complying. The incentive to provide false information might be very high. You can ask school principals about the degree to which the Province has met its commitments, and you can ask parents whether the school charges money, but you cannot ask the school principal whether the school is charging fees if it has been declared a no-fee school.
• Relevance: Appropriate questions must be asked. You cannot ask a member of the general public (e.g. a parent) whether the school is complying with the school funding norms – he or she is unlikely to know what these entail.
• Consider Systemic Impacts: Look beyond just the cases directly affected. No-fee schools are not the only ones likely to be impacted by this specific policy provision. Other schools in the area are also likely to be affected in some way.
• Appropriate Samples need to be selected: The sampling approach and sample size are both related to the question that needs to be answered.
• Appropriate methods need to be selected: Although certain designs are likely to result in easy answers, they might not be appropriate.
• Implementation Phase: Take into account the level of implementation when you do the assessment. It is well known that after initial implementation an implementation dip might occur. Do not try to do an impact assessment when the level of implementation has not yet stabilised in the system.
• Fidelity: Take into account the fidelity of implementation, i.e. to what degree the policy was implemented as it was intended.
• Quality Focus: Although a specific funding policy might have improving access to services as a major aim, quality should always be a consideration. It is no use increasing access to a service that has never delivered quality outputs, outcomes and impacts. Similarly, it is no use improving access to a good quality service if, due to the increased uptake, the quality is negatively impacted.
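
To make the checklist idea concrete, here is a minimal sketch of how the matrix rows and a couple of the principle checks could be captured, say in a few lines of Python. Everything in it - the field names, the two example rows and the crude check functions - is invented for illustration; the actual matrix and checklist will be workshop products on paper, not a script.

# Illustrative sketch only: field names, example rows and checks are made up.
evaluation_matrix = [
    {
        "question": "Is the no-fee school still charging school fees?",
        "stakeholder_need": "Provincial compliance monitoring",
        "method": "Structured interview",
        "source": "School principal",
        "respondent_is_subject_of_compliance": True,
    },
    {
        "question": "Does the school charge money?",
        "stakeholder_need": "Provincial compliance monitoring",
        "method": "Short survey",
        "source": "Parents",
        "respondent_is_subject_of_compliance": False,
    },
]

def check_independence(row):
    # Independence: the respondent should not be the party whose compliance is at stake.
    return not row["respondent_is_subject_of_compliance"]

def check_relevance(row):
    # Relevance (crudely approximated): do not ask parents about technical policy detail.
    technical_terms = ["funding norms"]
    asks_parents = "parent" in row["source"].lower()
    is_technical = any(term in row["question"].lower() for term in technical_terms)
    return not (asks_parents and is_technical)

checklist = [("Independence", check_independence), ("Relevance", check_relevance)]

for row in evaluation_matrix:
    for principle, check in checklist:
        if not check(row):
            print("Flag: '%s' may violate the %s principle" % (row["question"], principle))

In practice the checking will of course be done by the team in discussion rather than by a script; the sketch is simply a way of showing that every matrix row gets tested against every principle.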



I'll provide some feedback after the workshop.