I'm planning a workshop during which the intended evaluation users will design an organizational capacity evaluation. The organizations under scrutiny deliver services to the disabled (or is the correct term "differently abled"?). We will start with "drawing the road" (Ross Connor recently did a presentation on this at the EES Conference in Lisbon), followed by the development of a stakeholder map, clarification of the evaluation questions, and the development of an evaluation matrix.
The evaluation matrix will outline the final evaluation questions, indicate which stakeholder need each question addresses, and identify the data collection method and source for each. As a quality control exercise, I'm planning to give the team a checklist that asks the members whether the planned data collection meets some basic evaluation principles.
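To make this concrete, here is a minimal sketch of what a matrix row and the checklist could look like. The questions, column names and checks below are hypothetical illustrations of the structure, not the team's actual design:

```python
# Hypothetical sketch of an evaluation matrix and a quality-control
# checklist; the questions and column names are illustrative only.
matrix = [
    {
        "question": "Are declared no-fee schools still charging fees?",
        "stakeholder_need": "Province: compliance monitoring",
        "method": "Survey",
        "source": "Parents (not the school principals themselves)",
    },
    {
        "question": "Has the Province met its funding commitments?",
        "stakeholder_need": "Schools: resource adequacy",
        "method": "Interviews",
        "source": "School principals",
    },
]

# Basic-principle checks to apply to every planned row of the matrix.
checklist = [
    "Is the source independent of the behaviour being assessed?",
    "Can this source reasonably be expected to know the answer?",
    "Have systemic effects on surrounding cases been considered?",
    "Do the sampling approach and sample size fit the question?",
    "Is the method appropriate, not merely convenient?",
]

for row in matrix:
    print(row["question"])
    for item in checklist:
        print("  [ ]", item)
```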
Some of the principles that I will try to incorporate:
• Independence: You cannot ask the very person whose compliance you are assessing whether they are complying; the incentive to provide false information may be very high. You can ask school principals about the degree to which the Province has met its commitments, and you can ask parents whether the school charges money, but you cannot ask school principals whether they are charging school fees at schools that have been declared no-fee schools.
• Relevance: Appropriate questions must be asked. You cannot expect a member of the general public (e.g. a parent) to know whether the school is complying with the school funding norms; he or she is unlikely to know what these entail.
• Systemic impacts: Look beyond the cases directly affected. No-fee schools are not the only ones likely to be affected by this specific policy provision; the other schools in the area are also likely to be affected in some way.
• Sampling: Appropriate samples need to be selected. The sampling approach and the sample size both depend on the question that needs to be answered (see the short worked example after this list).
• Methods: Appropriate methods need to be selected. Although certain designs are likely to yield easy answers, they might not be appropriate for the question.
• Implementation phase: Take the level of implementation into account when you do the assessment. It is well known that an implementation dip can occur after initial implementation. Do not attempt an impact assessment before the level of implementation has stabilised in the system.
• Fidelity: Take into account the fidelity of implementation, i.e. the degree to which the policy was implemented as intended.
• Quality focus: Although a specific funding policy might have improved access to services as its major aim, quality should always be a consideration. It is no use increasing access to a service that never delivered quality outputs, outcomes and impacts. Similarly, it is no use improving access to a good-quality service if the increased uptake negatively affects that quality.
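As a small illustration of the sampling point above, the standard formula for estimating a proportion, n = z²p(1−p)/e², shows how the required sample size follows from the precision the question demands. The figures below are generic textbook values, not drawn from the planned evaluation:

```python
import math

def sample_size_proportion(margin_of_error, z=1.96, p=0.5):
    """n = z^2 * p * (1 - p) / e^2 for estimating a proportion.

    z -- z-score for the confidence level (1.96 for ~95%)
    p -- expected proportion; 0.5 is the conservative default
    margin_of_error -- acceptable error, e.g. 0.05 for +/- 5 points
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Halving the acceptable margin of error roughly quadruples the sample:
print(sample_size_proportion(0.10))  # 97 cases for +/- 10 points
print(sample_size_proportion(0.05))  # 385 cases for +/- 5 points
```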
I'll provide some feedback after the workshop.