Thursday, August 16, 2007

What is an Evaluator?

Do you find it difficult to explain to people what you do? Those magical two sentences that will get people to go "Aaaaaah, now I get what you do!"? Unfortunately I have not been able to come up with something concrete yet. But I am still trying.

In a television interview to publicise the SAMEA conference, the DDG from the PSC, Mr. Mash Dipofu, tried to explain it with an example. He asked the TV presenter if he knew what the viewers thought of his programme, how it could be improved and how many people actually watch it. He explained that by answering these questions, you are doing what an evaluator would be doing and answering the underlying question: Does what I am doing have value?

Which got me thinking. So much of what we do as evaluators is also done by other professionals.

  • We are a little like investigative journalists: We talk to people, ask questions and gather information to make an argument for or against something. Sometimes to inform readers of some wrongdoing... Sometimes we celebrate what has been achieved.
  • Then we are also a little like the weather guy. We collect numbers over a long period of time, and by applying some statistical techniques we can start predicting what will happen in future.
  • Another way of looking at our job is to compare it to that of teachers. We guide people to learn from their environments - assuming that they need to be taught how to use the information at their disposal to make intelligent choices. We check with tests whether the intended result has been achieved... much like teachers check whether their students have mastered a skill or knowledge component.
  • And then of course evaluators are also a little like an auditor in the way that we try to prove to people that money has been well spent.

The problem with explaining to people what we do is probably that people tend to confuse it with research and planning and implementation and all kinds of other things. I have also spent some time thinking about how being an evaluator is different from being a researcher, a planner and an implementer.

  • Because we use research techniques to collect evidence in order to evaluate, the difference between being a researcher (who asks questions in a specific way to gather evidence) and an evaluator (who asks questions in a specific way to gather evidence to then make a value judgment about the evaluand) is sometimes a little difficult to explain. But there is a difference!
  • Planning, on the other hand, comes quite naturally when you are an evaluator. After delivering an evaluation, I frequently get asked to assist in planning processes - if people value what you produced in the evaluation, they want to make sure that they plan to implement the recommendations made. Being a weather guy and an auditor makes it easier to plan, because you can draw information together to make predictions, and you know that you will have to explain to people why you chose to spend their money in a particular way.
  • I think evaluators will probably make terrible implementers. As an evaluator you are constantly asking questions: Is this the best way to do things? Will we achieve results? How would we know that we added value? How do we know that this is the best way forward? To implement, however, you sometimes have to say "Well, I don't know all the answers, but I am making a decision to do ABC in the following way and that is the way it is!"

Despite having useful analogies to explain what evaluators do and don't do, I think that we are at risk if we, as practitioners of a scientific metadiscipline, don't understand how the ideology underlying evaluation is different from those informing other jobs. Evaluators might share some commonalities with teachers, weather people, auditors and journalists, but we have different values and assumptions guiding our work. We cannot forget that our work probably has a deeply political nature, because at some stage we have to choose whose questions we will answer.

I found the following useful bit about evaluation approaches and the underlying philosophy, epistemology and ontology at

Classification of approaches

Two classifications of evaluation approaches, by House (House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher, 7(3), 4-12.) and by Stufflebeam & Webster (Stufflebeam, D. L., & Webster, W. J. (1980). An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis, 2(3), 5-19.), can be combined into a manageable number of approaches in terms of their unique and important underlying principles.

House considers all major evaluation approaches to be based on a common ideology, liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual, and empirical inquiry grounded in objectivity. He also contends they all are based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which “the good” is determined by what maximizes some single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of “the good” is assumed and these interpretations need not be explicitly stated nor justified.

These ethical positions have corresponding epistemologies—philosophies of obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic. In general, it is used to acquire knowledge capable of external verification (intersubjective agreement) through publicly inspectable methods and data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic. It is used to acquire new knowledge based on existing personal knowledge and experiences that are (explicit) or are not (tacit) available for public inspection.

House further divides each epistemological approach by two main political perspectives. Approaches can take an elite perspective, focusing on the interests of managers and professionals. They also can take a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups according to their orientation toward the role of values, an ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value actually might be. They call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object. They call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of some object. They call this true evaluation.
