Earlier this week I got back from work-related travel to Kenya and ran straight into two full days of training. We had planned to train a client's staff on a new observation protocol that we had developed for them, based on a tool they had used before. Before finalising it, we took time to discuss the draft with a small group of the staff and confirmed that they thought it could work. We expected the training to go well.
Drum roll... It didn't. On a scale of 0 to going well, we scored a minus 10. It felt like I had a little riot on my hands the moment I started with "This is the new tool that we would like you to use".
Thinking about it, I could have crashed and burned in the most spectacular way. Instead, I took a moment, planted a slap on my forehead, uttered a very guttural "Duh!" and mentally paged through "Evaluation Basics 101 - kindergarten version". Then I smiled, sighed, and cancelled the afternoon's training agenda. I replaced it with an activity that I introduced as: "This is the tool that we would like to workshop with you, so that we can make sure you are happy with it before you start using it".
Some tips if you ever plan to implement a new tool (even one that is only slightly adjusted) in an organisation:
1) Get everybody who will use the tool to participate in designing it.
2) Do not assume that adjusting an already existing tool exempts you from facilitating the participatory process.
3) Do not discuss the tool with only a small group from the eventual user base. Not only will the users who weren't consulted riot; even the ones who had their say in the small group are likely to voice their unhappiness.
When we were done, the tool looked about 80% the same as it had at the start, and nobody complained again about its length, its choice of rating scale, or its underlying philosophy.
Lesson learnt. (For the second time!)