I was recently in Kenya with Lesley, another member of the Darwin team at LTS, conducting field-based monitoring and evaluation (M&E). Although I have been to Kenya before, and travelled throughout East Africa, it was my first time at the Kenyan coast. The idea was that as well as supporting Lesley and finding out more about M&E in the Darwin Initiative, we could also use some of the findings for the poverty thematic review that we’re working on.
I was in Kenya for 8 days in total, and mainly focused on a mid-term review of one project. As Lesley’s previous blog post mentioned, the main purpose of these mid-term reviews is:
- the verification of results
- obtaining an independent perspective on the projects
- troubleshooting and supporting projects to reorient themselves
In her blog, Lesley talked generally about what we do and why, so following on from that, I thought it might be interesting to focus on the methodology that we used during the fieldwork. We often think that it’s the findings of M&E that are important, but there are lots of interesting lessons we can learn from having open methodological discussions.
A methodology is basically a system of methods that are used in a particular way or area of study. I particularly enjoy developing methodologies, as I think that they help you think practically about how you are going to conduct a piece of research, however big or small it is. For this particular field-based evaluation we decided to use a range of methods to capture different aspects of the project and to suit the different people we were targeting. For consistency, we used the same approach across the different geographical areas the project was operating in.
This methodology, informed by the broader evaluation questions, guided the fieldwork:
- Identifying indicators of coral reef health
I was in Kenya specifically to look at the poverty aspects of a marine project. Little did I know that as well as talking to various people the project had been working with, including local communities and local fisheries officials, this would also involve some snorkelling. As someone who isn’t a particular fan of sea creatures (including fish), I was a little apprehensive about the snorkelling part. Luckily, I was with Lesley (a marine expert) and representatives of the community (local experts) who made me feel extremely comfortable, whilst at the same time educating me on indicators of marine ecosystem health. The majority of the time was spent talking to people, so I felt much more in my comfort zone.
- Semi-structured interviews with key informants
Semi-structured interviews are an established tool in conducting evaluations. We used specific evaluation questions to develop a set of questions to act as an interview guide. Questions were left open to encourage participants to elaborate on their responses and explore why respondents were giving particular answers. Such an approach also enabled us to probe on particular issues, whilst at the same time allowing participants to lead the conversations.
We started out with semi-structured interviews with project staff to give us a better idea of how the project worked in practice. A couple of days into the evaluation we also talked to a range of local fisheries officials to understand their level of engagement with the project and their perceptions of how they felt the project was contributing.
- Semi-structured focus groups with community members
We followed a similar semi-structured process for focus group discussions with community members. Initially, we held meetings with a couple of community groups to verify information about the project, such as when it started and what the main activities were, and also to try to understand how both beneficiaries and non-beneficiaries viewed the project. We also used these discussions to identify what the main benefits and challenges had been so far. In some communities we split the community members into smaller groups to encourage participation.
- Participatory ranking
Participatory ranking is a commonly used methodology for better understanding the range of views in a group. It is a ‘mixed methods’ approach that generates a rich picture of participants’ views that can be quantified and compared within and between groups, and act as points of discussion for the collection of qualitative information.
Building on what we’d found out in the community meetings, we developed a participatory ranking exercise. Each individual was given three ‘votes’ to identify which, for them, were the greatest benefits the project had brought them. When participants had completed the ranking exercise, we recorded the voting and then asked a series of questions to help us understand why people had voted for certain things.
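For anyone curious how such a ranking can be quantified afterwards, here is a minimal sketch of tallying and ordering the votes. The benefit categories and vote counts below are purely illustrative, not the project’s actual results:

```python
from collections import Counter

# Hypothetical recorded votes: each participant cast three votes
# for the project benefits they valued most (names are made up).
votes = [
    "training", "training", "new fishing gear",   # participant 1
    "training", "market access", "new fishing gear",  # participant 2
    "market access", "market access", "training",     # participant 3
]

# Tally the votes per benefit.
tally = Counter(votes)

# Rank benefits by total votes, highest first, as a starting
# point for discussing why people voted the way they did.
for benefit, count in tally.most_common():
    print(f"{benefit}: {count} vote(s)")
```

The same tallies, kept per community group, can then be compared between groups to see where priorities diverge.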
- Theory of change mapping
We concluded the trip with a final meeting with the project team to share what we’d observed in the field and also obtain their input into building a theory of change for the project. The idea of building a theory of change was daunting at first, but after a while the staff got to grips with the process and were able to talk animatedly about how they envisaged the project, how this linked to their activities, and identify the associated assumptions.
We selected this particular range of methods because we felt they best suited the questions we were asking and the people we were targeting. This methodology provided us with a systematic way of conducting M&E in this context. Of course this is just one approach, and there are a whole range of methods and other participatory tools that we could have used. Was this the best approach? Well that’s open to debate, so let us know what you think.
Want to know more about our findings? Then follow the blog for updates and keep your eyes peeled on Twitter, as I will be discussing them in future blogs. @Darwin_Defra