M&E Thursday Talk: Conflict Scans as Monitoring Tools for Reflective Practice – Lessons Learned from Search for Common Ground – 3.27.14
The Peacebuilding Evaluation Consortium (PEC) and the Network for Peacebuilding Evaluation (NPE) are pleased to have hosted their second Thursday Talk on “Conflict Scans as Monitoring Tools for Reflective Practice – Lessons Learned from Search for Common Ground (SFCG)” with Charline Burton, Design, Monitoring and Evaluation Specialist at SFCG, on March 27, 2014 from 10am-10:45am EDT (UTC-4).
Part 1: Charline Burton
The remarks that I’m going to make today come from Search for Common Ground’s experience as a peacebuilding actor, as well as our experience of reflection and learning within our institutional learning team.
Because we are peacebuilding actors, the vast majority of our projects take place in conflict settings: we’re working in CAR, Sudan, Jerusalem and Nigeria, to name a few. And even if we follow the OECD-DAC advice of conducting conflict assessments when starting new programs, there is still a huge challenge that we constantly face in all of the countries where we work: conflict is dynamic, and the situation that you faced and analyzed at Time Zero of your project may be totally different 6, 12 or 18 months later. Ukraine, Sudan and CAR are living examples of what I’m referring to. The question that we all ask ourselves is: how do you ensure that your project’s strategy and activities are still in line with the context? Our resources – budget and staff – are limited, and we can’t dedicate all our time to constantly monitoring the conflict context.
Two years ago, we at SFCG started a reflective process about ways we could do some real-time monitoring of the conflict dynamics. What I’m going to share with you today is SFCG’s experience in trying to overcome this challenge, and the lessons that we learned from it.
B. What we did in the Democratic Republic of Congo (DRC)
DRC is a Central African country that has been in permanent conflict for the last fifteen years. At the moment we started the “conflict scans”, which I’m about to explain in detail, there were no fewer than 15 different rebel groups in Eastern Congo. Search for Common Ground had been implementing peacebuilding activities there for over a decade, systematically facing the same challenge: baseline assessments were soon outdated and often could not be relied on after a few months. So what did we try to do?
- We had the specific purpose that the data would feed a reflective practice to help inform and, if need be, revise our program’s design;
- From the early stages, we knew that we wanted to measure the dynamics at a very local level: given the size of the country, we had no intention of measuring the national context;
- We also wanted to be able to monitor the conflict dynamics in a way that was systematic but not labor- or resource-intensive.
We therefore launched a new concept for Search for Common Ground: the “conflict scan”. A conflict scan is a quick data collection procedure to help program managers gather information about the changing conflict dynamics in the environment in which a project is operating. It helps inform conflict-sensitive decision making by providing frequent, up-to-date information about how conflict actors are relating to one another, how everyday people are dealing with conflict, and what conflict trends and triggers pose a risk to projects and their staff.
How we collected the data:
As I told you, we wanted this monitoring not to be too time- or resource-consuming, so we decided to integrate conflict monitoring into our activities. To do so, we picked one of our most popular activities: participatory theatre. In a few words, participatory theatre is an outreach activity conducted by actors in very remote communities. We call it “participatory theatre” because, among other things, the actors spend half a day in every community discussing local problems and conflicts before creating a play that is in line with the local reality; only after that do they perform the play, in which they model positive ways that the local conflicts can be resolved.
- We decided to take advantage of those “informal conversations” to start collecting data to systematically monitor the conflict trends in our target areas.
- We therefore designed a questionnaire that we asked the actors to use in order to formalize what used to be informal conversations. The questionnaire was open-ended with a long list of coded answers.
- The actors were trained on the use of the questionnaire and how they should fill it in.
- We had a trial period of about 2 months before adapting the questionnaire
- The questionnaire was designed to fit each project’s needs, yet there is a set of about 10 questions that remain the same in each questionnaire. This is designed to help gather comparable data across projects, and we eventually intend to conduct a yearly meta-analysis.
- For each region, some other qualitative data collection also took place, such as focus group discussions and key informant interviews. The number of such activities depended on each project’s budget and staff availability.
- A report was produced for each region by the program managers, with support from a technical team made up of the Monitoring and Evaluation Coordinator and the Conflict Sensitivity Program Coordinator. The periodicity of the reports varied, but they were usually issued every 3 to 6 months.
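To make the mechanics concrete, here is a minimal sketch of how coded questionnaire answers could be tallied and compared across scan rounds to surface conflict trends. The conflict codes and data below are purely illustrative assumptions, not SFCG’s actual coding scheme or findings:

```python
from collections import Counter

# Hypothetical coded responses from conflict-scan questionnaires.
# Each record is (region, scan_round, conflict_code); the codes here
# are invented for illustration only.
responses = [
    ("North Kivu", 1, "land_dispute"),
    ("North Kivu", 1, "land_dispute"),
    ("North Kivu", 1, "armed_group_presence"),
    ("North Kivu", 2, "armed_group_presence"),
    ("North Kivu", 2, "armed_group_presence"),
    ("North Kivu", 2, "land_dispute"),
]

def tally(data, region, scan_round):
    """Count how often each conflict code was mentioned in one scan round."""
    return Counter(
        code for reg, rnd, code in data
        if reg == region and rnd == scan_round
    )

# Comparing two rounds side by side makes emerging trends visible.
round1 = tally(responses, "North Kivu", 1)
round2 = tally(responses, "North Kivu", 2)
for code in sorted(set(round1) | set(round2)):
    print(f"{code}: {round1[code]} -> {round2[code]}")
```

Keeping a fixed set of codes across rounds, as the common questions do across projects, is what makes this kind of longitudinal comparison possible.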
How we used the data:
Primary users: SFCG and its implementation partners
The primary users of this data were Search for Common Ground and our implementation partners: each time a report was issued, we would sit with the program staff and partners and reflect for an average of two to four hours. The main reflection questions usually revolved around the project’s target groups, activities, locations, and general strategy. Are they still accurate? Are there better programmatic options to ensure as much impact as possible?
Secondary users: the aid actors
The “real-time” Action Research reports that we issued on some key regions, particularly the Kivus, were also shared with humanitarian NGOs, UN agencies and donors to help ensure that their work, too, was conflict sensitive. This information-sharing system contributes to a collaborative learning process between different aid sectors and actors. This data was meant to help avoid:
- NGOs “Doing harm” by undermining the local social fabric and peace dynamics
- NGOs missing opportunities to have a positive impact on social cohesion
C. Lessons learned
There are some lessons learned from this experience, which I want to share with you.
- Flexibility: Monitoring conflict trends allowed SFCG to be very reactive to the dynamic context and proved to be a successful strategy for ensuring a quality and timely response to emerging issues. However, this was possible only because we have fairly flexible program designs, which leave some room for adapting activities and focus. In order to ensure such flexibility, there are two actions to take:
A) Ensure that the project you design is flexible in essence. For instance, you may want to say that “the exact locations of the program will be decided upon the data collected during the baseline conflict assessment and will be revised quarterly based on the monitoring data of the conflict dynamic”
B) Talk with your donor to ensure that they buy into this flexible programming;
- Monitoring the conflict takes time, so plan for it: Data collection was a relatively quick process since we were integrating it into our usual activities. However, data analysis was much more time-consuming than initially expected (2-3 weeks per report), and program managers and DME staff had to dedicate a lot of their working time to the analysis. There was a tension between the need for a rigorous process and the need to publish our reports quickly;
- Communications: Partner buy-in differed widely from one organization to another. Some of them really used the data that we were sharing with them, actually making programmatic changes, while others did not seem interested at all, or were even reluctant to be involved because they feared it would damage their image. It is important to ensure clear communication about our goals and methods, and to provide clear examples of how others may use our data to improve their impact locally;
- Format: In line with this, it is also important that you adapt the format of your reports to your audience. SFCG produced one single report while there were two main intended users: SFCG on the one hand, humanitarian actors on the other. Because we had to have nice-looking reports that would go public, we ended up delaying the internal use of the data, as the whole process took more time;
- Don’t raise expectations. In other conflict monitoring processes that SFCG set up in other conflict countries, we noticed that sometimes our informants developed expectations that sending data to SFCG would lead to deployment or response by security actors to the incident reported.
- Formalizing the process of monitoring conflict: We noticed that many partner organizations were already conducting some kind of conflict monitoring, but they did not formalize the process or the analysis. Even peacebuilding organizations think about conflict/peace drivers and monitor them, but they don’t systematically document the changes. This is an effort that I strongly encourage everyone to make.
Well, this is the essence of what I wanted to share with you today. I’m sure that some of you have experiences you’d like to share about how you integrated conflict monitoring into your monitoring systems, and that others have questions that will help push the field of peacebuilding evaluation forward. I’ll do my best to answer them. Thanks for listening.
Part 2: Question-Answer Period
The following questions were responded to by Charline Burton, and her answers are summarized below each question. Please help us continue this discussion by providing your own thoughts, experiences, and perspectives on these very insightful questions!
- Managing expectations amongst partners: How do you ensure that partners’ expectations around data are not too high?
Charline Burton: We work with two types of partners: international NGOs and UN agencies are one type, and the other is local civil society organizations, local leaders, and “men on the street,” community members. The local groups tend to be the ones who may raise their expectations too high, so to mitigate this risk, we ensure that our data collectors are properly trained on how the data will be used, what it can and can’t do, and that they convey this information to our informants. Ensuring clear communications with local leaders and influencers is critical, so when we go to a new area, we make sure that we explain clearly to the local leaders the scope of what our projects can do, and what they can’t do. For example, in northern Uganda and Nigeria, we set up an early-warning SMS system, but we found that people’s expectations of what would come from sending a text were higher than what we had the capacity to do. So we developed stronger relationships with the security sector, the people who could respond to security threats.
- Communications: How long does it take to write two reports? Many of us barely have the capacity to manage a single evaluation report. Further, did your journalist background affect how you framed the different reports for the different audiences?
Charline Burton: I don’t think my background in journalism affected how the reports were framed. Your report should be based on who your target audience is and how they are going to use the information. For example, when sharing reports with communities, we don’t provide written reports, because many people are illiterate. Instead, we have oral presentations, with maybe some graphics, that present the results. When you’re writing multiple reports from the same group of data, you should start by using the data internally, because you don’t need to worry about the format as the purpose is to identify how your program or project should be adapted. Then, at SFCG, what we ended up doing was hiring a person to help us analyze data and create formatted reports for our partners, so that our program staff aren’t overwhelmed.
- Data collection: How do you maintain balance between program implementation and data collection?
Charline Burton: In the participatory theatre program I described, the program staff were already collecting data, so we focused on formalizing this process and adding a few questions. This ensures that there is added value without too much added time commitment. We found, however, that producing the reports does take real time, about two to three weeks; we thought it would be easy, but it is not. In most cases, local peacebuilding partners are already collecting data as they try to understand and end the conflict, so you formalize and build on that.
- Data collection: How did you match data collection to early warning systems?
Charline Burton: In peacebuilding projects, we try to increase the capacity of local people to deal with conflict, and we help with mediation between groups, but we don’t have the capacity ourselves to respond to security threats. So, as we collect data, we make sure that we share it with relevant actors who may be in better positions to act/react to security threats.
- Data collection: Could you describe how you tell people how the data will be used and who it will be shared with?
Charline Burton: When we started doing formal data collection, we would make sure that the data collectors would say that a report would come of the data, and who it would be shared with, but that it would all be anonymous.
- Cross-sector partnerships: How did that partnership develop?
Charline Burton: We started doing conflict scans in partnership with other humanitarian aid organizations, who were working in education, health, sanitation, etc. So based on the positive feedback from those partners and that experience, we started the whole program on conflict sensitivity, sharing our conflict analyses with other humanitarian actors hoping that they would use the analysis to make their programs more conflict sensitive, so that their education/health/sanitation programs could contribute to peacebuilding.
- Security sector partnerships: How do you avoid being seen as part of the security system in the country – “spies” for the authorities, if you will?
Charline Burton: Through clear communication. Data collection is only one part of many peacebuilding activities that we do in communities; we also have many projects on negotiation, training different actors on conflict sensitivity- so it is not just a “one-shot” thing where we come in and collect data. People in the communities know us, and because we make sure that we talk with everybody, from the officials to the local leaders to the “men on the street,” we have a high level of visibility, and are able to ensure that people understand what we are doing.
- Partner feedback: What feedback have you received from partners you’ve worked with?
Charline Burton: We shared our conflict assessment with many actors in the DR Congo, and we found that the assessment helped organizations select their local partners, and decide how to select beneficiaries, such as where to build a school. We highlighted both visible and invisible actors in the conflict, which we hope helped the actors we shared the report with redesign their programs to more properly respond to the local dynamics and needs.
- Partner feedback: Could you give us some more examples of where a local partner or a humanitarian aid organization partner gave you feedback after you started using these reflective practices? What have people said about this process?
Charline Burton: Some partners gave us very positive responses, but there were some who were more suspicious of the process. In the DRC, there are so many humanitarian organizations that sometimes they are actors in the conflict, especially because local people don’t always understand how an organization is operating, or how it is choosing where to spend its resources. So, if we have a report that identifies a humanitarian organization as an actor in the conflict, then we will first contact the organization and say “this is what people are saying about your staff or how you choose where to work.” We also give the organization the opportunity to explain itself through a community mediation forum, because often it is because people don’t understand how an organization is operating that they suspect it. Another mitigation strategy is to simply not name the organization. Sometimes, in our findings, a group of people will say certain things about another group, while that other group says things about the first group, so we may have separate discussions with the separate groups, to ensure that they feel safe saying what they believe. Then we can say: this is what the other group is saying, and this is what your group is saying; do you agree with this, is it accurate? So we have them validate the data. Then we ask if we can share the data with provincial or national actors. In these ways, we mitigate these challenges.
- Quantitative vs. qualitative data: My impression is that the conflict monitoring process is largely qualitative, based on interviews. Do you use any quantitative methods to balance that and provide reliable information?
Unfortunately, we ran out of time before this question could be addressed, but please check back for a written response! In the meantime, what are your thoughts on this question and the others? Provide your comments and help us continue the discussion!