Webinar! Measuring Peacebuilding Education and Advocacy with the Harvard Humanitarian Initiative

The Education for Peacebuilding M&E Community on DME for Peace was pleased to host a discussion, Measuring Peacebuilding Education and Advocacy: Strategic Level Results and Beyond, on Tuesday, October 27, 2015.

With support from the UNICEF Learning for Peace Program, the Harvard Humanitarian Initiative (HHI) is conducting a series of knowledge, attitude, and perception (KAP) surveys to measure the relationship between education and peacebuilding outcomes, including social cohesion and resilience, in five selected countries: Burundi, Côte d’Ivoire, the Democratic Republic of the Congo, the State of Palestine, and Uganda.

Each study was designed to provide UNICEF country offices and respective government and implementing partners with relevant data to inform education for peacebuilding programming and monitor results of ongoing interventions. Together, these studies strengthen existing efforts to measure dimensions central to education for peacebuilding, including social cohesion and resilience.

Patrick Vinck, Ph.D., director of the Harvard Humanitarian Initiative’s Peace and Human Rights Data Program, led a discussion on the methodological approaches HHI developed for this collaborative research initiative, as well as some of the challenges of implementation across different contexts and stakeholders.

About the Speaker

Patrick Vinck, Ph.D., is the director of the Harvard Humanitarian Initiative’s Peace and Human Rights Data Program. He is an assistant professor at Harvard Medical School and the Harvard T.H. Chan School of Public Health, and holds faculty appointments at the Faculty of Arts and Sciences and Brigham and Women’s Hospital. Currently, Patrick directs several research projects assessing progress toward peace, resilience, and social cohesion in countries affected by mass violence. His research, available at www.peacebuildingdata.org, examines the effects of exposure to violence, long-term solutions to displacement, and the role of institutions such as education and transitional justice in peacebuilding. He has written about the consequences of war and the role of justice, governance, and transitional mechanisms in achieving peace. He is also the co-founder of KoBoToolbox (www.kobotoolbox.org), a leading tool developed to enhance the collection, aggregation, and visualization of crisis data. He co-founded the Data Pop Alliance, a partnership between HHI, MIT, and ODI to advance the responsible use of Big Data for social good. Patrick serves as a regular consultant and advisor on peacebuilding and vulnerability analysis to the United Nations World Food Program, the World Bank, and the Peacebuilding Fund.

Recording and PowerPoint

View the presentation PowerPoint here: HHI PowerPoint

Please join the conversation in the comments below! And join the broader conversation on our discussion pages.


  1. Hello All! Here are the questions we did not have time for during this webinar. Please feel free to chime in if you have an answer to any of these. We also hope to hear from our speaker, Patrick Vinck, when he has some time so please check back for his comment post. Thanks again for being part of this community of practice! – Francis

    1) Where can we find the results of the research you did in Burundi? I’m asking because the Burundi data has not been made available on your website.

    2) What were the challenges and successes in linking children’s data with their parents’/caregivers’ and educators’/community leaders’ responses? I supported the design of an Early Childhood Education baseline in a country in Southeast Asia that followed a similar methodology, so I would be pleased to learn from your experiences.

    3) Do you measure any indicators of social behavior change (going beyond perceptions to measure actual behaviors)?

    4) How has the data been utilized (or how will it be utilized) to strengthen education policy and practices so they may contribute to social cohesion and resilience, which are the overarching objectives of the program?

    5) Thank you so much for this valuable presentation. Do you have any preliminary results or interesting evidence from the countries where you have already undertaken the studies?

    6) Can you show how attribution is explicitly demonstrated using your approach?

    7) What types of specific indicators did you collect data on?

    8) When you’re doing surveys and interviews of minors, what are the rules regarding informed consent? Is it hard to get institutional permission for this kind of study?

    9) What do the axes mean?

    10) Thank you for your presentation. Can you discuss how you developed (or where you acquired) indicators for social cohesion, resilience, etc.?

  2. Hello all, I just returned from travel so I will try to tackle some of the questions now…

    1) Burundi data: all of our research and results are ultimately posted on peacebuildingdata.org. We are finalizing the report for Burundi and hope to have it out by early December 2015; it will also feature interactive maps for browsing the various indicators.

    2) The use of digital data collection tools is central to our ability to link data about children with the responses of their parents/caregivers and of educators/community leaders. We used bar-codes with unique identifiers to link the records: interviewers would scan the relevant bar-code at the beginning of the interview and specify the type of respondent (e.g. child, caregiver,…), creating a link between records without collecting any kind of identifying data. This requires training and quality control, but careful planning (e.g. organizing appointments for interviews) and monitoring help achieve high quality. Of course there is the occasional “orphan” record, for example if the caregiver is no longer available for interview, but these are minimal problems. (The tool we developed and used is free and open source at http://www.kobotoolbox.org.) A simple illustration of this kind of record linking appears at the end of this comment.

    3) We measure social behaviors and, in some cases, self-reported changes in social behaviors. But just like perception measures, these have limitations: people may recall past behaviors that differ from their actual actions. Careful design and selection of the recall period help a lot in building a more reliable instrument.

    4) That is a challenging question: how will the data be used? Our team places great value on the role of data and research in informing policies and programs. However, evidence-based decision making is not easy to achieve, and a range of factors influence how data is used. We don’t always know which part of the data will ultimately prove most useful, or which analysis resonates most with the program implementer. In this case, the participatory design and in-depth discussion of the results ensure a direct link with the field. The information gathered helps us understand the context in which programming takes place and, to some extent, the effect it is having. This in turn informs current and future programming.

    5) I promise results will be coming very soon!

    6) Attribution is not explicitly demonstrated using this approach. Data is collected at one point in time among various groups, and the differences observed can be the result of many factors beyond the program itself. Self-reported attribution of social behavioral change is not strong evidence, but we can clearly identify associations between key variables (a simple illustration of such an association test appears at the end of this comment). In this case it was impossible to adopt a design that would explicitly demonstrate attribution, because we became involved when the project was already well underway. But even when it is possible, designs such as RCTs have their own limitations and generate important ethical questions. Ultimately, I would argue that no method is inherently better than another; rather, the research questions, the program cycle, and the resources available determine the optimal design.

    7) The survey instrument is quite long, with interviews lasting an hour on average. Each country has its own instrument, but with similar core elements covering demographics, priorities, access to and perception of services, experience of education, security, exposure to violence, sense of cohesion, and resilience factors. Our work on resilience with Interpeace helped us better understand and define concepts like resilience, and we are working on putting together guidance on this topic.

    8) Doing research with minors is difficult both because the content needs to be appropriate and because, administratively, it requires a number of additional steps. Interviewing minors requires a parent or guardian to consent and the youth to assent to being interviewed. This process of informed consent is essential to conducting ethical research. Being at a US-based research institution, we are fortunate to have access to an institutional review board, which reviews and approves our protocols. Similar approvals are obtained in each country of operation. Such review is typically not required when development and humanitarian agencies collect data for programmatic purposes. However, I would strongly recommend doing so to ensure that subjects are protected. The process can be a bit slow and cumbersome, but it is not “hard,” and it ensures that ethical standards are upheld.

    9) The “concept map” is not a structured graphic. It is simply a visual tool that enables us to identify key concepts, group them together, and then reflect on the dimensions or indicators that should be collected. It reflects the output of brainstorming sessions in which we try to break the linear thinking of theories of change and logframes and instead map the complex concepts, variables, and their interactions so that we can identify the key dimensions that the survey instrument should explore.

    10) When seeking to develop indicators for complex concepts such as social cohesion or resilience, I would ask two central and quite simple questions (not necessarily in this order):
    (a) How have others done it?
    (b) How is it defined locally (where the research will take place)?
    Responding to these questions is not as simple as asking them. The first requires extensive research into the existing literature, which quickly points to differences in approaches and to ongoing debates. Experience helps in navigating what is most relevant, but there are never definitive answers such as “this is how you should measure resilience.” In fact, I recently co-hosted a meeting at USIP on understanding and measuring “reconciliation.” What was striking was the range of approaches, each valid but different from the others.
    Rooting the concepts in local understanding is just as important. Social cohesion in a society divided along ethnic or cultural lines may need to be defined and measured differently than in a context where such divides do not exist, or where other divides exist. This is why mixed methods are essential. In our case we frequently use sequential designs whereby qualitative work (focus groups, interviews) helps us better define the concepts and the indicators we subsequently measure, and then helps us interpret the findings.
    After 15 years, we have developed some measures that we prefer over others. One challenge is to make these more widely available, but we are working on that!
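
    To make the record-linking approach described in answer 2) above more concrete, here is a minimal sketch of how records exported from a digital data collection tool could be joined on a shared anonymous identifier (the scanned bar-code value). The file names, column names, and the Python/pandas approach are illustrative assumptions, not the actual study pipeline.

        # Minimal sketch (assumed file and column names): link child and caregiver
        # records on an anonymous identifier, without storing names or other
        # identifying data.
        import pandas as pd

        children = pd.read_csv("child_survey.csv")        # includes a household_id column
        caregivers = pd.read_csv("caregiver_survey.csv")  # includes the same household_id

        # Inner join keeps only matched child-caregiver pairs.
        linked = children.merge(caregivers, on="household_id",
                                suffixes=("_child", "_caregiver"), how="inner")

        # "Orphan" records (e.g. a caregiver who could not be interviewed) are kept
        # aside for quality control rather than silently dropped.
        orphans = children[~children["household_id"].isin(caregivers["household_id"])]
        print(len(linked), "linked pairs;", len(orphans), "unmatched child records")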
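
    Similarly, the associations between key variables mentioned in answer 6) can be illustrated with a simple test of independence between two survey variables. The variable names below are hypothetical, and the sketch shows association only, not attribution.

        # Minimal sketch (assumed column names): cross-tabulate a reported exposure
        # variable against a perception item and test for independence.
        import pandas as pd
        from scipy.stats import chi2_contingency

        survey = pd.read_csv("survey_data.csv")  # hypothetical export

        table = pd.crosstab(survey["attended_peace_education"],
                            survey["trusts_other_group"])
        chi2, p_value, dof, expected = chi2_contingency(table)
        print(table)
        print("chi-square =", round(chi2, 2), " p-value =", round(p_value, 3))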
