
Attending Evaluation 2017? Check out our guide to this year’s conference!

DME for Peace

Created 11/07/2017


Are you attending next week’s American Evaluation Association Annual Conference, Evaluation 2017? Check out DME for Peace’s recommendations for what not to miss in Washington, DC! For a full list of events, see the Evaluation 2017 website.

Thursday, November 9th

Defining Conflict: The Development of Shared Language and Cross-cultural Understandings about Abstract Outcomes

When: 1:15-2:00pm

Where: Roosevelt 2

This presentation will discuss the importance of a common language for conflict and the challenges we face as peacebuilders in responding to conflict. As the peacebuilding field continues to grow, moving toward a common language on success is crucial for speaking to results as an organization and for examining the relevance of broader programmatic priorities. However, the word conflict is not universally understood – it can mean violence, natural disagreement, injustice, and much more. Search for Common Ground established a common language for the conflicts in which we work – a process that took a year of testing and review across all regional teams and a variety of country contexts. This presentation outlines that process, the agreed-upon categories of conflict, the questions deemed relevant for the organization, and how the shared language has improved reflection and cross-organizational learning for country teams.

When the Goal is Prevention: Monitoring, Evaluation, Research and Learning Processes to Understand Drivers of Violent Extremism

When: 1:15-2:00pm

Where: Park Tower Suite 8218

Violent extremism (VE) challenges governments and communities across the globe. Approaches to preventing VE include addressing the core political and social challenges that allow its drivers to fester and spread, fueling fragility and instability. It is critical to understand those drivers, including the push and pull factors that lead to recruitment and radicalization, and to equip local stakeholders with the tools to address them. However, programs focused on VE face unique challenges when it comes to monitoring and evaluating their results and using programmatic data and evidence to learn and adapt future efforts. Monitoring and evaluation specialists from organizations currently implementing programs to prevent VE will discuss these challenges, as well as monitoring, evaluation, research and learning practices that can inform approaches to counter and prevent VE.

Learning from Youth-led Research and Evaluation

When: 2:15-3:00pm

Where: Washington 5

International development and peacebuilding organizations are making increasing use of youth-led research methods. In these approaches, youth play a leadership role in all stages of design, data collection, data analysis and results sharing, with professional supervision and guidance throughout the process to support the quality of research efforts. This panel will discuss the experiences of two international peacebuilding nonprofit organizations, Search for Common Ground and PeacePlayers International, in facilitating youth-led data collection at differing stages of the program cycle. One example will examine youth-led research conducted to inform program design and planning, while the other will discuss a youth-led outcome evaluation.

Within- and Across-Case Comparison of Developmental Evaluation (DE) in USAID

When: 2:15-3:00pm

Where: Coolidge

Social Impact, Search for Common Ground, and the William Davidson Institute at the University of Michigan have partnered with the U.S. Global Development Lab at USAID to test the developmental evaluation (DE) approach through the Developmental Evaluation Pilot Activity (DEPA-MERL). This session will explore preliminary findings within and across the pilots in the USAID context, addressing three research questions: 1) How is DE able to capture and promote the use of emergent learnings in support of ongoing program development? 2) What are the barriers and enablers to DE implementation? 3) What do key informants consider to be the value (added or lost) of conducting a DE versus a traditional evaluation? During the session, the team will provide an overview of the qualitative methods used to answer these research questions.

Frontiers in Resilience Measurement and Evaluation

When: 4:30-5:15pm

Where: Roosevelt 5

Considerable progress has been made in measuring household resilience to the impacts of natural disasters and other crises. This has been achieved through efforts to monitor post-shock recovery (e.g. post-earthquake Nepal, post-typhoon Philippines, complex emergencies in Somalia) as well as prospective holistic assessments to understand the dynamic social, ecological and economic systems within which communities operate and how they are affected by shocks.

Knowing what makes households resilient is only half the battle – practitioners must now evaluate what works to build resilience. Validation of a resilience approach to international development must rest on empirical evidence demonstrating efficacy and cost-effectiveness; otherwise resilience proponents remain vulnerable to skepticism, critique, and rejection. This panel, with presenters from Mercy Corps, TANGO International, and Tulane University, presents three recently concluded studies responding to these questions. Presenters will discuss methodological approaches, challenges, key findings, and issues for future research.

Friday, November 10th

From Learning to Action in Humanitarian Evaluation

When: 8:00-9:30am

Where: Roosevelt 5

Given the unprecedented magnitude of current humanitarian crises globally and the rather scant learning occurring among evaluators based in the field, the need for a refined understanding of evaluation methodologies and improved impact measures is greater than ever. With this awareness, the panel will promote reflection on how best to contribute action-oriented learning in both the humanitarian and development evaluation arenas. The first presentation will provide an overview of the current planning and management efforts undertaken by humanitarian organizations to enhance learning among their local staff and partners. The second presentation will focus on the processes and results of capacity development efforts aimed at enhancing both evaluation methodologies and practices in humanitarian aid. The third presentation will explore how the use of ethical guidelines could contribute to concrete improvements in programming and, in turn, to the protection of affected populations.

From Learning to Action in Gender and Humanitarian Evaluations: Examples of Mixed- and Multi-Methods Evaluations Conducted Across Methodological and Cultural Boundaries

When: 1:45-3:15pm

Where: Thurgood Marshall East

Advancing knowledge about the merit and worth of policies and programs frequently requires evaluators to broaden their theoretical, methodological and ideological frameworks – often pushing them beyond their “comfort zone” and across methodological and conceptual boundaries. However, little has been written about the real-world organizational, ideological, methodological and personal challenges encountered in this process. The panel will share experiences from a team of veteran evaluators with extensive field and academic experience in crossing boundaries. We will begin with a review of recent literature on crossing evaluation boundaries, particularly with respect to multi- and mixed-methods approaches. Case studies and personal experiences will then be shared on: incorporating a feminist focus into evaluations where technical experts and organizational decision-makers were accustomed to more conventional evaluation approaches; and crossing multiple boundaries, physical and methodological, in evaluations of humanitarian programs conducted in conflict zones.

War of the Worlds: Evaluators Versus Everyone & the Need to Communicate Results

When: 3:30-4:15pm

Where: Park Tower Suite 8219

Thousands of evaluations are published every year, and thousands of hours are spent on how best to communicate the lessons learned. Yet the peacebuilding field struggles both to communicate the value of evaluations and to translate their findings so that key lessons are used in future programs and field-wide learning. In many organizations, monitoring and evaluation staff find themselves isolated by the nature of their work, and a disconnect persists between evaluators and program or communications staff. Fundamentally, everyone in the peacebuilding field, M&E staff or otherwise, seeks to improve programming to make the greatest positive impact in the communities in which they work. This session will focus on identifying the challenges and potential solutions to this communication disconnect, drawing on the experience of Search for Common Ground, the DME for Peace project, and Mercy Corps to help participants discuss and brainstorm how to communicate results effectively.

Breaking Barriers to Participation and Inclusion in Peacebuilding Evaluation

When: 4:30-5:15pm

Where: Park Tower Suite 8219

How can we tackle the barriers to participation and inclusion in evaluation? In December 2016, DME for Peace hosted the Breaking Barriers Conference and Workshop in Cape Town. Using human-centered design processes, DME for Peace brought together a diverse group of local practitioners, global experts, funders, policy makers, multi-sectoral programmers, academics, and evaluators to confront the barriers to participation and inclusion in peacebuilding evaluation. Through a facilitated series of interactive design-thinking exercises that used different approaches to enhance genuine participation in the design, analysis, and solution-generation stages of the workshop, Breaking Barriers developed four human-centered, locally led, innovative solutions for improving participation and inclusion in evaluation practice in complex environments. This session will dive into the four concepts developed in Cape Town and engage the AEA audience in a discussion around participation and inclusion in evaluation.

Starting a Successful Developmental Evaluation: Hot Tips and Practical Learnings From Embedded Evaluators

When: 4:30-5:15pm

Where: Washington 1

Social Impact, Search for Common Ground, and the William Davidson Institute at the University of Michigan have partnered with the U.S. Global Development Lab at USAID to test the effectiveness of a developmental evaluation (DE) approach through the Developmental Evaluation Pilot Activity (DEPA-MERL). DE relies on embedded evaluators, who gather extensive documentation over the course of the DE and grapple with prioritization, personal information management systems, and burnout. Over the months or years of a developmental evaluation, embedded evaluators develop deep personal relationships with stakeholders that, if not carefully managed, can undermine objectivity and clarity in analysis. Through a Jeopardy-style panel discussion with embedded evaluators from the DEPA-MERL team and from other leading organizations working on DE methodology in international development, this panel will share candid and practical lessons learned from real embedded evaluators.

Evaluating Faith-Based Peacebuilding

When: 6:30-7:15pm

Where: Washington 6

Evaluation of faith-based peacebuilding is often ineffective because it lacks a process for considering religious peacebuilders’ conviction that a transcendent, supernatural presence is at work throughout the process. A consortium led by the Alliance for Peacebuilding (AfP) has developed the EIAP Guide to help practitioners evaluate religious peacebuilding. In this session, we address the implications by presenting the distinct features of faith-based activity that are especially relevant for evaluating religious peacebuilders’ activities and results. We examine the interrelationship between three complementary components of religiosity: belonging is most fundamental, doing is most visible, and believing provides explanation and legitimizes the other two. We present a framework for professional evaluation of faith-based peacebuilding built around the two components that can be measured – doing and believing – and present the EIAP Guide, along with the methods used to develop it.

Saturday, November 11th

Time Series and High Frequency Data Collection: Alternative to a Randomized Control Trial for Evaluating Mass Media Campaigns in Burkina Faso

When: 8:00-9:00am

Where: Thurgood Salon South – Ignite 60

Rigorous evaluation of mass media campaigns is challenging because such campaigns naturally operate at scale, which often precludes randomization or control zones. As a result, Randomized Control Trials (RCTs) cannot be used to evaluate the impact of media campaigns delivered at a national scale. Other evaluation designs are also unable to measure the impact of multiple campaigns implemented at different points in time. Development Media International, a specialist in running evidence-based media campaigns to change behaviors and improve lives in developing countries, has developed an innovative evaluation method that uses high-frequency, time-series data collection to evaluate media campaign results and enhance organizational learning and decision making. This session will present lessons learned in the field on the benefits of a time-series data collection approach as an alternative to RCTs, including best practices for employing this method and evidence of its applicability, from an ongoing evaluation in Burkina Faso.

Better Learning for Better Results: Improving the Impact of M&E in Peacebuilding

When: 9:15-10:00am

Where: Salon 1

Peacebuilding has lagged behind other fields such as education and public health in developing a culture of monitoring and evaluation and embracing practices of adaptive learning. The Peacebuilding Evaluation Consortium (PEC) has made great progress over the past three years in developing and testing rigorous methodological tools, encouraging a shift in the culture of learning in the peacebuilding field, and providing a “safe space” for important explorations of the nature of success, failure and impact in peacebuilding programming. This panel will explore the ongoing technical challenges of peacebuilding evaluation in complex environments, and the difficulties in creating a true culture of sharing and learning. Principals of the PEC will discuss their current work in developing shared platforms, creating guides for all levels of M&E activities, testing new tools, and holding policy dialogues around learning, accountability and adaptive learning, and will then turn to challenges and bright spots for the field.

The Holy Grail of Utilization-Focused Evaluation? Early Lessons from Piloting Developmental Evaluation in the USAID Context

When: 9:15-10:00am

Where: Washington 1

Developmental Evaluation (DE) aspires to be the holy grail of utilization-focused evaluation. By embedding a skilled evaluator within a program or organization, DEs generate ongoing, practical, context-sensitive information that informs timely decision-making. However, a bureaucratic context such as USAID, where the DEPA-MERL project is piloting DE, presents different incentives for, and constraints on, adaptive management. This panel will not only interrogate the prospects DE holds for utilization-focused evaluation at USAID, but will also shed light on practical tools (e.g. reporting and facilitation techniques) and lessons learned that current and future evaluators might use in their own work to boost utilization in bureaucratic or other less agile institutions.

Thinking Evaluatively in Peacebuilding Design, Implementation and Monitoring – Three Options

When: 11:15am-12:00pm

Where: Roosevelt 2

CDA Collaborative Learning Projects (CDA) will present a new resource guide: “Thinking Evaluatively in Peacebuilding Design, Implementation and Monitoring – Three Reflecting on Peace Practice (RPP) and Do No Harm (DNH)-infused options to strengthen the effectiveness of peacebuilding strategies and programs.” The Guide puts forward three options: Program Quality Assessments and Evaluability Assessments (two established processes in the evaluation field), plus Strategy and Program Reflection Exercises, all using findings and lessons from CDA’s Reflecting on Peace Practice (RPP) and Do No Harm (DNH) Programs as criteria for effective and relevant peacebuilding engagement. It provides concrete guidance for practitioners on how to implement different ‘evaluative’ options short of formal evaluations. The demonstration session will focus on practical opportunities for DM&E and peacebuilding practitioners to apply these options in evaluative processes with staff and program stakeholders. Participants are encouraged to review the Guide before the session: http://bit.ly/ThinkingEvaluatively

 
