Conducting an evaluation of a development intervention is often a complex process: reality is complex and unpredictable, and issues emerge that need to be responded to. An evaluation, especially a one-off exercise, can capture only part of this reality. A common problem is that evaluations often go unused because many of them:
• fail to focus on intended use by intended users and are not designed to fit the context and situation
• do not focus on the most important issues – resulting in low relevance
• are poorly understood by stakeholders
• fail to keep stakeholders informed and involved during the process and when design alterations are necessary.
This guide is therefore timely as it provides a basic foundation on how to make evaluations matter. It brings together existing concepts, evaluation methods and tools that have been found to work well in the field in a way that is straightforward and easy to follow. Stories of people’s experiences have been used to illustrate key points. In addition to this, the chapters have been written in a way that allows you to read them independently.
The guide is not a comprehensive book on how to carry out evaluations. Rather, it provides an overall framework with guiding principles for conducting an evaluation. The guide draws heavily on the experiences of the Centre for Development Innovation, Wageningen University & Research centre, particularly its work on ‘managing for impact’ in the international PPME-managing for impact course, a regional IFAD-funded capacity development programme on managing for impact in East and Southern Africa, the strengthening of organisations’ M&E systems, and the many evaluations carried out by CDI. The guide also draws heavily on Michael Quinn Patton’s Utilization-Focused Evaluation approach (2008). The importance of good evaluative practice and the need to embed evaluations into existing learning processes within organisations are emphasised.
Chapter 1 presents four core principles underpinning evaluations that matter. These are: utilization-focused and influence- and consequence-aware; focusing on stakes, stakeholder engagement and learning; situational responsiveness; and multiple evaluator and evaluation roles.
Chapter 2 gives an overview of suggested steps for designing and facilitating evaluations that matter, with a particular focus on utilization and on being aware of the possible influences and consequences of evaluations. It stresses the importance of including primary intended users and other key stakeholders in the evaluation so as to enhance understanding of the development intervention. The key steps of the evaluation process – establishing ability and readiness; focusing; implementing; and evaluating the evaluation – are covered. In Chapter 3, the role of stakeholders is highlighted in terms of their stakes, their participation, and the consequences of choosing whom to involve and whom not to involve in the process. The need to balance content and people processes is also discussed.
Core concepts and ideas centred on making evaluations learning experiences are presented in Chapter 4. Barriers to learning and ways of enhancing learning among stakeholders are also explored. Chapter 5 brings the possible influences of evaluation on change processes to the surface and explains how you can go about managing change. Central to this is Kotter’s (2002) suggested steps to facilitate change.
You will find in the Annexes an example of learning purposes, evaluation questions, uses and users of an evaluation for a food security programme, a table comparing traditional programme evaluation with developmental evaluation (Patton, 2011), as well as a list of references followed by a glossary and acronyms and abbreviations.