Making Evaluation Results Count: Internalising Evidence by Learning
Development agencies are under constant pressure to improve their performance, not only in terms of project outcomes but also in the quality of their programming and their institutional capacity. In a very practical sense, evaluations are now perceived as learning opportunities. It therefore seems only logical to try to improve the internalisation of evaluation results at different levels. But how can this be done? What dilemmas do we face, and what have we learned so far?
Internalising evidence from evaluations: three perspectives on learning
(a) In development policy and programming
'Donors must enhance their capability for undertaking continuous learning, and use techniques that allow partners and stakeholders to act as the primary sources of information on changing opportunities and institutional capacity constraints.' (Donor representative, Maastricht workshop)
Probably the most straightforward way of enhancing learning is to look at its internalisation in development policy and programming. However, many existing feedback mechanisms are still largely one-directional, following the logic of information dissemination to selected target groups rather than treating communication around evidence as an iterative learning process.
The OECD/DAC Working Party on Aid Evaluation (2001) reviews some of the current experience. Most of the reports reviewed call for the establishment of what may be termed policy innovation networks, mobilising Southern partners and stakeholders to engage in existing learning and feedback routes. The International Fund for Agricultural Development (IFAD), for example, recognises the need to 'shift the fulcrum of evaluation feedback to the South' to encourage more direct links between the feedback of findings and the planning and monitoring of country programmes. Many development agencies emphasise results-based planning and management as a way of improving the practical use of evaluation results; the European Commission's 'Fiche Contradictoire' and the UK Department for International Development's (DFID) 'Public Service Agreements' are examples. The review also regards the communication of lessons through the mass media as increasingly important. Finally, it calls on development agencies to do more to share their experiences with each other.
(b) In organisations and among partners
'Evaluations of organisations are often used as 'frustrated PR tools' … to demonstrate impact; not necessarily to be self-critical and to learn how to improve performance.' (Evaluator, Maastricht workshop)
A second approach focuses on organisational learning, recognising that development processes result from the actions and interactions of a diverse set of stakeholders. Active participation, capacity-building and learning by all these actors are fundamental rather than instrumental conditions. The locus for change is the facilitation of collective rather than individual learning. As a result, policy-makers and/or donors become one intended learner among many, rather than the only one.
An organisational learning approach to evaluation not only fundamentally changes the way social actors relate to each other; it also requires a radical shift in the role of the evaluator. All actors, including the evaluator, have to recognise that they are part of a joint learning effort. In such an 'epistemic community', the evaluator becomes a facilitator of a joint inquiry rather than an expert wielding an 'objective' measuring stick. Yet such communities run the risk of 'clique-building', which reduces the diversity of opinion if the discourse is captured by the most vocal actors in the group (Sutton, 1999). Critical self-reflection must be maintained in order to build in 'reality checks' and thus avoid too narrow a discourse among closed circles.
(c) In society at large
'[Societal learning] requires a fresh attitude to evaluation findings, and encourages a multi-stakeholder debate on evaluation results …' (Maastricht workshop participant)
A third perspective focuses on learning that leads to change in society at large. When the sharing and interpretation of evidence extend beyond those directly involved in the evaluation process, conflicts of interest are common and consensus becomes the exception rather than the rule. The question then is whether and how interested parties can exert pressure for change, and whose interpretation of the findings becomes dominant. The co-existence of multiple truths requires a more transparent analysis of findings and the creation of 'sense-making fora' in which stakeholders can interpret and validate the evidence. Some commentators stress that broadening the interpretation of evidence to a wider audience and different interest groups can help to avoid the 'paradigm traps' that limit scientists' and policy-makers' views on development options (Uphoff and Combs, 2001).
Linked to the societal uptake of evidence is what Weiss (1980) described as 'knowledge creep', i.e. the way in which the conceptual use of evidence can 'gradually bring about major shifts in awareness and reorientation of basic perspectives' among a broader audience. These ideas have recently re-emerged in concepts such as knowledge management and knowledge as a global public good, pioneered by the World Bank and others. Yet the use of scientific evidence - derived from evaluations, research or other country analytic work - to address development problems has long carried the risk of failing to connect with realities and evidence at the local level, and with methods that seek to record and enhance endogenous development processes and knowledge. The knowledge systems approach pioneered by Wageningen University in the Netherlands, by contrast, views innovation as an emergent property of social interaction and learning among multiple stakeholders who invariably represent multiple intentions and (often conflicting) interests (Röling, 2002; Engel and Salomon, 1997).
The perspective of societal learning from evaluations, i.e. learning that goes beyond a small number of directly involved stakeholders, could have far-reaching consequences for our thinking on development cooperation. Traditional evaluation as we know it may gradually fade into the background, replaced by multiple forms of evidence-gathering and sharing among diverse groups of stakeholders, the adaptive management of resources, and multiple communication and negotiation processes at various levels. There will be a greater need for good governance to create enabling conditions for such processes and for conflict resolution between stakeholders. Governments and donors thus become crucial players who may either enable or prevent society from learning. As development policy-makers and field practitioners alike have urged, this perspective links evaluation with governance issues.