Ella Duncan

@ellasd active 4 years, 2 months ago

Forum Replies Created


March 31, 2016 at 8:41 pm EST

Hi all,

Apologies for the garbled message! It seems there was a formatting issue
with this post, so here it is again – on behalf of community member Atlee
Chait.

M&E of a program is designed before the program begins, and it may need to be
flexible in response to changing contexts. Let’s say your area of interest is
not involved in active conflict but it becomes affected by forced displacement
and refugee flows as a spillover effect of a nearby conflict. There is a
possibility that the types of questions you are asking residents in the area
are no longer sensitive to the situation, and that, beyond the newly arrived
population of displaced persons, your original target population has now
become vulnerable when it previously was not.

Considering this reality from the perspective of education for peacebuilding
programming, and drawing on your experience in the field:
1. How can displaced persons who become at least medium-term residents in a
region be incorporated into your sample?
2. Understanding that any project operating in a region directly or
indirectly affected by conflict can have a positive and/or negative impact on
that environment, how can one assess whether it is safe for displaced persons
to take part in the program? Additionally, how can one assess whether it is
safe for the already identified residents to remain part of the program
without exacerbating tensions or putting any participants in harm’s way?
3. Are there existing criteria for appropriate participation levels of
displaced persons?
4. What are the steps for meaningful inclusion of displaced persons in M&E
processes?

Find the original post here:

M&E of Education & Peacebuilding in Areas of Forced Displacement & Refugee Flows

November 7, 2015 at 8:53 pm EST

Thank you to everyone who joined us for the webinar with the Harvard Humanitarian Initiative on Measuring Peacebuilding Education and Advocacy.

Our speaker, Patrick Vinck, has responded to the questions raised during the presentation on our discussion page, available here: http://www.dmeforpeace.org/educateforpeace/1484-2/

See where the conversation is going, and add your thoughts!

Very best,

The Education for Peacebuilding M&E and DME for Peace Team

July 13, 2015 at 9:37 pm EST

Thanks for this summary and for pulling out the major points!

For me the key to your post is the question raised at the end:

3- While monitoring and evaluating projects, do we get distracted by numbers
and lose track of the actual change, if any, on the ground?

Monitoring and evaluation is really valuable to programs when it leads to
*learning* – a great evaluation is no good sitting on the shelf. So while we
consider the best ways to track results, we also need to think about the best
ways to communicate those results back to communities, funders, and
implementing organizations. Beyond data viz or any other current rage, what is
it that keeps M&E from communicating well with the people who need to
appreciate it and learn from it? I’d be interested to hear anyone’s
experiences of successes (or failures!) in closing that loop in a productive
way.

On Mon, Jul 13, 2015 at 5:22 PM, Educate for Peace wrote:

Monitoring and Evaluation of a Street Children Project is a document that uses
the example of the WHO Street Children Project to provide definitions of
monitoring and evaluation, stress their importance, and give specific steps
and examples of how to successfully monitor and evaluate projects. The simple
language of this document, and the fact that it is built on an actual
intervention, makes it very easy to comprehend and link to other real
projects. Also, the variety of activities this document offers, which
encourage the reader to practice the tips in each section, turns reading it
into a lively interactive session rather than a boring technical read!
A few takeaways from the document:
Chapter 1 offers definitions for terms used in the M&E field. It also
highlights the importance of implementing M&E, as monitoring and evaluation
helps:
• Set priorities and manage time
• Keep the implementation flexible and adaptable in the face of any emergent
issues
• Provide a baseline against which progress can be measured
• Identify which activities work and direct funds towards them
• Create a chance to replicate successes and avoid repeating mistakes
• Increase the overall confidence in a given project
According to chapter 2, practitioners must set a strategic plan before
implementing any projects. The components of this plan are:
• Clear aims and objectives of the project
• An outline of the intended strategies to be used
• A list of activities, time frame, and assignment of responsibilities
• Budget
• A plan for monitoring and evaluation
Chapter 3 offers definitions of project monitoring and process evaluation as
follows:
Project monitoring is the process of measuring what services a project is
providing, how much service it is providing, and who is providing and
receiving those services. Project monitoring is most useful when it becomes a
routine part of the work of a project. Managing the project more effectively
should be the driving factor behind choosing which variables to measure at
each juncture of the project’s life.
Process evaluation covers all aspects of the process of project delivery and
involves the operation of a project. It aims to measure the activities of the
project, project quality, and who the project is reaching. Your first
obligation in process evaluation is to make sure that the activities you
planned are actually occurring and that the project is meeting the needs of
the intended population.
Chapter 4, which covers monitoring and evaluation methods, defines
quantitative and qualitative data collection, offers examples of methods used
to gather both types of data, and gives tips on when to use each one.
Quantitative data collection is defined in this chapter as a formal,
objective, systematic process of using numerical information to obtain
knowledge of the average or normal, and to categorize and generalize this
knowledge. Qualitative data collection is defined as a systematic, subjective
approach used to describe life experiences and give them meaning; information
obtained using qualitative methods helps to provide meaning and understanding
of the specific rather than the general, of values, and of life experiences.
In this chapter, the document stresses that the type of information you are
after and the context of the project are the factors that should shape your
data collection methods and tools.
Taking the particular example of street children, chapter 5 talks about
monitoring the community and the importance of doing so in projects like the
Street Children Project, where the community plays a key role. It explains
that most practitioners monitor communities, whether consciously or not, and
it provides steps to make the process as simple as possible by integrating it
into daily work. Those steps include tips on how to decide what to monitor,
how to implement your monitoring plan, and when and how to review the data
collected.
Afterwards, in chapter 6, the document tackles conducting an outcome
evaluation and reporting its results, stressing the importance of closing the
loop by communicating the results to the people who need them. It talks about
the value and components of a written evaluation report and gives several
examples of how to present its content to the relevant people.
Finally, in chapter 7, the document puts all the information presented into
one fictional case study that synthesizes all the tips.
I would like to open the discussion up to the rest of our community with the
following questions:
1- What documents do you use that make the technical aspects of M&E more
accessible and help us move from what we should do to actual implementation?
2- Are we managing to close the learning loop and direct the feedback where it
needs to be?
3- While monitoring and evaluating projects, do we get distracted by numbers
and lose track of the actual change, if any, on the ground?
