An Introduction to Monitoring and Evaluation of Healthcare Projects. Monitoring and evaluation (M&E) is integral to the success of any donor-funded project: it provides accountability and supports well-informed decisions through data and a plan that guides implementation.
2. INTRODUCTION TO
MONITORING AND
EVALUATION
Welcome to the Introduction to M&E module!
Across health programs, there continues to be high demand for
organizations to demonstrate the effect and impact of their interventions.
Program implementers want to know what works and doesn't work.
Ministries of Health want to be strategic in what they support. Donors
want to know you are spending their money well.
In response, there has been greater demand for M&E.
This week, we will begin by defining monitoring and evaluation (M&E),
their distinct purposes, the circumstances in which each is used, and how
they work together to improve programs. We will also discuss key M&E
activities in a typical program cycle and the roles these activities play in
enhancing a program's success.
3. LEARNING OBJECTIVES
After completing this module, you will be able to:
•Define monitoring and evaluation
•Distinguish between monitoring and evaluation
•Explain why M&E is important; and
•Explain how key M&E activities fit into a typical program cycle.
NB: Though we primarily focus on M&E at the program level in this course, many of the concepts
also apply to projects. How do projects differ from programs?
Programs are usually ongoing, while projects have an end point. Projects might start mid-way
through a fiscal year and span multiple years before they are finished, whereas programs are
typically tied to a defined fiscal year (a calendar year or a different fiscal year). Though some
projects are quite large in terms of funding, people and resources, programs provide a larger
context around those projects. In fact, most programs contain several projects.
4. THE POWER OF MEASURING
RESULTS
a) If you do not measure results, you cannot tell success from failure
b) If you cannot see success, you cannot reward it
c) If you cannot reward success, you are probably rewarding failure
d) If you cannot see success, you cannot learn from it
e) If you cannot recognize failure, you cannot correct it
f) If you can demonstrate results, you can win support.
Adapted from Osborne & Gaebler, 1992
5. WHAT IS M&E?
Let's begin by defining what M&E is. M&E is the process of
collecting, analyzing, applying and disseminating data to
assess progress toward program goals and objectives. M&E
helps identify program successes and challenges, which
informs programmatic decision-making, improves
performance and contributes to achieving desired results.
6. WHAT ARE MONITORING AND EVALUATION?
While monitoring and evaluation work hand in hand, they are distinct
concepts. Monitoring can be defined as the ongoing, routine collection and
analysis of data about a program's activities in order to measure program
progress. Evaluation can be defined as the systematic process of collecting,
analyzing, and using data about a program's design, implementation, and
results to determine its effectiveness, relevance and impact. Both monitoring
and evaluation are important in informing decisions and adjustments about
program implementation. Evaluation often involves measuring changes in
knowledge, attitudes, behaviors, skills, community norms and utilization of
services, and provides feedback that helps programs analyze the
consequences, outcomes and results of their actions.
7. MONITORING VS. EVALUATION
Monitoring and evaluation also each serve unique functions in an M&E system. Though we’ll go
into more detail elsewhere in the course, let’s take a moment to examine some of the key
differences between monitoring and evaluation.
Monitoring is primarily concerned with:
●Regular, ongoing data collection and analysis to track program progress;
●Tracking outputs, the immediate results of your activities; and
●Informing day-to-day management. Monitoring helps program managers understand
the extent to which activities are being implemented as planned, on time, and within budget.
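The kind of routine tracking described above can be illustrated with a small sketch. This is a minimal example, not a prescribed tool; the indicator names and figures are purely hypothetical, and real monitoring systems would draw these values from routine reporting forms or an information system.

```python
# Minimal sketch of routine output monitoring: comparing actual results
# against planned targets for a reporting period.
# All indicator names and figures below are hypothetical.

def track_indicators(targets, actuals):
    """Return the percentage of each output target achieved so far."""
    report = {}
    for indicator, target in targets.items():
        achieved = actuals.get(indicator, 0)
        report[indicator] = round(100 * achieved / target, 1)
    return report

# Hypothetical quarterly targets vs. actuals
targets = {"health workers trained": 200, "outreach sessions held": 48}
actuals = {"health workers trained": 150, "outreach sessions held": 50}

print(track_indicators(targets, actuals))
# e.g. {'health workers trained': 75.0, 'outreach sessions held': 104.2}
```

A report like this supports exactly the day-to-day management questions monitoring is meant to answer: which activities are on track, which are lagging, and where immediate remedial action may be needed.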
8. MONITORING VS.
EVALUATION
Whereas evaluations are mainly focused on:
●Assessing program effectiveness: how well a program was implemented
and whether it made a difference;
●Assessing whether the program was efficient and relevant;
●Measuring outcomes (typically intermediate changes) and impacts
(long-term, large-scale changes); and
●Providing a broader overview of the program.
9. WHY IS M&E IMPORTANT?
To use resources effectively and efficiently and provide
accountability to donors
To assess whether the project has achieved its objectives and
has the desired effects
To improve the program: to learn from our activities and
provide information to design future projects
To make decisions about project management, service
delivery and broader policy
10. WHY IS M&E IMPORTANT?
Thus, the key reasons for M&E can be summarized under six headings.
(1) For accountability: demonstrating to donors, taxpayers, beneficiaries and
implementing partners that expenditure, actions and results are as agreed or
can reasonably be expected in the situation.
(2) For operational management: provision of the information needed to co-
ordinate the human, financial and physical resources committed to the project
or program, and to improve performance
(3) For strategic management: provision of information to inform setting and
adjustment of objectives and strategies.
(4) For capacity building: building the capacity, self-reliance and confidence of
beneficiaries and implementing staff and partners to effectively initiate and
implement development initiatives.
(5) For measuring project performance: monitoring and evaluation of projects
can be a powerful means to measure their performance, track progress towards
achieving desired goals, and demonstrate that systems are in place that support
organizations in learning from experience and adaptive management.
(6) For program improvement: improving project and program design through the
feedback provided by mid-term, terminal and ex-post evaluations.
12. DIFFERENCE BETWEEN MONITORING AND
EVALUATION
Monitoring: the systematic and routine collection of information about program/project activities.
Evaluation: the periodic assessment of program/project activities.
Monitoring: an ongoing process done to see whether activities are on track; it regularly tracks the program.
Evaluation: done on a periodic basis to measure success against objectives; it is an in-depth assessment of the program.
Monitoring: starts from the initial stage of the project.
Evaluation: done after a certain point in the project, usually at mid-term, at completion, or when moving from one stage to another.
Monitoring: usually done by internal members of the team.
Evaluation: mainly done by external members, though sometimes by internal members, or by both internal and external members in combination.
Monitoring: provides information about current status and thus helps in taking immediate remedial action, if necessary.
Evaluation: provides recommendations, information for long-term planning, and lessons for organizational growth and success.
13. DIFFERENCE BETWEEN MONITORING AND
EVALUATION CONT…
Monitoring: focuses on inputs, activities and outputs.
Evaluation: focuses on outcomes, impacts and the overall goal.
Monitoring: includes regular meetings, interviews, and monthly and quarterly reviews; usually uses quantitative data.
Evaluation: includes intense data collection, both qualitative and quantitative.
Monitoring: has multiple points of data collection.
Evaluation: data collection is done at intervals only.
Monitoring: answers questions about the present status of the project toward achieving planned results, considering human resources, budget, materials, activities and outputs.
Evaluation: assesses the relevance, impact, sustainability, effectiveness and efficiency of the project.
Monitoring: studies the present information and experiences of the project.
Evaluation: studies the past experience of the project's performance.
14. DIFFERENCE BETWEEN
MONITORING AND EVALUATION
CONT…
Monitoring: checks whether the project did what it said it would do.
Evaluation: checks whether what the project did had the impact it intended.
Monitoring: regular reports and updates about the project/program act as deliverables.
Evaluation: reports with recommendations and lessons act as deliverables.
Monitoring: involves few quality checks.
Evaluation: involves many quality checks.
Monitoring: compares current progress with planned progress.
Evaluation: looks at the achievements of the program, including positive/negative and intended/unintended effects.
15. DIFFERENCE BETWEEN MONITORING AND EVALUATION
CONT…
Item               | Monitoring                           | Evaluation
Frequency          | Done continuously                    | Conducted periodically
Main action        | Keeping track, oversight             | Assessment
Basic purpose      | Support program implementation       | Improve effectiveness and impact; inform future programming
Focus              | Inputs, outputs, process, work plans | Effectiveness, relevance, efficiency, impact, sustainability
Time focus         | Present                              | Past and future
Level of attention | Detail                               | Big picture
Inspiration        | Motivation                           | Creativity
Skills required    | Management                           | Leadership
Undertaken by      | Program management                   | External evaluators
16. IMPORTANCE OF MONITORING AND EVALUATION IN
PROJECT/ PROGRAMME DEVELOPMENT
it provides the only consolidated source of information showcasing project
progress;
it allows actors to learn from each other’s experiences, building on expertise and
knowledge;
it often generates (written) reports that contribute to transparency and
accountability, and allow for lessons to be shared more easily;
it reveals mistakes and offers paths for learning and improvements;
it provides a basis for questioning and testing assumptions;
it provides a means for agencies seeking to learn from their experiences and to
incorporate them into policy and practice;
it provides a way to assess the crucial link between implementers and
beneficiaries on the ground and decision-makers;
it adds to the retention and development of institutional memory;
it provides a more robust basis for raising funds and influencing policy.
18. TYPES OF MONITORING
Process or performance monitoring focuses on the activities carried out as part of
a development intervention. It is designed to assess whether and/or how well those
activities are being implemented. It also covers the use of resources. Process
monitoring is designed to provide the information needed to continually plan and
review work, assess the success or otherwise of the implementation of projects and
programmes, identify and deal with problems and challenges, and take advantage
of opportunities as they arise.
Results or impact monitoring aims to assess the changes brought about by a
project or programme on a continuous basis. Often this means assessing changes
in a target population (e.g. individuals, communities, supported organisations,
targeted decision-makers). Impact monitoring can be used to assess progress
towards goals and objectives, as well as unintended change.
Beneficiary monitoring, or beneficiary contact monitoring, is a specific type of
impact monitoring that aims to track the perceptions of project or programme
beneficiaries. Beneficiary monitoring can be seen as a specific type of participatory
monitoring and evaluation (M&E).
Situation monitoring, sometimes known as scanning, is concerned with monitoring
the external environment. Sometimes this is done by defining and collecting
indicators relating to issues such as the local political situation and other
changes in the wider operating environment.
19. TYPES OF MONITORING CONT…
Financial monitoring is concerned with the monitoring of budgets and finance, and
is linked to auditing. It is usually concerned with tracking costs against defined
categories of expenditure
Compliance monitoring is designed to ensure compliance with issues such as donor
regulations, grant or contract requirements, government regulations, and ethical
standards.
20. EVALUATION APPROACHES
Conducting an evaluation can take the form of internal, external, mixed
internal/external, participatory, or results-based approaches.
1) Self-Evaluation
Involves an organization or project holding up a mirror to itself and assessing how it is
doing, as a way of learning and improving practice.
It takes a very self-reflective and honest organization to do this effectively.
2) Participatory Evaluation
A form of internal evaluation intended to involve as many people with a direct stake in
the project as possible.
Project personnel and beneficiaries carry out the evaluation together.
An outsider acts only as a facilitator of the process, not as an evaluator.
21. EVALUATION APPROACHES
3) Rapid Participatory Evaluation
A qualitative way of conducting evaluation: semi-structured and conducted by a
multidisciplinary team over a short time, it is useful as a starting point for understanding
a situation.
Involves the use of secondary data review, direct observation, semi-structured interviews,
key informants, group interviews, diagrams, maps, and calendars.
Permits one to get valuable input from those intended to benefit from the project.
4) External Evaluation
An evaluation carried out by a carefully chosen outsider or outside team.
5) Internal Evaluation
An evaluation carried out by the project team from within the organization.
6) Interactive Evaluation
Involves very active interaction between an outside evaluator and the organization's
project staff and stakeholders.
22. TYPES OF EVALUATION
Two major types of evaluation:
a) Formative
b) Summative
a)Formative Evaluation
Ensures that a program or program activity is feasible, appropriate, and acceptable before it is
fully implemented. It is usually conducted when a new program or activity is being developed or
when an existing one is being adapted or modified.
At the formative stage, evaluation is needed in order to:
Assess the possible consequences of the planned project(s) for the people in the
community over a period of time;
Make a final decision on which program alternative should be implemented; and
Assist in making decisions on how the program will be implemented.
23. TYPES OF EVALUATION
b) Summative Evaluation
Draws learning from a completed project by:
Collecting and analyzing data to determine if, and to what extent, the program achieved
its intended outcomes;
Identifying constraints or bottlenecks inherent in the implementation phase;
Assessing the actual benefits and the number of people who benefited;
Providing ideas on the strengths of the project, for replication; and
Providing a clear picture of the extent to which the intended objectives of the project
have been realized.
Examples of summative evaluation include:
i. Outcome evaluations
ii. Impact evaluations
iii. Cost-effectiveness and cost-benefit analyses
24. TYPES AND USES OF EVALUATION
Formative Evaluation
When to use: During the development of a new program; when an existing program is
being modified or is being used in a new setting or with a new population.
What it shows: Whether the proposed program elements are likely to be needed,
understood, and accepted by the population you want to reach.
Why it is useful: It allows for modifications to be made to the plan before full
implementation begins.

Process Evaluation
When to use: As soon as program implementation begins; during operation of an
existing program.
What it shows: How well the program is working; the extent to which the program is
being implemented as designed.
Why it is useful: Provides an early warning for any problems that may occur.

Outcome Evaluation
When to use: After the program has made contact with at least one person or group in
the target population.
What it shows: The degree to which the program is having an effect on the target
population's behaviors.
Why it is useful: Tells whether the program is being effective in meeting its objectives.

Economic Evaluation (Cost Analysis, Cost-Effectiveness Evaluation, Cost-Benefit
Analysis, Cost-Utility Analysis)
When to use: At the beginning of a program; during the operation of an existing
program.
What it shows: What resources are being used in a program and their costs (direct and
indirect) compared to outcomes.
Why it is useful: Provides program managers and funders a way to assess cost relative
to effects ("how much bang for your buck").

Impact Evaluation
When to use: During the operation of an existing program at appropriate intervals; at
the end of a program.
What it shows: The degree to which the program meets its ultimate goal, e.g. the
overall rate of STD transmission (how much program X decreased the morbidity of an
STD beyond the study population).
Why it is useful: Provides evidence for use in policy and funding decisions.
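The economic evaluation types above can be made concrete with a small, hedged sketch. All figures here are hypothetical and the calculation is deliberately simplified: real economic evaluations also account for indirect costs, discount future costs and outcomes, and use sensitivity analysis.

```python
# Simplified sketch of cost-effectiveness comparison between two
# HYPOTHETICAL interventions. Figures are illustrative only.

def cost_effectiveness_ratio(total_cost, outcome_units):
    """Cost per unit of outcome (e.g. cost per case averted)."""
    return total_cost / outcome_units

# Hypothetical: total cost and cases averted for two programs
program_a = cost_effectiveness_ratio(50_000, 400)  # cost per case averted
program_b = cost_effectiveness_ratio(80_000, 500)  # cost per case averted

# Incremental cost-effectiveness ratio (ICER): the extra cost per
# extra case averted when choosing program B over program A.
icer = (80_000 - 50_000) / (500 - 400)

print(program_a, program_b, icer)
# 125.0 160.0 300.0
```

The ICER is the figure a funder typically cares about: program B averts more cases, but each additional case averted costs 300 rather than 125, which is the kind of "bang for your buck" comparison the table describes.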
25. WHY PROJECT EVALUATION?
Evaluation generates ‘evidence’ and objective information that enable
managers to make informed decisions and plan strategically
1) Support program improvements:
a. Did it work or not, and why?
b. How could it be done differently for better results? What works, why, and in what context?
c. Which strategies worked? Can they be extended, expanded or replicated?
d. Decision makers use evaluations to make necessary improvements, adjustments to the implementation approach or strategies, and to decide on alternatives
e. To inform decisions on operations, policy and strategy related to ongoing or future projects
2) Building knowledge for generalizability and wider-application
a. What can we learn from the evaluation?
b. How can we apply this knowledge to other contexts?
3) Supporting and demonstrating accountability to decision makers.
a. Are we doing the right things?
b. Are we doing things right?
c. Did we do what we set out to do?
d. Determine the merit or worth and value of an initiative and its quality
e. Accountability framework requires credible and objective information
4) To demonstrate accountability to decision-makers e.g. donors and program countries
5) To enable corporate learning and contribute to the body of knowledge on what works and what does not work and why
6) To measure effects or benefits of program interventions
7) To give stakeholders the opportunity to have a say in program output and quality
8) To justify programs to donors, partners and other constituencies
26. PRIMARY USES OF EVALUATION FINDINGS
Rendering judgments
Summative evaluation of a program's overall effectiveness,
e.g., audit, renewal, quality control, accreditation
Facilitating improvements
Formative evaluation to improve the program,
e.g., a program's strengths/weaknesses, progress
Generating knowledge
Conceptual use of findings,
e.g., generalization, theory building
27. WHEN SHOULD M&E TAKE PLACE?
M&E is a continuous process that occurs throughout the life of a
program
M&E should be planned at the design stage of a program, with the
time, money, and personnel that will be required calculated and
allocated in advance
Monitoring should be conducted at every stage of the program, with
data collected, analysed, and used on a continuous basis
Evaluations are usually conducted at the end of programs. However,
they should be planned for at the start because they rely on data
collected throughout the program, with baseline data being especially
important
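The point about baseline data can be illustrated with a minimal sketch (hypothetical coverage figures): without a value collected at the start, there is no starting point against which to measure the change observed at the end of the program.

```python
# Illustrative sketch of why baseline data matter: change can only be
# measured against a starting point. All figures are hypothetical.

def percentage_point_change(baseline, endline):
    """Change in a coverage indicator, in percentage points."""
    return endline - baseline

def relative_change(baseline, endline):
    """Change expressed as a fraction of the baseline value."""
    return (endline - baseline) / baseline

# Hypothetical: facility delivery coverage at baseline vs. at endline
baseline_coverage = 40.0  # percent, measured before the program began
endline_coverage = 55.0   # percent, measured at program close

print(percentage_point_change(baseline_coverage, endline_coverage))  # 15.0
print(relative_change(baseline_coverage, endline_coverage))          # 0.375
```

Note that attributing this change to the program itself requires an evaluation design (for example, a comparison group), not just the before-and-after arithmetic shown here.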
28. CHALLENGES IN M&E PRACTICE
lack of experience;
limited financial and staff resources;
gaps in technical knowledge with regard to defining performance
indicators, the retrieval, collection, preparation and interpretation of
data; and
inefficient monitoring and evaluation practices.
29. THE FIVE STRATEGIC M&E QUESTIONS TO ASK
Relevance - Is what we are doing now a good idea in terms of improving the
situation at hand? Is it dealing with the priorities of the target groups? Why or
why not?
Effectiveness - Have the plans (purposes, outputs and activities) been achieved?
Is the intervention logic correct? Why or why not? Is what we are doing now the
best way to maximise impact?
Efficiency – Are resources used in the best possible way? Why or why not? What
could we do differently to improve implementation, thereby maximising impact,
at an acceptable and sustainable cost?
Impact - What change has occurred in relation to long-term goals? Why or why
not? What unanticipated positive or negative consequences did the project have?
Why did they arise?
Sustainability - Will there be continued positive impacts as a result of the project
after the project funds run out in four or five years? Why or why not?