Monitoring and Evaluation

Basic Definitions and Institutional Attributes to Establish an M&E System

8 Aug 2012
By Tariq Zaman

1
What is M&E?

M&E is about collecting, storing, analyzing and finally transforming data into strategic information.

“Do not COLLECT data unless somebody is going to USE IT.”

2
What Is M&E?

Monitoring
A continuing function that uses systematic collection of data on specified indicators to provide management with indications of the extent of progress and achievement of objectives, and of progress in the use of allocated funds.

Evaluation
The process of determining the worth or significance of a program: the relevance of objectives, the efficacy of design and implementation, the efficiency of resource use, and the sustainability of results.

M&E are synergistic: monitoring is a necessary, but not sufficient, input into evaluation.

3
What is Program Monitoring and Evaluation?

Monitoring is the routine process of data collection and measurement of progress toward program objectives.

Evaluation is the use of social research methods to systematically investigate the achievement of a program’s results.

4
Why M&E?

Data and information feed the M&E system, which produces strategic information, supporting evidence-informed decision making and, ultimately, reduced morbidity and mortality.

5
Importance of M&E

• Provides information:
  a) on program progress and effectiveness
  b) for policy-making and advocacy
  c) to plan future resource needs
• Improves program management and decision-making (managing by results)
• Allows accountability to stakeholders, including donors
• Ensures most effective and efficient use of resources

6
Definition of Monitoring

• Monitoring is the systematic and continuous collection and analysis of information about the progress of a project over time.

• It is a tool for identifying strengths and weaknesses in a project and for providing management with sufficient information to make the right decisions.

7
Differences

Monitoring
• Short-term view
• Immediate analysis of ongoing activities
• Influences control of ongoing activities
• Periodical and regular
• Carried out by staff or donors

Evaluation
• Long-term view
• Thorough analysis of achievement of specific objectives
• Influences future planning
• A detailed report with suggestions
• Carried out by external or internal evaluators (donors)

8
DIFFERENCE BETWEEN MONITORING AND EVALUATION

• Monitoring means tracking the key elements of programme performance on a regular basis (inputs, activities, results).

• In contrast, evaluation is the episodic assessment of the change in targeted results that can be attributed to the programme/project intervention, or the analysis of inputs and activities to determine their contribution to results.

9
The Purpose of M&E

• Program improvement
• Data sharing with partners
• Reporting/accountability

10
What is the purpose….?

• Improve program implementation
  - Data on program progress and implementation
  - Improve program management and decision making

• Inform future programming

• Inform stakeholders
  - Accountability (donors, beneficiaries)
  - Advocacy

11
Purpose of Monitoring

• Produces timely, accurate and adequate information about the adherence of a project to its plan

• Provides data so that plans can be adjusted and resources managed in response to project needs and opportunities

• Records information in sufficient detail to demonstrate accountability and provide for future evaluations

• Appropriate monitoring generates the minimum data necessary for analysis and uses the simplest effective data collection methods

12
Purpose of Monitoring

• Determining whether the inputs to the project are well utilized

• Ensuring all activities are carried out properly, by the right people and on time

• Identifying problems facing the community or project and finding solutions

• Determining whether the way the project was planned is the most appropriate way of solving the problem at hand

13
Who needs and uses M&E information?

• To improve program implementation: managers

• To inform and improve future programs: donors, governments, technocrats

• To inform stakeholders: donors, governments, communities, beneficiaries

14
Who conducts M&E….?

• Program implementers
• Stakeholders
• Beneficiaries

Remember: M&E requires both technical skills and a participatory process.

15
M&E Results – RBM Approach

SMART Results

• Specific: Results must describe a specific future condition

• Measurable: Results, whether quantitative or qualitative, must have measurable indicators, making it possible to assess whether they were achieved or not

• Achievable: Results must be within the capacity to achieve

• Relevant: Results must make a contribution to selected priorities of the national development framework

• Timebound: Results are never open-ended; there is an expected date of accomplishment

16
BENEFITS OF MONITORING & EVALUATION

Monitoring and evaluation (M&E) helps programme implementers to:

• Determine the extent to which the programme/project is on track, and make any needed corrections accordingly;

• Make informed decisions regarding operations management and service delivery;

• Ensure the most effective and efficient use of resources;

• Evaluate the extent to which the programme/project is having or has had the desired impact.

17
Monitoring System

• It is a system for collecting and using information about the progress of a project

• It helps to take appropriate decisions

• It provides a communication system in which information flows in different directions between all the people involved

18
Essential Components of a Monitoring System

• Selection of indicators for each activity

• Collection of data concerning the indicators

• Analysis of data

• Presenting information in an appropriate way

• Using this information to improve the work

19
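To make these components concrete, here is a minimal sketch in Python of how they might fit together. The Indicator class, its field names, and the sample data are all illustrative assumptions, not part of any standard M&E toolkit.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Indicator:
    name: str                          # what the indicator measures
    target: float                      # planned level for the period
    observations: list[float] = field(default_factory=list)

    def record(self, value: float) -> None:
        """Collect one data point for this indicator."""
        self.observations.append(value)

    def analyse(self) -> dict:
        """Analyse the data: compare average achievement against the target."""
        avg = mean(self.observations) if self.observations else 0.0
        return {"indicator": self.name, "average": avg,
                "target": self.target, "on_track": avg >= self.target}

# Select an indicator, collect data, analyse it, and present the result
# so it can be used to improve the work.
reached = Indicator("households reached per month", target=120)
for value in (95, 110, 130):           # monthly monitoring data (invented)
    reached.record(value)
print(reached.analyse())
```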
Monitoring Tools and Templates

• Monitoring and evaluation framework
• LFA, RBM, and other donor formats
• Internal/external review reports
• Progress reports and reviews
• Work plan and reporting matrix
• Budget utilization and audit reports, etc.
• Surveys (structured and unstructured interviews)
• Data – primary and secondary (office records, progress reports)

20
Baseline Data

A collection of data about the characteristics of, for example, a population or an area before a program or project is set up. The data can be compared with a study of the same characteristics carried out later, in order to see what has changed.

21
Baselines, Targets and Performance

[Chart: the current level of achievement rises from the baseline to the target (the commitment) and on to the actual achievement (the performance).]

22
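As a worked example of reading this kind of chart, achievement is often expressed as the share of the planned baseline-to-target change that was actually realised. The sketch below assumes that convention, and every number in it is invented.

```python
def progress_toward_target(baseline: float, target: float, achieved: float) -> float:
    """Achievement as a fraction of the planned change from baseline to target."""
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("target must differ from baseline")
    return (achieved - baseline) / planned_change

# Hypothetical indicator: literacy rate, baseline 40%, target 60%, measured 55%.
print(f"{progress_toward_target(40, 60, 55):.0%} of the planned improvement")  # 75%
```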
Logical Framework Analysis/Approach

Logical Framework Analysis/Approach (LFA) is an analytical process for structuring and systematizing the analysis of a project or program.

23
LFA MATRIX

Rows (one per level of the hierarchy): Goals, Purpose, Outcome/Output, Activities.
Columns (filled in for each row): Narrative Summary | Verifiable Indicators | Means of Verification | Assumptions.

24
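One way to hold such a logframe in code, for instance when building a small monitoring database, is a record per level with a field per column. This is only a sketch: the class name, fields, and sample entries are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LogframeRow:
    level: str                        # Goal, Purpose, Outcome/Output or Activity
    narrative_summary: str
    verifiable_indicators: list[str]
    means_of_verification: list[str]
    assumptions: list[str]

matrix = [
    LogframeRow("Goal", "Reduced child mortality",
                ["under-5 mortality rate"], ["DHS survey"], ["political stability"]),
    LogframeRow("Activity", "Train community health workers",
                ["# workers trained"], ["training records"], ["trainees remain in post"]),
]
for row in matrix:
    print(f"{row.level}: {row.narrative_summary}")
```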
The Logical Approach: the Results Chain

Results planning runs top-down from strategy; implementation runs bottom-up through programming.

• Goal (long-term impacts): long-term, widespread improvement in society; the “big picture” (country longer-term strategy)

• Outcomes: effects or behavior changes resulting from a strategic program

• Outputs: products and services that need to be delivered to achieve the expected outcomes

• Activities: what actually was done with the available resources to produce the intended outputs

• Inputs: critical resources (expertise, equipment, supplies) needed to implement the planned activities

25
Kinds of Monitoring

• Process monitoring
• Effect monitoring
• Monitoring significant change

28
Process Monitoring

Process monitoring covers information on the use of resources, the progress of activities, and the way these are carried out.

29
Process Monitoring

• Process monitoring is collecting information on the use of inputs, the progress of activities, and the way these are carried out.
• Process monitoring looks at why and how things have happened; it looks at the relevance, effectiveness and efficiency of processes.
• It involves stakeholders and beneficiaries in planning, in deciding what is to be monitored, and in developing and recording monitoring processes.
• Process monitoring requires documentation of how the process was carried out.

30
Effect Monitoring

• Effect monitoring is collecting information on progress towards achieving objectives, and on what the effects are in relation to these objectives.
• Effect monitoring is a form of continuous self-evaluation.
• If it is done well, formal evaluations will be needed less often, and if a formal evaluation is carried out, the program staff will already be familiar with their work in relation to their objectives.
• They will be able to participate more fully in the evaluation, and find it less threatening.
• All monitoring systems should include both process and effect monitoring.

31
Monitoring Significant Change

• The “significant change” method of monitoring is not new, but it is not widely known.
• The method has been used by Australian Overseas Volunteers to assess their contribution to development agencies during their overseas appointments.

Contd.....
32
Monitoring Significant Change

• The first step is for the staff of the implementing organization to identify what areas, or domains, of change they want to monitor using the significant change method.
• The primary focus should be on two types of change: changes in the lives of individuals, and changes in the organization.
• The basis of the significant change method is a simple question: “Describe what you think was the most significant change that you contributed to your project.”

33
The Chain of Results: Causal Sequence for an Intervention to Achieve a Desired Objective

INPUTS → ACTIVITIES → OUTPUTS → OUTCOMES → IMPACT

• Inputs: infrastructure, human resources, finance, equipment, technology, policy, time, volunteers, partners

• Activities: management, training, counseling, logistics management, operations research, conferences, facilitation, etc.

• Outputs: services (access, quality), awareness, knowledge, attitudes, capacities, competency, opinion, aspiration, motivation

• Outcomes: behaviors, practices, decisions, utilization of services

• Impact: quality-of-life conditions (human, economic, civic, environment); MDGs: poverty, morbidity, mortality, HIV prevalence, education, employment, gender equality

Inputs through outputs measure process; outcomes and impact measure impact.

UNCT, 26th February 2009
34
Inputs

The human, financial and technical resources deployed; their effectiveness, cost-effectiveness and opportunities can be assessed.

35
Output Monitoring

Output monitoring looks at the immediate results of a development programme or project: the immediate results the project achieves. These are sometimes called “deliverables”.

36
Outcome Monitoring

Outcome monitoring is the regular reporting of program results in ways that stakeholders can use to understand and judge these results.

37
Impact Assessment

Significant lasting changes in people’s lives, brought about by a given action or a series of actions.

38
Roadmap: From Input to Impact

Results:
• Goal (impacts): child mortality reduced, poverty reduced, higher income levels, improved gender equality, sustainable agriculture
• Outcomes: increased market access, increased short-term income

Implementation:
• Outputs: # producers in the system, # new products available, # producers supported, # consumers aware
• Activities: awareness-raising campaigns, development of standards, producer support, audits
• Inputs: funds, staff & resources, etc.

39
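A minimal sketch of this roadmap as data, of the kind one might use when wiring chain levels to reporting code; the structure and the walk are illustrative, with the examples abbreviated from the slide.

```python
# The chain from the slide, ordered bottom-up (implementation -> results).
RESULTS_CHAIN = [
    ("Inputs",     ["funds", "staff", "resources"]),
    ("Activities", ["awareness-raising campaigns", "development of standards"]),
    ("Outputs",    ["# producers in the system", "# new products available"]),
    ("Outcomes",   ["increased market access", "increased short-term income"]),
    ("Impacts",    ["poverty reduced", "improved gender equality"]),
]

for level, examples in RESULTS_CHAIN:
    print(f"{level:<10} e.g. {', '.join(examples)}")
```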
Multi-tiered Holistic M&E: Rebuilding Lives and Communities

• Input monitoring: PC-I; planning and designing; tendering; commencement

• Output monitoring: TMRs of physical reconstruction works; carried out at the projects’ sites

• Outcome monitoring (CMTs): measurement of the services provided by ERRA; carried out at facilities and at the beneficiary level

• Impact assessment (SSTs): captures changes in people’s lives; carried out at the household level
Monitoring and Evaluation Framework

41
M&E Questions

• Monitoring questions
  - What is being done?
  - By whom?
  - Target population?
  - When?
  - How much?
  - How often?
  - Additional outputs?
  - Resources used? (Staff, funds, materials, etc.)

42
M&E Questions

• Evaluation questions
  - Is the content of the intervention or the activity being delivered as planned?
  - Does the content of the intervention or the activity reflect the requisite standards?
  - Has the intervention achieved the expected results?

43
What do we need to answer these questions…?

INDICATORS …to take measurements.

44
Indicators: Definition

• Markers that help to measure change by showing progress towards meeting objectives

• Observable, measurable, and agreed upon as valid markers of a less well-defined concept or objective

• Indicators differ from objectives in that they address the specific criteria that will be used to judge the success of the project or program

45
What is an indicator?

The United Nations World Food Programme’s Office of Evaluation describes an indicator as a quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement or to reflect the changes connected with an intervention. Indicators are compared over time in order to assess change. In the logical framework approach, an operation is broken down into design elements (inputs, activities, outputs, outcomes and impacts) and separate indicators are used to measure performance.

46
General characteristics

The desired properties of indicators, also known as variables, will depend on the approach adopted and the nature of the programme or project being evaluated. All indicators have specific characteristics:

• Numeric = the values are numbers
• Nominal = the values have names (e.g. male and female)
• Continuous = the values are infinite or very large
• Ordinal/categorical = the values have a known order (e.g. low to high)

47
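As a sketch of how these four kinds of values map onto types in code; the indicators named here are invented for illustration.

```python
from enum import IntEnum

coverage_rate: float = 0.85     # numeric and continuous (e.g. share immunised)
sex: str = "female"             # nominal: the values are names

class FoodSecurity(IntEnum):    # ordinal/categorical: values have a known order
    LOW = 1
    MEDIUM = 2
    HIGH = 3

assert FoodSecurity.LOW < FoodSecurity.HIGH  # the order is meaningful
```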
Characteristics of a Good Indicator

Indicators will vary from one project to another, according to the work and its context, but in general they are often expected to be:

• SMART (specific, measurable, attainable, relevant and time-bound)
• SPICED (subjective, participatory, interpreted, cross-checked, empowering and diverse)

48
Type and Level of Each Indicator

• Type
  - Input/Process (Monitoring)
  - Outcome/Impact (Evaluation)

• Level
  - Global level
  - Country level
  - Program level

49
What Is a Good Indicator?

• Valid: measures the effect it is supposed to measure

• Reliable: gives the same result if measured in the same way

• Precise: is operationally defined so people are clear about what they are measuring

• Timely: can be measured at an interval that is appropriate to the level of change expected

• Comparable: can be compared across different target groups or project approaches

50
KPI = Key Performance Indicator

55
Key Performance Indicators

A key performance indicator (KPI) is a measure that is employed to refer to a concept when no direct measure is available.

57
WFP Emergency Operation M&E Framework

• Input (resources): X kg maize, X kg oil, X kg other

• Activities (interventions, services): distribution of family ration to women

• Output: targeted women receiving full family ration; # of family ration recipients, disaggregated by gender

• Outcomes (RESULTS): increased household food supply; % of target households with adequate food supply

• Impact (RESULTS): increased consumption, especially among women, children and vulnerable individuals; average # of meals per day, by gender and age

Inputs through outputs are tracked with program-based data (measure process); outcomes and impact with population-based surveys (measure impact).

58
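To illustrate how the outcome indicator above might be computed from a population-based survey, here is a hedged sketch: the records, the two-meals cut-off for “adequate food supply”, and the field names are all assumptions made for the example.

```python
# Invented household survey records (population-based data).
households = [
    {"id": 1, "meals_per_day": 3},
    {"id": 2, "meals_per_day": 1},
    {"id": 3, "meals_per_day": 2},
    {"id": 4, "meals_per_day": 3},
]
ADEQUATE_MEALS = 2  # assumed cut-off for "adequate food supply"

adequate = sum(1 for h in households if h["meals_per_day"] >= ADEQUATE_MEALS)
share = adequate / len(households)
print(f"{share:.0%} of target households with adequate food supply")  # 75%
```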
Evaluation

“Evaluation is a management tool which measures the change or results of a project intervention: an assessment at one point in time of the effects of a piece of work and the extent to which stated OBJECTIVES have been achieved.”

60
Purpose of Evaluation

• Worth or significance of a development activity, policy or programme
• The relevance of objectives of projects
• The relevance and effectiveness of programme/project design and implementation
• The efficiency of resource use
• The sustainability of results
• Incorporates lessons learned into the decision-making process of both partner and donor organizations
• Provides recommendations for policy formulation
• Presents longer-term implications in terms of sustainability of the proposed intervention

61
Evaluation should not be done:

• To justify a decision which has already been made for other reasons, for example, the decision to stop funding a piece of work

• To assign blame for a problem which has arisen

62
Types of Questions Addressed by Evaluation

• About how the programme could be improved

• About how the aims and objectives should be modified or revised

• About how the work can be monitored and evaluated

• About how the work could be made more cost-effective

63
Role of Evaluator

• The evaluator is a CONTROLLER, in an attempt to hold implementing agencies responsible for their decisions and actions

• The evaluator is a MEDIATOR between divergent knowledge interests

• The evaluator is a FACILITATOR in support of weak groups’ attempts to increase their decision-making power and their influence over their own lives

64
Advantages and Disadvantages of Internal and External Evaluators

External: Can take a fresh look at the programme.
Internal: Knows the programme too well.

External: Not personally involved, so it is easier to be objective.
Internal: Finds it hardest to be objective.

External: Is not part of the normal power structure.
Internal: Is part of the power and authority structure.

External: Gains nothing from the programme, but may gain prestige from the evaluation.
Internal: May be motivated by hopes of personal gain.

External: Trained in evaluation methods; may have experience of other evaluations; regarded as an expert by the programme.
Internal: May not be specially trained in evaluation methods; does not have more (or only a little more) training than others in the programme.

External: An outsider who may not understand the programme or the people involved; may take a long time to read background information.
Internal: Is familiar with and understands the programme, and can interpret personal behavior and attitudes.

External: May cause anxiety, as programme staff and participants may not be sure of his or her motives.
Internal: Known to the programme, so poses no threat of anxiety or disruption; final recommendations may appear less threatening.

65
Methodology of evaluations

A combination of qualitative methods (desk reviews, key informant interviews, focus group discussions, observations) and quantitative methods (household surveys, health facility surveys or other special surveys) needs to be used and spelled out in this section.

66
Theory-based evaluation

• Similar to the logframe, but more detailed in understanding programme logic

• Seeks to identify causal or determining factors seen as important for success, and then what should be monitored

• Ultimately leads to the determination of critical success factors (CSFs)

• Evaluation of CSFs is used to inform the likelihood of programme success

67
Cost benefit evaluation

• Cost-benefit and cost-effectiveness analyses are tools for assessing whether or not the costs of an activity can be justified by the outcomes and impacts

• Cost-benefit analysis measures both inputs and outputs in monetary terms

• Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms

69
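The distinction is easiest to see in a toy calculation: cost-benefit values both sides in money, while cost-effectiveness divides money by a non-monetary outcome. All figures below are invented for illustration.

```python
cost = 50_000.0                 # programme cost in USD
monetised_benefits = 80_000.0   # e.g. value of increased earnings, in USD
children_vaccinated = 4_000     # non-monetary outcome

# Cost-benefit: both inputs and outputs in monetary terms.
benefit_cost_ratio = monetised_benefits / cost

# Cost-effectiveness: monetary inputs per unit of non-monetary outcome.
cost_per_child = cost / children_vaccinated

print(f"benefit-cost ratio: {benefit_cost_ratio:.2f}")        # 1.60
print(f"cost per child vaccinated: ${cost_per_child:.2f}")    # $12.50
```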
Impact Assessment

• The systematic identification of the effects (positive or negative, intended or not) caused by a program or project

• Impact evaluations can range from large-scale sample surveys to small-scale rapid assessments

71
Impact Domains

• Impact at individual and household levels
  - Physical assets
  - Financial assets
  - Human assets
  - Income
  - Food security

• Impact at community level
  - Physical assets
  - Natural resource base
  - Social capital

• Higher-level impact
  - On institutions
  - On policies and regulations

82
M&E SYSTEM FOR QUALITY RESULTS

A robust M&E system supports:
• Compliance and mid-course correction
• Organizational effectiveness
• Efficient resource management
• Timely accomplishment of quality results

83
Monitoring and Evaluation Framework: Core Principles

• Joint and harmonized. Provides a robust M&E umbrella for all stakeholders

• Results focused. Focuses on results and impacts as well as budgets

• Lesson learning. Presents information for continuous lesson learning and programme planning as well as accountability

• People focused. Involves all stakeholders and includes direct feedback from beneficiaries

• Transparency and communication. Demonstrates independence and communicates information to the appropriate people

84
Strategic Planning for M&E: Setting Realistic Expectations

Levels of monitoring and evaluation effort, by the number of projects they should cover:

• All projects: input/output monitoring
• Most projects: process evaluation
• Some projects: outcome monitoring/evaluation
• Few projects: impact monitoring/evaluation

93

More Related Content

What's hot

Result based management
Result based management Result based management
Result based management Severus Prime
 
Results Based Monitoring and Evaluation
Results Based Monitoring and EvaluationResults Based Monitoring and Evaluation
Results Based Monitoring and EvaluationMadhawa Waidyaratna
 
Project Management Information System (PMIS) - System Benefits Summary
Project Management Information System (PMIS) - System Benefits SummaryProject Management Information System (PMIS) - System Benefits Summary
Project Management Information System (PMIS) - System Benefits SummaryBashar Jabban, MBA
 
Project communication management
Project communication managementProject communication management
Project communication managementDikshant Ghimire
 
DRF Monitoring and Evaluation Design Guide (1)
DRF Monitoring and Evaluation Design Guide (1)DRF Monitoring and Evaluation Design Guide (1)
DRF Monitoring and Evaluation Design Guide (1)Kristine Crassweller
 
Institutionalizing the Use of Evidence for Public Policy: A long path in Mexico
Institutionalizing the Use of Evidence for Public Policy: A long path in MexicoInstitutionalizing the Use of Evidence for Public Policy: A long path in Mexico
Institutionalizing the Use of Evidence for Public Policy: A long path in MexicoUnicefMaroc
 
Module 7: Monitoring and Evaluation Dima course content
Module 7: Monitoring and Evaluation Dima course contentModule 7: Monitoring and Evaluation Dima course content
Module 7: Monitoring and Evaluation Dima course contentMichael Kenny
 
Balanced Scorecard IT Strategy and Project Management
Balanced Scorecard IT Strategy and Project ManagementBalanced Scorecard IT Strategy and Project Management
Balanced Scorecard IT Strategy and Project ManagementGlen Alleman
 
Results based-management
Results based-managementResults based-management
Results based-managementkarim lkhal
 
RESULT BASED M&E in FFA-revised
RESULT BASED M&E in FFA-revisedRESULT BASED M&E in FFA-revised
RESULT BASED M&E in FFA-revisedStephen Musimba
 

What's hot (20)

Result based management
Result based management Result based management
Result based management
 
Governance
GovernanceGovernance
Governance
 
Results Based Monitoring and Evaluation
Results Based Monitoring and EvaluationResults Based Monitoring and Evaluation
Results Based Monitoring and Evaluation
 
Result based monitoring and evaluation for agriculture june 25 presented
Result based monitoring and evaluation for agriculture june 25 presentedResult based monitoring and evaluation for agriculture june 25 presented
Result based monitoring and evaluation for agriculture june 25 presented
 
Project Management Information System (PMIS) - System Benefits Summary
Project Management Information System (PMIS) - System Benefits SummaryProject Management Information System (PMIS) - System Benefits Summary
Project Management Information System (PMIS) - System Benefits Summary
 
Project communication management
Project communication managementProject communication management
Project communication management
 
Monitoring and evaluation
Monitoring and evaluationMonitoring and evaluation
Monitoring and evaluation
 
DRF Monitoring and Evaluation Design Guide (1)
DRF Monitoring and Evaluation Design Guide (1)DRF Monitoring and Evaluation Design Guide (1)
DRF Monitoring and Evaluation Design Guide (1)
 
Institutionalizing the Use of Evidence for Public Policy: A long path in Mexico
Institutionalizing the Use of Evidence for Public Policy: A long path in MexicoInstitutionalizing the Use of Evidence for Public Policy: A long path in Mexico
Institutionalizing the Use of Evidence for Public Policy: A long path in Mexico
 
Module 7: Monitoring and Evaluation Dima course content
Module 7: Monitoring and Evaluation Dima course contentModule 7: Monitoring and Evaluation Dima course content
Module 7: Monitoring and Evaluation Dima course content
 
Balanced Scorecard IT Strategy and Project Management
Balanced Scorecard IT Strategy and Project ManagementBalanced Scorecard IT Strategy and Project Management
Balanced Scorecard IT Strategy and Project Management
 
2_Project Scope Management
2_Project Scope Management2_Project Scope Management
2_Project Scope Management
 
Results based-management
Results based-managementResults based-management
Results based-management
 
Project Time Management
Project Time ManagementProject Time Management
Project Time Management
 
RESULT BASED M&E in FFA-revised
RESULT BASED M&E in FFA-revisedRESULT BASED M&E in FFA-revised
RESULT BASED M&E in FFA-revised
 
Result based management
Result based management Result based management
Result based management
 
P724 Web
P724 WebP724 Web
P724 Web
 
Monitoring and Evaluation System for CAADP Implementation_2010
Monitoring and Evaluation System for CAADP Implementation_2010Monitoring and Evaluation System for CAADP Implementation_2010
Monitoring and Evaluation System for CAADP Implementation_2010
 
Results based management
Results based managementResults based management
Results based management
 
P724
P724P724
P724
 

Viewers also liked

Viewers also liked (14)

Pe powerpoint lenz 2010 11
Pe powerpoint lenz 2010 11Pe powerpoint lenz 2010 11
Pe powerpoint lenz 2010 11
 
Ushvocabtimeline
UshvocabtimelineUshvocabtimeline
Ushvocabtimeline
 
Tra
TraTra
Tra
 
Don and Valerie Keeton Listing presentation 2013
Don and Valerie Keeton Listing presentation 2013Don and Valerie Keeton Listing presentation 2013
Don and Valerie Keeton Listing presentation 2013
 
Where are you from1B
Where are you from1BWhere are you from1B
Where are you from1B
 
Open your books1C
Open your books1COpen your books1C
Open your books1C
 
Meetingonline 3C
Meetingonline 3CMeetingonline 3C
Meetingonline 3C
 
Assure lesson plan dr. hanter
Assure lesson plan dr. hanter Assure lesson plan dr. hanter
Assure lesson plan dr. hanter
 
Centro soluciones
Centro solucionesCentro soluciones
Centro soluciones
 
UNA HISTORIA DE AMOR
UNA HISTORIA DE AMORUNA HISTORIA DE AMOR
UNA HISTORIA DE AMOR
 
Trabajo de informatik
Trabajo de informatikTrabajo de informatik
Trabajo de informatik
 
Catalina Sea Ranch
Catalina Sea RanchCatalina Sea Ranch
Catalina Sea Ranch
 
LOS EFECTOS "PSICOLOGICOS" DEL TERROR.
LOS EFECTOS "PSICOLOGICOS" DEL TERROR.LOS EFECTOS "PSICOLOGICOS" DEL TERROR.
LOS EFECTOS "PSICOLOGICOS" DEL TERROR.
 
File4 b b2
File4 b b2File4 b b2
File4 b b2
 

Similar to Disaster Risk Management

Evaluation performance-monitoring
Evaluation performance-monitoringEvaluation performance-monitoring
Evaluation performance-monitoringRahul Bhargava
 
Presentation on M&E, Presented by Sushanta Kumar Sarker
Presentation on M&E, Presented by Sushanta Kumar SarkerPresentation on M&E, Presented by Sushanta Kumar Sarker
Presentation on M&E, Presented by Sushanta Kumar SarkerSushanta Kumar Sarker
 
Presentation on M&E, presented by Sushanta kumar sarker, Bangladesh
Presentation on M&E, presented by Sushanta kumar sarker, BangladeshPresentation on M&E, presented by Sushanta kumar sarker, Bangladesh
Presentation on M&E, presented by Sushanta kumar sarker, BangladeshSushanta Kumar Sarker
 
Zamfaran training for chairmen 4
Zamfaran training for chairmen 4Zamfaran training for chairmen 4
Zamfaran training for chairmen 4Maishanu Malami
 
Monitoring & Evaluation.pptx
Monitoring & Evaluation.pptxMonitoring & Evaluation.pptx
Monitoring & Evaluation.pptxGeorgeKabongah2
 
MEAL WORKSHOP ON ETH1224-draft.pptx
MEAL WORKSHOP ON ETH1224-draft.pptxMEAL WORKSHOP ON ETH1224-draft.pptx
MEAL WORKSHOP ON ETH1224-draft.pptxAbraham Lebeza
 
Monitoring and impact assessment tools
Monitoring and impact assessment toolsMonitoring and impact assessment tools
Monitoring and impact assessment toolsBrajendra Singh Meena
 
M&E of development projects and role of imed dr taibur
M&E of development projects and role of imed dr taiburM&E of development projects and role of imed dr taibur
M&E of development projects and role of imed dr taiburDr. Md. Taibur Rahman
 
Monitoring and Evaluation for Project management.
Monitoring and Evaluation for Project management.Monitoring and Evaluation for Project management.
Monitoring and Evaluation for Project management.Muthuraj K
 
Monitoring R&D functions
Monitoring R&D functionsMonitoring R&D functions
Monitoring R&D functionsNandita Das
 
MEAL ETH11171-draft.pptx
MEAL ETH11171-draft.pptxMEAL ETH11171-draft.pptx
MEAL ETH11171-draft.pptxAbraham Lebeza
 

Similar to Disaster Risk Management (20)

Evaluation performance-monitoring
Evaluation performance-monitoringEvaluation performance-monitoring
Evaluation performance-monitoring
 
Controlling
ControllingControlling
Controlling
 
Presentation on M&E, Presented by Sushanta Kumar Sarker
Presentation on M&E, Presented by Sushanta Kumar SarkerPresentation on M&E, Presented by Sushanta Kumar Sarker
Presentation on M&E, Presented by Sushanta Kumar Sarker
 
Presentation on M&E, presented by Sushanta kumar sarker, Bangladesh
Presentation on M&E, presented by Sushanta kumar sarker, BangladeshPresentation on M&E, presented by Sushanta kumar sarker, Bangladesh
Presentation on M&E, presented by Sushanta kumar sarker, Bangladesh
 
UNIT - I - ppe.pptx
UNIT - I - ppe.pptxUNIT - I - ppe.pptx
UNIT - I - ppe.pptx
 
M&E.ppt
M&E.pptM&E.ppt
M&E.ppt
 
Zamfaran training for chairmen 4
Zamfaran training for chairmen 4Zamfaran training for chairmen 4
Zamfaran training for chairmen 4
 
Monitoring & Evaluation.pptx
Monitoring & Evaluation.pptxMonitoring & Evaluation.pptx
Monitoring & Evaluation.pptx
 
Project Monitoring and Evaluation (M and E Plan) Notes
Project Monitoring and Evaluation (M and E Plan) NotesProject Monitoring and Evaluation (M and E Plan) Notes
Project Monitoring and Evaluation (M and E Plan) Notes
 
MEAL WORKSHOP ON ETH1224-draft.pptx
MEAL WORKSHOP ON ETH1224-draft.pptxMEAL WORKSHOP ON ETH1224-draft.pptx
MEAL WORKSHOP ON ETH1224-draft.pptx
 
Monitoring and impact assessment tools
Monitoring and impact assessment toolsMonitoring and impact assessment tools
Monitoring and impact assessment tools
 
Components of a monitoring and evaluation system
Components of a monitoring and evaluation system  Components of a monitoring and evaluation system
Components of a monitoring and evaluation system
 
M&E of development projects and role of imed dr taibur
M&E of development projects and role of imed dr taiburM&E of development projects and role of imed dr taibur
M&E of development projects and role of imed dr taibur
 
Monitoring and Evaluation for Project management.
Monitoring and Evaluation for Project management.Monitoring and Evaluation for Project management.
Monitoring and Evaluation for Project management.
 
Lfa
LfaLfa
Lfa
 
Monitoring and Evaluation
Monitoring and EvaluationMonitoring and Evaluation
Monitoring and Evaluation
 
Monitoring R&D functions
Monitoring R&D functionsMonitoring R&D functions
Monitoring R&D functions
 
MEAL ETH11171-draft.pptx
MEAL ETH11171-draft.pptxMEAL ETH11171-draft.pptx
MEAL ETH11171-draft.pptx
 
COURSEWORK.pdf
COURSEWORK.pdfCOURSEWORK.pdf
COURSEWORK.pdf
 
Monitoring & Evaluation Framework - Fiinovation
Monitoring & Evaluation Framework - FiinovationMonitoring & Evaluation Framework - Fiinovation
Monitoring & Evaluation Framework - Fiinovation
 

Disaster Risk Management

  • 1. Monitoring and Evaluation Basic Definitions, Institutional and Attributes to Establish an M&E System 8 Aug 2012 By Tariq Zaman 1
  • 2. What is M&E? M&E is about collecting, storing, analyzing and finally transforming data into strategic information “Do not COLLECT data unless somebody is going to USE IT” 2
  • 3. What Is M&E? Monitoring A continuing function that uses systematic collection of data on specified indicators to provide management with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds Evaluation The process of determining the worth or significance of a program to determine the relevance of objectives, the efficacy of design and implementation, the efficiency or resource use, and the sustainability of results M&E are synergistic…monitoring is a necessary, but not sufficient, input into evaluation 3
  • 4. What is Program Monitoring, Evaluation? Monitoring is the Evaluation is the use of social research routine process of methods to data collection and systematically measurement of investigate an progress toward achievement of a program objectives program’s results 4
  • 5. Why M&E? S Data & M y information & s t E e m Strategic Information Evidence informed decision making Reduced morbidity and mortality 5
  • 6. Importance of M&E  Provides information: a) on program progress and effectiveness b) for policy-making and advocacy c) to plan future resource needs  Improves program management and decision-making (managing by results)  Allows accountability to stakeholders, including donors  Ensures most effective and efficient use of resources 6
  • 7. Definition of Monitoring • Monitoring is the systematic and continuous collection and analysis of information about the progress of a project over time • It is a tool for identifying strengths and weaknesses in a project and for providing management with sufficient information to make the right decisions 7
  • 8. Differences  Monitoring  Evaluation  Short-term view  Long-term view  Immediate analysis of  Through analysis of achievement of ongoing activities specific objectives  Influences control of  Influences future ongoing activities planning  Periodical and regular  A detailed report with  Carried out by staff or suggestions donors  Carried out by external or internal (donors) 8
  • 9. DIFFERENCE BETWEEN MONITORING AND EVALUATION  Monitoring means tracking the key elements of programme performance on a regular basis (inputs, activities, results)  In contrast, evaluation is the episodic assessment of the change in targeted results that can be attributed to the programme/project intervention, or the analysis of inputs and activities to determine their contribution to results. 9
  • 10. The Purpose of M&E Program Improvement Data Sharing with Partners Reporting/ Accountability 10
  • 11. What is the purpose….?  Improve program implementation  Data on program progress and implementation  Improve program management and decision making  Inform future programming  Inform stakeholders  Accountability (donors, beneficiaries)  Advocacy 11
  • 12. Purpose of Monitoring  Produces timely, accurate and adequate information about the adherence of a project with its plan  Provides data so that plans can be adjusted and resources managed in answer to project needs and opportunities  Records information in sufficient detail to illustrate accountability and provide for future evaluations  Appropriate monitoring generates the minimum data necessary for analysis and uses the simplest effective data collection methods 12
  • 13. Purpose of Monitoring  Determining whether the inputs in the project are well utilized  Ensuring all activities are carried out properly by the right people and in time  Identifying problems facing the community or project and finding solutions  Determining whether the way the project was planned is the most appropriate way of solving the problem at hand 13
  • 14. Who needs, uses M&E Information? Managers To Improve program implementation…  Donors To Inform and improve  Governments future programs  Technocrats  Donors Inform stakeholders  Governments  Communities  Beneficiaries 14
  • 15. Who conducts M&E….? Program implementer Stakeholders Beneficiary Remember .. M&E Technical skills Participatory process 15
  • 16. M&E Results – RBM Approach SMART Results pecific: Results must describe a specific future condition S easurable: Results, whether quantitative or qualitative, must have M measurable indicators, making it possible to assess whether they were achieved or not A chievable: Results must be within the capacity to achieve elevant: Results must make a contribution to selected priorities of the R national development framework imebound: Results are never open-ended - there is an expected date of T accomplishment 16
  • 17. BENEFITS OF MONITORING & EVALUATION Monitoring and evaluation (M&E) helps programme implementers to: Determine the extent to which the programme /project is on track and to make any needed corrections accordingly Make informed decisions regarding operations management and service delivery; Ensure the most effective and efficient use of resources; Evaluate the extent to which the programme /project is having or has had the desired impact. 17
  • 18. Monitoring System  It is a system for collecting and using information about the progress of a project  It helps to take appropriate decisions  Provides a communication system, in which information flows in different directions between all the people involved 18
  • 19. Essential Components of a Monitoring System  Selection of indicators for each activity  Collection of data concerning the indicators  Analysis of data  Presenting information in an appropriate way  Using this information to improve the work 19
  • 20. Monitoring Tools and Templates  Monitoring and Evaluation Framework  LFA, RBM, and other formats of donors  Internal/External reviews reports  Progress reports and reviews  Work plan and reporting matrix  Budget utilization and audit reports, etc  Surveys (structured and un structured interviews)  Data – primary and secondary (office record, progress reports) 20
  • 21. Baseline Data  A collection of data about the characteristics, for example of a population or an area before a program or project is set up. The data can be compared with a study of the same characteristics carried out later in order to see what has changed. 21
  • 22. Baselines, Targets and Performance Commitment Performance Current level of achieveme nt Baseline Target Achievement 22
  • 23. Logical Framework Analysis/Approach  Logical Framework Analysis/Approach (LFA) is an analytical process for structuring and systematizing the analysis of a project or program 23
  • 24. LFA MATRIX Narrative Verifiable Means of Assumptions Summary Indicators Verification Goals Purpose Outcome/Output Activities 24
  • 25. The Logical approach Goal ofLong-term, widespread improvement  the Results Chain (Long in society Term “Big picture”(country longer term Impacts) strategy) Strategy  Effects or behavior changes Outcomes Results Planning resulting from a strategic program Outputs  Products and services that need to be delivered to achieve the expected Programming outcomes Activities  What actually was done with the available resources to produce the intended outputs Inputs  Critical resources (expertise, equipment, supplies) needed to 25 implement the planned activities
  • 26. Kinds of Monitoring  Process Monitoring  Effect Monitoring  Monitoring Significant Change 28
  • 27. Process Monitoring  Process monitoring includes information on the use of resources, progress of activities, and the way they are carried out, which is known as process monitoring 29
  • 28. Process Monitoring  Process monitoring is collecting information on the use of inputs, the progress of activities, and the way these are carried out  Process monitoring looks at why and how things have happened; it looks at relevance, effectiveness and the efficiency of processes  It involves stakeholders and beneficiaries in planning, in deciding what is to be monitored, and in developing and recording monitoring processes  Process monitoring requires documentation of how the process was carried out 30
  • 29. Effect Monitoring  Effect monitoring is collecting information on progress towards achieving objectives, and on what the effects are in relation to these objectives  Effect monitoring is a form of continuous self- evaluation.  If it is done well, formal evaluations will be needed less often, and if a formal evaluation is carried out, the program staff will already be familiar with their work in relation to their objectives.  They will be able to participate more fully in the evaluation, and find it less threatening.  All monitoring systems should include both process and effect monitoring. 31
  • 30. Monitoring Significant Change  The "significant change"; method of monitoring is not new, but it is not widely known  The method has been used by Australian Overseas Volunteers to assess their contribution in development agencies, during their overseas appointment Contd..... 32
  • 31. Monitoring Significant Change  The first step to take is for the staff of the implementing organization to identify what areas, or domains, of change they want to monitor using the significant change method.  The primary focus should be on two types of change: changes in the lives of individuals, and changes in the organization  The basis of the significant change method is a simple question: "Describe what you think was the most significant change that you contributed to your project" 33
  • 32. The Chain Of Results: Causal Sequence For An Intervention To Achieve Desired Objective INPUTS ACTIVITIES OUTPUTS OUTCOMES IMPACT Quality of life . Management . Services Conditions: .Infrastructure . Training . Counseling - Access - Quality . Behaviors . Practices . Human . Human . Economic . Logistic . Awareness . Decision . Finance . Civic management . Knowledge . Utilization . Equipment . Environment . Operation research . Attitude of services . Technology MDGs: . Capacities . Policy . Conference . Competency . Poverty . Time . Facilitation . Opinion . Morbidity . Volunteers . etc. . Aspiration . Mortality . Partners . Motivation . HIV prevalence . Education . Employment . Gender equality Measure process UNCT 26th February 2009 Measure impact 34
  • 33. Inputs  The human, financial and technical resources deployed, their effectiveness, cost effectiveness and opportunities can be assessed 35
  • 34. Output Monitoring  Output monitoring looks at the immediate results of a development programe or project OR  The immediate results the project achieves.It is sometimes called as “deliverables” 36
  • 35. Outcome Monitoring  Outcome monitoring is the regular reporting of program results in ways that stakeholders can use to understand and judge these results 37
  • 36. Impact Assessment  Significant lasting changes in people’s lives, brought about by a given action or a series of actions 38
  • 37. Roadmap: From Input to Impact • Child mortality reduced, poverty reduced, higher Goal income levels, improved Results (Impacts) gender equality, sustainable agriculture • Increased market access, Outcomes increased short term income • # producers in the system, # Outputs new products available, # Implementation producers supported, # consumers aware • Awareness raising campaigns, Activities development of standards, producer support, audits • Funds, staff & resources etc Inputs 39
  • 38. Multi-tiered Holistic M&E Rebuilding Lives and Communities Input Output Outcome Impact Monitoring Monitoring Monitoring Assessment CMTs SSTs • PC-I •TMRs of •Measurement of •Captures the services changes in • Planning and provided by ERRA peoples lives Designing physical reconstruction • Tendering works •Commencement •Carried out at •Carried out at CMTs facilities and SSTs the Household •Carried out at beneficiary level Level the Projects’ Site
  • 40. M&E Questions  Monitoring questions  What is being done?  By whom?  Target population?  When?  How much?  How often?  Additional outputs?  Resources used? (Staff, funds, materials, etc.) 42
  • 41. M&E Questions  Evaluation Questions?  Is the content of the intervention or the activity being delivered as planned?  Does the content of the intervention or the activity reflect the requisite standards?  Have the intervention achieved the expected results? 43
  • 42. What do we need to answer these questions…? INDICATORS …to take measurements. 44
  • 43. Indicators: Definition  Markers that help to measure change by showing progress towards meeting objectives  Observable, measurable, and agreed upon as valid markers of a less well-defined concept or objective  Indicators differ from objectives in that they address specific criteria that will be used to judge the success of the project or program. See comment for examples 45
  • 44. What is an indicator?  The United Nations World Food Programme's Office of Evaluation describes an as a indicator quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement or to reflect the changes connected with an intervention. Indicators are compared over time in order to assess change. In the logical framework approach, an operation is broken down into design elements (inputs, activities, outputs, outcomes and impacts) and separate indicators are used to measure performance. 46
  • 45. General characteristics The desired properties of indicators, also known as variables, will depend on the approach adopted and the nature of the programme or project being evaluated. All indicators have specific characteristics:  Numeric = the values are numbers  Nominal = the values have names (e.g. male and female)  Continuous = the values are infinite or very large  Ordinal/categorical = the values have a known order (e.g. low to high) 47
  • 46. Characteristics of Good Indicator  Indicators will vary from one project to another, according to the work and its context, but in general they are often expected to be:  SMART (specific, measurable, attainable, relevant and time-bound)  SPICED (subjective, participatory, interpreted, cross-checked, empowering and diverse) 48
  • 47. Type and Level of Each Indicator  Type  Input/Process (Monitoring)  Outcome / Impact (Evaluation)  Level  Global level  Country level  Program level 49
  • 48. What Is a Good Indicator?  Valid: Measures the effect it is supposed to measure  Reliable: Gives same result if measured in the same way  Precise: Is operationally defined so people are clear about what they are measuring  Timely: Can be measured at an interval that is appropriate to the level of change expected  Comparable: Can be compared across different target groups or project approaches 50
  • 49. KPI = Key Performance Indicator 55
• 50. Key Performance Indicators  A key performance indicator (KPI) is a measure that is employed to refer to a concept when no direct measure is available 57
• 51. WFP Emergency Operation M&E Framework (RESULTS chain; a computational sketch with invented records follows below)
  Input (Resources): X kg maize, X kg oil, X kg other
  Activities (Interventions, Services): distribution of family ration to women
  Output: targeted women receiving full family ration; indicator: # of family ration recipients, disaggregated by gender and age
  Outcomes: increased household food supply; indicator: % of target households with adequate food supply, by gender
  Impact: increased consumption, especially W, Ch & V Ind. (women, children and vulnerable individuals); indicator: average # of meals per day, by gender
  Program-based data measure process; a population-based survey measures impact. 58
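The framework's indicators can be computed from two different data sources, matching the slide's distinction between program-based data (process) and a population-based survey (impact). The records, field names and figures below are invented for illustration.

# Sketch only: the slide's indicators computed from hypothetical records.
# All field names and figures are invented for the example.
distribution_records = [  # program-based data (measures process)
    {"recipient_sex": "F", "recipient_age": 34, "ration": "full"},
    {"recipient_sex": "F", "recipient_age": 27, "ration": "full"},
    {"recipient_sex": "M", "recipient_age": 41, "ration": "full"},
]
survey_households = [  # population-based survey (measures impact)
    {"adequate_food_supply": True, "meals_per_day": 3},
    {"adequate_food_supply": True, "meals_per_day": 2},
    {"adequate_food_supply": False, "meals_per_day": 1},
]

# Output indicator: # of family ration recipients, disaggregated by sex
by_sex = {}
for r in distribution_records:
    by_sex[r["recipient_sex"]] = by_sex.get(r["recipient_sex"], 0) + 1
print("Ration recipients by sex:", by_sex)

# Outcome indicator: % of target households with adequate food supply
adequate = sum(h["adequate_food_supply"] for h in survey_households)
print(f"Adequate food supply: {adequate / len(survey_households):.0%}")

# Impact-level indicator: average number of meals per day
meals = [h["meals_per_day"] for h in survey_households]
print(f"Average meals per day: {sum(meals) / len(meals):.1f}")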
• 52. Evaluation  “Evaluation is a management tool which measures the change or results of a project intervention  An assessment at one point in time of the effects of a piece of work and the extent to which stated OBJECTIVES have been achieved.” 60
  • 53. Purpose of Evaluation  Worth or significance of a development activity, policy or programme  The relevance of objectives of projects  The relevance and effectiveness of programme/project design and implementation  The efficiency of resource use  The sustainability of results  Incorporates lessons learned into the decision making process of both partner and donor organizations  Provides recommendations for policy formulation  Presents longer term implications in terms of sustainability of the proposed intervention 61
• 54. Evaluation should not be done  To justify a decision which has already been made for other reasons, for example, the decision to stop funding a piece of work  To assign blame for a problem which has arisen 62
• 55. Types of Questions Addressed by Evaluation  About how the programme could be improved  About how the aims and objectives should be modified or revised  About how the work can be monitored and evaluated  About how the work could be made more cost-effective 63
• 56. Role of Evaluator  The evaluator is a CONTROLLER in an attempt to hold implementing agencies responsible for their decisions and actions  The evaluator is a MEDIATOR between divergent knowledge interests  The evaluator is a FACILITATOR in support of weak groups' attempts to increase their role in decisions and their influence over their own lives 64
• 57. Advantages and Disadvantages of Internal and External Evaluators
  External evaluator:
  Can take a fresh look at the programme
  Not personally involved, so it is easier to be objective
  Is not part of the normal power structure
  Gains nothing from the programme, but may gain prestige from the evaluation
  Trained in evaluation methods; may have experience in other evaluations; regarded as an expert by the programme
  An outsider who may not understand the programme or the people involved; may take a long time to read background information
  May cause anxiety, as programme staff and participants may not be sure of his or her motives
  Internal evaluator:
  Knows the programme too well
  Finds it hardest to be objective
  Is part of the power and authority structure
  May be motivated by hopes of personal gain
  May not be specially trained in evaluation methods; does not have more (or only a little more) training than others in the programme
  Is familiar with and understands the programme and can interpret personal behavior and attitudes
  Known to the programme, so poses no threat of anxiety or disruption; final recommendations may appear less threatening 65
• 58. Methodology of evaluations  A combination of qualitative methods (desk reviews, key informant interviews, focus group discussions, observations) and quantitative methods (household surveys, health facility surveys or other special surveys) needs to be used and spelled out in the evaluation design. 66
• 59. Theory-based evaluation  Similar to the Log Frame but more detailed in its understanding of programme logic  Seeks to identify the causal or determining factors seen as important for success, and from these what should be monitored  Ultimately leads to the determination of critical success factors (CSFs)  Evaluation of CSFs is used to inform the likelihood of programme success 67
• 60. Cost benefit evaluation  Cost-benefit and cost-effectiveness analyses are tools for assessing whether or not the costs of an activity can be justified by the outcomes and impacts  Cost-benefit analysis measures both inputs and outputs in monetary terms  Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms (a worked example with invented figures follows below) 69
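A worked example of the distinction, with invented figures: the same project's costs are compared once against monetised benefits (a benefit-cost ratio) and once against a non-monetary outcome count (cost per unit of outcome).

# Worked example with invented figures, illustrating the two analyses.
costs = 100_000.0  # inputs valued in monetary terms (e.g. USD)

# Cost-benefit analysis: outputs/outcomes are also valued in money.
monetised_benefits = 150_000.0  # invented figure
print(f"Benefit-cost ratio: {monetised_benefits / costs:.2f}")  # > 1 is favourable

# Cost-effectiveness analysis: outcomes stay in non-monetary units.
children_vaccinated = 20_000  # invented outcome count
print(f"Cost per child vaccinated: ${costs / children_vaccinated:.2f}")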
• 61. Impact Assessment  The systematic identification of the effects – positive or negative, intended or not – caused by a program or project  Impact evaluations can range from large-scale sample surveys to small-scale rapid assessments 71
• 62. Impact Domains  Impact at Individual and Household Levels  Physical assets  Financial assets  Human assets  Income  Food security  Impact at Community Level  Physical assets  Natural resource base  Social capital  Higher-Level Impact  On Institutions  On Policies and Regulations 82
• 63. M&E SYSTEM FOR QUALITY RESULTS  A robust M&E system supports:  Compliance and mid-course correction  Organizational effectiveness  Efficient resource management, and  Timely accomplishment of quality results 83
• 64. Monitoring and Evaluation Framework Core Principles  Joint and Harmonized. Provides a robust M&E umbrella for all stakeholders  Results Focused. Focuses on results and impacts as well as budgets  Lesson Learning. Presents information for continuous lesson learning and programme planning as well as accountability  People Focused. Involves all stakeholders and includes direct feedback from beneficiaries  Transparency and Communication. Demonstrates independence and communicates information to appropriate people 84
• 65. Strategic Planning for M&E: Setting Realistic Expectations (pyramid of M&E effort, by number of projects)  All projects: input/output monitoring  Most: process monitoring/evaluation  Some: outcome monitoring/evaluation  Few: impact evaluation 93

Editor's Notes

1. Indicators are more specifically defined than objectives, since they define the program attributes with a focus on expected effects translated into specific measures, providing the basis for collecting valid and reliable information for evaluation (CDC ‘Framework for Evaluation’, pg 15). Indicators spell out & provide guidance for periodic monitoring and non-periodic assessment of higher-level outcomes (results). They are tools that tell the story of program progress and success. For example: an increase in the contraceptive prevalence rate is an indicator of the increased use of family planning in a country. Other examples include: Indicators for measuring program activities, such as program capacity to deliver services, participation rate, levels of client satisfaction, or amount of intervention exposure. Indicators for measuring program effects, such as changes in a particular behavior, community norms, or health status.
2. Please review these and make suggestions Valid –For example, if we use the indicator “knows at least three modern methods of family planning,” this will give an indication over time of a changing level of knowledge. If, however, we are interested in whether people’s interest in using family planning is changing, this would NOT be a valid indicator. Reliable –If we use the same indicator, it should be reliable when asked by different people during different survey rounds. However, if women (or interviewers) are not clear about the definition of “modern,” the reliability may be compromised since different people may count different methods as modern. Precise –As mentioned above, if we can clearly define our indicator by including a list of modern FP methods that are acceptable answers to be counted among the three, then the indicator is precise enough to determine whether the respondent can be counted among those who know three modern FP methods. Timely –With a concerted education effort, we certainly expect to be able to see change in this indicator within a relatively short time. However, if for example we have a 2–3-year project and want to measure change in family size, the indicator (family size decreased) will not be observable within the life of the project. Comparable –Our indicator on knowledge of three modern family planning methods is easily comparable across different groups. For example, it would be easy to compare whether husbands’ and wives’ knowledge levels are the same, or whether couples who received counseling vs. those who did not had the same knowledge level. In contrast, if we were to choose an indicator that is intervention-specific, such as those who receive counseling and know at least three modern methods, we could use this on the subgroup of people who received counseling but could not use this indicator with the population at large.
3. Consistent with project design –This is related to validity: is it measuring what we think it is measuring, and is it measuring what we think the project is impacting? If we are measuring change in knowledge because the project has an intensive community mobilization campaign, then the knowledge of three modern methods may be a good indicator. However, if the project is addressing quality of care and contraceptive availability, then change in knowledge may not be the best indicator, even if we are hoping that counseling on different methods is part of the quality improvement effort. Useful –Indicators serve both to evaluate the impact of a project and to monitor its progress in order to make program adjustments. We may select some indicators that primarily serve for evaluation. However, it is also important to have indicators that will provide information for program adjustment. For example, the knowledge of modern family planning methods is a population-based indicator that is probably more useful to evaluate the impact of a community-mobilization campaign. However, an indicator measuring the availability of three different kinds of contraceptives in the health center could be used to evaluate the access and logistics system, but it also provides routine information that can be used to adjust the management and logistics systems if a problem is identified. Available and Affordable –These criteria are essential and are often under-considered when planning. Information is useful, but it also costs time and money. We always need to balance the relative benefit of an indicator with the time and money it will cost to collect information on it. For example, if you are planning to do a baseline survey of mothers of children under 2 years old as part of a child survival project, you probably would not want to choose an indicator that asks about the percentage of men of reproductive age who know three modern methods of family planning. Even if you were targeting men and wanted some indicator of their level of involvement, if such an indicator is going to require that you survey men when all the other indicators target mothers, it is probably not worth it. An alternative might be the percentage of mothers of children under 2 years old (because this is the group you plan to survey) who report having discussed different contraceptive options with their husbands. While this is somewhat different, it might be valid as an indicator and a lot cheaper to collect.