Livestock and Fish monitoring, evaluation and learning framework
1. Livestock and Fish monitoring, evaluation and learning framework
Keith Child
Livestock and Fish Monitoring, Evaluation and Learning planning meeting, Nairobi, 27-28 November 2013
3. Purpose of the MEL Framework
A comprehensive yet concise narrative of why the M&E system is important, how it operates, what kinds of data it will collect, and who will collect them
• Provides a common vision of what an M&E system will
look like
• Provides a benchmark for measuring progress
• Provides a mandate and set of responsibilities
4. What makes a good framework?
Clarity of Purpose: a ‘framework’, not an instruction manual;
A Compelling Vision: how we will achieve success;
Relevant: appropriate to the CRP;
Economical: achievable at an affordable cost;
Concision: as long as necessary, as short as possible;
Professional: must look and read like a formal document.
5. Background
• November 2012: Incomplete rough draft
• June 2013: Senior Scientist (Impact and Learning)
• July-August: MEL Framework consultations
– 4 formal meetings, many informal consultations
• Impasse: RBM vs. Hybrid visions of the MEL Framework
• September 2013: Plan to finalize Framework presented to
PPMC
• November 2013: Presentation of draft Framework to MEL
Meeting
6. Forward
• Post MEL Meeting revisions to transform draft into
Working Paper
• December 2013: Presentation to PPMC in Tanzania
• Post PPMC Meeting revisions to transform Working
Paper into Finalized Framework
• April 2014: Presentation to PPMC in Penang for
Finalization
8. MEL Framework Introduction
• Introduction to Framework
• Background: CGIAR and Livestock and Fish
• Challenges
• Structure of MEL Framework
• Integrated Phases
• Attribution versus Contribution
9. Integrated Phases
Implementation Phases and Corresponding MEL Research Questions
• Research Phase (10,000s of beneficiaries): What works? Should it work? How and why did it work? For whom will it work? What level of attribution can be claimed?
• Development Phase (100,000s of beneficiaries), Years 5-8 (progress towards IDOs): How to scale up? Will it continue to work? Will it work somewhere else? How has implementation contributed to the results? Are Program benefits sustainable?
• Scaling-Out Phase (millions of beneficiaries), Years 8-12 (IDO data collection): How to scale out? Is the rationale for why it worked still sound? Can outputs be transferred/generalized to different settings? How have innovations been adapted to local contexts?
• Years 12-20: SLOs
• Ongoing functions: International Public Goods; Program Monitoring
10. Attribution versus Contribution
• Attribution: whether or not and how much of a
particular change can be attributed to an intervention
• Contribution: whether or not and how an intervention
contributes to the change
11. Body: 5 Components
1. Organizational Performance
2. Learning and Reflection
3. Outcome and Impact Monitoring
4. Knowledge Management
5. Research Agenda
15. Component Two: Learning and Reflection
• Theory of Change and Impact Pathways
– Evidence Base
• Best Bet Selection Criteria
• Implementation Theory
• Ex ante Impact Assessments
• Ex post Impact Assessments
16. Theory of Change
• Calls for a theory-based approach that relies on the development of an explicit ToC for each VC, by CRP Phase:
– Identify critical linkages between program inputs and impacts;
– Identify critical conditions for success (e.g., contextual factors
such as implementation theory, policy and economic
conditions, etc.);
– Identify alternative explanations of change;
– Facilitate the identification of research questions that need to
be tested in order to confirm the original program ToC.
• Step from ‘does it work?’ to understanding what it is about the program that makes it work
17. Component Three: Outcomes and Impact Monitoring
• Development Indicators
– Harmonizing within the CRP and CGIAR
• Baselines and Benchmarking
• Targeting
• International Public Goods
18. Baselines and Benchmarking
• Baselines will be conducted for specific donor-funded
projects, including projects that are to be piloted during
the Research Phase.
• IDO data will be collected through ‘benchmarking’
– exercises in which the best available data are used to establish estimated values for IDO indicators; in some cases, this may involve statistical modelling
19. Component Four: Knowledge Management
• Information Management
– CRP Monitoring Information System
– Performance Indicator Matrix
– Development Indicator Bank
– Evidence Base
• Communications
20. Component Five: Research Agenda
• Project Research (e.g., epIAs, project evaluations, project
ToCs, etc.)
– Responsibility of Project Managers
– Funded by projects
• Program Research (e.g., epIEs, IEEs, CCEEs, VC ToCs, etc.)
– Responsibility of MEL Steering Committee and CRP
Management
– Funded by CRP
• Research Quality and Ethics
22. CGIAR Research Program on Livestock and Fish
livestockfish.cgiar.org
CGIAR is a global partnership that unites organizations engaged in research for a food secure future. The CGIAR
Research Program on Livestock and Fish aims to increase the productivity of small-scale livestock and fish systems
in sustainable ways, making meat, milk and fish more available and affordable across the developing world.
Presenter's notes
The MEL Framework is the very heart of an M&E System. It provides… (read from list)
CREAM:
• Clear: precise and unambiguous
• Relevant: appropriate to the task
• Economic: achievable at an affordable cost
• Adequate: able to meet the needs of the Program
• Monitorable: amenable to independent validation
What I am going to propose is a Hybrid vision for the MEL Framework. We will have time to chat about this in more detail during the working lunch
The activities that follow are merely for discussion: at this stage, everything is still on the table. The CG has indicated the need to develop a Performance Indicator Matrix (PIM). The PIM would include data related to (1) CRP-level IDOs, (2) progress indicators (for 3- and 6-year time horizons), and (3) annual monitoring indicators.
The IEA evaluation will begin at the end of 2014. I will be discussing internally commissioned, independent evaluations shortly, so I will come back to this in the second half of my presentation. The generation of IPGs is recognized within the CRP Results Framework as a distinct impact pathway. Recording IPG outputs will necessarily be included as an indicator within the PIM; on the other hand, measuring IPG impact is a much more challenging task that will need to draw on an extensive body of evidence and will undoubtedly constitute a major evaluation challenge. What I am proposing here is that we make a regular, cumulative IPG evaluation a yearly activity, starting in the Development Phase.
During the Research Phase of the CRP, we do not expect to see significant development/IDO impacts; measurable impacts will only be visible after the CRP moves into the Development Phase. However, even if impacts are not measurable during the first phase of the CRP, it is possible to assess the plausibility of critical linkages within an impact pathway logic. This will be accomplished through the design and maintenance of an evidence base for our interventions that gradually strengthens and validates our theory of change as we learn what works and how. Over time, our evidence base will grow into a densely packed body of knowledge that will allow us either to validate our assumptions or to reformulate them. We would start by looking at our key assumptions within the RF and pick what we think is of the highest priority to follow up on. For example, we theorize a link between agriculture and nutritional and health impacts. The question is, how? Patrick Webb of Tufts has recently tried to answer this question, but has focused primarily on field crops. We might do something similar but with a focus on livestock. One of the significant points of this proposal is that we would not collect IDO data until we are closer to the Development Phase of the CRP. The logic is that during the Research Phase, we cannot really expect to see measurable impacts until our technologies are deployed as a package and on a wider scale. We will then start to collect baseline data, most likely through a survey. Surveys, however, only provide a snapshot in time of selected indicators. We will also need to conduct impact assessments in order to answer the how- and why-type questions.
Reflexive Monitoring is not a single method but rather an umbrella approach designed to stimulate learning and contribute to the broader M&E system. Reflexive Process Description is used to review the development of a process within an innovation system. It is intended to serve as an input for analysis and reporting, but it can also be used as a mechanism for sharing lessons learnt. There are other techniques as well, such as learning agendas, timelines and eye-opener workshops.