This panel starts from the premise that development evaluation can do more to contribute to development goals. Since evaluation is itself an intervention to support better policies and programs, the panel explores matching evaluation to appropriate users, articulating results appropriately, and matching methods to what is being evaluated.
While donors typically control evaluation agendas, grantees may be better placed to commission and use evaluations. We will present experiences of handing over control of evaluation to grantees, along with the practical and political issues that arise.
In development, as elsewhere, agencies are frustrated when evaluation does not accurately capture the results they aim to achieve. Simple metrics and methods are often inadequate in complex systems. We will describe the challenges of articulating results appropriately so that evaluation doesn't miss, let alone undermine, results. We will also share experiences of using a complex systems approach to assessing outcomes, to match the values and purpose of the evaluand.
Evaluation for Development: matching evaluation to the right user, the right results, and the right approach
1. Evaluation for Development: matching evaluation to the right user, the right results, and the right approach
Sanjeev Sridharan, The Evaluation Centre for Complex Health Interventions
Tricia Wind & Amy Etherington, International Development Research Centre
CES Conference 2013
2. Key messages:
• Evaluation is an intervention
• Mismatches:
• utility not matched to a key user
• measures not matched to desired results
• approaches not matched to the nature and
orientation of the programming under review
• Evaluation can do more!
3. Organizational context:
International Development Research Centre supports
research in developing countries to promote growth and
development
IDRC’s approach to evaluation:
• Shared responsibility
• Routine + strategic
• Accountability + learning
• Research on evaluation
4. "The interest of funding agencies in evaluation has been too narrow for too long, generally emphasizing evaluation of development to the exclusion of evaluation for development"
– Katherine Hay, 2010
• Evaluation for whom and by whom?
6.
1. Grantee-managed evaluation
• Typically donor-driven
• Quality concerns
2. Collaboratively commissioned
• Increased focus on use by grantee
Example: users and intended uses of a co-commissioned evaluation
• IDRC program team – integrate lessons into programming and activities; share learning with other projects; and feed into the upcoming external evaluation
• Grantee organization – better understand the conditions for success, as well as the obstacles, in order to improve decision-making and programming; accountability to the Board
7.
3. “Learning by doing” capacity building
Developing Evaluation Capacity in ICT4D (DECI)
• Action-research project – apply Utilization-Focused Evaluation to research projects
• Experiment – ensure a group of researchers had the
human, financial, and technical resources required to
be primary evaluation users
• Increased evaluation capacities of grantees, internal
evaluators, regional evaluators
• High-quality evaluations were conducted and used
8.
4. Facilitated “handover”
Nigeria Evidence-based Health System Initiative (NEHSI)
Country-led Evaluation
• "What is the value added of the NEHSI approach for strengthening health systems?"
• Perspective of decision-makers and implementers in
the health system
• Primary users - NEHSI Project Advisory Committee
• Nigerian evaluation team
• Outcome Harvesting approach with mentors
9. How? What made this work?
• Intentional, proactive, facilitated
• Negotiated process
• Focus on users
• Regional capacity of evaluators
• Learning, experimental agenda
• Accountability mechanism in-place
• Use of structured frameworks & approaches
• Use of mentors, technical expertise & guidance
• Realistic, flexible timelines
• Face tensions
11. Metrics to evaluate research quality
• Peer-reviewed publications
• Journal ratings
• Citation indices
… few journals for Southern research
… monodisciplinary journals tend to have higher impact factors
… audiences for development research include policy makers, practitioners
14. Field building: develop results from innovation in programs
[Diagram: field-building results develop from innovation in programs, mapped across five stages]
• established research approach, methods
• bodies of knowledge
• capacity of researchers, networks
• proof of influence
• ongoing relationships with users
• external validation
• more, better coordinated funding
• leadership development
• internal communications, quality control
• developing careers
15. How can we better match evaluation approaches to the nature and goals of the research programming?
• Complexity thinking
• Equity focused
• Feminist
• Systems thinking
Editor's Notes
Sanjeev Sridharan (keynote, call to action) is the Director of the Evaluation Centre for Complex Health Interventions at the Li Ka Shing Knowledge Institute at Toronto's St. Michael's Hospital – Global Health Research Partnership Program. This panel addresses the strands and leading questions of the 2013 conference. Key question – Sheila von Sychowski (Conference Chair): how evaluation can shape positive change; this morning, Boris' inspirational call to action to swallow the orange pill – push evaluation to help accelerate the impacts of projects and capture the potential for transformative change. Strands – this panel critically assesses the roles of those who are typically evaluated (grantees) and those who typically commission evaluations; it suggests that those roles should be challenged, and explores the implications of doing so.
Evaluation is itself an intervention. In IDRC's context, evaluation is an intervention to support and further the mission of IDRC, which is to support research that influences improvements in the health and well-being of people in developing countries. From our experience, we have identified three types of mismatches that can lead to evaluation missing its potential to be a really positive intervention: the utility of the evaluation not being matched to the right user; the measures used in an evaluation not adequately matching desired results; and evaluation approaches not matching the nature and orientation of the programming under review. By examining our practice, and trying some new things in evaluation, we are trying to find ways in which evaluation can do a better job of supporting development. In this panel, you'll hear a few undertones: how we think evaluation can contribute to the results we are trying to achieve; why we like evaluation; that this is an ongoing, aspirational process – we haven't figured this all out, and we are very open to constructive critique and questions from you; and that we expect that you, in your practice, may be dealing with issues similar to those we'll describe – we'd love to hear about that in the discussion that follows. We should begin with a very brief introduction of the International Development Research Centre…
Organizational context: IDRC is a Canadian crown corporation, headquartered in Ottawa, that supports research in developing countries to promote growth and development – e.g., it explores the positive and negative impacts of widespread access to mobile telephones and the Internet, and funds research that helps to redress health inequities and improve health services, systems, and policies. What evaluation looks like at IDRC: it is decentralized, and evaluation is a shared responsibility. Some evaluation happens in a very routine manner – e.g., all of our programs are evaluated in a systematic way on regular cycles and serve a primarily accountability purpose – while at the project level the decision to evaluate is flexible and based on utility and strategic considerations. IDRC also has a mandate to support research on evaluation directly related to programming needs at the Centre.
Evaluation Field Building in South Asia: Reflections, Anecdotes, and Questions. American Journal of Evaluation. "The interest of funding agencies in evaluation has been too narrow for too long, generally emphasizing evaluation of development to the exclusion of evaluation for development." She goes on to say that this has been coupled with an even more limiting tendency of donors to focus only on evaluation of "their" projects, with limited interest in building capacity in evaluation or in handing over control of evaluation. Evaluation is very prominent in many of the current debates on development effectiveness. However, the critical question – evaluation for whom and by whom? – is only on the periphery of this debate. There is significant momentum around this with the EvalPartners initiative, but donor practice still has a way to go. Collective impact.
Approaches we have been experimenting with – a real spectrum in practice (intentionality, willingness to really test boundaries): handing over or sharing the evaluation agenda and building evaluation capacity.
1. (Weakest/most problematic – but there is potential.) Grantee-managed, but with a high risk that these evaluations generally reflect a donor agenda – typically a budget line is created in the development of the project, the evaluation is conducted because it is part of the plan, and the users are typically both IDRC and the research team, so grantee use stays within the boundaries of donor use (single-project focus). These tend to have quality problems – we do routine QA of all evaluations of IDRC-supported work. Last year we reviewed the quality data from the previous five years and found that, in general, grantee-commissioned evaluations tended to be of lower quality (they often assess single projects and are conducted at the end of the project cycle) – a utility weakness: users are not identified and user participation is weak. 2. Increased focus on use by grantee – for the evaluation of a major program expected-outcome area, we decided to do a case study of a "flagship" project. The research partner/grantee also expressed interest in having an externally validated view of its work and in learning as an organization. The next two examples show us getting more serious about handing over that control…
You might be familiar with this – Ricardo Ramirez and Dal Brodhead presented Developing Evaluation Capacity in ICT4D (DECI), an action-research project with an evaluation capacity development objective, finding ways to make UFE relevant to a set of very different research teams. It offers ICT4D researchers the option of learning UFE by applying it to their research projects – helping them develop their own evaluations using UFE (being the primary users of the evaluation instead of implementing evaluations imposed by a funding organization) – RESOURCES. Each evaluation was used by the managers and researchers in each project – step 11 of UFE calls for coaching in the use of evaluation findings, and the primary intended users also took ownership and had a stake in the findings.
NEHSI is a large ($19M), six-year collaborative project between the Government of Nigeria, IDRC, and CIDA. It is about getting the health information system component of the health system to work proactively and positively: improving planning, access, and utilization of primary health services delivery – and, in turn, leading to improved health outcomes in two states. At its core is building the habit of evidence-based planning. There were early discussions on what this evaluation could look like. The key evaluation question posed – what is the value added of the NEHSI approach for strengthening health systems? The best approach to answering this question is not to evaluate the project per se, but to understand from the project what it takes to strengthen health systems in a sustainable way. We also concluded that the best perspective for asking this question is not the project's, but that of decision-makers and implementers in the health system (local ownership is critical, enabling scale-up). This set us on the path of pursuing a country-led evaluation… There was space to be creative – IA of one component was built in from the beginning, with a CIDA-managed evaluation. The struggle was to identify who could commission this and who could do it – watching and waiting, keeping the idea alive ("mindfully opportunistic"), and launching it once the conditions were right. Primary intended users – the PAC (a diverse user group: representatives from the state level, the federal Ministry of Health, research collaborators, civil society, and both funders). This means that the evaluation must be designed and carried out around the needs, values, and intended uses of the PAC.
As well, the PAC will be responsible for defining the evaluation terms of reference, engaging in the process of the evaluation, and using the evaluation process and findings to inform their decisions and actions – TORs that were negotiated with them – and they also agreed to be ambassadors. It was absolutely critical to have a Nigerian evaluation team – strengthening in-country evidence-based decision-making; evaluation use is part of that, and there is a supply side and a demand side to that equation. We proposed using OH – pioneered by Ricardo Wilson-Grau – which collects evidence of what has been achieved, and works backward to determine whether and how the project contributed to the change.
These efforts required us to be intentional, proactive, and facilitative. Negotiated – it is important to recognize and address power dynamics: funder–grantee, internal organizational, political. Finding the appropriate users, having them take on the role (TORs), and facilitating that role – preliminary TORs (options and questions) to build ownership. Regional evaluation capacity. A learning/experimental agenda – but not without grounded purpose – while at the same time being concerned with quality and capacity. Accountability mechanisms were built in – we had the space to be creative. UFE and OH created an enabling structure. Support for users and evaluators throughout the process. Timelines are important – opportunistic and flexible. Tensions = supporting and advancing the process while releasing control of content. DECI – given the choice, none of the grantees chose to focus on work beyond what was funded by IDRC. NEHSI – the idea of a comparative assessment (multiple donors). Focus of the evaluation lens? DECI had great success – DECI II.
Results are articulated differently. The relative emphasis depends on the state of the field, and on what programs see is needed. Beyond these descriptions of what field-building results are, the interventions vary, and how far a program thinks it can get in a five-year period varies enormously. This, though, would be the beginning of a conversation across programs to highlight similarities, sharpen differences, and perhaps ultimately come to an organizational-level framework for evaluating field building.