“Beneficiary feedback” means different things to different people, and it is under-utilised in development evaluation. There are no clear frameworks for engaging beneficiary feedback in evaluation. This has resulted in poor practice, confusion, a lack of rigour in application, lost opportunities for enhancing the quality of evaluations, and insufficient attention to ethical considerations for “beneficiaries” themselves. DFID therefore commissioned a piece of work to develop understanding and guidance on how to improve beneficiary feedback in evaluation.
Objectives
The presentation will shed light on four frequently asked questions:
• How do we define beneficiary feedback in the context of evaluation?
• Is beneficiary feedback an approach, method or principle?
• What distinguishes beneficiary feedback from existing evaluation tools e.g. participatory evaluation?
• How do we meaningfully and ethically engage beneficiary feedback in evaluation?
Methods
This paper is based on:
• a literature review of over 100 documents;
• interviews with 36 key informants representing DFID, INGOs and evaluation consultants/consultancy firms; and
• contributions from 32 practitioners via e-distribution lists and through a blog set up for the purpose of the research.
A snowballing technique was used for data gathering, and attempts were made to minimise North–North bias by posting in a range of different forums.
Findings and Learning Points
• Beneficiary feedback is relevant to all types of evaluation design
• It is not a subset of participatory evaluation, and it goes beyond data collection; it can engage extractive and/or participatory methods.
• There is scope to incorporate beneficiary feedback within formal evaluation quality assurance processes.
The paper outlines a structured, four-step approach to beneficiary feedback in evaluation, which incorporates feedback in evaluation design, data collection, joint validation/analysis, and the end product/response and follow-up. This will be discussed.
Beneficiary Feedback in Evaluation: UKES Methods Workshop
INTEGRATING BENEFICIARY FEEDBACK INTO EVALUATION: A STRUCTURED APPROACH
Presentation to UKES Conference, May 2015
Theme: Theory and practice of inclusion of stakeholders/participants/beneficiaries
By Leslie Groves, Independent Consultant
lesliecgroves@gmail.com
Evidence Base
• 130 documents
• Interviews with 50 people
• Online contributions from 33 practitioners (https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/)
• Analysis of 32 shortlisted evaluations
Leslie Groves UKES 2015
1. How do we define beneficiary feedback in the context of evaluation?
We haven’t! The result: vastly differing interpretations and levels of ambition within evaluation.
Proposed Working Definition
“A beneficiary feedback approach to development evaluation involves a one-way or two-way flow of information between beneficiaries and evaluators for the purpose of improving evaluation process, findings and use.”
Typology of beneficiary feedback
• One-way feedback to beneficiaries
• One-way feedback from beneficiaries
• Two-way feedback: an interactive conversation between beneficiaries and evaluators, with the evaluation team retaining independence and power
• Two-way feedback through participatory evaluation, with beneficiaries as part of the evaluation team
2. Is beneficiary feedback an approach, method or principle?
It is a structured and systematic approach that cuts across all stages of evaluation, from design to dissemination. It is relevant to all types of evaluation design. The approach supports us to meet evaluation principles and to select the most appropriate methods.
3. How do we meaningfully and ethically engage beneficiary feedback in evaluation? Currently?
Mostly limited to the data collection stage of evaluation: lost opportunities and risks.
[Diagram: the four stages of the evaluation process — Design; Data Collection; Data validation and analysis; Dissemination and Communication]
How could we meaningfully and ethically engage beneficiary feedback in evaluation?
It is possible to have a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.
Minimum standard advisable
Evaluation commissioners and evaluators give due consideration to different types of beneficiary feedback in each of the four key stages of the evaluation process.
4. What distinguishes this approach from existing evaluation tools?
It is an approach rather than a method or principle. It encompasses the range of types of feedback and the full evaluation cycle. It encompasses both quantitative and qualitative methods.
Checklist
Evaluation stage: Preparing for an evaluation — developing the Terms of Reference
Considerations:
• Is there sufficiently strong commitment? Is adaptive programming possible?
• Does the context section clarify who the beneficiaries are, the programme’s relationship with beneficiaries, and whether there has already been a process of beneficiary feedback during programme implementation?
• Is there linking with other data/evaluations by other donors to minimise beneficiary burden?
• Does the methodology section include consideration of different types of beneficiary feedback in each of the four stages of the evaluation process?
• Does the target audience section include beneficiaries? Should it?
• Do the competencies required support meaningful and ethical beneficiary feedback?
• Would it be reasonable to include representatives of the beneficiary population (e.g. town mayor or other leaders) on the advisory group/evaluation reference group?
• Have you required a dissemination and communication plan that includes beneficiaries/beneficiary evaluation participants?
• Do the outputs include appropriate products for feeding back to beneficiaries living in poverty, e.g. a youth-friendly summary? Radio show? Poster?
• Will evaluation questions include how well project staff listened and responded?
• Is there any scope for beneficiary input into the Terms of Reference?
Checklist (Cont’d)
Design: Do processes of quality assurance of inception reports and methodological papers (a) assess, (b) verify and (c) validate the choices made?
Evidence gathering, analysis and validation: Do processes of quality assurance of draft and final reports monitor the quality of beneficiary feedback, both methodologically and ethically, and ensure that commitments made in design are followed through and that beneficiary feedback is not the first thing to “drop off” the list, as often happens?
Dissemination and communication: Are the necessary resources invested in ensuring that dissemination and communication, including of management responses, occur in a meaningful manner, including to beneficiaries and to decision makers within and outside the organisation? Is there scope for supporting a commitment to ensuring that dissemination goes all the way down the chain, including to beneficiary representatives who might have responsibility for feeding findings back to their communities? Are implementing or other partners prepared to support dissemination activities? If so, is it possible to agree a joint strategy?
Concluding Thoughts
It is time to move beyond the normative positioning of beneficiary feedback as a “good thing”, and beyond “beneficiary = data provider”.
Could you:
• Use and test the definition?
• Use the framework?
• Think about current evaluations: where could you improve?
• Engage through the blog?
Today, I would like to share a snapshot of the findings of a working paper that DFID commissioned me to produce on beneficiary feedback in evaluation. DFID were interested in understanding what exactly beneficiary feedback involves in the context of evaluation, and what the state of play is, both in terms of practice and in terms of evaluation standards and principles. They also wanted to look at how they and their partners might enhance their approach to beneficiary feedback. The report covers all of this, whereas today I can only touch on some of the content. So if you are interested in knowing more after this discussion, do take a look at the report; you can see the link in the slide.
Engaging beneficiaries in evaluation is obviously not new. We have decades of experience on stakeholder engagement, participatory evaluation, beneficiary assessments and others. This session builds on the work that has gone before it. And inevitably lots will come after it as we continue to learn and improve our evaluation practice.
In this presentation we will explore 4 questions:
How do we define beneficiary feedback in the context of evaluation?
Is beneficiary feedback an approach, method or principle?
How do we meaningfully and ethically engage beneficiary feedback in evaluation?
What distinguishes beneficiary feedback from existing evaluation tools e.g. participatory evaluation?
There will be an activity that I would like us to do together halfway through the presentation.
Documents (from DFID and other development agencies), including policy and practice reports, evaluations and their Terms of Reference, web pages, blogs, journal articles and books.
Interviews with 36 key informants representing DFID, INGOs and evaluation consultants/consultancy firms, and a focus group with 13 members of the Beneficiary Feedback Learning Partnership.
Contributions from 33 practitioners via email and through a blog set up for the purpose of the research (https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/) and;
Analysis of 32 shortlisted evaluations containing examples of different types of beneficiary feedback.
Snowballing and backward snowballing techniques were used for data gathering. Requests for contributions of documents and undocumented experiences were distributed via DFID’s Evaluation Newsletter to over 170 evaluation cadre members, via posts on DFID’s internal Yammer discussion platform, via the UK BOND e-list, and via the Pelican Platform for Evidence-based Learning & Communication for Social Change and MandE group e-lists (over 2,500 users).
Blog: 1000+ views
Lack of definitional clarity has led to a situation where the term beneficiary feedback is subject to vastly differing interpretations and levels of ambition within evaluation
The starting point was therefore to develop a typology of beneficiary feedback, so as to be sure what we are talking about. Hopefully this will allow us to be very clear about the ambitions, or lack of them, that we have.
It is important to note the distinction in this paper between participatory evaluation (a specific evaluative approach, with a clear set of guiding principles, that seeks to empower and engage beneficiaries as joint owners of the evaluation process) and participatory methods, which may involve beneficiary feedback but do not amount to participatory evaluation when there is no joint-ownership approach.
No judgement is provided as to which type of feedback is better or worse; the decision must be an informed one based on the evaluation context. The position taken is that feedback is still relevant where it is one-way (and may be necessary for pragmatic reasons), although it may not represent best practice. Sometimes two-way feedback may not be appropriate or possible; it may even be unethical. I recently refused to interview beneficiaries because we were not in a position to do it appropriately. The do-no-harm principle needs to prevail, always. This is where ethics need to come first and foremost in our evaluation design, implementation and follow-up.
Principles: e.g. ethics, dissemination and participation (OECD DAC 91), a partnership approach, ownership.
The report shows there is a shared, normative value that it is important to hear from those affected by an intervention about their experiences. In reality, this has been translated into “beneficiary = data provider”. This largely extractive process risks de-humanising the beneficiary experience, with associated risks for rights-based working, learning, evaluation rigour and robustness, as well as for meeting the ethical standards that one might expect.
The review of current practice shows that:
• despite a renewed interest in developing more systematic approaches to enhancing beneficiary voice in development efforts, through feedback as well as through other methods, there is still a way to go to make concrete efforts in the context of evaluation;
• nearly all evaluation-specific examples of current practice are limited to one-way feedback from beneficiaries and to the evidence-gathering stage of the evaluation. This shows a very limited application of beneficiary feedback in the evaluation context, despite the potential for engaging both one-way and two-way feedback at the different stages of the evaluation process;
• the evaluations analysed have frequently failed to line up with the beneficiary feedback principles of the programme being evaluated.
Key Message 3: It is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them. The report proposes a simple, practical framework for beneficiary feedback in evaluation that can support decision making for all types of evaluation and at each stage of the evaluation process. The framework proposed in this report is both reasonable and achievable.
The paper proposes a simple, practical framework for beneficiary feedback in evaluation that can be used to apply a structured and systematic approach that cuts across all stages of evaluation - from design to dissemination.
The framework can be used to enable evaluation commissioners and practitioners to map different types of beneficiary feedback onto each of the different stages of evaluation to support them in making choices as to which type of beneficiary feedback is most appropriate in the given evaluation context.
ASK THEM TO TAKE THE HANDOUT AND QUICKLY MAP OUT WHERE THEY THINK THEY ARE AT:
1. By themselves, think about an evaluation they are, or have recently been, involved in.
2. Map out where they would put themselves in terms of the type of feedback engaged in at different stages of the evaluation process, putting an X in the relevant box on the handout. 2 minutes.
3. Turn to a neighbour and feed back on what they put. 2 minutes per person.
4. Come back to the group. 2 minutes.
I will ask for a show of hands of how many people felt that they had:
a) engaged beneficiary feedback in two or more stages of the evaluation;
b) engaged beneficiary feedback in three or more stages. I will ask this latter group to share whether this was one-way feedback, two-way feedback or participatory evaluation.
Key Message 4: It is recommended that a minimum standard is put in place. This minimum standard requires that evaluation commissioners and evaluators give due consideration to beneficiary feedback in each of the four key stages of the evaluation process: design, data collection, validation and analysis, and dissemination and communication.
Where decisions are taken not to solicit beneficiary feedback at one or more stages, it is reasonable to expect this to be justified in the evaluation design, so that it is clear the decision to exclude beneficiaries from the evaluation process is one of design rather than omission. Quality assurance processes should integrate this standard, and methodology papers should explain the rationale.
The framework fits in with existing evaluation principles, as well as within DFID’s systems and policies. It does not require a new set of principles. It does, however, require explicit consideration of these principles, particularly ethical principles. This will enhance the chances of moving away from extractive data collection to ethical and meaningful feedback.
Context: This section should answer questions such as:
Does the programme work directly with women, men, girls and boys living in poverty? If so, has there been beneficiary feedback in programme implementation that can be built on? Is this predominantly qualitative or quantitative? Is this reasonably robust or not?
Does the programme work indirectly for the benefit of women and men living in poverty? If so, are beneficiaries traceable? Are there existing relationships with beneficiaries that can be built on?
Concluding thoughts: How reasonable are the proposals laid down in this paper?
It is time to move beyond the normative positioning around beneficiary feedback as “a good thing” towards explicit and systematic application of different types of beneficiary feedback throughout the evaluation process. The current approach to beneficiary as data provider raises important methodological and ethical questions for evaluators. The paper highlights these and shows that it is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.
It is reasonable to expect evaluation commissioners and practitioners to give due consideration to beneficiary feedback in each of the four key stages of the evaluation process: design, data collection, validation and analysis, and dissemination and communication. Where decisions are taken not to solicit beneficiary feedback at one or more stages, it is reasonable to expect this to be justified in the evaluation design, so that it is clear the decision to exclude beneficiaries from the evaluation process is one of design rather than omission.