Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874
MSc (Development Management)
TU874 Project
How successfully are evaluations contributing to
learning by market development practitioners?
Douglas Pearman
October 2013
Executive summary
Pro-poor market development initiatives are seeing growing overseas development investment amid strong
demand from donors for rigorous evaluation to ensure value for money, and from practitioners for clear
learning about effective project delivery. Presently, there is a lack of reviews to determine whether on-project monitoring and evaluation expenditure and off-project learning initiatives are being used effectively.
This study reviews 25 practitioners’ (19 male, 6 female) views on the effectiveness of the learning process
using a novel LinkedIn analysis and direct questionnaire approach. Participants worked across 22 organisations spanning the non-profit and for-profit sectors, government development agencies and international organisations. Four key findings were identified: (1) market development practitioners consider their learning environments to be above average; (2) both non-use of evaluations and unsuitable content of evaluations are seen as significant issues; (3) practitioners benefit most from interactive learning; and (4) learning incentives are a better predictor of learning success than rapid access to evaluations through a high-powered IT system.
Recommendations are provided, including a call for meta-evaluation to consistently measure the learning achievements which practitioners find most influential; for umbrella organisations to provide funding for wide-ranging and longitudinal assessments of project impacts which are unlikely to fall within standard budgets; and for interactive learning opportunities to be promoted wherever possible on projects, as these were rated the most effective for learning by the panel of practitioners.
Contents
Executive summary
Contents
(i) Acknowledgements
1 Introduction and background
1.1 Importance of summative evaluations in development funding of commercial sector projects
1.2 Debates in evaluation for market development
1.3 Market development initiatives: the context for the study
2 Nature of the problem
2.1 Importance of learning for market development practitioners
2.2 Status of learning on market development projects
2.3 Use of evaluations for learning in market development projects
2.4 Causes of failure to learn
2.5 Fostering a culture that encourages learning
2.6 Questionnaire development
2.7 Data collection
2.8 Participants
2.9 Limitations of study design
3 Analysis and findings
3.1 Data screening / cleaning
3.2 Hypothesis 1 – quality of learning environment
3.3 Hypothesis 2 – learning in relation to evaluation use and quality
3.4 Hypothesis 3 – learning by conducting evaluations vs learning by reading
3.5 Hypothesis 4 – learning incentives vs learning through access to evaluation reports
3.6 Additional findings
4 Conclusions and implications
4.1 Overcoming the critique of evaluation reports
4.2 Targeted investment by delivery organisations into practitioner learning
4.3 Targeted investment by umbrella groups into practitioner learning
4.4 Learning by beneficiary groups and local stakeholders
4.5 Meta-evaluation
4.6 Further research
5 Bibliography
Appendix 1 - Questionnaire
Questionnaire | Use of Market Development Project Reviews
MSc Researcher: Doug Pearman
Section 1: Demographics
Section 2: Facilitators of and Barriers to Learning
Section 3: Learning in your working environment
Thank you for your participation
(i) Acknowledgements
I would like to thank all those who have participated in and supported this study.
I am grateful to Alexis Morcrette and Luis Osorio-Cortes for their guidance during the data collection
process and for granting access to the Market Facilitation Initiative (MaFI) forum. My tutor, Angela Everitt, has provided helpful critical commentary; Jonathon Bird gave insights into the topic area; and Craig Warland offered a helpful moderating influence.
1 Introduction and background
1.1 Importance of summative evaluations in development funding of commercial
sector projects
It is estimated that 33% of the growing $120bn annual overseas development assistance (ODA) is spent through technical contractors (ActionAid, 2011) and ~20% through NGOs (ODI, 2013), while the remainder is channelled through direct bilateral budget support, administration, debt relief and other routes (ActionAid, 2011).
Given rapid growth in the number of NGOs and contractors since the 1990s vying for this money (e.g. Mensah,
2001; Intrac, 2013), competition has increased and consequently donor demands for accountability and
evidence of effective delivery have grown. Market development consulting activities are carried out by actors
both in the NGO and private sectors.
Most projects budget for post-project evaluations, often conducted by external parties, with methods frequently
selected specifically to meet the donors' administrative and learning needs (Wallace, 1997). While these serve a useful purpose in reporting outcomes for donors, predetermined logframe matrices reduce flexibility in programme design, and evaluations are often polished to present projects favourably rather than written to identify failures and draw lessons. In light of this, initiatives
have arisen to realign evaluations to focus on such failures, e.g. the Engineers without Borders ‘Admitting
Failure’ initiative (EWB, 2013). This realignment is one stage in ongoing improvement of evaluations – where
the evaluation judgment is refocused onto the process of evaluation itself. In judging whether projects have
achieved their objectives and suggesting improvements, evaluators’ outputs are themselves being increasingly
scrutinised. This paper is particularly interested in whether one objective of the evaluation process is being
successfully achieved: are lessons in market development being identified, successfully learned, and then
applied?
The influence of project evaluations can be seen at the donor level. Microcredit was the “darling of international
donor agencies for over two decades”, and due to early positive evaluations received increasing international
investment year on year. However, microcredit now faces accusations of “causing havoc with borrowers’ lives”
(SSIR, 2012). Organisations providing small loans to the poor, many of which had been set up with ODA
funding, reached 205 million clients globally in 2011 (IFC, 2013), and had annual newly invested philanthropic
capital of $3.6 billion in 2010 (Washington Post, 2012). However, funding and numbers of microcredit borrowers
declined by 5% between 2010 and 2011 (Economic Times, 2013). In light of evaluations detailing no aggregate
impact on health, education, women’s empowerment or income (e.g. Banerjee et al., 2009), and subsequent
withering meta-analyses which indicate that microcredit initiatives do not on average reduce poverty (e.g.
Roodman, 2011; DfID, 2011; Washington Post, 2012), ODA funders are coming under increasing scrutiny from
their citizens about the merits of further funding such programmes. DfID has committed funding for microcredit
programmes (e.g. India until 2015 – see DfID, 2012), but no public pronouncements relating to microcredit have
been made by DfID since 2011, and DfID’s continued publication of papers questioning the effectiveness of
microcredit suggests that a refocus of programme spending is likely. This opens a gap for more effective business-focused initiatives to capitalise on this budget.
1.2 Debates in evaluation for market development
‘Summative’ evaluations can therefore play an important (if delayed) role in determining long-term investment
initiatives. Aggregated programme impact reviews provide a measure of accountability to donors that their
money is having the desired effect, and the effectiveness of programme subcomponents (e.g. the number of
farmers with veterinary inoculations for their cattle) can provide some insight into the function of certain elements
of the programmes.
Evaluation also has a 'formative' element, which may be applied to making adjustments to programmes that are underway; it may also add to the system of knowledge in market development programming so that future programmes can benefit.
However, this paper will argue that formal reports to aid formative learning have received significantly less focus, and that in volatile project environments, where practitioners are often casting around for best practice, this lack of structured resources containing easy-access project guidance is an area of concern.
M4P is still a relatively young discipline (with little more than 10 years of DfID-sponsored projects), and as pro-poor market development approaches are still under development, so too are important debates about their
delivery. These include whether the evaluation mechanism for projects should be predetermined (e.g. DCED,
2010) or emergent (e.g. Osorio-Cortes & Jenal, 2013); whether or not there is a place for pro-poor market
development in fragile post-conflict environments with their associated weak governance (e.g. M4P Hub, 2013);
and whether, given their vulnerability to change, the poor will suffer from or benefit from market change and its associated replacement of informal mechanisms with commercial services (e.g. DfID, 2008).
These debates are set in a global context of thousands of ongoing systemic market development projects, with
most of the delivery teams operating at locations physically remote from their headquarters, and guided by
bodies of expertise which themselves are still very much emerging (e.g. DCED, M4PHub, USAID Microlinks,
Springfield Centre). As in many complex and growing disciplines, the field is therefore in need of supportive tools to enable learning. In addition, it is in need of rigorous research to understand the learning processes and to assess those tools.
1.3 Market development initiatives: the context for the study
Poor people are reliant on markets to provide opportunities to sell their labour and products, and to use the
revenue to buy essential goods and services. A survey of 60,000 poor people revealed that most see
their path out of poverty “through jobs or economic opportunities” (World Bank, 2000). Despite this motivation,
for many the opportunities do not present themselves: “many work extremely hard but at levels of low
productivity, receiving low financial recompense, and thus remaining in relative poverty” (Allen & Thomas, 2000,
p.123), and these markets are often uncompetitive, and difficult or costly to access for poor people (SDC/DfID,
2008).
Private sector development initiatives aim to promote economic growth in poor countries, and a subset of these, 'Making Markets Work for the Poor' (M4P) projects, aims to propel this growth while bucking the trend of
increasing intra-country inequality (Milanovic, 2012; IMF, 2007). They also facilitate opportunities for poor
people to develop a living “within their own economy, without the risk of depending on outsiders for continuous
assistance” (DCED, 2013a).
Figure 1 Private Sector Development Framework (DCED, 2013b)
M4P and other poverty-focused market development programmes focus on ‘strengthening market systems so
that they function more sustainably and beneficially for poor people’ (ILO, undated). These interventions aim to
encourage business activity that is profitable for individual firms, as well as inclusive for the poor. Value-chain
development work commences with an analysis of the market systems and identification of underlying causes of
market weakness (e.g. need for technological innovation on the supply-side, increasing demand-side
awareness). The work then delivers interventions which aim to address this underperformance (e.g. facilitating
networks between producers and purchasers to identify a suitable marketplace for transactions; product quality
training). In particular, interventions aim for lasting sustainability beyond the departure of external project teams,
and aim to be facilitative of existing structures rather than imposing new institutions. DfID will spend £42m ($66m) on such market development projects in FY2013 (DfID, 2013); and annual expenditure by the United
States on such projects totalled approximately $400m per year between 1998 and 2010 (USAID, 2011), so
forming approximately 1% of the $44bn ODA by these two largest donor countries. Private sector development
spend has been a significant part of ODA since the 1990s (Christian Aid, 2009), with explicit use of the M4P
approach by DfID since 2002 (DfID, 2008; KPMG, 2012).
2 Nature of the problem
2.1 Importance of learning for market development practitioners
This research focuses on how effectively learning takes place on market development projects, and how
effectively this learning is shared with and applied by practitioners through evaluations and other means. This
paper then aims to review current and potential methods for better dissemination of results, and will make
appropriate recommendations.
Market development projects are usually heavily participatory and exploratory. Project initiation phases
typically include identifying key stakeholders and working with them to identify gaps in value chains during the
course of a project, thus learning and developing the approach as the project continues. There is a strong
theoretical basis to intervention (e.g. the M4P framework), but given that projects involve multiple inter-related
interventions, there is concern among practitioners about a lack of knowledge of which sub-interventions have
been effective and which have not. A peer learning event report about M4P revealed “The conceptual
framework behind M4P had been well established; practical experience in its implementation is, however, less
widely available…Several [participants] felt that the lack of a broader knowledge and understanding about
successes and failures [in traditional market development approaches] seemed stifling” (DCED, 2013d).
Although roles may differ on market development projects (from simpler price-data collection to more
complex negotiation management), in general the environment could be considered as learning-intensive (Skule,
2004).
For summative evaluation of all interventions together, the Donor Committee for Enterprise Development (DCED) has implemented a standardised results measurement approach. This fosters consistency, identifies a stepwise results chain through which changes occur, and reports both on project-intended changes and externalities. These studies have reported various summative outcomes, for example an increase in farm yield or price per ton (DCED, 2011a; DCED, 2011b), or, conversely, have questioned any direct benefit from generalised business training programmes (e.g. DCED, 2013e). When shared effectively, this results focus enables 'single-
loop’ learning with a focus on making minor adjustments or refinements to entire projects to better achieve
specific outcomes.
‘Double-loop’ learning is also required and is still more formative. Argyris & Schoen (1974) describe such
double-loop learning as a reflective process, which enables an improved understanding of goals and values of
any intervention, and hence potentially a wholesale cancellation of some interventions in preference for others.
Argyris goes on to describe triple-loop learning: learning about how learning is taking place by individual practitioners, the organisations they work for, and the beneficiaries (or indeed maleficiaries) of their interventions. These double and triple loops of learning occur most effectively in an openly sharing environment.
2.2 Status of learning on market development projects
Lessons learned from market development projects have the potential to be shared beyond the immediate
implementers / donor organisations. Existing ways of sharing the information include online discussion groups
(e.g. LinkedIn’s MaFI and M4PHub groups); evaluation report and discussion paper repositories (e.g. DCED,
M4P Hub, USAID Microlinks Library and bookmarks through MaFI-licious), training courses (e.g. the ILO market
development course, SEEP network), published thought leadership on consultancy websites (e.g. The
Springfield Centre), conference organising (e.g. USAID Microlinks), and academic journals (e.g. Enterprise
Development & Microfinance). However, the existence of these initiatives does not mean that they are
necessarily used by practitioners.
A measure of the ‘triple-loop’ learning status provides a baseline to reveal how concerned practitioners are about
the state of their learning environment, and therefore to what extent there is an appetite for change. While there
is evidence that there are a variety of potential learning opportunities available to market development
practitioners, it also seems likely that practitioners would not all have access to these learning tools. A starting
position would be to anticipate that, on average, market development practitioners consider their learning environments to be moderately good (with some above and some below average), i.e. a rating of 3 on a scale of 1 to 5. This forms Hypothesis 1 for the study.
2.3 Use of evaluations for learning in market development projects
Project evaluations are clearly not the only component of an effective learning strategy, but their individually
identified lessons are important for feeding a measured debate. In addition, approximately 10% to 15% of on-project expenditure is estimated to go on monitoring and evaluation (M&E) (e.g. DCED, 2011a; DCED, 2011b), implying approximately $50m spent annually on M&E of these market development projects. If (as is expected) the proportion of ODA invested in market development increases, the potential wealth of knowledge these evaluations hold will also increase. This raises an important question: how effectively are the existing evaluation reports contributing to learning in market development?
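As a rough check, the arithmetic behind these figures can be reproduced in a few lines; the Python sketch below uses only the report's own estimates, and the variable names are mine:

# Rough reproduction of the spending estimates cited in sections 1.3 and 2.3.
# All figures are this report's own estimates, in US$ millions per year.
dfid_market_dev = 66        # DfID FY2013 market development spend (~GBP 42m)
usaid_market_dev = 400      # approximate annual US spend, 1998-2010
total_market_dev = dfid_market_dev + usaid_market_dev

total_oda_two_donors = 44_000   # ~$44bn combined ODA for the two largest donors
print(f"Share of ODA: {total_market_dev / total_oda_two_donors:.1%}")  # ~1.1%

# M&E is estimated at 10-15% of on-project expenditure (DCED 2011a; 2011b).
low, high = 0.10 * total_market_dev, 0.15 * total_market_dev
print(f"Implied annual M&E spend: ${low:.0f}m to ${high:.0f}m")  # ~$47m-$70m, i.e. ~$50m+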
Criticism from practitioners reveals that evaluation has been ‘professionalised’, with many reports conducted by
external specialists in evaluation, rather than those intimately involved with the delivery of the projects. While
this contributes to objectivity and frankness, and capitalises on experience to spot similar problems on different
projects (Agralytica, 2013), professional evaluators’ focus on predefined criteria may show the emerging market
development projects in a bad light, as often project objectives diverge from their initial intentions (Osorio-Cortes
& Jenal, 2013). In addition, even when internal staff conduct the evaluations, turnover is often such that
those staff are no longer around by the time it comes to implement the changes (ODI, 2010).
In addition, evaluation reports are usually written primarily for donor accountability purposes, and hence may
simply extract evidence from the beneficiaries rather than involving them in the learning process or writing
reports to suit their needs as consumers (see Goetz, 2003 for a discussion). This has a further knock-on effect
for learning: if non-donor customers of evaluation reports are not identified at the time of writing the report, and
and their use of the report is not followed up, it becomes significantly less likely that the results will end up being
used (Preskill & Caracelli, 1997), as they may not be posted online, stored in the correct repositories, advertised,
or even written with those alternative audiences’ needs in mind.
Given these criticisms of project evaluations, and others, there is much discussion of how to improve the content
of evaluations in market development projects (e.g. Osorio-Cortes & Jenal, 2013), but little discussion of how to
improve their circulation and availability. This implies that either a stronger failure has been noted in the content
of the evaluations, or more likely (given the recent growth in this field), that circulation has simply not caught the
attention of practitioners. This paper will therefore review whether practitioners identify that barriers to learning
are more to do with existing evaluation reports not being used, or with the quality of the existing evaluations being insufficient (or a combination of both). Hypothesis 2 of the study is therefore that practitioners consider
evaluations to be underused (e.g. unread) more than practitioners consider the content of evaluations to be
unsuitable (e.g. not measuring the right information).
2.4 Causes of failure to learn
According to Roche (1995), “All learning proceeds from difference; difference between planned and actual
findings, difference between people’s points of view, etc”, and is considered to be a cycle of action, reflection,
replanning and new action (Kolb, 1984). This can be partly achieved by setting targets, measuring against those
targets and then identifying a discrepancy (single loop learning). Double-loop learning is likely to be facilitated
by identifying a difference between credible points of view on ways of managing the projects, and concluding from the comparison that things should change.
However, a study in the private sector (Dell & Grayson, 1998) indicated that the major causes of a failure to learn included ignorance that other skilled practitioners held the required knowledge, and a lack of credible connections to those individuals: even if information were published about more effective management techniques, it was not always clear that the knowledge holders would be trusted. The authors made another interesting finding which may help set expectations for how effective learning could be within the multiple diffuse organisations which operate in the market development space. They found that even
in the best single companies, on a single site, a lesson learned in one part of the organisation would take 27
months to filter through to another part of the organisation. With more diffuse operating groups, working more
remotely, this could be expected to take significantly longer.
Additional major causes of a failure to learn can be understood in terms of depth-of-processing theory (e.g. Craik
& Lockhart, 1972). A more fragile understanding is gained simply from reading about or uncovering the ‘face
characteristics’ of a new way of working, for example by accessing an evaluation report on an online database.
A deeper understanding, and indeed one more likely to influence practice, comes from personal involvement in
the learning process (e.g. conducting an evaluation yourself), and from an ability to learn through discussion so that the
new information embeds within existing knowledge (e.g. meeting or communicating in real time with practitioners
who hold the alternative knowledge).
Hypothesis 3 of this study is therefore that practitioners are likely to gain more from the personal process of
conducting evaluations than from consuming formal reports of findings of those evaluations.
2.5 Fostering a culture that encourages learning
Organisational culture can be difficult to manufacture and to change. ODI (2010) reported, in the case of DfID, that one important predictor of whether learning would take place within an organisation was the direct incentives given to individual workers for using evaluation reports and their 'lessons learned', as opposed to those workers being incentivised for focusing on original thought or for using ideas that were in vogue elsewhere. It is unclear whether these learning incentives currently exist within the multiple systemic market development organisations, but we might anticipate that, to maintain agility, and given significant existing criticism from delivery organisations that too much time is spent on bureaucratic measurement of effectiveness, those organisations which incorporate lesson-learning review it informally where possible.
ODI (2011) report that a series of organisational initiatives have consistently encouraged learning, including
intra-organisational secondments, secondments to external research institutes, and opportunities to better
understand colleagues’ realities by cross-departmental working (e.g. evaluators working on delivery
programmes, and programme staff conducting evaluations of other projects). In a similar fashion to the market
development programmes operating on a number of small interventions to make a macro-level change,
organisational culture is also the product of a number of smaller cultural change initiatives.
Data-sharing on a shared drive could also be considered a practical method for encouraging learning, so that
different practitioners within an organisation can pool data. With suitable investment, organisations can take
advantage of a full knowledge management system, whereby documentation can have associated metadata,
can be indexed and searched, has gatekeepers who ensure the quality of the content, and has a systematic
format so that like can be compared with like (see Hansen et al, 1999 for a discussion). There is little evidence
available about the prevalence of in-house knowledge management systems by systemic market development
practitioners. Reflecting again on the depth-of-processing theory (e.g. Craik & Lockhart, 1972), we might
anticipate that the simple presence of the data would be insufficient for learning to occur; what matters is whether it is used. We would therefore expect to see an effect on learning if incentives to engage were present
in tandem with a good repository of learning documentation.
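As an illustration of what such a knowledge management system implies in practice, the sketch below models a single gatekeeper-approved, metadata-tagged report record in Python; the record fields and search function are illustrative assumptions, not a description of any system the surveyed organisations actually run.

from dataclasses import dataclass, field

@dataclass
class EvaluationRecord:
    """One evaluation report in a hypothetical knowledge management system."""
    title: str
    project: str
    year: int
    keywords: list[str] = field(default_factory=list)
    gatekeeper_approved: bool = False  # quality-assured before being searchable

def search(records: list[EvaluationRecord], keyword: str) -> list[EvaluationRecord]:
    """Return gatekeeper-approved records tagged with the given keyword."""
    return [r for r in records
            if r.gatekeeper_approved
            and keyword.lower() in (k.lower() for k in r.keywords)]

catalogue = [
    EvaluationRecord("Learning from Experience", "Tanzania", 2012,
                     keywords=["facilitation", "systemic change"],
                     gatekeeper_approved=True),
]
print([r.title for r in search(catalogue, "facilitation")])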
These considerations lead to Hypothesis 4 – that in combination both good access to written evaluations
(through a Knowledge Management System) and incentives to learn will be a significant predictor of a good
learning environment, but that availability of a knowledge management system by itself would have no effect.
To test these hypotheses, initial engagement was made with a suitable partner organisation which had both access to practitioners in the field and helpful advice to smooth the research method. Practical Action (www.practicalaction.org), a market development charity with both a publishing arm and a consulting arm,
was a suitable candidate, particularly given its ongoing facilitation of the MaFI online market development learning forum and related evidence of being a highly developed learning organisation.
A web forum review was identified as the best approach for accessing the latest thinking on the topics of
‘learning’ and ‘evaluation’ in market development. The MaFI web forum (LinkedIn, 2013) has been active since
October 2009, is accessible only with permission of the group moderator to maintain standards; and as of 20
September 2013 had 319 members and approximately 700 discussion threads, with 3,154
comments in total. Membership has increased consistently over this period, but the engagement from
participants reached a peak of 20 comments per week in 2011-2012, and has declined to approximately 10
comments per week in 2013. All 27 forum comments containing both the words 'learning' and 'evaluation' were reviewed, as were the titles of all 280 discussions containing the word 'learning'; references to learning
resources, academic papers discussing learning/evaluation in market development, and good practice in
evaluations were recorded.
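The screening itself amounts to a keyword filter over the exported threads. The following Python sketch assumes a hypothetical export structure (neither LinkedIn nor MaFI provides one in this form); it is illustrative only.

# Illustrative reconstruction of the MaFI forum screening step. The export
# format below (title -> list of comments) is a hypothetical assumption.
threads = {
    "Measuring learning in facilitation": [
        "Our learning depends heavily on evaluation quality...",
        "Agreed - the results chain helps here.",
    ],
    "Facilitation tactics": ["Try pairing producers with buyers."],
}

# Comments containing both 'learning' and 'evaluation' (27 in the actual review).
matching_comments = [
    c for comments in threads.values() for c in comments
    if "learning" in c.lower() and "evaluation" in c.lower()
]

# Titles of discussions containing 'learning' (280 in the actual review).
learning_titles = [t for t in threads if "learning" in t.lower()]

print(len(matching_comments), len(learning_titles))  # 1 1 on this toy data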
Subsequently, these data were reviewed and initial informal discussions were conducted with Practical Action
staff members to validate the main learning organisations and repositories of market development knowledge.
The ideal methodology identified was a mixed-methods one: formalised and structured data collection through questionnaire completion, combined with informal qualitative research through case interviews.
2.6 Questionnaire development
Reviews of the existing literature (e.g. Preskill & Caracelli, 1997, Dell & Grayson, 1998), revealed that some
existing questions could be reused and thus enable comparison between past results and current results. A
series of new questions were also generated to test the hypotheses: demographic questions (x6) to gauge
whether the sample was representative, questions relating to what facilitates and what blocks learning (x4) and
questions about learning in the participants' personal working environments (x7).
The demographic questions were informed by representativeness commentary in the Preskill & Caracelli (1997)
paper - to ensure that the sample was representative by gender, age, job role, level of experience, and
employing organisation. Further questions were informed by the review of the MaFI forum and associated
literature, which revealed a series of example barriers to learning and processes through which people learned,
which were used as prompts in the multiple-choice questioning. For flexibility, an additional 'other' category was included in most questions.
This questionnaire was then piloted with staff from Practical Action and refined to make the language clearer to participants, to correctly categorise the types of organisations they worked in, and to reduce the overall length of the questionnaire and remove any qualitative questioning. The questionnaire was timed at ~10 minutes using the Versta Research Tool (Versta, 2013), under the 15-minute threshold at
which response quality becomes poorer. The final version of the questionnaire is included in Appendix 1. The
recommendation was made that, to align with the prevailing culture of the MaFI forum, no financial incentive
should be given to participants for their engagement; instead, active contribution on the forum would best raise the profile of the researcher and the research project.
2.7 Data collection
Power calculations were then conducted for a margin of error of +/-10% at the 95% confidence level, revealing that a sample size of 75+ would be required for statistical rigour and to make regression analyses on the outcome variable (current perceived level of learning in the respondent's organisation) possible. This was therefore set as the initial target sample size for the questionnaire, which was advertised actively on the MaFI forum and on other associated social media pages.
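The calculation itself is not shown in the report, but a standard sample-size formula for estimating a proportion, combined with a finite population correction for the ~319 MaFI members, reproduces a figure in line with the 75+ target. The worst-case p = 0.5 and the use of the finite population correction are my assumptions, not the author's stated method.

import math

def sample_size(margin: float, population: int, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size to estimate a proportion to +/-margin at ~95% confidence,
    with a finite population correction for a small sampling pool."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size (~96)
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(sample_size(margin=0.10, population=319))  # 74, close to the 75+ target

Without the correction the same formula gives n of roughly 96, so the small size of the MaFI pool matters here.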
Initial response to the questionnaire was very limited, partly due to data collection taking place over the summer
months. When engagement on the forum and building of a research profile showed little sign of rapidly
improving this (and leading discussions of 'learning' in the market development space risked biasing the results), a more active approach was taken to engage each potential participant directly. 75
participants from the MaFI forum who listed ‘M4P’ as one of their skill areas were then individually researched,
as well as 10 participants from a recent high-profile market development debate featured in The Guardian newspaper (Guardian), using the LinkedIn business profiles associated with membership of the forum. The 52 participants for whom the relevance of their interests and experience to the project was apparent were emailed directly, with multiple statements about how their knowledge was relevant, personalising the emails to mention their name and their employer's name. This led to an improved response rate of ~50%.
Questionnaires were collected electronically using the GoogleForms tool, thus avoiding transcription errors. Of
the 26 questionnaires received from participants, 25 had completed all sections. Thus statistical significance was unlikely to be achieved, but major trends would likely be visible.
2.8 Participants
The participants were 19 males and 6 females, broadly reflecting the 70:30 male:female ratio of members of the MaFI forum. Participants' ages were broadly normally distributed, with a mean of 39 years. This was not unusual, given that many market development professionals have had work experience in the private sector prior to entering the market development space, although a more representative sample might be expected to include more younger participants. No information was gathered on ethnicity or country of background; this could be a useful
addition in an extension study.
Participants had a mean of 8.4 years’ experience of market development evaluations, broadly normally
distributed, which may reflect the nature of the MaFI pool, whose entrants are assessed for their commitment to market development prior to admission. Typically a pyramid distribution would be expected, with a greater number
of individuals who have less experience, particularly given younger professionals’ greater experience with social
media. These participants held a mixture of employment levels, including managers, directors and an owner, but
mostly (50%) consultants, and spanned 23 organisations across for-profit, non-profit, international organisation and governmental categories.
Figure 2 Q1.5/1.6 What is the name of your current or most recent main employer/client? Which of
the following best describes the role of your current or most recent employer/client?
The study was planned with the expectation that the sample should be representative of the population of
market development professionals on demographic characteristics. Broadly these were satisfactory and representative, so no further proactive measures were taken to target underrepresented groups.
2.9 Limitations of study design
One limitation was revealed in a recommendation from a study participant: "your instrument might reveal some more interesting insights if you had some room for qualitative responses". Qualitative space would have allowed a more detailed understanding of precisely which online learning tools are used, useful stories of how learning has taken place, and the particular nuances of learning within the market development environment. In much the same
way that a deeper form of learning can proceed through discursive engagement, this has the potential to add
significant weight to the research process.
A second limitation is that the pool of participants was naturally self-selected as those who were in some way
engaged with online social media (e.g. MaFI), and who had self-defined on their social media profiles their
specialist interest in market development activities. This has the potential to bias the results towards individuals
who are more engaged in the community of market development learning, which could influence the conclusions
drawn. A more balanced participant pool could have been accessed more slowly through engaging with all
practitioners at one organisation, and by broadening the reach to include donors and beneficiaries as well, and
would have enabled practitioners to list the learning tools they were aware of.
A final limitation comes from my cultural bias in having learned largely through formal academic structures, which is likely to have constrained the predefined response categories I provided for the learning questions. My western psychology background is likely to predispose me to emphasise social learning (e.g. through mimicry of others) over potentially more common forms of repetitive learning, or other forms which might be more familiar to other participants.
3 Analysis and findings
Findings relating to the four hypotheses are discussed below: Hypothesis 1 was rejected, while Hypotheses 2, 3 and 4 received some support. In addition to these, data relating to the users of evaluations and the most
effective ways in which participants learn are also presented.
3.1 Data screening / cleaning
The data was cleansed for duplicates, incomplete entries, and responses requiring recoding to aid analysis and
testing of the four hypotheses. Duplication was minimised due to electronic submission (errors were only found
in free-text organisational names); one respondent (participant #10) who had submitted only half of the
questions was omitted from relevant analyses; and participants who had reported 'other' as a response to questions were checked and, where relevant, recoded into the predefined categories.
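A sketch of these screening steps using pandas is given below; the file and column names are hypothetical, since the GoogleForms export schema is not documented in the report.

import pandas as pd

# Hypothetical column names; the actual GoogleForms export schema is not given.
df = pd.read_csv("responses.csv")

# Normalise free-text employer names (the only field where near-duplicates
# appeared), then drop duplicate submissions.
df["employer"] = df["employer"].str.strip().str.lower()
df = df.drop_duplicates()

# Omit respondents who completed fewer than half the questions
# (participant #10 in this study).
df = df[df.notna().mean(axis=1) >= 0.5]

# Recode free-text 'other' answers into the predefined categories where they fit.
df["org_type"] = df["org_type"].replace({"charity": "non-profit",
                                         "consultancy": "for-profit"})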
3.2 Hypothesis 1 – quality of learning environment
H1: Moderate learning environments: People consider their working environments to be generally moderate in
terms of learning about market development projects (i.e. 3 on a scale of 1 to 5).
The results from Q3.1 revealed an average rating of exactly 4/5, which can be understood as “My personal
working environment is effective in helping me deliver better market development programmes.” This was
higher than anticipated, with no individuals rating the opportunities to learn in their working environments as
below average. This indicates that despite the remote, emergent and small-organisation model of the working
environment, no participants perceived a workplace which was so flawed that it fundamentally constrained their
learning. It also implies that the participants were broadly satisfied with their workplace (a ‘halo’ effect tends to
reduce satisfaction across all measures when asking workers about their organisations). Despite these positive
characteristics, the results do clearly show that 77% of the participants could imagine a more effective learning
environment, and so while opportunities to learn are available, there is still significant room for improvement.
Figure 3 Q3.1 How effective is your personal working environment in helping you learn how to better
deliver market development programmes?
When stratified by the demographic characteristics, there appears to be no difference by gender, age, or
seniority in how well the learning environment is perceived, although there may be an organisational-type dimension: government workers reported less satisfaction (3.3/5) than did for-profit (3.9/5) or non-profit (4.2/5)
workers. This is unlikely to reflect an ‘illusory superiority’ effect (Kruger & Dunning, 1999), where all individuals
consider their ability to learn to be ‘above average’, as the individuals are commenting on their organisation’s
learning environment, and not their own abilities. Instead – it may result from the nature of the learning-intensive
requirements of the work. Hypothesis 1 is therefore refuted.
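The stratified comparison amounts to a grouped mean of the Q3.1 rating by organisation type; a minimal sketch with made-up responses follows (the study's raw data are not published):

import pandas as pd

# Illustrative responses only; values are 1-5 ratings of the learning environment.
df = pd.DataFrame({
    "org_type": ["government", "for-profit", "non-profit", "non-profit", "for-profit"],
    "q3_1_rating": [3, 4, 4, 5, 4],
})
print(df.groupby("org_type")["q3_1_rating"].mean())
# The study found: government 3.3, for-profit 3.9, non-profit 4.2.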
3.3 Hypothesis 2 – learning in relation to evaluation use and quality
Hypothesis 2: Practitioners consider evaluations to be underused (e.g. unread) more than practitioners consider
the content of evaluations to be unsuitable (e.g. not measuring the right information).
The results from Q2.3 revealed balanced opinion from participants about the greatest barriers to learning how to deliver market development programmes. 12 respondents considered the way the results were distributed / used to
be mostly at fault, e.g. “a lack of credible relationships between the users and the writers of the evaluation
reports”, while 9 respondents considered the quality to be the issue at fault, indicating “project reviews do not
contain the right information”.
Meaningful alternative statements falling into neither category were provided by 3 of the remaining 4 respondents, e.g. "The timeframes for evaluating results and for delivering the project are not well aligned" (Participant #2); "Most programs are not considering the complexity of markets" (Participant #4); "[Delivery personnel] remain very much on the outside of any micro, small to medium enterprise and often do not have private sector experience themselves… in many cases, [these] are just newly named programmes run by non-profits and pushed by donors" (Participant #14); and "where evaluators are involved in measurement, they are rarely around often enough to keep measurement apace with changes in programme strategy" (Participant #11).
Q2.4 delved further into the detail of non-use of evaluation findings, and found that 68% of respondents saw non-use of findings as a major problem. By contrast, in a previous survey of professional evaluators in which these questions were asked (Preskill & Caracelli, 1997), only 46% of respondents saw non-use as a major problem. This suggests a major underlying problem, given the massively increased internet connectivity and availability of free webspace to publish data in 2013 as compared to 1996, when the comparison paper's data were collected and only 10% of US adults had internet connectivity (Slate, 2009). If findings are
still not being used, is this now as a result of information overload?
Figure 4 Q2.4 How would you characterise non-use of evaluation results when evaluations are
carried out on market development projects?
Taken together, these results indicate that neglecting to use evaluations and a lack of evaluation quality are both major issues, but also that multiple inter-related barriers to learning exist; thus Hypothesis 2 is only
weakly supported.
3.4 Hypothesis 3 – learning by conducting evaluations vs learning by reading
Hypothesis 3: practitioners are likely to gain more from the personal process of conducting evaluations than from
consuming formal reports of findings of those evaluations.
The responses to Q2.2 support this: 81% of respondents either agreed or strongly agreed with the statement, and none disagreed.
Figure 5 Q2.1 The process of conducting an evaluation is more important for learning how to deliver better programmes than considering the findings of completed evaluations
Q3.4 considers whether individuals had access to particular learning approaches, which revealed that in addition
to participants preferring interactive processes for learning, most also reported having the time and ability to work first-hand on projects (96%), to discuss issues with colleagues face-to-face and at a distance (76%, 60%), and to conduct reviews (68%). In contrast, the more passive learning approaches were less available to participants:
reading reviews of their own project was most common (64%), followed by reading reviews of other projects (44%) and attending formal academic courses (16%).
The one interactive learning approach which was unavailable to the majority was structured non-academic training courses, which only 48% had the time or ability to engage with.
Given that individuals do agree that they learn more through active methods, the hypothesis is supported; the
clear gap in interactive learning identified is for individuals to attend structured discursive training courses.
3.5 Hypothesis 4 – learning incentives vs learning through access to evaluation
reports
Hypothesis 4: In combination both good access to written evaluations (through a Knowledge Management
System) and incentives to learn will be a significant predictor of a good learning environment, but that availability
of a knowledge management system by itself would have no effect.
If the responses to Q3.7 (learning incentives) are scored with 1 point for each incentive, the responses show a small, non-significant positive correlation (Pearson's r = 0.22, ns) between learning incentives and
perceived quality of the learning environment, i.e. suggesting that greater incentives (e.g. learning incorporated
into job description; personal recognition/praise from management) may be associated with an enhanced
environment for learning.
If the responses to Q3.6 (IT system access) are scored with 1 point for each component of system functionality,
the responses show a small, non-significant negative correlation (Pearson’s r = -0.28, ns) between access to
evaluation through IT systems and perceived quality of their learning environment, i.e. suggesting that a more
effective IT evaluation repository (e.g. 'fast and responsive', 'data quality is well managed') may be associated with a weaker environment for learning.
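Both correlations reduce to scoring each respondent and computing Pearson's r over the pairs; a sketch with illustrative data follows (the raw responses are not published):

from scipy.stats import pearsonr

# Illustrative data: incentive counts from Q3.7 (0-7 options ticked) against
# the 1-5 learning-environment rating from Q3.1, one pair per respondent.
incentive_score = [2, 5, 4, 1, 6, 3, 4, 2]
environment_rating = [3, 4, 4, 3, 5, 4, 4, 4]

r, p = pearsonr(incentive_score, environment_rating)
print(f"r = {r:.2f}, p = {p:.3f}")  # the study reports r = 0.22 (ns) for incentives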
An independent-samples t-test was conducted to compare the perceived quality of the learning
environment where good learning incentives (Incentive score >=4/7) were combined with good IT system
access (IT system access score >=4/7); and the perceived quality of the learning environment where either
incentives or good system access were lacking (either score <4/7). There was no significant difference in the
scores for the combined condition (M=4.1, SD = 0.78) and the lacking condition (M=4.0, SD=0.67);
t(16)=0.33, p = 0.74. In short, having incentives to learn at work combined with a method of accessing project evaluations did not have a major effect on whether individuals feel they are part of a learning organisation.
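The comparison reported above corresponds to a standard independent-samples t-test; the sketch below uses illustrative values sized to match the reported degrees of freedom (6 and 12 respondents):

from scipy.stats import ttest_ind

# Q3.1 ratings split by whether a respondent had BOTH incentive and IT-system
# scores >= 4/7; values are illustrative, not the study's raw data.
combined = [4, 5, 4, 4, 3, 5]                     # study: M = 4.1, SD = 0.78
lacking = [4, 4, 3, 4, 5, 4, 4, 4, 4, 4, 5, 3]    # study: M = 4.0, SD = 0.67

t, p = ttest_ind(combined, lacking)
print(f"t = {t:.2f}, p = {p:.2f}")  # study: t(16) = 0.33, p = 0.74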
Figure 6 Q3.6/3.7/3.1 In which of the following ways is learning incentivised in your working
environment? If you search for market development project evaluations using an IT system, what
functionality does that system have? How effective is your personal working environment in helping
you learn how to better deliver market development programmes?
Given that the sample size is too small to identify these as statistically significant results, conclusions should be cautious.
The absolute values indicate that some participants were missing basic learning incentives in their workplace (e.g. recognition of learning – 3 lacked this; time allocated for learning – 5 lacked this; learning reviewed as part of the performance appraisal process – 5 lacked this). This is
consistent with the DfID study (ODI, 2010) which indicates a need for enhancement in learning incentives across
the organisations.
3.6 Additional findings
The results thus far have indicated that while learning is not fundamentally failing, to improve the process in the
majority of cases, practitioners should be more incentivised to learn and should participate in more evaluations, and the data from those evaluations need both wider circulation and improved content.
Q2.1 investigates which are the most effective mechanisms for learning. A comparison of the results provides the following data:
Figure 7 Q2.1 Which are the most effective mechanisms for learning about how to better deliver
market development projects?
These data, together with the results from Q3.2 (which indicate that 68% of participants considered evaluations either extremely or very important in learning how to better deliver market development programmes), indicate that evaluations remain a particularly important element in the learning mix, but that facilitation of face-to-face discussions is more important than access to evaluations. In particular, this reveals that even though interactive structured training courses are valued by participants, there may be a reason why this more expensive form of learning is not prioritised: independent forms of learning, including reading evaluations, are rated equally important for learning by participants. Considering the pattern of responses allied to interactive learning, this may bring into question the quality of the interactive training courses available.
Q3.3 investigates the outcomes from evaluations conducted at the participant’s organisation. These data
corroborate the theory that donors’ administrative requirements are satisfied most, and that beneficiaries benefit
least directly from evaluations. Surprisingly, and in contrast to critics who claim that evaluations are purely an
obfuscating paper exercise for donors, the majority of practitioners also believe that donors learn from the
outcomes of the evaluations and use the results to make better funding decisions.
Figure 8 Q3.3 "Project reviews achieve the following in my working environment…"
The diagram below summarises these findings and provides an expandable framework ready to incorporate additional information from the literature. In particular, this single view allows for consideration of where budget could most effectively be spent to encourage learning among donor, practitioner or beneficiary groups.
Figure 9 Influence diagram of factors leading to learning in market development
4 Conclusions and implications
Many different approaches for encouraging learning have been discussed in the market development space, and
development managers may be uncertain about which of these are the most suitable candidates for allocating
limited internal organisational development resources. Most programmes will have a contractual requirement with
donors to supply a satisfactory evaluation to meet administrative requirements. This paper has disentangled
those additional activities beyond such an evaluation which practitioners considered most effective for learning,
supporting the view proposed in ODI (2011) that “evidence is a necessary but not sufficient component of
improved performance and practice”, and that sole focus should not be levelled at the evaluation reports
themselves.
4.1 Overcoming the critique of evaluation reports
The results have indicated that written evaluations have only a moderate impact on practitioner learning, combined with a view from practitioners that they are underused. Availability through technology systems does not seem to be the major obstacle, and where evaluations have been effectively targeted (e.g. towards satisfying donor requirements), they have been very effective.
A malaria-prophylaxis metaphor illustrates the deficiencies in the approach that is being taken with evaluations.
Simply making antimalarial drugs, or mosquito nets, or ‘IRS’ (Indoor residual pesticide sprays) available
throughout an area affected by malarial mosquitos may well reach fortunate individuals who have the time,
acquired knowledge, or money to opt for the preventative measure. However, given the multiple competing
drains on individuals’ resources, reaching those enlightened individuals is likely not enough to ensure
widespread protection. In the malaria example, high-risk districts and vulnerable individuals are targeted by
relevant health-promoting organisations to maximise the impact of the available protective measures.
Similarly, simply storing evaluation reports onto an ever-growing database is likely to reach a limited minority of
informed market development practitioners, and especially limited if those evaluation reports are not packaged
invitingly. If, however, evaluation-generating organisations were measured on their ability to seek out and
provide learning results to those most in need of the valuable lessons, evaluation content and storage would be
designed to achieve much greater influence. This indicates that evaluation reports should be written actively
with the most needy markets or the most vulnerable customers in mind. In addition, to incentivise evaluators’
performance, part of their terms of reference for delivering the project should ensure that the evaluation
outcomes have been passed onto the relevant parties.
4.2 Targeted investment by delivery organisations into practitioner learning
The data collected in this study suggests that the interactive initiatives most likely to encourage learning include
recruiting experts into the organisation to work alongside practitioners on project, ensuring each practitioner
participates in a regular process of evaluating their own work on project, and facilitating opportunities for
practitioners to attend face-to-face conferences, small-group lunches, social events with other delivery
organisations, and locally organised seminars. Less desirable investment choices are interactive training
courses (perhaps a reflection on their current quality), academic courses, and IT solutions to enable individuals
to access evaluation reports. There may indeed be opportunities for organising improved training initiatives;
credibility is clearly important here, so internationally recognised trainers should be used where possible.
Although initiatives for learning are important, reflection is just as important (Skule, 2004), and individuals may
require time and space for some of the learning to take place. In social contexts, reflection can lead to “Fear of
exposing oneself”, or be constrained by “Loyalty to colleagues and friends” (Pasteur, 2004). Reflection could be
facilitated through the incentive schemes which participants in the current study indicated are associated with a
better learning environment – for example, an organisational expectation that teams complete periodic
self-reflective writing exercises. Organisations seeking this result may also need to treat effective reflection
itself as a skill area, investing in “awareness raising, training and skills development…to ensure that
organisational policy is effectively transformed into practice” (Pasteur, 2004).
4.3 Targeted investment by umbrella groups into practitioner learning
Delving deeper into the existing evaluation reports, the reasons for individuals’ reluctance to trust written
findings become apparent. DCED (2012) prominently publishes a project evaluation paper entitled ‘Learning
from Experience’, based on a Tanzanian project. The “Summary of Good Facilitation Practices” in the paper
contains somewhat unclear guidance, e.g. “Achieving Systemic Change: 1) Work on business environment, 2)
High number of stakeholders involved; 3) Sequencing and interlinking interventions”, while the “Summary of
Bad Facilitation Practices” offers “Achieving Systemic Change 1) Low number of stakeholders”. For time-poor
practitioners to be rewarded and become repeat users of the ‘lessons learned’ sections of evaluation reports,
these need to be concise, but also specific, challenging, and actionable; otherwise the lessons learned serve
only as an internal memory aid for those writing the reports, and not as an externally usable asset. While
removing barriers to the publication of evaluations is laudable (academic journal paywalls may prove to be one
such barrier), the internet ‘information overload’ phenomenon requires evaluation gatekeepers on the leading
market development websites to be more rigorous than ever and to ensure content providers are aware of the
quality thresholds.
The Center for Global Development (a US think-tank) also sheds some light on why the practitioners in this
study identified learning from other organisations’ evaluation reports as a particularly weak area. It identifies
that individual project evaluations are often relevant mainly to the setting in which they were conducted (and to
the associated practitioners), whereas more distant practitioners and other organisations want to consider the
long-term impact of interventions (e.g. stable changes in income generation) before implementing changes.
Such assessments are often not completed because they demand “studies that are different from project
monitoring or process evaluations”, and the incentives for producing such a ‘public good’ may require external
funding: “the cost of producing such studies are borne by individual institutions or agencies, yet once the results
of such studies are available they can be accessed by anyone to improve policy” (CGDev, 2006).
There is evidence that groups including the SEEP Network are taking on a cross-organisational role in
facilitating learning forums for market development, including through their funding of MaFI.
4.4 Learning by beneficiary groups and local stakeholders
It is helpful to distinguish the poor end-users from organisational stakeholders (which can include individual
firms and regulatory agencies). While project activities may involve capacity building of both groups, and
evaluation data collection must involve both groups, it is unclear whether the products of the evaluations, as
well as the process of the evaluations, are used by either group. Typically, each individual firm may come into
contact with only a limited number of M4P initiatives, so the relatively formulaic lessons may be unrewarding for
them to learn, as they will have limited opportunities to apply them. Particularly where lessons learned fall
outside their product operating domain, they are unlikely to be interested. Furthermore, given the political nature
of engaging organisational stakeholders (many of whom may be suspicious of the intentions of the market
development professionals), frank discussion and recording of the effective political approaches used to
engage them, and of the problems faced and overcome, may be too sensitive to share. This may disincline
practitioners from promoting frank evaluation reports to those organisational stakeholders, as doing so may
prejudice future engagements.
Mayoux (2005) and Taylor & Soal (2003) perceive participatory evaluation as enabling and empowering for the
end-user if conducted in a suitably time-sensitive manner, without raising unnecessary expectations. This also
aligns well with the market development mantra of sustainability beyond the project engagement. However, the
donor business milieu in which market development projects operate is likely to be significantly bottom-line
driven, and there will be some sensitivity about M4P interventions not showing immediate results. Tangible
financial outcomes are therefore likely to be prioritised, as these fit the requirements of the subsector of donor
agencies who support such initiatives. Capacity building with end-user participants through the generation of
project evaluations may well highlight deficiencies in local market regulation and help identify the importance of
collective bargaining, but when capacity for M&E investment is limited, this may naturally not be the first priority
for practitioners. In addition, the complex written reports are themselves likely to be relatively inaccessible to
small-scale producers. In projects which have multiple rounds of funding (e.g. Katalyst 2010-11, 2011-12; Chars
Livelihoods Programme 2004-10, 2010-2016), investment in post-phase training of end-users may be more
viable, since there is no reason why it could not be incorporated into a subsequent project phase.
While this research has therefore highlighted the limited learning by stakeholders beyond donors and
practitioners, it would appear that this gap is filled where necessary by directed capacity-building initiatives. A
shared resource portal for such groups may be a desirable hope for the distant future, but it does not seem a
priority at this stage.
4.5 Meta-evaluation
Given the significant investment in evaluation reporting and donors’ growing focus on value-for-money, learning
processes should be subject to the same ongoing evaluation methods as the initiatives they are facilitating. One
recent market development paper about a learning conference (DCED, 2013d) is representative of the current
state of reflection in the sector. The paper contains detailed consolidated guidelines on six topics (e.g.
negotiating deals, designing and implementing strategies), based on requests from participants. However, it
makes no attempt to measure the learning from the conference or to identify how well the information is used.
Summative project performance, measured by impact on stakeholders, cannot be considered a sufficient
measure of learning: external market conditions have a substantial impact on the success of market
development projects, for example when the local economy, commodity prices, or the competitive environment
changes. However, informal learning (where set curricula and qualifications do not exist) is notoriously
challenging to measure.
One solution would be to set a curriculum for practitioners. Alternatively, meta-evaluation can be done at the
local, informal level through regular staff appraisals of what has proven most useful for them over the previous
six or twelve months. In addition, internal IT systems could incorporate electronic feedback loops into each
internally written evaluation report, with the technology identifying how many individuals have accessed it and
allowing users to feed back on the quality of the written materials.
Meta-evaluation could also be done more formally, sector-wide, through a regular ‘pulse’ research initiative: first
collating a pilot data pool of the major learning initiatives that took place across the sector in the period, and
then asking participants to rate each of the learning initiatives they engaged with over the past year and to
provide feedback on their effectiveness. Using this approach, a market could be established within which
individuals rate their learning experiences, leading to poorly performing initiatives being adapted and effectively
performing initiatives being better funded. No doubt such an approach exists informally, with practitioners
discussing what has been good and less good and gravitating towards better performing initiatives; no doubt,
too, avoiding excessive bureaucracy and form-filling is desirable. But as the market development sector
matures and certain organisations (perhaps even current leaders, e.g. DCED, SEEP) take the lead in
co-ordinating learning, this could be the next step towards aligning the different cell-based initiatives which are
currently active.
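A rough illustration of how such pulse data might be aggregated (the participants, initiative names, and five-point scale below are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

# Invented pulse responses: (participant, learning initiative, rating 1-5).
responses = [
    ("p1", "peer learning event", 5),
    ("p2", "peer learning event", 4),
    ("p1", "online community of practice", 3),
    ("p2", "structured training course", 2),
    ("p3", "structured training course", 2),
]

ratings = defaultdict(list)
for _participant, initiative, score in responses:
    ratings[initiative].append(score)

# Rank initiatives by mean rating: high scorers become candidates for further
# funding, low scorers candidates for adaptation or retirement.
for initiative, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{initiative}: mean {mean(scores):.1f} from {len(scores)} ratings")
```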
4.6 Further research
There is significant scope for expanding the world view of the influencers of effective learning detailed in Figure
9: to validate the unexpected negative influence of high-performing IT repositories on learning, to clarify
distinctions between types of organisation, and to move towards a benchmark rating of those organisations
which are perceived as better or worse facilitators of learning. Exploring the specific learning media currently
used by each practitioner would also extend the results reported here.
Further, to reduce the risk of the results being tainted by cultural bias, data capture using the non-directive
techniques espoused by David Snowden’s Cognitive Edge, through an approach called ‘SenseMaker’ (Cognitive
Edge, 2013), is recommended. In practice this would likely involve minimally structured interviews with
practitioners, inviting them to recall one story of a good learning experience and one of a timewasting learning
activity. The content of a series of these stories could then be thematically analysed according to factors
identified by the participants, and rated against those themes by the participants themselves.
5 Bibliography
ActionAid (2011) Real Aid: Ending Aid Dependency. Retrieved from
http://www.actionaid.org.uk/sites/default/files/doc_lib/real_aid_3.pdf (Accessed 09 September 2013)
Agralytica (2013) Evaluating Market Development Programs. Retrieved from http://www.agralytica.com/Agralytica
%20program%20evaluation%20guide.pdf (Accessed 20 September 2013)
Argyris, C & Schoen, DA (1974) Theory in Practice: Increasing professional effectiveness. Jossey-Bass, Oxford.
Banerjee, A; Duflo, E; Glennerster, R; Kinnan, C (2009) The miracle of microfinance? Evidence from a randomized
evaluation. Retrieved from http://economics.mit.edu/files/4162 (Accessed 10 September 2013)
Christian Aid (2009) Getting Back on the Rails: The Private Sector and Development. Retrieved from
http://www.christianaid.org.uk/images/private-sector-report.pdf (Accessed 10 September 2013)
CGDev (2006) When will we ever learn? Improving lives through impact evaluation. Retrieved from
http://international.cgdev.org/sites/default/files/7973_file_WillWeEverLearn.pdf (Accessed 24 September 2013)
Cognitive Edge (2013) Sensemaker. Retrieved from http://www.sensemaker-suite.com/smsite/index.gsp (Accessed
26 September 2013)
Craik, FIM & Lockhart, RS (1972) Levels of processing: A framework for memory research. Journal of Verbal
Learning & Verbal Behavior, 11(6): 671-84
DCED (2010) DCED Standard for Results Measurement. Retrieved from http://www.enterprise-
development.org/page/measuring-and-reporting-results (Accessed 10 September 2013)
DCED (2011a) DCED Case Study Katalyst. Retrieved from http://www.enterprise-development.org/page/download?
id=1696 (Accessed 20 September 2013)
DCED (2011b) DCED Case Study GIZ Thailand. Retrieved from http://www.enterprise-
development.org/page/download?id=1671 (Accessed 20 September 2013)
DCED (2012) RLDC’s role as a Facilitator of Market Development: Learning from experience. Retrieved from
http://www.enterprise-development.org/page/download?id=2212 (Accessed 24 September 2013)
DCED (2013a) The Rationale for PSD. Retrieved from http://www.enterprise-development.org/page/whypsd
(Accessed 09 September 2013)
DCED (2013b) How Private Sector Development leads to Pro-Poor Impacts: A Framework for Evidence. Retrieved
from http://www.enterprise-development.org/page/framework-evidence (Accessed 26 September 2013)
DCED (2013c) Success Stories. Retrieved from http://www.enterprise-development.org/page/stories#PAZim
(Accessed 09 September 2013)
DCED (2013d) Report on the First M4P Learning Event: Bangkok 8-10 May 2013. Retrieved from
http://www.enterprise-development.org/page/download?id=2199 (Accessed 20 September 2013)
DCED (2013e) Evidence Framework: What do we know about the effectiveness of Business Management Training?
Retrieved from http://www.enterprise-development.org/page/download?id=2177 (Accessed 20 September 2013)
DfID (2008) A Synthesis of the Making Markets Work for the Poor (M4P) Approach. Retrieved from
www.deza.admin.ch/ressources/resource_en_172765.pdf (Accessed 10 September 2013)
DfID (2011) What is the evidence of the impact of microfinance on the well-being of poor people? Retrieved from
http://www.givedirectly.org/pdf/DFID_microfinance_evidence_review.pdf (Accessed 10 September 2013)
DfID (2012) Operational Plan 2011-2015: DfID India (June 2012 update). Retrieved from
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67379/india-2011.pdf (Accessed
10 September 2013)
DfID (2013) Aid by Sector: Business Sector Breakdown. Retrieved from http://devtracker.dfid.gov.uk/sector/10/
(Accessed 10 September 2013)
Economic Times (2013) Microcredit recipients decline for the first time: Study. Retrieved from
http://articles.economictimes.indiatimes.com/2013-02-06/news/36949727_1_microfinance-clients-microcredit-
summit-campaign-report-bank-account (Accessed 09 September 2013)
EWB (2013) Admitting Failure. Retrieved from http://www.admittingfailure.com/ (Accessed 09 September 2013)
Goetz, AM (2003) Reinventing accountability – making democracy work for the poor: Community of Practice on
Social Accountability Launch. Retrieved from http://info.worldbank.org/etools/docs/voddocs/511/982/goetz.doc
(Accessed 20 September 2013)
Hansen, MT; Nohria, N; Tierney, T (1999) What’s your strategy for managing knowledge? Retrieved from
http://www.itu.dk/~kristianskriver/b9/Whats%20your%20strategy%20for%20managing%20knowledge.pdf
(Accessed 22 September 2013)
IFC (2013) Financial Markets: Microfinance. Retrieved from http://articles.economictimes.indiatimes.com/2013-02-
06/news/36949727_1_microfinance-clients-microcredit-summit-campaign-report-bank-account (Accessed 09
September 2013)
ILO (undated) Practical guidelines for a more systemic approach to sustainable enterprise development. Retrieved
from
http://www.ilo.org/wcmsp5/groups/public/@ed_emp/@emp_ent/documents/instructionalmaterial/wcms_143123.
pdf (Accessed 07 September 2013)
IMF (2007) World Economic Outlook. Chapter 4: Globalization and Inequality. Retrieved from
http://www.imf.org/external/pubs/ft/weo/2007/02/ (Accessed 25 June 2013)
Intrac (2013) The use of consultants in development. Retrieved from http://www.intrac.org/blog.php/34/the-use-
of-consultants-in-development (Accessed 09 September 2013)
Kolb, D. (1984) Experiential Learning: Experience as the Source of Learning and Development.
Englewood Cliffs: Prentice-Hall.
KPMG (2012) Financial Deepening and M4P: Lessons from Kenya and Rwanda. Retrieved from
http://www.kpmg.com/eastafrica/en/services/Advisory/Development-Advisory-
Services/Thought_Leadership_at_DAS/Documents/Financial%20Deepening%20and%20M4P
%20%E2%80%93%20Lessons%20from%20Kenya%20and%20Rwanda.pdf (Accessed 10 September 2013)
Kruger, J & Dunning, D (1999) Unskilled and unaware of it: How difficulties in recognizing one's own
incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6): 1121-34
LinkedIn (2013) The Market Facilitation Initiative (MaFI). Retrieved from http://www.linkedin.com/groups/MaFI-
Market-Facilitation-Initiative-2441757 (Accessed 23 September 2013)
Mensah (2001) The Rise and Rise of NGOs: Implications for Research. Retrieved from
http://www.svt.ntnu.no/iss/issa/0101/010109.shtml (Accessed 09 September 2013)
Milanovic (2012) Global Income Inequality by the Numbers: in History and Now. Retrieved from http://www-
wds.worldbank.org/servlet/WDSContentServer/WDSP/IB/2012/11/06/000158349_20121106085546/Rendered/PDF
/wps6259.pdf (Accessed 27 June 2013)
M4PHub (2013) The M4P approach has limited utility in post-conflict environments. Retrieved from
http://www.m4phub.org/debates/ (Accessed 10 September 2013)
ODI (2010) Strengthening Learning from Research and Evaluation: Going with the Grain. Retrieved from
http://www.odi.org.uk/publications/5154-learning-research-evaluation-dfid (Accessed 22 September 2013)
ODI (2011) Learning how to Learn: eight lessons for impact evaluations that make a difference. Retrieved from
http://www.odi.org.uk/publications/5716-impact-evaluation-assessment-lesson-learning (Accessed 22 September
2013)
ODI (2013) Localising aid to NGOs: issues and challenges highlighted by a multi-country study. Retrieved from
http://www.odi.org.uk/opinion/7578-local-capacity-localising-aid-ngos-donors-hiv-aids (Accessed 19 July 2013)
Practical Action (2008) Promising Practices in Participatory Market System Development: Transforming Livestock
Markets in Northern Zimbabwe Retrieved from
http://practicalaction.org/docs/ia2/promising_practices_pmsd_livestock_zim.pdf (Accessed 09 September 2013)
Roche (1995) Institutional learning in Oxfam: some thoughts. Oxford, Oxfam (Internal Discussion Paper), p.55. In
Open University (2012) Capacities for Managing Development: Part 3. Open University: Wakefield.
Roodman (2011) Due Diligence: An Impertinent Enquiry into Microfinance. Centre for Global Development:
Washington
SDC/DfID (2008) A Synthesis of the Making Markets work for the Poor (M4P) approach. Retrieved from
http://www.value-chains.org/dyn/bds/docs/681/Synthesis_2008.pdf (Accessed 26 September 2013)
Skule (2004) Learning conditions at work: a framework to understand and assess informal learning in the workplace.
Retrieved from
http://www.researchgate.net/publication/228253834_Learning_Conditions_at_Work_A_Framework_to_Understan
d_and_Assess_Informal_Learning_in_the_Workplace/file/32bfe511e48eebedcb.pdf (Accessed 23 September 2013)
Slate (2009) Jurassic Web: The Internet of 1996 is almost unrecognisable compared with what we have today.
Retrieved from http://www.slate.com/articles/technology/technology/2009/02/jurassic_web.html (Accessed 23
September 2013)
The Guardian (2013) Top tips to crack market-based development. Retrieved from
http://www.theguardian.com/global-development-professionals-network/2013/aug/26/market-development-best-
bits (Accessed 23 September 2013)
USAID (2011) Agribusiness and Agriculture Value Chain Assessment: Final Report. Retrieved from
http://pdf.usaid.gov/pdf_docs/PDACR715.pdf (Accessed 10 September 2013)
Versta (2013) How to estimate the length of a survey. Retrieved from
http://www.verstaresearch.com/newsletters/how-to-estimate-the-length-of-a-survey.html#how-to-estimate-the-
length-of-a-survey (Accessed 23 September 2013)
Walker (2012) Why is it so tempting for livelihood projects to ignore poor people? Retrieved from
http://blogs.ucl.ac.uk/dpublog/tag/m4p-framework/ (Accessed 09 September 2013)
Wallace, T (1997) New Development Agendas: Changes in UK NGO Policies and Procedures. Review of African
Political Economy, no. 71.
Washington Post (2012) Microcredit doesn’t end poverty, despite all the hype. Retrieved from
http://www.washingtonpost.com/opinions/microcredit-doesnt-end-poverty-despite-all-the-
hype/2012/01/20/gIQAtrfqzR_story.html (Accessed 09 September 2013)
World Bank (2000) Voices of the Poor. Retrieved from
http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTPOVERTY/0,,contentMDK:20622514~menuPK:336998~pa
gePK:148956~piPK:216618~theSitePK:336992,00.html (Accessed 09 September 2013)
Appendix 1 - Questionnaire
Questionnaire | Use of Market Development Project Reviews
Thank you for agreeing to take part in this 15-minute survey.
The survey is targeted at individuals with experience of systemic, facilitative market- and value chain development projects, and is
focused on your thoughts about reviews/evaluations and learning on those projects.
The questionnaire is in 3 sections:
Section 1: demographic information - 6 questions
Section 2: facilitators of, and barriers to, learning from reviews/evaluations - 4 questions
Section 3: learning in your working environment - 7 questions
The results will be used for an MSc research dissertation, and will be published in aggregated and anonymised form online.
* Required
MSc Researcher: Doug Pearman
Section 1: Demographics
What is your gender? *
Male
Female
Other:
What is your age range? *
Which of the following best describes your current employment level? *
Student
Entry Level
Researcher/lecturer
Consultant
Manager
Senior Manager / Director
Owner
Other:
For how many years have you had exposure to reviews/evaluations on market development projects? *
0-2 years
3-4 years
5-9 years
10-14 years
15-19 years
20+ years
What is the name of your current or most recent main employer/client (in relation to market development
projects)? *
Which of the following best describes the role of your current or most recent employer/client (in relation to market
development projects)? *
This will be the working environment you are questioned about later in this survey
Governmental body (e.g. DfID, USAID)
International organisation (e.g. World Bank, UN)
Academic institution / thinktank (e.g. ODI, CGD)
For-profit implementer (e.g. ASI, GRM international)
Non-profit implementer (e.g. Save the Children)
Other:
Section 2: Facilitators of, and Barriers to, Learning
2.1 Which are the most effective mechanisms for learning about how to better deliver market development projects?
[Rate each item: (5) Most effective … (1) Least effective, or “No exposure to this”]
- First-hand experience working on projects (NB: excluding the process of conducting formal reviews)
- First-hand experience working with capable colleagues on projects (NB: excluding the process of conducting formal reviews)
- Face-to-face discussions with others about projects (e.g. at your workplace, at conferences)
- Remote discussions with others about projects (e.g. email, online communities of practice)
- Conducting your own project reviews (including formal evaluations)
- Formal academic courses (e.g. diploma / degree)
- Structured training courses (i.e. non-academic)
- Reading project reviews (including formal evaluations) conducted by others on your project
- Reading other projects’ reviews
2.2 From a personal point of view, the process of conducting an evaluation is more important for learning how to
deliver better programmes than considering the findings of completed evaluations
(5) Strongly Agree
(4) Agree
(3) Neither Agree nor Disagree
(2) Disagree
(1) Strongly Disagree
2.3 The greatest barrier to learning how to better deliver market development programmes is that
the content of project reviews is unsuitable (i.e. project reviews do not contain the right information)
practitioners have insufficient time to properly draw lessons from the available evidence
the processes by which evaluations are disseminated are insufficient
a lack of credible and trusting relationships between the users and the writers of the evaluation reports
Other:
2.4 How would you characterise the following potential areas of problems when evaluations are carried out on market development projects?
[Rate each item: Major problem / Minor problem / No problem / Don't know]
- Non-use of evaluation results
- Intentional misuse of evaluation results
- Unintentional misuse of evaluation results
Section 3: Learning in your working environment
3.1 How effective is your personal working environment in helping you learn how to better deliver market
development programmes?
(5) Very effective
(4) Effective
(3) Neither effective nor ineffective
(2) Ineffective
(1) Very ineffective
3.2 How important are project reviews in helping you to learn about how to better deliver market development
programmes?
(5) Extremely important
(4) Very important
(3) Moderately important
(2) Somewhat important
(1) Not at all important
3.3 Project reviews (including formal evaluations) achieve the following in my working environment:
[Rate each item: (5) Strongly agree … (1) Strongly disagree, or “No exposure to this”]
- satisfaction of donors’ administrative requirements for measuring activity and impact
- individual learning by policy makers about which project approaches are more and which are less effective
- individual learning by project beneficiaries which is used to improve the project under evaluation
- individual learning by staff delivering the project which is used to improve the project under evaluation
- individual learning by staff delivering the project which is used to improve other projects
- individual learning by staff working on other projects which is used to improve other projects
- learning by donors and donor organisations to make funding decisions / alter grant requirements
- learning by project delivery organisations to alter their expectations when delivering projects
3.4 Do you have the time and the ability to learn how to better deliver market development projects through:
[Rate each item: (5) Strongly agree … (1) Strongly disagree, or “No exposure to this”]
- first-hand experience working on projects (NB: excluding the process of conducting formal reviews)?
- first-hand experience working with capable colleagues on projects (NB: excluding the process of conducting formal reviews)?
- face-to-face discussions with others about projects (e.g. at your workplace, at conferences)?
- remote discussions with others about projects (e.g. email, online communities of practice)?
- conducting your own project reviews (including formal evaluations)?
- formal academic courses (e.g. diploma / degree)?
- structured training courses (i.e. non-academic)?
- reading project reviews (including formal evaluations) conducted by others on your project?
- reading other projects’ reviews?
3.5 Have you made changes to your work which have made a measurable positive effect as a result of learning through:
[Rate each item: (5) Strongly agree … (1) Strongly disagree, or “No exposure to this”]
- first-hand experience working on projects (NB: excluding the process of conducting formal reviews)?
- first-hand experience working with capable colleagues on projects (NB: excluding the process of conducting formal reviews)?
- face-to-face discussions with others about projects (e.g. at your workplace, at conferences)?
- remote discussions with others about projects (e.g. email, online communities of practice)?
- conducting your own project reviews (including formal evaluations)?
- formal academic courses (e.g. diploma / degree)?
- structured training courses (i.e. non-academic)?
- reading project reviews (including formal evaluations) conducted by others on your project?
- reading other projects’ reviews?
3.6 If you search for market development project evaluations using an IT system, what functionality does that system have?
[Answer each item: Yes / No / Not applicable]
- Easily searchable on keywords
- Searchable metadata about the evaluation author and/or project team members
- Searchable metadata about the type of project
- Searchable metadata about project locations
- Searchable metadata about project dates
- Fast and responsive IT platform and network connection
- Evaluation data quality is well managed
3.7 In which of the following ways is learning incentivised in your working environment?
[Answer each item: Yes / No / Not applicable]
- Feature of job description
- Assessed at periodic performance management reviews
- A component of promotion criteria
- Time allocated for learning activities
- Personal recognition (e.g. praise from management)
- Internal recognition (e.g. quarterly newsletter exposure)
- Tangible reward (e.g. invitation to a conference)
Thank you for your participation
-40-

More Related Content

More from MaFI (The Market Facilitation Initiative)

Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)
Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)
Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)
MaFI (The Market Facilitation Initiative)
 
Mafi Work Plan 2013, short version (March 2013)
Mafi Work Plan 2013, short version (March 2013)Mafi Work Plan 2013, short version (March 2013)
Mafi Work Plan 2013, short version (March 2013)
MaFI (The Market Facilitation Initiative)
 

More from MaFI (The Market Facilitation Initiative) (20)

Activate: A tool to help project teams understand and influence behaviour
Activate: A tool to help project teams understand and influence behaviourActivate: A tool to help project teams understand and influence behaviour
Activate: A tool to help project teams understand and influence behaviour
 
Ideas for the new phase of MaFI
Ideas for the new phase of MaFIIdeas for the new phase of MaFI
Ideas for the new phase of MaFI
 
Poll about the future of MaFI
Poll about the future of MaFIPoll about the future of MaFI
Poll about the future of MaFI
 
MaFI Meeting 2016 (slides)
MaFI Meeting 2016 (slides)MaFI Meeting 2016 (slides)
MaFI Meeting 2016 (slides)
 
MaFI Vision and Strategic Principles, Updated Sep16
MaFI Vision and Strategic Principles, Updated Sep16MaFI Vision and Strategic Principles, Updated Sep16
MaFI Vision and Strategic Principles, Updated Sep16
 
Market Facilitation Clinics - SEEP Conference 2016
Market Facilitation Clinics - SEEP Conference 2016Market Facilitation Clinics - SEEP Conference 2016
Market Facilitation Clinics - SEEP Conference 2016
 
MaFI Meeting at SEEP Annual Conference 2015 - Report
MaFI Meeting at SEEP Annual Conference 2015 - ReportMaFI Meeting at SEEP Annual Conference 2015 - Report
MaFI Meeting at SEEP Annual Conference 2015 - Report
 
MaFI Session during the SEEP AC 2014 - slides/report
MaFI Session during the SEEP AC 2014 - slides/reportMaFI Session during the SEEP AC 2014 - slides/report
MaFI Session during the SEEP AC 2014 - slides/report
 
Are evaluations contributing to learning by market development practitioners?...
Are evaluations contributing to learning by market development practitioners?...Are evaluations contributing to learning by market development practitioners?...
Are evaluations contributing to learning by market development practitioners?...
 
Adapting Lean Thinking to Market Systems Development
Adapting Lean Thinking to Market Systems DevelopmentAdapting Lean Thinking to Market Systems Development
Adapting Lean Thinking to Market Systems Development
 
Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)
Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)
Introduce Yourself to MaFI Members - Instructions (for members on LinkedIn only)
 
CSOs and VCscs, Blum-Samuelsen 29Oct13
CSOs and VCscs, Blum-Samuelsen 29Oct13CSOs and VCscs, Blum-Samuelsen 29Oct13
CSOs and VCscs, Blum-Samuelsen 29Oct13
 
Market Learning Event Report Funded by EU and Hosted by Oxfam, UK, Mar 2013
Market Learning Event Report Funded by EU and Hosted by Oxfam, UK, Mar 2013Market Learning Event Report Funded by EU and Hosted by Oxfam, UK, Mar 2013
Market Learning Event Report Funded by EU and Hosted by Oxfam, UK, Mar 2013
 
Concept Note for Local Learning Groups - Second Pilot
Concept Note for Local Learning Groups - Second PilotConcept Note for Local Learning Groups - Second Pilot
Concept Note for Local Learning Groups - Second Pilot
 
Mafi Work Plan 2013, short version (March 2013)
Mafi Work Plan 2013, short version (March 2013)Mafi Work Plan 2013, short version (March 2013)
Mafi Work Plan 2013, short version (March 2013)
 
Who is listening to who, how well and with what effect? By Daniel Ticehurst
Who is listening to who, how well and with what effect? By Daniel TicehurstWho is listening to who, how well and with what effect? By Daniel Ticehurst
Who is listening to who, how well and with what effect? By Daniel Ticehurst
 
Systemic M&E Synthesis, Feb2013
Systemic M&E Synthesis, Feb2013Systemic M&E Synthesis, Feb2013
Systemic M&E Synthesis, Feb2013
 
Inquiry into aid effectiveness. Evidence submission from Rosalind Eyben. July09
Inquiry into aid effectiveness. Evidence submission from Rosalind Eyben. July09Inquiry into aid effectiveness. Evidence submission from Rosalind Eyben. July09
Inquiry into aid effectiveness. Evidence submission from Rosalind Eyben. July09
 
FAN approach, Wielinga, Apr2011
FAN approach, Wielinga, Apr2011FAN approach, Wielinga, Apr2011
FAN approach, Wielinga, Apr2011
 
Systemic M&E discussion paper, version 2 - 9 Oct 2012
Systemic M&E discussion paper, version 2 - 9 Oct 2012Systemic M&E discussion paper, version 2 - 9 Oct 2012
Systemic M&E discussion paper, version 2 - 9 Oct 2012
 

Recently uploaded

An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
SanaAli374401
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
MateoGardella
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
PECB
 

Recently uploaded (20)

microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 

Are evaluations contributing to learning dj pearman oct-13

  • 1. Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874 MSc (Development Management) TU874 Project How successfully are evaluations contributing to learning by market development practitioners? Douglas Pearman October 2013 -1-
  • 2. Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874 Executive summary Pro-poor market development initiatives are seeing growing overseas development investment amid strong demand from donors for rigorous evaluation to ensure value for money, and from practitioners for clear learning about effective project delivery. Presently, there are a lack of reviews to determine whether the monitoring and evaluation expenditure on-project and learning initiatives off-project are being used effectively. This study reviews 25 practitioners’ (19 male, 6 female) views on the effectiveness of the learning process using a novel LinkedIn analysis and direct questionnaire approach. Participants worked across 22 organisations in the non-profit, for-profit sectors, government development agencies and international organisations. Four key findings were identified – (1) Market development practitioners consider their learning environments to be above-average; (2) Both non-use of evaluations and incorrect content of evaluations are seen as significant issues; (3) Practitioners benefit most from interactive learning and (4) Learning incentives form a better predictor of learning success than do rapid access to evaluations through a high-powered IT system. Recommendations are provided including a pledge for meta-evaluation to consistently measure the learning achievements which practitioners find most influential; for umbrella organisations to provide funding for wide-ranging and longitudinal impact assessments of project impacts which are unlikely to fall within standard budgets; and for interactive learning opportunities to be promoted wherever possible on projects as these were rated the most effective for learning by the panel of practitioners. -2-
  • 3. Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874 Contents Executive summary...................................................................................................................................................2 Contents....................................................................................................................................................................3 (i)Acknowledgements................................................................................................................................................4 1Introduction and background..................................................................................................................................5 1.1Importance of summative evaluations in development funding of commercial sector projects...............................5 1.2Debates in evaluation for market development.........................................................................................................6 1.3Market development initiatives: the context for the study.......................................................................................6 2Nature of the problem............................................................................................................................................8 2.1Importance of learning for market development practitioners.................................................................................8 2.2Status of learning on market development projects..................................................................................................9 2.3Use of evaluations for learning in market development projects..............................................................................9 2.4Causes of failure to learn..........................................................................................................................................10 2.5Fostering a culture that encourages learning...........................................................................................................11 2.6Questionnaire development....................................................................................................................................12 2.7Data collection.........................................................................................................................................................13 2.8Participants..............................................................................................................................................................13 2.9Limitations of study design.......................................................................................................................................14 3Analysis and findings ............................................................................................................................................16 3.1Data screening / cleaning.........................................................................................................................................16 3.2Hypothesis 1 – quality of learning environment......................................................................................................16 3.3Hypothesis 2 – learning in relation to evaluation use and quality ...........................................................................17 3.4Hypothesis 3 – learning by conducting evaluations vs learning by reading 
.............................................................18 3.5Hypothesis 4 – learning incentives vs learning through access to evaluation reports..............................................19 3.6Additional findings...................................................................................................................................................20 4Conclusions and implications................................................................................................................................23 4.1Overcoming the critique of evaluation reports........................................................................................................23 4.2Targeted investment by delivery organisations into practitioner learning...............................................................23 4.3Targeted investment by umbrella groups into practitioner learning.......................................................................24 4.4Learning by beneficiary groups and local stakeholders............................................................................................25 4.5Meta-evaluation.......................................................................................................................................................26 4.6Further research.......................................................................................................................................................26 5Bibliography..........................................................................................................................................................28 Appendix 1 - Questionnaire.....................................................................................................................................33 Questionnaire | Use of Market Development Project Reviews...............................................................................33 MSc Researcher: Doug Pearman...................................................................................................................................33 Section 1: Demographics...............................................................................................................................................33 Section 2 - Facilitators of- and Barriers to Learning.......................................................................................................34 Section 3: Learning in your working environment.........................................................................................................36 Thank you for your participation...................................................................................................................................40 -3-
  • 4. Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874 (i) Acknowledgements I would like to thank all those who have participated in and supported this study. I am grateful to Alexis Morcrette and Luis Osorio-Cortes for their guidance during the data collection process and granting of access to the Market Facilitation Initiative (MaFI) forum. My tutor, Angela Everitt has provided helpful critical commentary; Jonathon Bird gave insights into the topic area; Craig Warland offered a helpful moderating influence. -4-
  • 5. Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874 1 Introduction and background 1.1 Importance of summative evaluations in development funding of commercial sector projects It is estimated that 33% of the growing $120bn annual overseas development assistance (ODA) is spent through technical contractors (ActionAid, 2011), ~20% is spent through NGOs (ODI, 2013) while the remainder is channelled through direct bilateral budget support, administration, debt relief, and others (ActionAid, 2011). Given rapid growth in the number of NGOs and contractors since the 1990s vying for this money (e.g. Mensah, 2001; Intrac, 2013), competition has increased and consequently donor demands for accountability and evidence of effective delivery have grown. Market development consulting activities are carried out by actors both in the NGO and private sectors. Most projects budget for post-project evaluations, often conducted by external parties, with methods frequently selected to specifically meet the donors’ administrative and learning needs (Wallace, 1997). While these may perform a useful exercise in reporting outcomes for donors, predetermined logframe matrices result in reduced flexibility in program design; and evaluations are often polished to provide a favourable perspective on projects instead of being written to focus on identification of failures and on drawing lessons. In light of this, initiatives have arisen to realign evaluations to focus on such failures, e.g. the Engineers without Borders ‘Admitting Failure’ initiative (EWB, 2013). This realignment is one stage in ongoing improvement of evaluations – where the evaluation judgment is refocused onto the process of evaluation itself. In judging whether projects have achieved their objectives and suggesting improvements, evaluators’ outputs are themselves being increasingly scrutinised. This paper is particularly interested whether one objective of the evaluation process is being successfully achieved: are lessons in market development being identified, successfully learned, and then applied? The influence of project evaluations can be seen at the donor level. Microcredit was the “darling of international donor agencies for over two decades”, and due to early positive evaluations received increasing international investment year on year. However, microcredit now faces accusations of “causing havoc with borrowers’ lives” (SSIR, 2012). Organisations providing small loans to the poor, many of which had been set up with ODA funding, reached 205 million clients globally in 2011 (IFC, 2013), and had annual newly invested philanthropic capital of $3.6 billion in 2010 (Washington Post, 2012). However, funding and numbers of microcredit borrowers declined by 5% between 2010 and 2011 (Economic Times, 2013). In light of evaluations detailing no aggregate impact on health, education, women’s empowerment or income (e.g. Banerjee et al., 2009), and subsequent withering meta-analyses which indicate that microcredit initiatives do not on average reduce poverty (e.g. Roodman, 2011; DfID, 2011; Washington Post, 2012), ODA funders are coming under increasing scrutiny from their citizens about the merits of further funding such programmes. DfID has committed funding for microcredit programmes (e.g. India until 2015 – see DfID, 2012), but no public pronouncements relating to microcredit have been made by DfID since 2011, and DfID’s continued publication of papers questioning the effectiveness of -5-
  • 6. Douglas J Pearman – B5394824 – TMA30 (EMA) Oct 2013 – TU874 microcredit suggest a refocus of programme spending is likely. This opens a gap for more effective business- focused initiatives to capitalise on this budget. 1.2 Debates in evaluation for market development ‘Summative’ evaluations can therefore play an important (if delayed) role in determining long-term investment initiatives. Aggregated programme impact reviews provide a measure of accountability to donors that their money is having the desired effect, and the effectiveness of programme subcomponents (e.g. the number of farmers with veterinary inoculations for their cattle) can provide some insight into the function of certain elements of the programmes. This ‘formative’ element may be applied to making adjustments to programmes that are underway; it may also add to the system of knowledge in market development programming so that future programmes can benefit. However this paper will argue that formal reports to aid formative learning have received significantly less focus – and that in volatile project environments where often practitioners are casting around for best practice, this lack of structured resources containing easy-access project guidance is an area of concern. M4P is still a relatively young discipline (with little more than 10 years for DfID-sponsored projects) and while pro-poor market development approaches are still under development, so are important debates about their delivery. These include whether the evaluation mechanism for projects should be predetermined (e.g. DCED, 2010) or emergent (e.g. Osorio-Cortes & Jenal, 2013); whether or not there is a place for pro-poor market development in fragile post-conflict environments with their associated weak governance (e.g. M4P Hub, 2013); whether given their vulnerability to change, the poor will suffer or benefit from the negative effects of market change and its associated replacement of informal mechanisms with commercial services (e.g. DfID, 2008). These debates are set in a global context of thousands of ongoing systemic market development projects, with most of the delivery teams operating at locations physically remote from their headquarters, and guided by bodies of expertise which themselves are still very much emerging (e.g. DCED, M4PHub, USAID Microlinks, Springfield Centre). As in many complex and growing disciplines, the field is therefore in need of supportive tools to enable the learning. In addition, this domain is in need of rigorous research to understand the learning processes and to assess those tools. 1.3 Market development initiatives: the context for the study Poor people are reliant on markets to provide opportunities to sell their labour and products, and to use the revenue to buy essential goods and services. A recent UN survey of 60,000 poor people revealed that most see their path out of poverty “through jobs or economic opportunities” (World Bank, 2000). Despite this motivation, for many the opportunities do not present themselves: “many work extremely hard but at levels of low productivity, receiving low financial recompense, and thus remaining in relative poverty” (Allen & Thomas, 2000, -6-
Private sector development initiatives aim to promote economic growth in poor countries, and a subset of these, 'Making Markets Work for the Poor' (M4P) projects, aim to propel this growth while bucking the trend of increasing intra-country inequality (Milanovic, 2012; IMF, 2007). They also facilitate opportunities for poor people to develop a living "within their own economy, without the risk of depending on outsiders for continuous assistance" (DCED, 2013a).

Figure 1 Private Sector Development Framework (DCED, 2013b)

M4P and other poverty-focused market development programmes focus on 'strengthening market systems so that they function more sustainably and beneficially for poor people' (ILO, undated). These interventions aim to encourage business activity that is profitable for individual firms as well as inclusive of the poor. Value-chain development work commences with an analysis of the market systems and identification of the underlying causes of market weakness (e.g. a need for technological innovation on the supply side, or for increased awareness on the demand side). The work then delivers interventions which aim to address this underperformance (e.g. facilitating networks between producers and purchasers to identify a suitable marketplace for transactions; product quality training). In particular, interventions aim for lasting sustainability beyond the departure of external project teams, and aim to facilitate existing structures rather than impose new institutions.

DfID will spend £42m ($66m) on such market development projects in FY2013 (DfID, 2013), and annual expenditure by the United States on such projects averaged approximately $400m per year between 1998 and 2010 (USAID, 2011), together forming approximately 1% of the $44bn of ODA from these two largest donor countries. Private sector development spend has been a significant part of ODA since the 1990s (Christian Aid, 2009), with explicit use of the M4P approach by DfID since 2002 (DfID, 2008; KPMG, 2012).
2 Nature of the problem

2.1 Importance of learning for market development practitioners

This research focuses on how effectively learning takes place on market development projects, and how effectively this learning is shared with and applied by practitioners through evaluations and other means. The paper then reviews current and potential methods for better dissemination of results, and makes appropriate recommendations.

Market development projects are usually heavily participatory and exploratory. Project initiation phases typically include identifying key stakeholders and working with them to identify gaps in value chains during the course of a project, thus learning and developing the approach as the project continues. There is a strong theoretical basis to intervention (e.g. the M4P framework), but given that projects involve multiple inter-related interventions, there is concern among practitioners about a lack of knowledge of which sub-interventions have been effective and which have not. A peer learning event report about M4P revealed: "The conceptual framework behind M4P had been well established; practical experience in its implementation is, however, less widely available…Several [participants] felt that the lack of a broader knowledge and understanding about successes and failures [in traditional market development approaches] seemed stifling" (DCED, 2013d). Although roles may differ on market development projects (from simpler price-data collection to more complex negotiation management), in general the environment can be considered learning-intensive (Skule, 2004).

For summative evaluation of all interventions together, the Donor Committee for Enterprise Development (DCED) has implemented a standardised results measurement approach. This fosters consistency, identifies a stepwise results chain through which changes occur, and reports both on project-intended changes and on externalities. These studies have reported various summative outcomes, for example an increase in farm yield or price per ton (DCED, 2011a; DCED, 2011b), or conversely have questioned any direct benefit from generalised business training programmes (e.g. DCED, 2013e). When shared effectively, this results focus enables 'single-loop' learning, concerned with making minor adjustments or refinements to entire projects to better achieve specific outcomes.

'Double-loop' learning is also required and is still more formative. Argyris & Schoen (1974) describe double-loop learning as a reflective process which enables an improved understanding of the goals and values of any intervention, and hence potentially the wholesale cancellation of some interventions in preference for others. Argyris goes on to describe triple-loop learning: learning about how learning is taking place by individual practitioners, the organisations they work for, and the beneficiaries (or indeed maleficiaries) of their interventions. These double and triple loops of learning occur most effectively in an openly sharing environment.
2.2 Status of learning on market development projects

Lessons learned from market development projects have the potential to be shared beyond the immediate implementers and donor organisations. Existing ways of sharing the information include online discussion groups (e.g. LinkedIn's MaFI and M4PHub groups); evaluation report and discussion paper repositories (e.g. DCED, M4P Hub, the USAID Microlinks Library and bookmarks through MaFI-licious); training courses (e.g. the ILO market development course, the SEEP network); published thought leadership on consultancy websites (e.g. The Springfield Centre); conference organising (e.g. USAID Microlinks); and academic journals (e.g. Enterprise Development & Microfinance). However, the existence of these initiatives does not mean that they are necessarily used by practitioners.

A measure of the 'triple-loop' learning status provides a baseline revealing how concerned practitioners are about the state of their learning environment, and therefore to what extent there is an appetite for change. While there is evidence of a variety of potential learning opportunities available to market development practitioners, it also seems likely that not all practitioners have access to these learning tools. A starting position would be to anticipate that, on average, market development practitioners consider their learning environments to be moderately good (with some above and some below average), i.e. a rating of 3 on a scale of 1 to 5. This forms Hypothesis 1 for the study.

2.3 Use of evaluations for learning in market development projects

Project evaluations are clearly not the only component of an effective learning strategy, but their individually identified lessons are important for feeding a measured debate. In addition, approximately 10% to 15% of on-project expenditure is estimated to go on monitoring and evaluation (M&E) (e.g. DCED, 2011a; DCED, 2011b), indicating that roughly $50m is spent annually on M&E of these market development projects. If, as expected, the proportion of ODA invested in market development increases, so the potential wealth of knowledge these evaluations hold will also increase. This raises an important question: how effectively are the existing evaluation reports contributing to learning in market development?

Criticism from practitioners reveals that evaluation has been 'professionalised', with many reports produced by external specialists in evaluation rather than by those intimately involved in delivering the projects. While this contributes to objectivity and frankness, and capitalises on experience to spot similar problems across different projects (Agralytica, 2013), professional evaluators' focus on predefined criteria may show emergent market development projects in a bad light, as project objectives often diverge from their initial intentions (Osorio-Cortes & Jenal, 2013). In addition, even where internal staff conduct the evaluations, there is often sufficient turnover that those staff are no longer around by the time it comes to implement the changes (ODI, 2010). Furthermore, evaluation reports are usually written primarily for donor accountability purposes, and hence may simply extract evidence from the beneficiaries rather than involving them in the learning process or writing reports to suit their needs as consumers (see Goetz, 2003 for a discussion).
This has a further knock-on effect for learning: if non-donor customers of evaluation reports are not identified when the report is written, and their use of the report is not followed up, it becomes significantly less likely that the results will be used (Preskill & Caracelli, 1997), as the reports may not be posted online, stored in the correct repositories, advertised, or even written with those alternative audiences' needs in mind.

Given these and other criticisms of project evaluations, there is much discussion of how to improve the content of evaluations in market development projects (e.g. Osorio-Cortes & Jenal, 2013), but little discussion of how to improve their circulation and availability. This implies either that a stronger failure has been noted in the content of the evaluations, or, more likely given the recent growth of the field, that circulation has simply not caught the attention of practitioners. This paper will therefore review whether practitioners identify the barriers to learning as being more to do with existing evaluation reports not being used, or with the quality of the existing evaluations being insufficient (or a combination of both). Hypothesis 2 of the study is therefore that practitioners consider evaluations to be underused (e.g. unread) more than they consider the content of evaluations to be unsuitable (e.g. not measuring the right information).

2.4 Causes of failure to learn

According to Roche (1995), "All learning proceeds from difference; difference between planned and actual findings, difference between people's points of view, etc", and learning is considered to be a cycle of action, reflection, replanning and new action (Kolb, 1984). This can be partly achieved by setting targets, measuring against those targets and then identifying any discrepancy (single-loop learning). Double-loop learning is likely to be facilitated by identifying a difference between credible points of view on ways of managing the projects, and concluding from the comparison that things should change. However, a study in the private sector (Dell & Grayson, 1998) indicated that the major causes of a failure to learn included ignorance that other skilled practitioners held the required knowledge, and a lack of credible connections to those individuals; i.e. even if information about more effective management techniques were published, it was not always clear that the other knowledge holders would be trusted. The authors made another interesting finding which may help set expectations for how effective learning can be within the multiple diffuse organisations operating in the market development space: even in the best single companies, on a single site, a lesson learned in one part of the organisation took 27 months to filter through to another part. With more diffuse operating groups, working more remotely, this could be expected to take significantly longer.

Additional major causes of a failure to learn can be understood in terms of depth-of-processing theory (e.g. Craik & Lockhart, 1972). A more fragile understanding is gained simply from reading about or uncovering the 'face characteristics' of a new way of working, for example by accessing an evaluation report on an online database. A deeper understanding, and one more likely to influence practice, comes from personal involvement in the learning process (e.g. conducting an evaluation yourself) and from an ability to learn through discussion so that the new information embeds within existing knowledge (e.g. meeting or communicating in real time with the practitioners who hold the alternative knowledge).
Hypothesis 3 of this study is therefore that practitioners are likely to gain more from the personal process of conducting evaluations than from consuming formal reports of the findings of those evaluations.

2.5 Fostering a culture that encourages learning

Organisational culture can be difficult to manufacture and to change. ODI (2010) reported, in the case of DfID, that one important predictor of whether learning would take place within an organisation was the direct incentives given to individual workers for using evaluation reports and their 'lessons learned', as opposed to those workers being incentivised to focus on original thought, or on using ideas that were in vogue elsewhere. It is unclear whether such learning incentives currently exist within the many systemic market development organisations, but we might anticipate that, to maintain agility, and given significant existing criticism from delivery organisations that too much time is spent on bureaucratic measurement of effectiveness, those organisations which do incorporate lesson-learning review it informally where possible. ODI (2011) report that a series of organisational initiatives have consistently encouraged learning, including intra-organisational secondments, secondments to external research institutes, and opportunities to better understand colleagues' realities through cross-departmental working (e.g. evaluators working on delivery programmes, and programme staff conducting evaluations of other projects). In a similar fashion to market development programmes operating a number of small interventions to make a macro-level change, organisational culture is also the product of a number of smaller cultural change initiatives.

Data-sharing on a shared drive could be considered a practical method for encouraging learning, allowing different practitioners within an organisation to pool data. With suitable investment, organisations can take advantage of a full knowledge management system, in which documentation can carry associated metadata, can be indexed and searched, has gatekeepers who ensure the quality of the content, and follows a systematic format so that like can be compared with like (see Hansen et al., 1999 for a discussion). There is little evidence available about the prevalence of in-house knowledge management systems among systemic market development practitioners.
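The features just listed can be made concrete with a minimal sketch of such a repository. This is not based on any particular organisation's system: the record fields, the example entries and the `search` helper are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationReport:
    """One entry in a hypothetical knowledge management system."""
    title: str
    project: str
    year: int
    keywords: list = field(default_factory=list)  # metadata for indexing
    approved: bool = False  # set True by a gatekeeper after a quality review

def search(repository, keyword):
    """Return only gatekeeper-approved reports tagged with the keyword."""
    return [r for r in repository if r.approved and keyword in r.keywords]

# Invented entries, standing in for an organisation's evaluation archive
repo = [
    EvaluationReport("Dairy value-chain review", "Project A", 2012,
                     keywords=["dairy", "value chains"], approved=True),
    EvaluationReport("Draft baseline survey", "Project B", 2013,
                     keywords=["baseline"]),  # not yet through the gatekeeper
]
print([r.title for r in search(repo, "dairy")])  # ['Dairy value-chain review']
```

The gatekeeper flag is the design point: indexing and search alone only make documents findable, whereas the approval step is what lets users trust what they find.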
Reflecting again on depth-of-processing theory (e.g. Craik & Lockhart, 1972), we might anticipate that the simple presence of the data would be insufficient for learning to occur; what matters is whether it is used. In contrast, we would expect to see an effect on learning if incentives to engage were present in tandem with a good repository of learning documentation. These considerations lead to Hypothesis 4: that, in combination, good access to written evaluations (through a knowledge management system) and incentives to learn will be a significant predictor of a good learning environment, but that availability of a knowledge management system by itself will have no effect.

To test these hypotheses, initial engagement was made with a suitable partner organisation which had both access to practitioners in the field and helpful advice to smooth the research method. Practical Action (www.practicalaction.org), a market development charity with both a publishing arm and a consulting arm, was a suitable candidate, particularly given its ongoing facilitation of the MaFI online market development learning forum and related evidence of being a highly developed learning organisation.

A web forum review was identified as the best approach for accessing the latest thinking on the topics of 'learning' and 'evaluation' in market development. The MaFI web forum (LinkedIn, 2013) has been active since October 2009 and is accessible only with the permission of the group moderator, to maintain standards; as of 20 September 2013 it had 319 members and approximately 700 discussion threads, with 3,154 comments in total. Membership has increased consistently over this period, but engagement peaked at 20 comments per week in 2011-2012 and has declined to approximately 10 comments per week in 2013. The 27 forum comments containing both the words 'learning' and 'evaluation' were reviewed, as were the titles of all 280 discussions containing the word 'learning'; references to learning resources, academic papers discussing learning and evaluation in market development, and good practice in evaluations were recorded.
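The filtering step of this forum review amounts to simple keyword matching. A minimal sketch is given below; the comment and title strings are invented stand-ins for the exported MaFI discussions, not real forum content.

```python
# Invented stand-ins for exported MaFI comments and discussion titles
comments = [
    "Evaluation findings rarely feed back into learning on-project.",
    "Has anyone used the DCED Standard for results measurement?",
]
titles = ["Learning from failure", "New M4P vacancy in Dhaka"]

def mentions_both(text, first="learning", second="evaluation"):
    """Case-insensitive check that a comment contains both keywords."""
    lowered = text.lower()
    return first in lowered and second in lowered

relevant_comments = [c for c in comments if mentions_both(c)]
learning_titles = [t for t in titles if "learning" in t.lower()]
print(len(relevant_comments), len(learning_titles))  # 1 1
```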
Subsequently, these data were reviewed, and initial informal discussions were conducted with Practical Action staff members to validate the main learning organisations and repositories of market development knowledge. The ideal methodology identified was a mixed-methods one: formalised and structured data collection through questionnaire completion, combined with informal qualitative research through case interviews.

2.6 Questionnaire development

Reviews of the existing literature (e.g. Preskill & Caracelli, 1997; Dell & Grayson, 1998) revealed that some existing questions could be reused, enabling comparison between past and current results. A series of new questions was also generated to test the hypotheses: six demographic questions to gauge whether the sample was representative, four questions on what facilitates and what blocks learning, and seven questions about learning in the participants' personal work environments. The demographic questions were informed by the representativeness commentary in Preskill & Caracelli (1997), to ensure that the sample was representative by gender, age, job role, level of experience, and employing organisation. Further questions were informed by the review of the MaFI forum and associated literature, which revealed a series of example barriers to learning and processes through which people learn; these were used as prompts in the multiple-choice questioning. For flexibility, an additional category of 'other' was included in most questions.

The questionnaire was then piloted with staff from Practical Action and refined to make the language clearer to participants, to correctly categorise the types of organisation they worked in, to reduce the overall length, and to remove any qualitative questioning. The questionnaire was timed at ~10 minutes using the Versta Research Tool (Versta, 2013), under the 15-minute threshold beyond which response quality becomes poorer. The final version of the questionnaire is included in Appendix 1. The recommendation was made that, to align with the prevailing culture of the MaFI forum, no financial incentive should be given to participants for their engagement; instead, active contribution on the forum to raise the profile of the researcher and the research project was preferred.

2.7 Data collection

Power calculations were conducted based on a margin of error of ±10% at the 95% confidence level, revealing that a sample of 75+ would be required for statistical rigour and to make regression analyses on the outcome variable (current perceived level of learning in the organisation) possible. This was therefore set as the initial target number of questionnaire responses.
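The 75+ figure is consistent with a standard sample-size calculation. The sketch below assumes Cochran's formula with p = 0.5 and a finite population correction over the 319 MaFI members; the report does not state the exact formula used, so this is an illustrative reconstruction.

```python
import math

def sample_size(margin, z=1.96, p=0.5, population=None):
    """Cochran's sample-size formula, with optional finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size(0.10))                  # 97 for a large population
print(sample_size(0.10, population=319))  # 74, close to the 75+ target above
```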
The questionnaire was advertised actively on the MaFI forum and on associated social media pages. The initial response was very limited, partly because data collection took place over the summer months. When engagement on the forum and the building of a research profile showed little sign of rapidly improving this (and leading discussions of 'learning' in the market development space risked biasing the results), a more active approach was taken to engage each potential participant directly. Using the LinkedIn business profiles associated with membership of the forum, 75 MaFI members who listed 'M4P' as one of their skill areas were individually researched, as were 10 participants from a recent high-profile market development debate featured in The Guardian newspaper. The 52 participants for whom the relevance of their interests and experience to the project was apparent were emailed directly, with messages personalised to mention their name and their employer's name and to explain how their knowledge was relevant. This led to an improved response rate of ~50%. Questionnaires were collected electronically using the GoogleForms tool, avoiding transcription errors. Of the 26 questionnaires received, 25 had all sections completed. Statistical significance was thus rendered unlikely, but major trends would still be visible.

2.8 Participants

The participants were 19 males and 6 females, broadly reflecting the 70:30 male:female ratio of MaFI forum members. Participants' ages were approximately normally distributed around a mean of 39 years. This was not unusual, given that many market development professionals have private sector work experience prior to entering the market development space, although a more representative sample might be expected to include more younger participants. No information was gathered on ethnicity or country of background; this could be a useful addition in an extension study. Participants had a mean of 8.4 years' experience of market development evaluations, broadly normally distributed, which may reflect the nature of the MaFI pool, whose entrants are assessed for their commitment to market development before being admitted. A pyramid distribution, with a greater number of less experienced individuals, would typically be expected, particularly given younger professionals' greater familiarity with social media. The participants came from a mixture of employment levels, including managers, directors and an owner, but were mostly (50%) consultants, and spanned 23 organisations across for-profit, non-profit, international organisation and governmental categories.

Figure 2 Q1.5/1.6 What is the name of your current or most recent main employer/client? Which of the following best describes the role of your current or most recent employer/client?

The study was planned with the expectation that the sample should be representative of the population of market development professionals on demographic characteristics. Broadly this expectation was satisfied, so no further proactive measures were taken to target underrepresented groups.

2.9 Limitations of study design

One limitation was revealed in a recommendation from a study participant: "your instrument might reveal some more interesting insights if you had some room for qualitative responses". Qualitative responses would have allowed a more detailed understanding of precisely which online learning tools are used, useful stories of how learning has taken place, and the particular nuances of learning within the market development environment. In much the same way that a deeper form of learning can proceed through discursive engagement, this has the potential to add significant weight to the research process.

A second limitation is that the pool of participants was naturally self-selected as those who were in some way engaged with online social media (e.g. MaFI), and who had self-defined their specialist interest in market development activities on their social media profiles. This has the potential to bias the results towards individuals who are more engaged in the community of market development learning, which could influence the conclusions drawn. A more balanced participant pool could have been accessed, more slowly, by engaging with all practitioners at one organisation and by broadening the reach to include donors and beneficiaries as well; this would also have enabled practitioners to list the learning tools they were aware of.

A final limitation comes from my own cultural bias in having learned largely through formal academic structures, which will likely have constrained my imagination in the predefined response categories I provided to the learning questions. My western psychology background is likely to predispose me towards social learning (e.g. through mimicry of others), in contrast to potentially more common forms of repetitive learning or other forms which might be familiar to other participants.
3 Analysis and findings

Findings relating to the four hypotheses are discussed below: hypothesis 1 is rejected, while hypotheses 2, 3 and 4 receive some support. In addition, data relating to the users of evaluations and the most effective ways in which participants learn are also presented.

3.1 Data screening / cleaning

The data were cleaned for duplicates, incomplete entries, and responses requiring recoding to aid analysis and testing of the four hypotheses. Duplication was minimised by electronic submission (errors were found only in free-text organisational names); one respondent (participant #10) who had answered only half of the questions was omitted from the relevant analyses; and responses of 'other' were checked and, where appropriate, recoded into the predefined categories.
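These screening steps can be expressed as a short script. A minimal sketch using pandas follows; the column names and records are invented stand-ins for the GoogleForms export, not the actual survey fields.

```python
import pandas as pd

# Invented records standing in for the GoogleForms export
df = pd.DataFrame({
    "participant": [9, 10, 10, 11],
    "org_type": ["Non-profit", None, None, "Other: social enterprise"],
    "q31_rating": [4, None, None, 5],
})

df = df.drop_duplicates()                 # duplicate submissions (rare, free-text only)
df = df.dropna(subset=["q31_rating"])     # drop half-completed entries (cf. participant #10)
df["org_type"] = df["org_type"].replace(  # recode 'other' responses where possible
    {"Other: social enterprise": "For-profit"}
)
print(df)
```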
3.2 Hypothesis 1 – quality of learning environment

H1: Moderate learning environments: people consider their working environments to be generally moderate in terms of learning about market development projects (i.e. 3 on a scale of 1 to 5).

The results from Q3.1 revealed an average rating of exactly 4/5, which can be understood as "My personal working environment is effective in helping me deliver better market development programmes." This was higher than anticipated, with no individuals rating the opportunities to learn in their working environments as below average. This indicates that, despite the remote, emergent and small-organisation model of the working environment, no participants perceived a workplace so flawed that it fundamentally constrained their learning. It also implies that the participants were broadly satisfied with their workplaces (a 'halo' effect tends to depress ratings across all measures when workers are dissatisfied with their organisations). Despite these positive characteristics, the results do clearly show that 77% of the participants could imagine a more effective learning environment; so while opportunities to learn are available, there is still significant room for improvement.

Figure 3 Q3.1 How effective is your personal working environment in helping you learn how to better deliver market development programmes?

When stratified by demographic characteristics, there appears to be no difference by gender, age or seniority in how well the learning environment is perceived, although there may be an organisational-type dimension: government workers reported less satisfaction (3.3/5) than for-profit (3.9/5) or non-profit (4.2/5) workers. This is unlikely to reflect an 'illusory superiority' effect (Kruger & Dunning, 1999), whereby all individuals consider their ability to learn to be 'above average', as the individuals were commenting on their organisation's learning environment and not on their own abilities. Instead, it may result from the learning-intensive requirements of the work. Hypothesis 1 is therefore refuted.

3.3 Hypothesis 2 – learning in relation to evaluation use and quality

Hypothesis 2: practitioners consider evaluations to be underused (e.g. unread) more than they consider the content of evaluations to be unsuitable (e.g. not measuring the right information).

The results from Q2.3 revealed balanced opinion from participants about the greatest barriers to learning how to deliver market development programmes. Twelve respondents considered the way the results were distributed and used to be mostly at fault, e.g. "a lack of credible relationships between the users and the writers of the evaluation reports", while nine respondents considered quality to be the issue, indicating "project reviews do not contain the right information". Meaningful alternative statements falling into neither category were provided by the remaining respondents, e.g. "The timeframes for evaluating results and for delivering the project are not well aligned" (participant #2); "Most programs are not considering the complexity of markets" (participant #4); "[Delivery personnel] remain very much on the outside of any micro, small to medium enterprise and often do not have private sector experience themselves… in many cases, are just newly named programmes run by non-profits and pushed by donors" (participant #14); and "where evaluators are involved in measurement, they are rarely around often enough to keep measurement apace with changes in programme strategy" (participant #11).

Q2.4 delved into the detail of non-use of evaluation findings, and found that 68% of respondents saw non-use of findings as a major problem. By contrast, in a previous survey of professional evaluators in which these questions were asked (Preskill & Caracelli, 1997), only 46% of respondents saw non-use as a major problem. This suggests a major underlying problem, especially in light of the massively increased internet connectivity and availability of free webspace for publishing data in 2013 as compared to 1996, when the comparison paper's data were collected and only 10% of US adults had internet connectivity (Slate, 2009). If findings are still not being used, is this now a result of information overload?
Figure 4 Q2.4 How would you characterise non-use of evaluation results when evaluations are carried out on market development projects?

Taken together, these results indicate that both neglecting to use evaluations and a lack of evaluation quality were major issues, and that multiple inter-related barriers to learning exist; thus Hypothesis 2 is only weakly supported.

3.4 Hypothesis 3 – learning by conducting evaluations vs learning by reading

Hypothesis 3: practitioners are likely to gain more from the personal process of conducting evaluations than from consuming formal reports of the findings of those evaluations.

The responses to Q2.2 support this: 81% of respondents either agreed or strongly agreed with the statement, with no respondents disagreeing.

Figure 5 Q2.2 The process of conducting an evaluation is more important for learning how to deliver better programmes than considering the findings of completed evaluations

Q3.4 considered whether individuals had access to particular learning approaches. This revealed that, in addition to preferring interactive processes for learning, most participants also reported having the time and ability to work first hand on projects (96%), to discuss issues with colleagues face-to-face and at a distance (76% and 60% respectively), and to conduct reviews (68%).
In contrast, the more passive learning approaches were less available to participants: reading reviews of their own project was most common (64%), followed by reading reviews of other projects (44%) and attending formal academic courses (16%). The one interactive learning approach unavailable to the majority was structured non-academic training courses, which only 48% had the time or ability to engage with. Given that individuals do agree that they learn more through active methods, the hypothesis is supported; the clear gap in interactive learning identified is for individuals to attend structured discursive training courses.

3.5 Hypothesis 4 – learning incentives vs learning through access to evaluation reports

Hypothesis 4: in combination, good access to written evaluations (through a knowledge management system) and incentives to learn will be a significant predictor of a good learning environment, but availability of a knowledge management system by itself will have no effect.

If the responses to Q3.7 (learning incentives) are scored with 1 point for each incentive, they show a small, non-significant positive correlation (Pearson's r = 0.22, ns) between learning incentives and perceived quality of the learning environment, suggesting that greater incentives (e.g. learning incorporated into the job description; personal recognition or praise from management) may be associated with an enhanced environment for learning. If the responses to Q3.6 (IT system access) are scored with 1 point for each component of system functionality, they show a small, non-significant negative correlation (Pearson's r = -0.28, ns) between access to evaluations through IT systems and perceived quality of the learning environment, suggesting that a more effective IT evaluation repository (e.g. 'fast and responsive', 'data quality is well managed') may be associated with a weaker environment for learning.

An independent-samples t-test was conducted to compare the perceived quality of the learning environment where good learning incentives (incentive score >=4/7) were combined with good IT system access (IT system access score >=4/7), against the perceived quality where either incentives or good system access were lacking (either score <4/7). There was no significant difference between the scores for the combined condition (M=4.1, SD=0.78) and the lacking condition (M=4.0, SD=0.67); t(16)=0.33, p=0.74. In short, having both incentives to learn at work and a method of accessing project evaluations did not have a major effect on whether individuals felt they were part of a learning organisation.
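The shape of this analysis can be re-created in outline with scipy. The three score lists below are invented stand-ins for the 25 questionnaire responses (the real data are not reproduced here), so the printed statistics will not match the figures reported above.

```python
from scipy import stats

# Invented scores: 0-7 incentive and IT-functionality counts, 1-5 environment ratings
incentives = [5, 2, 6, 1, 4, 3, 5, 0, 7, 2, 4, 6, 1, 3, 5, 2, 4, 6, 0, 3, 5, 1, 4, 2, 6]
it_access  = [3, 5, 2, 6, 4, 1, 5, 3, 2, 6, 4, 0, 5, 3, 6, 2, 4, 1, 5, 3, 6, 2, 4, 5, 1]
env_rating = [4, 4, 5, 3, 4, 4, 5, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 4, 5]

r_inc, p_inc = stats.pearsonr(incentives, env_rating)  # incentives vs environment
r_it, p_it = stats.pearsonr(it_access, env_rating)     # IT access vs environment

# Split into 'both good' (both scores >= 4/7) and 'lacking' conditions, then t-test
both_good = [i >= 4 and j >= 4 for i, j in zip(incentives, it_access)]
combined = [e for e, g in zip(env_rating, both_good) if g]
lacking = [e for e, g in zip(env_rating, both_good) if not g]
t, p = stats.ttest_ind(combined, lacking)
print(f"incentives r={r_inc:.2f}, IT r={r_it:.2f}, t-test p={p:.2f}")
```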
Figure 6 Q3.6/3.7/3.1 In which of the following ways is learning incentivised in your working environment? If you search for market development project evaluations using an IT system, what functionality does that system have? How effective is your personal working environment in helping you learn how to better deliver market development programmes?

Given that the sample size is too small for these results to be identified as statistically significant, conclusions should be cautious. The absolute values indicate that some participants were missing basic learning incentives in their workplace (recognition of learning – 3 lacked this; time allocated for learning – 5 lacked this; learning reviewed as part of the performance appraisal process – 5 lacked this). This is consistent with the DfID study (ODI, 2010), which indicates a need to enhance learning incentives across these organisations.

3.6 Additional findings

The results thus far have indicated that while learning is not fundamentally failing, to improve the process in the majority of cases practitioners should be given stronger incentives to learn and should participate in more evaluations, and the data from those evaluations need both to be spread more widely and to have their content improved. Q2.1 investigated which are the most effective mechanisms for learning; a comparison of the results is shown below.
Figure 7 Q2.1 Which are the most effective mechanisms for learning about how to better deliver market development projects?

These data, together with the results from Q3.2 (which indicate that 68% of participants considered evaluations either extremely important or very important in learning how to better deliver market development programmes), indicate that evaluations remain a particularly important element in the learning mix, but that facilitation of face-to-face discussion is more important than access to evaluations. In particular, this reveals that even though interactive structured training courses are valued by participants, there may be a reason why this more expensive form of learning is not prioritised: independent forms of learning, including reading evaluations, are rated equally important for learning by participants. Considering the pattern of responses allied to interactive learning, this may bring into question the quality of the interactive training courses available.

Q3.3 investigated the outcomes of evaluations conducted at each participant's organisation. These data corroborate the theory that donors' administrative requirements are satisfied most, and that beneficiaries benefit least directly from evaluations. Surprisingly, and in contrast to critics who claim that evaluations are purely an obfuscating paper exercise for donors, the majority of practitioners also believe that donors learn from the outcomes of the evaluations and use the results to make better funding decisions.

Figure 8 Q3.3 Project reviews achieve the following in my working environment…
The diagram below summarises these findings and provides an expandable framework ready to incorporate additional information from the literature. In particular, this single view allows consideration of where budget could most effectively be spent to encourage learning among donor, practitioner or beneficiary groups.

Figure 9 Influence diagram of factors leading to learning in market development
4 Conclusions and implications

Many different approaches for encouraging learning have been discussed in the market development space, and development managers may be uncertain which of these are the most suitable candidates for limited internal organisational development resources. Most programmes will have a contractual requirement with donors to supply a satisfactory evaluation to meet administrative requirements. This paper has disentangled the additional activities, beyond such an evaluation, which practitioners considered most effective for learning, supporting the view proposed in ODI (2011) that "evidence is a necessary but not sufficient component of improved performance and practice", and that sole focus should not be levelled at the evaluation reports themselves.

4.1 Overcoming the critique of evaluation reports

The results have indicated that written evaluations have only a moderate impact on practitioner learning, combined with a view from practitioners that they are underused. Availability through technology systems does not seem to be the major obstacle, and where evaluations have been effectively targeted (e.g. towards satisfying donor requirements), they have been very effective.

A malaria-prophylaxis metaphor illustrates the deficiencies in the approach currently taken with evaluations. Simply making antimalarial drugs, mosquito nets, or indoor residual pesticide sprays (IRS) available throughout an area affected by malarial mosquitoes may well reach fortunate individuals who have the time, acquired knowledge, or money to opt for the preventative measure. However, given the multiple competing drains on individuals' resources, reaching those enlightened individuals is unlikely to be enough to ensure widespread protection. In the malaria example, high-risk districts and vulnerable individuals are targeted by the relevant health-promoting organisations to maximise the impact of the available protective measures. Similarly, simply adding evaluation reports to an ever-growing database is likely to reach only a limited minority of informed market development practitioners, especially if those reports are not packaged invitingly. If, however, evaluation-generating organisations were measured on their ability to seek out and provide learning results to those most in need of the lessons, evaluation content and storage would be designed to achieve much greater influence. This indicates that evaluation reports should be written actively with the neediest markets or the most vulnerable customers in mind. In addition, to incentivise evaluators' performance, part of their terms of reference for delivering the project should ensure that the evaluation outcomes have been passed on to the relevant parties.
4.2 Targeted investment by delivery organisations into practitioner learning

The data collected in this study suggest that the interactive initiatives most likely to encourage learning include recruiting experts into the organisation to work alongside practitioners on project; ensuring each practitioner participates in a regular process of evaluating their own work on project; and facilitating opportunities for practitioners to attend face-to-face conferences, small-group lunches, social events with other delivery organisations, and locally organised seminars. Less desirable investment choices are interactive training courses (perhaps a reflection of their current quality), academic courses, and IT solutions enabling individuals to access evaluation reports. There may indeed be opportunities for organising improved training initiatives; credibility is clearly important here, so internationally recognised trainers should be used where possible.

Although initiatives for learning are important, reflection is just as important (Skule, 2004), and individuals may require time and space for some of the learning to take place. In social contexts, reflection can lead to "fear of exposing oneself", or be constrained by "loyalty to colleagues and friends" (Pasteur, 2004). Reflection could be facilitated through the incentive schemes which participants in the current study indicated are associated with a better learning environment, for example an organisational expectation that teams complete periodic self-reflective writing exercises. If seeking this result, organisations may also need to treat effective reflection itself as a skill area, investing in "awareness raising, training and skills development…to ensure that organisational policy is effectively transformed into practice" (Pasteur, 2004).

4.3 Targeted investment by umbrella groups into practitioner learning

Delving deeper into the existing evaluation reports, individuals' reluctance to trust written findings becomes more understandable. DCED (2012) prominently publish a project evaluation paper entitled 'Learning from Experience', based on a Tanzanian project. The "Summary of Good Facilitation Practices" in the paper contains somewhat unclear guidance, e.g. "Achieving Systemic Change: 1) Work on business environment, 2) High number of stakeholders involved; 3) Sequencing and interlinking interventions", while the "Summary of Bad Facilitation Practices" includes "Achieving Systemic Change 1) Low number of stakeholders". For time-poor practitioners to be rewarded and become repeat users of the 'lessons learned' sections of evaluation reports, these need to be concise, but also specific, challenging and actionable; otherwise the lessons learned serve only as an internal memory aid for those writing the reports, and not as an externally usable asset. While limiting the barriers to publication of evaluations is laudable (academic journal paywalls may prove to be such a barrier), the internet 'information overload' phenomenon requires evaluation gatekeepers on the leading market development websites to be more rigorous than ever, and to ensure content providers are aware of the quality thresholds.

The Centre for Global Development (a US think-tank) sheds some light on why the practitioners in this study identified learning from other organisations' evaluation reports as a particularly weak area. They identify that individual project evaluations are often relevant to the setting in which they were conducted (and the associated practitioners), whereas more distant practitioners and other organisations are more interested in the long-term impact of interventions (e.g. stable changes in income generation) before implementing changes. Such studies are often not completed, as they demand "studies that are different from project monitoring or process evaluations", and the incentives for producing such a 'public good' may require external funding:
"the cost of producing such studies are borne by individual institutions or agencies, yet once the results of such studies are available they can be accessed by anyone to improve policy" (CGDev, 2006). There is evidence that groups including the SEEP network are taking on a cross-organisational role in facilitating learning forums for market development, including their funding of MaFI.

4.4 Learning by beneficiary groups and local stakeholders

It is helpful to distinguish poor end-users from organisational stakeholders (which can include individual firms and regulatory agencies). While project activities may involve capacity building of both groups, and evaluation data collection must involve both groups, it is unclear whether the products of evaluations, as well as the process of evaluation, are used by either group. Typically, each individual firm may come into contact with only a limited number of M4P initiatives, so the relatively formulaic lessons may be unrewarding for them to learn, as they will have limited opportunities to apply them; particularly where lessons learned are outside their product operating domain, they are unlikely to be interested. Furthermore, given the political nature of engaging organisational stakeholders (many of whom may be suspicious of the intentions of the market development professionals), frank discussion and recording of the effective political approaches used to engage them, and of the problems faced and overcome, may be too sensitive to share. This may disincline practitioners from promoting frank evaluation reports to those organisational stakeholders, as doing so may prejudice future engagements.

Much of the discussion by Mayoux (2005) and Taylor & Soal (2003) perceives participatory evaluation as enabling and empowering for the end-user if conducted in an appropriately time-sensitive manner, without raising unnecessary expectations. This also aligns well with the market development mantra of sustainability beyond the project engagement. However, the donor business milieu in which market development projects operate is likely to be significantly bottom-line driven, and there will be some sensitivity about M4P interventions not showing immediate results. Tangible financial outcomes are therefore likely to be prioritised, fitting the requirements of the subsector of donor agencies who support such initiatives. Capacity building with end-user participants through the generation of project evaluations may well highlight deficiencies in local market regulation and help identify the importance of collective bargaining, but when capacity for M&E investment is limited, this may naturally not be the first priority for practitioners. In addition, complex written reports are themselves likely to be relatively inaccessible to small-scale producers. In projects which have multiple rounds of funding (e.g. Katalyst 2010-11, 2011-12; Chars Livelihoods Programme 2004-10, 2010-2016), investment in post-phase training of end-users may be more viable, and there is no reason why this could not be incorporated into a subsequent project phase. While this research has therefore highlighted the limited learning by stakeholders beyond donors and practitioners, it would appear that this gap is filled where necessary by directed capacity-building initiatives; and while a shared resource portal for such groups may be desirable in the distant future, it does not seem a priority at this stage.
4.5 Meta-evaluation

Learning processes should be subject to the same ongoing evaluation methods as the initiatives they facilitate, given the significant investment in evaluation reporting and funders' growing focus on value for money. One current market development paper about a learning conference (DCED, 2013d) is representative of the current state of market development reflection: it contains detailed consolidated guidelines on six topics (e.g. negotiating deals, designing and implementing strategies), based on requests from participants, but it makes no attempt to measure the learning from the conference or to identify how well the information is used.

Summative project performance, measured by impact on stakeholders, cannot be considered a sufficient measure of learning: external market conditions have a substantial impact on the success of market development projects, for example if the local economy, commodity prices or the competitive environment changes. However, informal learning (where set curricula and qualifications do not exist) is notoriously challenging to measure. One solution would be to set a curriculum for practitioners. Alternatively, meta-evaluation can be done at the local, informal level through regular staff appraisals of what has proven most useful for them over the previous six months or year. In addition, internal IT systems could incorporate electronic feedback loops into each evaluation report written internally, with the technology identifying how many individuals have accessed each report and allowing users to feed back on the quality of the written materials.
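Such a feedback loop needs very little machinery. A minimal sketch is shown below; the report identifier and the star-rating scale are invented for illustration, not features of any existing repository.

```python
from collections import defaultdict

views = defaultdict(int)      # access counts per evaluation report
ratings = defaultdict(list)   # reader feedback per evaluation report

def record_view(report_id):
    views[report_id] += 1

def record_rating(report_id, stars):
    """stars: 1 (poor) to 5 (excellent), supplied by the reader."""
    ratings[report_id].append(stars)

record_view("eval-2013-007")
record_view("eval-2013-007")
record_rating("eval-2013-007", 4)
avg = sum(ratings["eval-2013-007"]) / len(ratings["eval-2013-007"])
print(views["eval-2013-007"], avg)  # 2 4.0
```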
Meta-evaluation could also be done more formally, sector-wide, through a regular 'pulse' research initiative: first collating a pilot data pool of the major learning initiatives that took place across the sector in the period, and then asking participants to rate each of the learning initiatives they have engaged with over the past year and to provide feedback on their effectiveness. Using this approach, a market could be established within which individuals rate their learning experiences, leading to poorly performing initiatives being adapted and effective initiatives being better funded. No doubt such an approach exists informally, as practitioners discuss what has been good and less good and gravitate towards better-performing initiatives; no doubt, also, avoiding excessive bureaucracy and form-filling is desirable. But as the market development sector matures and certain organisations (perhaps even current leaders, e.g. DCED, SEEP) take the lead in coordinating learning, this could be the next step towards aligning the different cell-based initiatives which are currently active.

4.6 Further research

There is significant scope for expanding the view of the influencers of effective learning detailed in Figure 9: to validate the unexpected negative influence of high-performing IT repositories on learning, to clarify distinctions between types of organisation, and to move towards a benchmark rating of those organisations perceived as better or worse facilitators of learning. Some exploration of the specific learning media currently used by each practitioner would also extend the results reported here.

Further, to reduce the risk of the results being tainted by cultural bias, data capture using the non-directive techniques espoused by David Snowden's Cognitive Edge, via an approach called 'SenseMaker' (Cognitive Edge, 2013), is recommended. In practice this would likely involve minimally structured interviews with practitioners, inviting them to recall one story of a good learning experience and one of a time-wasting learning activity. The content of a series of these stories could then be thematically analysed according to factors identified by the participants, and rated against those themes by the participants.
5 Bibliography

ActionAid (2011) Real Aid: Ending Aid Dependency. Retrieved from http://www.actionaid.org.uk/sites/default/files/doc_lib/real_aid_3.pdf (Accessed 09 September 2013)

Agralytica (2013) Evaluating Market Development Programs. Retrieved from http://www.agralytica.com/Agralytica%20program%20evaluation%20guide.pdf (Accessed 20 September 2013)

Argyris, C & Schoen, DA (1974) Theory in Practice: Increasing professional effectiveness. Jossey-Bass, Oxford.

Banerjee, A; Duflo, E; Glennerster, R; Kinnan, C (2009) The miracle of microfinance? Evidence from a randomized evaluation. Retrieved from http://economics.mit.edu/files/4162 (Accessed 10 September 2013)

Christian Aid (2009) Getting Back on the Rails: The Private Sector and Development. Retrieved from http://www.christianaid.org.uk/images/private-sector-report.pdf (Accessed 10 September 2013)

CGDev (2006) When will we ever learn? Improving lives through impact evaluation. Retrieved from http://international.cgdev.org/sites/default/files/7973_file_WillWeEverLearn.pdf (Accessed 24 September 2013)

Cognitive Edge (2013) Sensemaker. Retrieved from http://www.sensemaker-suite.com/smsite/index.gsp (Accessed 26 September 2013)

Craik, FIM & Lockhart, RS (1972) 'Levels of processing: a framework for memory research', Journal of Verbal Learning & Verbal Behavior, 11(6), pp. 671-84

DCED (2010) DCED Standard for Results Measurement. Retrieved from http://www.enterprise-development.org/page/measuring-and-reporting-results (Accessed 10 September 2013)

DCED (2011a) DCED Case Study: Katalyst. Retrieved from http://www.enterprise-development.org/page/download?id=1696 (Accessed 20 September 2013)

DCED (2011b) DCED Case Study: GIZ Thailand. Retrieved from http://www.enterprise-development.org/page/download?id=1671 (Accessed 20 September 2013)

DCED (2012) RLDC's role as a Facilitator of Market Development: Learning from experience. Retrieved from http://www.enterprise-development.org/page/download?id=2212 (Accessed 24 September 2013)
DCED (2013a) The Rationale for PSD. Retrieved from http://www.enterprise-development.org/page/whypsd (Accessed 09 September 2013)

DCED (2013b) How Private Sector Development leads to Pro-Poor Impacts: A Framework for Evidence. Retrieved from http://www.enterprise-development.org/page/framework-evidence (Accessed 26 September 2013)

DCED (2013c) Success Stories. Retrieved from http://www.enterprise-development.org/page/stories#PAZim (Accessed 09 September 2013)

DCED (2013d) Report on the First M4P Learning Event: Bangkok 8-10 May 2013. Retrieved from http://www.enterprise-development.org/page/download?id=2199 (Accessed 20 September 2013)

DCED (2013e) Evidence Framework: What do we know about the effectiveness of Business Management Training? Retrieved from http://www.enterprise-development.org/page/download?id=2177 (Accessed 20 September 2013)

DfID (2008) A Synthesis of the Making Markets Work for the Poor (M4P) Approach. Retrieved from www.deza.admin.ch/ressources/resource_en_172765.pdf (Accessed 10 September 2013)

DfID (2011) What is the evidence of the impact of microfinance on the well-being of poor people? Retrieved from http://www.givedirectly.org/pdf/DFID_microfinance_evidence_review.pdf (Accessed 10 September 2013)

DfID (2012) Operational Plan 2011-2015: DfID India (June 2012 update). Retrieved from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67379/india-2011.pdf (Accessed 10 September 2013)

DfID (2013) Aid by Sector: Business Sector Breakdown. Retrieved from http://devtracker.dfid.gov.uk/sector/10/ (Accessed 10 September 2013)

Economic Times (2013) Microcredit recipients decline for the first time: Study. Retrieved from http://articles.economictimes.indiatimes.com/2013-02-06/news/36949727_1_microfinance-clients-microcredit-summit-campaign-report-bank-account (Accessed 09 September 2013)

EWB (2013) Admitting Failure. Retrieved from http://www.admittingfailure.com/ (Accessed 09 September 2013)

Goetz, AM (2003) Reinventing accountability – making democracy work for the poor: Community of Practice on Social Accountability Launch. Retrieved from http://info.worldbank.org/etools/docs/voddocs/511/982/goetz.doc (Accessed 20 September 2013)

Hansen, MT; Nohria, N; Tierney, T (1999) What's your strategy for managing knowledge? Retrieved from http://www.itu.dk/~kristianskriver/b9/Whats%20your%20strategy%20for%20managing%20knowledge.pdf (Accessed 22 September 2013)
IFC (2013) Financial Markets: Microfinance. Retrieved from http://articles.economictimes.indiatimes.com/2013-02-06/news/36949727_1_microfinance-clients-microcredit-summit-campaign-report-bank-account (Accessed 09 September 2013)

ILO (undated) Practical Guidelines for a More Systemic Approach to Sustainable Enterprise Development. Retrieved from http://www.ilo.org/wcmsp5/groups/public/@ed_emp/@emp_ent/documents/instructionalmaterial/wcms_143123.pdf (Accessed 07 September 2013)

IMF (2007) World Economic Outlook, Chapter 4: Globalization and Inequality. Retrieved from http://www.imf.org/external/pubs/ft/weo/2007/02/ (Accessed 25 June 2013)

Intrac (2013) The Use of Consultants in Development. Retrieved from http://www.intrac.org/blog.php/34/the-use-of-consultants-in-development (Accessed 09 September 2013)

Kolb, D. (1984) Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs: Prentice-Hall.

KPMG (2012) Financial Deepening and M4P: Lessons from Kenya and Rwanda. Retrieved from http://www.kpmg.com/eastafrica/en/services/Advisory/Development-Advisory-Services/Thought_Leadership_at_DAS/Documents/Financial%20Deepening%20and%20M4P%20%E2%80%93%20Lessons%20from%20Kenya%20and%20Rwanda.pdf (Accessed 10 September 2013)

Kruger, J. & Dunning, D. (1999) 'Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments', Journal of Personality and Social Psychology, 77(6), pp. 1121–1134.

LinkedIn (2013) The Market Facilitation Initiative (MaFI). Retrieved from http://www.linkedin.com/groups/MaFI-Market-Facilitation-Initiative-2441757 (Accessed 23 September 2013)

The Holy Bible: King James Version (1999) New York: American Bible Society.

Mensah (2001) The Rise and Rise of NGOs: Implications for Research. Retrieved from http://www.svt.ntnu.no/iss/issa/0101/010109.shtml (Accessed 09 September 2013)
Milanovic (2012) Global Income Inequality by the Numbers: in History and Now. Retrieved from http://www-wds.worldbank.org/servlet/WDSContentServer/WDSP/IB/2012/11/06/000158349_20121106085546/Rendered/PDF/wps6259.pdf (Accessed 27 June 2013)

M4PHub (2013) The M4P Approach Has Limited Utility in Post-Conflict Environments. Retrieved from http://www.m4phub.org/debates/ (Accessed 10 September 2013)

ODI (2010) Strengthening Learning from Research and Evaluation: Going with the Grain. Retrieved from http://www.odi.org.uk/publications/5154-learning-research-evaluation-dfid (Accessed 22 September 2013)

ODI (2011) Learning How to Learn: Eight Lessons for Impact Evaluations that Make a Difference. Retrieved from http://www.odi.org.uk/publications/5716-impact-evaluation-assessment-lesson-learning (Accessed 22 September 2013)

ODI (2013) Localising Aid to NGOs: Issues and Challenges Highlighted by a Multi-Country Study. Retrieved from http://www.odi.org.uk/opinion/7578-local-capacity-localising-aid-ngos-donors-hiv-aids (Accessed 19 July 2013)

Practical Action (2008) Promising Practices in Participatory Market System Development: Transforming Livestock Markets in Northern Zimbabwe. Retrieved from http://practicalaction.org/docs/ia2/promising_practices_pmsd_livestock_zim.pdf (Accessed 09 September 2013)

Roche (1995) Institutional Learning in Oxfam: Some Thoughts. Oxford: Oxfam (Internal Discussion Paper), p. 55. Cited in Open University (2012) Capacities for Managing Development: Part 3. Wakefield: Open University.

Roodman (2011) Due Diligence: An Impertinent Inquiry into Microfinance. Washington, DC: Center for Global Development.

SDC/DfID (2008) A Synthesis of the Making Markets Work for the Poor (M4P) Approach. Retrieved from http://www.value-chains.org/dyn/bds/docs/681/Synthesis_2008.pdf (Accessed 26 September 2013)

Skule (2004) Learning Conditions at Work: A Framework to Understand and Assess Informal Learning in the Workplace. Retrieved from http://www.researchgate.net/publication/228253834_Learning_Conditions_at_Work_A_Framework_to_Understand_and_Assess_Informal_Learning_in_the_Workplace/file/32bfe511e48eebedcb.pdf (Accessed 23 September 2013)
Slate (2009) Jurassic Web: The Internet of 1996 is Almost Unrecognisable Compared with What We Have Today. Retrieved from http://www.slate.com/articles/technology/technology/2009/02/jurassic_web.html (Accessed 23 September 2013)

The Guardian (2013) Top Tips to Crack Market-Based Development. Retrieved from http://www.theguardian.com/global-development-professionals-network/2013/aug/26/market-development-best-bits (Accessed 23 September 2013)

USAID (2011) Agribusiness and Agriculture Value Chain Assessment: Final Report. Retrieved from http://pdf.usaid.gov/pdf_docs/PDACR715.pdf (Accessed 10 September 2013)

Versta (2013) How to Estimate the Length of a Survey. Retrieved from http://www.verstaresearch.com/newsletters/how-to-estimate-the-length-of-a-survey.html#how-to-estimate-the-length-of-a-survey (Accessed 23 September 2013)

Walker (2012) Why Is It So Tempting for Livelihood Projects to Ignore Poor People? Retrieved from http://blogs.ucl.ac.uk/dpublog/tag/m4p-framework/ (Accessed 09 September 2013)

Wallace, T. (1997) 'New Development Agendas: Changes in UK NGO Policies and Procedures', Review of African Political Economy, no. 71.

Washington Post (2012) Microcredit Doesn't End Poverty, Despite All the Hype. Retrieved from http://www.washingtonpost.com/opinions/microcredit-doesnt-end-poverty-despite-all-the-hype/2012/01/20/gIQAtrfqzR_story.html (Accessed 09 September 2013)

World Bank (2000) Voices of the Poor. Retrieved from http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTPOVERTY/0,,contentMDK:20622514~menuPK:336998~pagePK:148956~piPK:216618~theSitePK:336992,00.html (Accessed 09 September 2013)
Appendix 1 - Questionnaire

Questionnaire | Use of Market Development Project Reviews

Thank you for agreeing to take part in this 15-minute survey. The survey is targeted at individuals with experience of systemic, facilitative market and value chain development projects, and focuses on your thoughts about reviews/evaluations and learning on those projects.

The questionnaire is in three sections:
Section 1: demographic information - 6 questions
Section 2: facilitators of, and barriers to, learning from reviews/evaluations - 4 questions
Section 3: learning in your working environment - 7 questions

The results will be used for an MSc research dissertation, and will be published in aggregated and anonymised form online.

* Required

MSc Researcher: Doug Pearman

Section 1: Demographics

What is your gender? *
- Male
- Female
- Other:

What is your age range? *

Which of the following best describes your current employment level? *
- Student
- Entry level
- Researcher/lecturer
- Consultant
- Manager
- Senior Manager / Director
- Owner
- Other:

For how many years have you had exposure to reviews/evaluations on market development projects? *
- 0-2 years
- 3-4 years
- 5-9 years
- 10-14 years
- 15-19 years
- 20+ years

What is the name of your current or most recent main employer/client (in relation to market development projects)? *

Which of the following best describes the role of your current or most recent employer/client (in relation to market development projects)? *
(This will be the working environment you are questioned about later in this survey.)
- Governmental body (e.g. DfID, USAID)
- International organisation (e.g. World Bank, UN)
- Academic institution / thinktank (e.g. ODI, CGD)
- For-profit implementer (e.g. ASI, GRM International)
- Non-profit implementer (e.g. Save the Children)
- Other:

Section 2: Facilitators of, and Barriers to, Learning

2.1 Which are the most effective mechanisms for learning about how to better deliver market development projects?
[Rate each mechanism from (5) Most effective to (1) Least effective, or select "No exposure to this"]
- First-hand experience working on projects (NB: excluding the process of conducting formal reviews)
- First-hand experience working with capable colleagues on projects (NB: excluding the process of conducting formal reviews)
- Face-to-face discussions with others about projects (e.g. at your workplace, at conferences)
- Remote discussions with others about projects (e.g. email, online communities of practice)
- Conducting your own project reviews (including formal evaluations)
- Formal academic courses (e.g. diploma / degree)
- Structured training courses (i.e. non-academic)
- Reading project reviews (including formal evaluations) conducted by others on your project
- Reading other projects' reviews

2.2 From a personal point of view, the process of conducting an evaluation is more important for learning how to deliver better programmes than considering the findings of completed evaluations.
[(5) Strongly agree | (4) Agree | (3) Neither agree nor disagree | (2) Disagree | (1) Strongly disagree]

2.3 The greatest barrier to learning how to better deliver market development programmes is that...
- the content of project reviews is unsuitable (i.e. project reviews do not contain the right information)
- practitioners have insufficient time to properly draw lessons from the available evidence
- the processes by which evaluations are disseminated are insufficient
- there is a lack of credible and trusting relationships between the users and the writers of the evaluation reports
- Other:
2.4 How would you characterise the following potential areas of problems when evaluations are carried out on market development projects?
[Major problem | Minor problem | No problem | Don't know]
- Non-use of evaluation results
- Intentional misuse of evaluation results
- Unintentional misuse of evaluation results

Section 3: Learning in Your Working Environment

3.1 How effective is your personal working environment in helping you learn how to better deliver market development programmes?
[(5) Very effective | (4) Effective | (3) Neither effective nor ineffective | (2) Ineffective | (1) Very ineffective]

3.2 How important are project reviews in helping you to learn about how to better deliver market development programmes?
[(5) Extremely important | (4) Very important | (3) Moderately important | (2) Somewhat important | (1) Not at all important]

3.3 Project reviews (including formal evaluations) achieve the following in my working environment:
[Rate each from (5) Strongly agree to (1) Strongly disagree, or select "No exposure to this"]
- satisfaction of donors' administrative requirements for measuring activity and impact
- individual learning by policy makers about which project approaches are more and which are less effective
- individual learning by project beneficiaries which is used to improve the project under evaluation
- individual learning by staff delivering the project which is used to improve the project under evaluation
- individual learning by staff delivering the project which is used to improve other projects
- individual learning by staff working on other projects which is used to improve other projects
- learning by donors and donor organisations to make funding decisions / alter grant requirements
- learning by project delivery organisations to alter their expectations when delivering projects

3.4 Do you have the time and the ability to learn how to better deliver market development projects through:
[Rate each from (5) Strongly agree to (1) Strongly disagree, or select "No exposure to this"]
- first-hand experience working on projects (NB: excluding the process of conducting formal reviews)?
- first-hand experience working with capable colleagues on projects (NB: excluding the process of conducting formal reviews)?
- face-to-face discussions with others about projects (e.g. at your workplace, at conferences)?
- remote discussions with others about projects (e.g. email, online communities of practice)?
- conducting your own project reviews (including formal evaluations)?
- formal academic courses (e.g. diploma / degree)?
- structured training courses (i.e. non-academic)?
- reading project reviews (including formal evaluations) conducted by others on your project?
- reading other projects' reviews?

3.5 Have you made changes to your work which have had a measurable positive effect as a result of learning through:
[Rate each from (5) Strongly agree to (1) Strongly disagree, or select "No exposure to this"]
- first-hand experience working on projects (NB: excluding the process of conducting formal reviews)?
- first-hand experience working with capable colleagues on projects (NB: excluding the process of conducting formal reviews)?
- face-to-face discussions with others about projects (e.g. at your workplace, at conferences)?
- remote discussions with others about projects (e.g. email, online communities of practice)?
- conducting your own project reviews (including formal evaluations)?
- formal academic courses (e.g. diploma / degree)?
- structured training courses (i.e. non-academic)?
- reading project reviews (including formal evaluations) conducted by others on your project?
- reading other projects' reviews?

3.6 If you search for market development project evaluations using an IT system, what functionality does that system have?
[Yes | No | Not applicable]
- Easily searchable on keywords
- Searchable metadata about the evaluation author and/or project team members
- Searchable metadata about the type of project
- Searchable metadata about project locations
- Searchable metadata about project dates
- Fast and responsive IT platform and network connection
- Evaluation data quality is well managed

3.7 In which of the following ways is learning incentivised in your working environment?
[Yes | No | Not applicable]
- Feature of job description
- Assessed at periodic performance management reviews
- A component of promotion criteria
- Time allocated for learning activities
- Personal recognition (e.g. praise from management)
- Internal recognition (e.g. quarterly newsletter exposure)
- Tangible reward (e.g. invitation to a conference)

Thank you for your participation.
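Note on question 3.6: the functionality it enumerates amounts to a small metadata schema plus filtered retrieval. Purely as an illustrative aid, the minimal Python sketch below shows what such an evaluation search index could look like. The record type, field names, and sample entry here are hypothetical constructions for this sketch and do not describe any specific system used by respondents.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record mirroring the metadata fields listed in question 3.6.
@dataclass
class EvaluationRecord:
    title: str
    author: str                 # evaluation author / project team members
    project_type: str           # type of project, e.g. "M4P"
    location: str               # project location
    start: date                 # project dates
    end: date
    keywords: set[str] = field(default_factory=set)

def search(records, *, keyword=None, project_type=None, location=None, year=None):
    """Return records matching any combination of keyword and metadata filters."""
    results = []
    for r in records:
        if keyword and keyword.lower() not in {k.lower() for k in r.keywords}:
            continue
        if project_type and r.project_type != project_type:
            continue
        if location and r.location != location:
            continue
        if year and not (r.start.year <= year <= r.end.year):
            continue
        results.append(r)
    return results

# Example query: M4P evaluations in Bangladesh tagged "agriculture".
records = [
    EvaluationRecord("Katalyst phase review", "DCED", "M4P", "Bangladesh",
                     date(2008, 1, 1), date(2011, 12, 31),
                     {"agriculture", "facilitation"}),
]
print(search(records, keyword="agriculture", project_type="M4P", location="Bangladesh"))
```

Keyword matching and exact-match metadata filters of this kind map directly onto the Yes/No capabilities the question asks about; a production repository would additionally need full-text indexing, a responsive platform, and the data-quality management named in the final row.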