Daniel Ticehurst, August 2017
“The curious task of economics is to demonstrate to men how little they really know about what
they imagine they can design”.1
INTRODUCTION/BACKGROUND
Successful aid does not normally result from relentlessly seeking to implement a pre-ordained
intention spelt out in a plan or results framework aligned to the objectives of those who finance
it. It emerges from how aid is able to respond and adapt to a complex and dynamic reality.
This is often arrived at through failure or mistakes.
“If you're not prepared to be wrong, you'll never come up with anything original.” 2
The implementation of development projects is not – as Rondinelli and, more recently,
Andrews explain – simply a matter of rolling out known solutions according to an established recipe.3
Rather, projects are better seen as policy experiments that help decision makers learn what
works to resolve specific, locally determined problems in a particular context, how to create
organizational capability, and how to mobilize the commitment of stakeholders who are in the
development process for the long term.
Often, however, the changes sought by aid are little more than illusions about the capacity of
projects to offer significant and lasting contributions to overarching development objectives.
Humility is a rare trait. Its absence leads to stresses and demands on organisational systems,
which often undermine capability by eroding local initiative and contributing to a lack of
ownership of the development agenda in recipient countries.
As short term policy experiments, aid projects are not necessarily compatible with long-term
plans. This article investigates the reasons for this disconnect and proposes solutions to more
effectively and democratically manage development interventions. My hope is to prompt new
questions to be asked about how development interventions are managed and for existing
conventions on monitoring and management to be challenged and scrutinised.
WHY DOES THE AID SYSTEM STRUGGLE TO BE ADAPTIVE?
Very often the most bureaucratic and rigid organisations are those which congratulate
themselves on their passion and their competence.
1) Adopting overly mechanistic design and appraisal
“The procedures adopted for designing and implementing aid interventions today remain ever
rigid and detailed at the same time as recognising that development problems are less
amenable to systematic design.”4
The systems (political, market, social, etc.) in which aid operates are inherently complex and in
a constant state of flux. They are organic. Both Picciotto and, earlier, Rondinelli argued that
such operating environments mean the most significant development problems are not
amenable to blueprint solutions. Rigidly sticking to objectives – which typically overstate the
significance and lasting effects of discrete interventions and encourage “cooking the books” –
underlies spectacular aid failures.
The decisions and behaviours of the organisations and institutions (private and public)
working in and governing different parts of such systems, and the nature of their relationships
with each other, are unpredictable. The context in which aid is targeted is highly complex. It is often
difficult to define, in precise terms, how aid can be most effectively delivered in advance.
Therefore, it is challenging to define, ex ante, measurements of success. It is because of this
that we need to be more tolerant of ambiguity.
The offer of certainty is not worth receiving. Mistakenly, many funders believe that
programmes can only start once they have understood the problem, turned it
upside down into objectives and defined ways to resolve and monitor it. A “truth” emerges that
simply needs implementing: the time, the evidence and the resources allocated in getting there
1. The Fatal Conceit. Friedrich von Hayek. University of Chicago Press. 1991.
2. Sir Ken Robinson https://youtu.be/iG9CE55wbtY
3. D. Rondinelli (1983). Development Projects as Policy Experiments: An Adaptive Approach to Development
Administration. Development and Underdevelopment Series. Methuen and Co Ltd.
4. Op. cit. (Rondinelli, 1983)
demand it. This position has repeatedly been questioned, more recently by Jones and
Hummelbrunner (2013). They emphasize that development is a highly complex and
contingent process that needs to be guided by principles, and is not reducible to simple
rules or menus.
2) Defining erroneous objectives and measures of success
“I don’t know where you want to go, but I’m willing to take you there.”5
Aid agencies, despite their claims, seldom adequately engage with either the beneficiaries or
the governments that permit their presence in the first place. This is more common than
one would expect or hope. Often the legitimacy of donor investments is defined with reference
to global and/or inward-looking, largely numeric, corporate objectives and targets.
Such objectives/targets represent complex issues, and these are based on assessments that
favour the values of those in charge rather than those in need. They, as Picciotto points out,
need to reflect a greater emphasis on beneficiary and borrower needs and objectives, rather
than privileging those of the funder, its administrative requirements and its corporate
objectives.6
There are salutary lessons to be drawn from allegedly “best in class” results-based systems,
such as those in the UK health system, that over-emphasise numeric targets and miss quality
of care:
“I have heard one UK minister argue a year or so ago that we had just seen the best ever
year for the health service, only to be destroyed by the anecdotal evidence of medical and
patient experience. The scary thing is that she couldn’t understand that there was a problem.
Just like the social work director who had achieved all her targets but was still the scapegoat
for an horrific case of child abuse.” (D. Snowden, 2009)
3) Mis-using and over-engineering M&E tools
“M&E as currently practiced is insufficient as a learning tool about complex development
projects.”7
Many current approaches to monitoring encourage the people responsible for it to give up their
critical faculties and simply focus on compliance. Some monitoring systems are audited;
passing brings a form of redemption, if you will.
Defining what needs monitoring cuts across the different types of results – from outputs,
through the outcome/purpose, up to the impact/overall objectives. Confusing and confounding
the performance of those implementing aid programmes (outputs) with that of those they are
supporting (outcomes) is commonplace.
These mistakes are often set out in frameworks of varying types and shapes – results/logical
frameworks, of late accompanied by theories of change and value for money
frameworks. I understand the intellectual reason for having both log/results frameworks and
theories of change: they are supposed to complement each other. In practice, however,
the process or method used in developing them is rarely inclusive or coherent, and one
adds little or no value to the other. Value for money frameworks, on the other hand, appear
rather contrived and run in parallel to the other two. Better, perhaps, just to define a set of value
for money questions any decent monitoring system should answer.
A recent study found that operational results chains are often the weakest element.8 In the
absence of a clear and validated theory that explains how and why an aid agency is pursuing
any given objective, the results matrix (metrics) is not meaningful, making it more difficult to
5. “Somewhere” by Chronixx, Perfect Key Riddim, DZL Records
6. Evaluator Anxiety in an Inequitable World. In Evaluation for an Equitable Society (2016)
7. It's All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space. Center for Global
Development. Working Paper 332, April 2013
8. Measuring Up: The Importance of Quality Results Frameworks in Country Strategies. Using results
frameworks to improve planning and to better identify potential dynamic synergies at country level. March 10,
2015. By Geeta Batra & Shoghik Hovhannisyan
assess: a) the quantity and relevance of the support being offered and how much it
costs; b) the assumptions or pre-conditions regarding the use and retention of the support,
before attempting to measure the outcome; and, of ultimate importance, c) the changes
stimulated by the support.
The process of determining what data should be collected, and why, often remains donor-
centred, with very little participation (if any) from the implementers, despite their understanding
of the reality on the ground gained through interactions with beneficiaries. It is driven by the
indicators defined by the donor, not by the questions defined by those responsible for
implementation, let alone the intended beneficiaries. So for many implementers, monitoring is
just another item to check off their long list of tasks for the donor, rather than a valuable
activity in its own right.
4) Measuring the wrong things
“There is a fundamental disconnect between the rhetoric about the need for learning in
development and the reality of the monitoring procedures that funding agencies require.” 9
Over the past 15 years, monitoring has been influenced by the results agenda, and for good
reasons: those responsible for implementation need to do more than simply describe what
they have done, what support they have made available and how much money they have
spent providing it. (Even this is rarely done well.) Numerous M&E guidelines have been
produced in this regard. Many have not learned the lessons from the over-ambitious,
project-based M&E efforts of the 1970s and 80s, which tried to assess changes in agronomic
and socio-economic variables within five years. Some approaches, for affect rather than effect,
even label themselves as M, E and Learning – a fatuous reminder that learning is an object of M&E.
Guidelines are typically concerned with “Results Based Monitoring” and “Monitoring and
Results Measurement”. They provide a simplistic form of menu or recipe to follow that purports
to help first define the different types of results in a chain, define indicators for them and then
measure and report them. Progress of a kind for some.
Management systems, and many of the managers who design and run them, limit their brief to
the traditional orthodoxy: set objectives and deliverables; link delivery to payment milestones;
take a look at risks that run in parallel to assumptions; audit compliance with process through,
for example, ISO; and let the M&E expert sort out the rest. Stacey and Snowden argue that
managing programmes in complex environments turns such orthodoxy on its head.
Managers typically delegate managing the quality and relevance of the support to so-called
M&E experts. But the work of such experts is seldom appropriate given the complex
environments in which aid projects operate. Management approaches need to fit and reflect
the context. And monitoring is often caught up in the blinkered practice of blindly reporting
indicators that are rarely adequate to capture overall results and that often ignore or pay lip
service to assumptions.
Alternative approaches need to emphasise learning about the present and adapting
accordingly. Putting a premium on a common culture and objective within a programme –
which is not necessarily the right place to go – does not create an atmosphere in which
assumptions are re-examined in light of implementation experience. Rather the reverse:
common cultures and objectives are obvious and potent pressures for conformity (Stacey 1996).
Good practice in monitoring means giving almost as much importance to assessing the
assumptions we make as to the results themselves. Measuring higher-level indicators offers
little insight compared with analysing the assumptions about how well, and to what extent,
those being supported rate and respond to the support. Moreover, early detection allows you
to adapt. In other words, it is the assumptions underlying the theory of change that we need to
monitor and learn about; the lessons of the past show the limits of just measuring pre-defined,
ex ante indicators and menus of universal indicators concocted by funders. By the time you
assess the values of such indicators, it is often too late to adapt.
The excessive reliance on pre-defined or externally defined indicators misses two basic points.
First, context matters: indicators need to be specific to a given decision situation and should
take account of uncertainty. Second, beneficiaries matter: indicators need to reflect their
values. The acid test of a good measure is simple: does it help improve performance? This
9. Seeking Surprise: Rethinking Monitoring for Collective Learning in Rural Resource Management. Published
PhD thesis, Wageningen University, The Netherlands. Irene Guijt, 2008
means addressing issues of scale, aggregation, critical limits, thresholds and so on. These are
situation dependent, and the operating environment is complex and constantly changing.
POTENTIAL SOLUTIONS
What those who manage implementation need is a system and people with skills to answer
questions that inform decisions on: a) what resources or inputs to allocate across the outputs;
b) how to institutionalize learning and create a legitimate space for mistakes and failure; and
c) how best to adapt implementation in the light of implementation evidence.
Any system that sells itself on serving the information needs of managers needs to
acknowledge the importance of the perspectives of, and reactions among, those they are paid
to serve. Is there any other way of assessing the quality and relevance of the support?
There is a need to create space and opportunities to listen to the clients of the programme
and/or the ultimate beneficiaries. It is the complex and continuing interactions between people,
and the emergent relationships they seek, that we need to learn about if monitoring is to help
improve, not just comment and report on, ‘performance’ and ‘results’.
Learning from all stakeholders about where they are coming from, the environments and
systems in which they live and on which they depend, and how organisations and
programmes can help them get to where they want to go, would make programmes more
accountable to those who matter: the beneficiaries. It is their story and their results, not the
donor’s. Thus they should be the ones who validate the presence and significance of change.
Managers need to reflect, at least once a year and in the light of unfolding events and
responses from those they support, on the appropriateness of the objectives of the
intervention, its underlying assumptions and the results it seeks. This reflection, and the
decisions it prompts, can and should be integral to monitoring, not held in abeyance for
evaluations.
FINAL THOUGHTS
“M&E” is not a profession. Although related and mutually dependent, M and E require different
skills and experiences. To say one person has all the requisite skills and experiences is to be
credulous and easily led. The brief and skills needed of a monitoring specialist are no different
from those one would expect of a good manager, a financial specialist and those responsible
for delivering the support, working together. Evaluation, by contrast, is a distinct discipline.
This said, monitoring judiciously combined with evaluation can help manage aid in complex
environments. This means: a) affording as much, if not more, importance to assessing
assumptions as we currently give to measuring results; b) affording primacy to listening to
people’s views on the quality and relevance of the support and observing how they respond to
it, as well as providing space for change agents to share their experiences; and c) making
more use of systems-thinking approaches, such as network analysis, that can help us better
understand how relationships affect, if not themselves constitute, outcomes.
In this context, the really challenging questions are less to do with how to do better monitoring.
They call for a better understanding grounded in the lessons of development experience
regarding the bureaucratic, institutional and political pressures that underlie many current
approaches to monitoring, and how to change them.10
Useful References
1. Rick Davies (2009). The Use of Social Network Analysis Tools in the Evaluation of
Social Change Communications. An input into the Background Conceptual Paper: An
Expanded M&E Framework for Social Change Communication.
2. H Jones and R Hummelbrunner (2013). A Guide for Planning and Strategy
Development in the Face of Complexity. ODI Background Note.
3. Andrews, M., L. Pritchett, and M. Woolcock (2010). ‘Capability Traps? The
Mechanisms of Persistent Implementation Failure’. Center for Global Development,
Working Paper 234.
4. Picciotto, Robert and Rachel Weaving (1994). “A New Project Cycle for the World
Bank?” Finance and Development, December 1994. Washington D.C.
10. Pers. comm., Simon Maxwell
5. D. Rondinelli (1983). Development Projects as Policy Experiments: An Adaptive
Approach to Development Administration. Development and Underdevelopment
Series. Methuen and Co Ltd
6. Lawrence F. Salmen (1992). Reducing Poverty: An Institutional Perspective Poverty
and Social Policy Series Paper No. 1
7. Lawrence F. Salmen (1995). Beneficiary Assessment: An Approach Described, Social
Development Paper Number 10. (Washington, D.C.: World Bank, July 1995).
8. D. Snowden (2009). The Occult Insignificance of Meaningless Numbers
(http://cognitive-edge.com/blog/the-occult-insignificance-of-meaningless-numbers)
9. R D Stacey (1996). Complexity and Creativity in Organizations, Berrett-Koehler, San
Francisco.
10. R D Stacey (1992). Managing The Unknowable: strategic boundaries between order
and chaos in organizations, Jossey-Bass, San Francisco.