The world today faces a multitude of intractable development problems. How can girls stay in school longer? If children stay in school, how do we ensure they are learning? How can governments expand the number of people using toilets or mosquito nets? Then there are the larger questions – how do countries get rich? How do they achieve a structural transformation from farm-based livelihoods to an economy led by manufacturing and services? What behavioural incentives will it take for effective action to counter the climate crisis? These questions, and several others of this nature, are at the core of the work we do in development. To find answers to these questions and to do what we do better, decision-makers are supposed to turn to ‘evidence’ – evidence generated from research and evaluation. However, we also know that the use of evidence in decision-making remains limited.
A recent report by the Center for Global Development (CGD), “Breakthrough to Policy Use: Reinvigorating Impact Evaluation for Global Development”, offers five excellent recommendations “on ‘what and how’ to fund to deliver on the promise of impact evaluations and bolster the broader evidence ecosystem”.
Read the full report here.
These recommendations are an excellent guide for the evaluation community (donors and those generating evidence) on how to improve its effectiveness. But why are decision-makers not using evidence as much (or as often) as we would like in the first place? There are several examples where decisions and policies have been shaped by evidence; but all around us there are many more instances where decisions are made in a near-total absence of evidence.
What’s limiting the use of evidence in decisions?

If one places the decision-maker at the centre, the reasons can be summarised as follows.

First, reliable and timely evidence may simply not be available. It remains common practice in our sector for reviews and evaluations to be commissioned only at the conclusion of an intervention. At other times, an experimental evaluation commissioned to inform decision-makers does not yield results until long after the programme has ended. Recognising this challenge, evaluators now seek to work in partnership with decision-makers (and implementers) so that there is concurrent learning and feedback.

Second, the available knowledge and evidence may not be applicable across settings: evidence generated in one context may simply not transfer to the context where it is sought to be applied. For instance, evidence on the performance of a decentralised water supply system in Ghana is simply not applicable in Somalia. The change in context fundamentally affects the prospects of an intervention, even one that is essentially technical in nature.

Third, even with good intentions, decision-makers equipped with evidence may not have the ability to apply it in designing solutions. For instance, evidence may suggest that regular monitoring of schools by the district-level education officer improves the quality of education delivered in government schools. But if those officers do not have the resources to fuel their vehicles to reach those schools, the availability of such evidence is of very little value to them. In this example, I framed ‘ability’ in terms of physical resources, but one can think of it in other terms as well.
Finally, policymakers face real-life constraints that limit their authority in ways that simulated experimental settings do not. Interests and incentives may not align horizontally or vertically, and even senior decision-makers committed to using knowledge systems to attain the best possible outcomes can find themselves thwarted.
Addressing these constraints is by no means easy.
First and foremost, decisions and decision-makers need to drive this movement. Decision-makers are not passive recipients of evidence. They are not even just consumers of evidence – they are the primary clients in this ecosystem. Recognising the primacy of decision-makers will bring about a transformation in how evidence is generated and disseminated.
Achieving a breakthrough in encouraging the use of evidence in decisions rests on successfully easing all four of these constraints – Availability, Applicability, Ability and Authority – easing each one on its own is necessary but not sufficient. Taking a problem-driven approach to each of these constraints will help unpack what is really holding back the use of evidence in any given scenario.
Conceptually, this is not too different from the set of approaches that have come to sit under the ‘Problem-Driven Iterative Adaptation’ (PDIA) umbrella – a steadfast focus on the problem is necessary to ensure that we direct our collective energies towards pursuing the right outcome.
The recommendations in the CGD report are instructive in this regard too. Technical fixes alone will not make a dent here; neither will arms-length engagement. Building trust through long-term partnerships is key.
Finally, decision-makers in government – whether responsible for framing policies, designing programmes, or for implementing guidelines issued from the top – all function under sets of constraints unique to them, determined by their position and the power that it affords them. They are part of a larger networked structure that we need to learn to understand. Being empathetic always helps.