A key reason we do impact evaluations is to inform policy decisions. It is absolutely crucial to build up an evidence base. However, in a paper combining AidGrade data with experimental data, I argue that we also shouldn’t neglect the decision-making process itself: improvements in how decisions are made can sometimes dominate the returns from conducting an impact evaluation.
This perhaps sounds crazy, and I’m not at all suggesting we abandon impact evaluations. They can be important tools. In the paper, I build a model of the returns to impact evaluation (in terms of improved policy outcomes) assuming policymakers are Bayesian updaters and have only altruistic motives (caring only about the impact of the project on intended beneficiaries). I then gather the real priors of policymakers, practitioners, researchers, and a comparison group of MTurk workers and use these priors to estimate the returns to impact evaluations. Since most projects have fairly small impacts, the typical return to an impact evaluation is also very small.*
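To make the logic concrete, here is a minimal Monte Carlo sketch of that kind of calculation. It is not the paper’s actual model or data: it assumes a single Bayesian policymaker with a normal prior over a programme’s effect, a normally distributed evaluation estimate, and a simple fund/don’t-fund decision, with all parameter values made up for illustration.

```python
# A minimal sketch (not the paper's model or data): a Bayesian policymaker with a
# normal prior over a programme's effect decides whether to fund it; we compare
# expected welfare when deciding on the prior alone vs. after a noisy evaluation.
import numpy as np

rng = np.random.default_rng(0)

prior_mean, prior_sd = 0.05, 0.10   # hypothetical prior over the effect (welfare units)
signal_sd = 0.08                    # hypothetical noise in the evaluation's estimate
cost = 0.0                          # fund the programme if the expected effect exceeds this
n_sims = 200_000

# Draw "true" effects from the prior, then simulate an evaluation estimate for each.
true_effect = rng.normal(prior_mean, prior_sd, n_sims)
estimate = true_effect + rng.normal(0.0, signal_sd, n_sims)

# Normal-normal updating: the posterior mean is a precision-weighted average.
w = prior_sd**2 / (prior_sd**2 + signal_sd**2)
posterior_mean = prior_mean + w * (estimate - prior_mean)

# Decision without the evaluation uses the prior mean; with it, the posterior mean.
fund_prior = prior_mean > cost
fund_posterior = posterior_mean > cost

welfare_prior = np.where(fund_prior, true_effect - cost, 0.0).mean()
welfare_posterior = np.where(fund_posterior, true_effect - cost, 0.0).mean()

print(f"Expected welfare, deciding on the prior alone: {welfare_prior:.4f}")
print(f"Expected welfare, after the evaluation:        {welfare_posterior:.4f}")
print(f"Return to the impact evaluation:               {welfare_posterior - welfare_prior:.4f}")
```

Because the prior here already favours funding, the evaluation only changes the decision when the estimate comes back strongly negative, so its expected return is modest. That is the flavour of the result, though the paper estimates it from elicited priors rather than assumed ones.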
Thanks to the great advice of a referee, I also looked at different ways a decision could be made, comparing a single policymaker making the decision as a dictator with a group using majority voting. There is a large literature on the wisdom of the crowds, and there has also been some work suggesting that people’s priors can outperform meta-analysis results. There are many other ways in which decisions could be made, but even without considering more complicated decision-making rules, it is already apparent that changing the way in which decisions are made can sometimes be more valuable than conducting an impact evaluation. Of course, this depends on the quality of the decision-makers; for the relatively poorly-informed MTurk subjects, I observed something like a “folly” of the crowds when I considered how they would behave when faced with a particularly noisy signal of a program’s effects.
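The decision-rule comparison can be sketched the same way. The snippet below is illustrative only (it is not the paper’s design or its elicited priors): it gives each of a handful of decision-makers a noisy prior around the true effect, lets everyone update on the same noisy evaluation estimate, and compares how often a single “dictator” versus a majority vote makes the call that the true effect actually warrants.

```python
# An illustrative sketch (not the paper's design): a single "dictator" vs. majority
# voting among a small group, where each person holds a noisy prior and everyone
# sees the same noisy evaluation estimate. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

n_sims, n_voters = 100_000, 5
cost = 0.0
prior_sd = 0.10          # hypothetical uncertainty each person attaches to their prior
prior_dispersion = 0.10  # hypothetical spread of prior means across people
signal_sd = 0.15         # a fairly noisy evaluation signal

true_effect = rng.normal(0.05, 0.10, (n_sims, 1))
prior_means = true_effect + rng.normal(0.0, prior_dispersion, (n_sims, n_voters))
estimate = true_effect + rng.normal(0.0, signal_sd, (n_sims, 1))

# Each person does a normal-normal update on the shared estimate, then votes.
w = prior_sd**2 / (prior_sd**2 + signal_sd**2)
posterior_means = prior_means + w * (estimate - prior_means)
votes = posterior_means > cost

correct = (true_effect > cost).ravel()
dictator_right = (votes[:, 0] == correct).mean()
majority_right = ((votes.sum(axis=1) > n_voters / 2) == correct).mean()

print(f"Dictator makes the right call in {dictator_right:.1%} of simulations")
print(f"Majority vote is right in        {majority_right:.1%} of simulations")
```

With independent errors across people, the majority tends to do better, which is the wisdom-of-crowds intuition; how it plays out in practice depends on the quality of the decision-makers’ priors, as the MTurk comparison suggests.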
In another paper, joint with Aidan Coville, I focus on how policymakers update and find the situation may actually be worse, because policymakers (and practitioners and researchers – no one should feel superior here!) do not update as Bayesians but are subject to several behavioural biases.
In summary, we talk a lot about “evidence-based” decisions, but making an evidence-based decision takes a lot more than just evidence. There remains a lot of low-hanging fruit in this research area.
*I argue that an impact evaluation is most useful for highly uncertain but potentially highly effective projects, a straightforward and well-known result of the normal learning model.
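A tiny sketch of that footnote’s point, under the same hypothetical normal-learning setup as above, with the prior mean placed exactly at the funding threshold so that deciding on the prior alone gains nothing in expectation:

```python
# Sketch (hypothetical parameters): in the normal learning model, the expected
# return to an evaluation rises with how uncertain the prior is.
import numpy as np

rng = np.random.default_rng(2)
signal_sd, cost, n_sims = 0.08, 0.0, 200_000

for prior_sd in (0.02, 0.05, 0.10, 0.20):
    true_effect = rng.normal(0.0, prior_sd, n_sims)   # prior mean sits at the threshold
    estimate = true_effect + rng.normal(0.0, signal_sd, n_sims)
    w = prior_sd**2 / (prior_sd**2 + signal_sd**2)
    posterior_mean = w * estimate                     # normal-normal update, prior mean 0
    baseline = 0.0  # with the prior mean at the threshold, the prior-only decision gains nothing
    gain = np.where(posterior_mean > cost, true_effect, 0.0).mean() - baseline
    print(f"prior sd {prior_sd:.2f}: expected return to the evaluation ~ {gain:.4f}")
```

The expected return grows with the prior standard deviation: the less is known in advance, the more a noisy estimate can usefully move the decision.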