Development economics has relied increasingly on randomized controlled trials (RCTs), championed by researchers at J-PAL, IPA, CEGA, and elsewhere. The approach has its discontents, however, who worry that a lot of money is going into evaluations of limited practical value, since their results may lack external validity.
I was worried that my paper “How Much Can We Generalize from Impact Evaluations?”, which draws on a unique data set of roughly 600 impact evaluations across 20 different types of development programs, would stoke the flames and draw criticism from both sides. It didn’t, because we’re all economists and care a lot about data. At the end of the day, to what extent results generalize, and when, is an empirical question. I think this “war” is poorly named, because we can all agree that it is critically important to look carefully at the data.
I am very heartened by initiatives like the Berkeley Initiative for Transparency in the Social Sciences (BITSS), which emphasize getting to the right answer, not just getting an answer. I suspect that meta-analysis will continue to grow in use in economics, and that the question “how much do results generalize?” will continue to be examined empirically.
For my part, I intend for AidGrade’s data to be continually updated and publicly available, and to continue to allow people to conduct their own meta-analyses instantly online by selecting the papers they wish to include (more filters to be added). I will be applying for grants to develop online training modules to help crowdsource the data (moving to a wiki style), enabling the data set to keep growing in perpetuity and to become more and more useful as more studies are completed.
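To give a sense of what pooling results across studies involves, here is a minimal illustrative sketch of a fixed-effect meta-analysis using inverse-variance weighting. This is not AidGrade's actual code, and the effect estimates and standard errors below are hypothetical; it simply shows the basic arithmetic behind combining estimates from several evaluations of the same type of program.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Pool study estimates with inverse-variance weights.

    Each study is weighted by 1/SE^2, so more precise studies
    contribute more to the pooled estimate.
    """
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes and standard errors from three studies
effects = [0.20, 0.05, 0.12]
std_errors = [0.10, 0.08, 0.06]

pooled, se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {pooled:.3f} (SE = {se:.3f})")
```

A fixed-effect model assumes one true underlying effect, which is exactly the assumption at stake in the generalizability debate; a random-effects model, which allows the true effect to vary across settings, is often the more appropriate choice in this context.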
We are in a new era. If I can borrow Chris Blattman’s conceptualization of “impact evaluations 1.0” (just run it) vs. “impact evaluations 2.0” (run it with an emphasis on mechanisms), I’d suggest a slightly modified “impact evaluations 3.0”: run it, with an emphasis on mechanisms, but then synthesize your results with those from other studies to build something greater than the sum of its parts.