Part 1 of a series on risk; see part 2 here
Risk means many different things to many different people.
When we talk about the risk involved in a certain development program, the first important question is: who is really bearing the risk?
In asset pricing theory, the only risks that matter are those that cannot be diversified away. If two potential investments have uncorrelated returns, an investor can hedge by holding both in a portfolio. In terms of development programs, risk can be mitigated by diversification when the success of different programs is not correlated. Since the success of different programs is indeed largely uncorrelated, as they are dispersed in space and in their targeted outcomes, an individual donor could diversify away the risk of not achieving his or her desired outcomes.
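A quick simulation can make the diversification logic concrete. This is a minimal sketch with made-up numbers, not real program data: it treats each program's outcome as an uncorrelated draw with the same mean and standard deviation, and shows that the spread of a donor's average outcome shrinks as roughly sd/sqrt(n) as the portfolio grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: each program's outcome is an independent,
# uncorrelated draw (illustrative numbers, not from any real evaluation).
mean, sd = 1.0, 2.0
n_simulations = 100_000

for n_programs in (1, 4, 16):
    # Average outcome across a portfolio of n uncorrelated programs.
    outcomes = rng.normal(mean, sd, size=(n_simulations, n_programs))
    portfolio = outcomes.mean(axis=1)
    # Theory predicts the sd of the average falls as sd / sqrt(n).
    print(f"n={n_programs:2d}  simulated sd={portfolio.std():.3f}  "
          f"theoretical sd={sd / np.sqrt(n_programs):.3f}")
```

With sixteen uncorrelated programs, the donor's risk (the standard deviation of the average outcome) is a quarter of the single-program risk, which is exactly why diversification works for the donor.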
However, in the way we normally think about development, the donors are not part of the goal. It can be acceptable for an individual donor to have funded an unsuccessful program so long as the funded aid programs are successful in aggregate (though again this depends on other factors, such as the extent to which unsuccessful programs harm intended beneficiaries or others).
In other words, donors can hedge their own risks by diversifying the portfolio of programs they donate to, but this is irrelevant to the beneficiaries, who do not receive a diversified portfolio of programs. When we think about risk, we should instead pay attention to whether individual programs themselves are risky.
Yet whether an individual program run by one particular NGO in one particular place at one particular time is risky is precisely the thing on which we have the least information. Even if we were so lucky as to have an impact evaluation of the program so that we knew its past success, and even if we were so lucky as to have that impact evaluation capture the estimated effects on different parts of the population rather than just the mean effect, this still wouldn’t tell us how risky that program was. You can’t quantify the dispersion of the estimates of the mean, for example, when you only have one mean.
We have to resort to analysis of a more general type of program. When you repeat a program, you are not repeating exactly the same program – if nothing else, the intended beneficiaries have changed, having benefitted or been harmed by the previous program. Still, it seems reasonable to use past data as evidence on which to base future policy (in a Bayesian approach, it should still cause us to update our priors, even if it is not fully predictive). And at least when you observe multiple instances of the same program being rolled out, you can measure the variance across programs, or its square root, the standard deviation, a more typical measure.
This is another benefit of meta-analysis: you get to see how much results vary and in which contexts, and to calculate the coefficient of variation within interventions. Some types of programs do systematically vary more in their results than others. I have a working paper on this and am happy to discuss further.
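The coefficient of variation mentioned above is just the standard deviation of effect estimates divided by their mean, so it can be computed directly from a set of evaluation results. The sketch below uses invented effect sizes for two hypothetical intervention types (not drawn from my paper or any real meta-analysis) to show how the CV separates an intervention with consistent results from one with the same average effect but far more dispersion.

```python
import numpy as np

# Hypothetical effect estimates from repeated evaluations of two
# intervention types (illustrative numbers only).
effects = {
    "cash transfers": np.array([0.12, 0.15, 0.11, 0.14, 0.13]),
    "training": np.array([0.02, 0.30, -0.05, 0.25, 0.08]),
}

for name, x in effects.items():
    # Coefficient of variation: sample sd divided by the mean effect.
    cv = x.std(ddof=1) / x.mean()
    print(f"{name}: mean={x.mean():.3f}, sd={x.std(ddof=1):.3f}, CV={cv:.2f}")
```

Both hypothetical interventions average an effect of about 0.12–0.13, but the second has a CV roughly ten times larger: same expected benefit, much riskier bet for any single rollout.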