One of my papers with Aidan Coville of the World Bank and Sampada KC, a former pre-doctoral student, is now out at the Journal of Development Economics.
In this piece (“Local knowledge, formal evidence, and policy decisions”), we conduct discrete choice experiments with both policymakers (staff at various LMIC government line agencies, largely people with decision-making power over programs or monitoring and evaluation specialists) and policy practitioners (World Bank and Inter-American Development Bank staff). We painstakingly collected these data at World Bank and IDB impact evaluation workshops. The benefit of collecting data at these workshops is that the sampling frame was demonstrably interested in evidence-based policy, and response rates were relatively high (82% overall, and 91% at the World Bank workshops).
Participants were asked which of two hypothetical programs they would prefer in order to increase enrollment rates. Each program came with impact evaluation results attached, and the programs varied along the following attributes (a toy sketch of the resulting profile space follows the list):
Estimated impact (0, +5, +10 pp)
Study design (observational, quasi-experimental, RCT)
Study location (same country, same region, other)
Precision (±1 pp vs. ±10 pp confidence interval)
Local expert recommendation (recommended by a local expert or not)
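For concreteness, here is a minimal sketch of how the space of hypothetical program profiles implied by these attributes could be enumerated. The attribute names and level labels below are our own illustrative choices, not the wording of the actual survey instrument.

```python
# Illustrative sketch only: enumerate candidate program profiles from the
# attribute levels listed above. Names and labels are assumptions.
from itertools import product

attributes = {
    "impact_pp": [0, 5, 10],                                # estimated impact, percentage points
    "design": ["observational", "quasi-experimental", "RCT"],
    "location": ["same country", "same region", "other"],
    "ci_halfwidth_pp": [1, 10],                             # precision of the estimate
    "local_expert_rec": [True, False],                      # recommended by a local expert?
}

# Full factorial of profiles; a real choice experiment would typically use a
# reduced (e.g., D-efficient) design and pair profiles into choice sets.
profiles = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(len(profiles))  # 3 * 3 * 3 * 2 * 2 = 108 candidate profiles
```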
One difference between this study and earlier work is that, by asking participants which programs they would prefer (rather than just which studies they would be interested in), we can estimate participants’ willingness to pay for programs being supported by different kinds of evidence, expressed as how much estimated impact they would be willing to give up. As described in the paper, program impacts are a nice yardstick for assessing these tradeoffs: like public budgets, they represent social costs that depend on the policymaker’s choices.
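For readers who want the mechanics, this is the standard way such willingness-to-pay measures are backed out of a conditional-logit choice model; the notation here is ours, and the paper’s exact specification may differ.

```latex
% Utility of program j for respondent i, with Impact measured in percentage
% points and x_{ijk} the other evidence attributes (design, location,
% precision, local expert advice):
U_{ij} = \beta_{\text{impact}}\,\mathrm{Impact}_{ij} + \sum_k \beta_k x_{ijk} + \varepsilon_{ij}

% Willingness to pay for attribute k, in percentage points of estimated impact:
\mathrm{WTP}_k = \frac{\beta_k}{\beta_{\text{impact}}}
```

In words, a respondent is indifferent between a program gaining attribute k and a program gaining WTP_k percentage points of estimated impact, which is what lets an attribute like local expert advice be quoted in impact terms.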
We find that local expert advice matters so much (about 5 pp of estimated impact on average) that it can often outweigh estimated differences in program impacts. Policymakers show a similarly strong preference for programs backed by an impact evaluation from their own country. Clearly, there is a strong desire for locally relevant evidence.
The bottom line: if researchers want their studies to have the maximum impact, they should try to run the study in a setting as close to the target context as possible and stay in close communication with local experts.
Side perk: While we didn’t set up the study intending to focus on how much policymakers and policy practitioners care about statistical significance, the data we gathered let us capture this crudely. When we generate an indicator for whether a program’s estimated effect was statistically significant and include it in the regressions, it seems to drive most of the preference for results with large effect sizes and small confidence intervals. To our knowledge, this is the first study to consider such preferences.
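As a rough illustration of the kind of indicator we mean (our own sketch, not the paper’s replication code): a result is coded as significant when its confidence interval excludes zero, which with these attributes reduces to comparing the estimate to the interval’s half-width.

```python
# Illustrative sketch: code a result as statistically significant when its
# confidence interval excludes zero (|estimate| > half-width of the CI).
import pandas as pd

# Hypothetical choice-level data using the attribute names from the sketch above.
df = pd.DataFrame({
    "impact_pp": [0, 5, 10, 5],
    "ci_halfwidth_pp": [1, 10, 10, 1],
})

df["significant"] = df["impact_pp"].abs() > df["ci_halfwidth_pp"]

# This indicator can then be added as a regressor alongside the raw impact and
# precision attributes to see how much of the preference for large, precisely
# estimated effects it absorbs.
print(df)
```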