
How much do policymakers and policy practitioners weigh context and local expertise when making decisions?

One of my papers with Aidan Coville of the World Bank and Sampada KC, a former pre-doctoral student, is now out at the Journal of Development Economics.

In this piece (“Local knowledge, formal evidence, and policy decisions”), we conduct discrete choice experiments with both policymakers (staff at various LMIC government line agencies, largely those with decision-making power over programs or monitoring and evaluation specialists) and policy practitioners (World Bank and Inter-American Development Bank staff). We painstakingly collected these data at World Bank and IDB impact evaluation workshops. The benefit of collecting data at these workshops is that the sampling frame was demonstrably interested in evidence-based policy, and we obtained relatively high response rates (82% overall, and 91% at the World Bank workshops).

Participants were asked which of two hypothetical programs they would prefer to increase enrollment rates. The programs had impact evaluation results attached to them and varied by the following attributes:

Estimated impact (0, +5, +10 pp)

Study design (observational, quasi-experimental, RCT)

Study location (same country, same region, other)

Precision (±1 pp vs. ±10 pp confidence interval)

Local expert recommendation (recommended by a local expert or not)

One difference between this study and earlier work is that by asking participants which programs they would prefer (rather than just which studies they would find interesting), we can estimate participants’ willingness-to-pay for different kinds of supporting evidence, measured in how much estimated impact they would be willing to give up. As described in the paper, program impacts are a nice yardstick for assessing tradeoffs: like public budgets, they represent social costs that depend on the policymaker’s choices.
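
To make the mechanics concrete, here is a minimal sketch (not our actual estimation code) of how such willingness-to-pay figures can be backed out of a two-alternative choice experiment: with two options per task, the conditional logit reduces to a binary logit on attribute differences, and WTP for an attribute is the ratio of its coefficient to the coefficient on estimated impact. All attribute values and “true” preference parameters below are invented for illustration.

```python
# Minimal sketch: recovering willingness-to-pay (in pp of estimated
# impact) from a two-alternative discrete choice experiment.
# All attribute values and preference parameters are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000  # number of simulated choice tasks

# Differences in attributes between program A and program B
d_impact = rng.choice([-10, -5, 0, 5, 10], size=n).astype(float)  # pp
d_expert = rng.choice([-1, 0, 1], size=n).astype(float)  # local expert rec.
d_local = rng.choice([-1, 0, 1], size=n).astype(float)   # same-country study

# Hypothetical preferences: expert advice worth 1.0 / 0.2 = 5 pp of impact
v = 0.2 * d_impact + 1.0 * d_expert + 1.0 * d_local
choose_a = (rng.random(n) < 1 / (1 + np.exp(-v))).astype(int)

# With two alternatives, the conditional logit is a binary logit on
# attribute differences (no constant needed for an unlabeled design)
X = np.column_stack([d_impact, d_expert, d_local])
res = sm.Logit(choose_a, X).fit(disp=False)
b_impact, b_expert, b_local = res.params

# WTP = pp of estimated impact traded away for the attribute
print(f"WTP for expert recommendation: {b_expert / b_impact:.1f} pp")
print(f"WTP for same-country evidence: {b_local / b_impact:.1f} pp")
```

The ratio-of-coefficients trick is standard in the discrete choice literature; the paper’s actual specification differs in its details.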

We find that local expert advice matters so much (5 pp on average) that it can often outweigh estimated differences in program impacts. Policymakers show a similarly strong preference for programs with an impact evaluation from their own country. Clearly, there is a strong desire for locally relevant evidence.

The bottom line: If researchers want their studies to have maximal impact, they should run them as close to the target setting as possible and stay in close communication with local experts.

Side perk: While we didn’t set up the study intending to focus on how much policymakers and policy practitioners care about statistical significance, we could crudely capture this with the data we gathered. When we generate an indicator for whether a program’s estimated effect was statistically significant and include it in the regressions, it seems to absorb most of the preference for large effect sizes and small confidence intervals. To our knowledge, this is the first study to consider such preferences.
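
For intuition, here is a minimal sketch of how such an indicator can be constructed from the experiment’s attributes, under the assumption that “statistically significant” means the reported confidence interval excludes zero (how the boundary case of a CI touching zero is treated is a judgment call):

```python
# Minimal sketch: building a statistical-significance indicator from
# the experiment's impact and precision attributes. Assumption: an
# effect is "significant" when the CI (impact ± half-width) excludes 0.
import pandas as pd

programs = pd.DataFrame({
    "impact_pp": [0, 5, 10, 0, 5, 10],         # estimated impact
    "ci_halfwidth_pp": [1, 1, 1, 10, 10, 10],  # CI half-width
})

programs["significant"] = (
    programs["impact_pp"] - programs["ci_halfwidth_pp"] > 0
).astype(int)

print(programs)
# The indicator can then enter the choice regressions alongside the
# raw attributes, where it absorbs much of the preference for large
# effects with tight confidence intervals.
```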


Arrogance, brittleness, and start-up culture

I’m a fan of start-ups.

I like the energy, I like the enthusiasm, I like the idea of doing something new, moving fast and breaking things, hopefully getting something right in the end, and letting the market determine which products win.

But there is something I’ve noticed across several domains, perhaps most notably in start-ups and politics, though not restricted to them: a high degree of arrogance among people at the top.

Maybe people don’t start out arrogant. Maybe, along the way, power corrupts. Maybe, along the way, people get surrounded by yes-men and sycophants, and no one tells them how dumb some of their ideas are.

Ironically, they may end up divorced from market forces, caught up in their own reality, until forced to notice a major mistake.

Or maybe arrogance is selected for. Maybe you have to be pretty arrogant to begin with, to pursue paths that are very unlikely to lead to conventional success. Maybe if you then attain that success you attribute it to some quality of yourself rather than to chance.

I see this in tech sometimes (much as I love tech). I see this in politics. I see this in various online and offline communities.

And it worries me, because arrogance leads to brittleness. Maybe there is some wisdom to the old saying “pride comes before a fall”. If you are arrogant, you’re not necessarily able to Bayesian update. You’re not necessarily going to put enough weight on things that can go wrong. And if there is also deference to power, for whatever reason, and your bad ideas go unchecked, then you will really be in trouble.

For many kinds of problems, democracy, norms, and rules serve to hold this in check.

And it’s very popular among some start-up crowds to say: yes, but these things are inefficient. Constraints hold back strong leaders. Rules are too onerous. You can’t produce anything good under stifling conditions. But suppose the US took a more authoritarian turn: it might become temporarily more efficient, but it would likely become more brittle, too.

Too often people go all-in on efficiency. They think it’s better to be unconstrained because, after all, they are doing Very Important Work, so constraints are especially costly. They think it’s fine if something breaks because then they can just stop and do something else.

It’s a pretty brittle and risky mindset, and it’s unlikely to pay off in the long run, especially when you add in people’s natural tendencies to overshoot and miss the mark, to take on too many risks, to concentrate those risks, and to not even notice risks. In theory, on paper, going all-in on efficiency may look like the best option, but in real life people are subject to too many biases and receive too little feedback, and so are ultimately unlikely to be able to evaluate the risks.

Some constraints are good, actually, given humans’ predispositions. Not too many, but some.


On encouragement

Sometimes I’ve seen people be encouraged to pursue a risky career strategy. And I think people should pay more attention to who is giving the advice, how it benefits them, whether they are in a position to know the odds, and what the downside risks to the encourager are if their advice goes wrong. Because I’ve seen many cases where someone was encouraged, and encouraged, and it was almost irresponsible to encourage them so much against high odds of failure. (I am referring here to the type of encouragement that is optimistic, rather than the type that acknowledges something is unlikely but argues it is worth it anyway.)

From a societal standpoint, it often makes sense to prod people into taking risks, for example if people are too personally risk-averse. But the incentives are often not aligned. I will give a few examples, though I’ll avoid some of the particularly devastating ones I’ve observed to preserve the anonymity of the affected parties.

For one, consider an academic advisor to a PhD student. The advisor’s incentive is likely to push the student toward the best academic job they can get. A lot of people are sensitive to the needs of the student, but not everyone is. Suppose an advisor thinks the student could get a non-academic job by going on the job market early, but has an outside chance at an academic job by going on the market later. The student might be encouraged to stay in the program longer in the hopes of getting an academic job, yet could easily end up without one even after staying longer. If the student knows the risks and can make the decision, that’s fine. But the advisor can only benefit, in a self-interested sense, from playing up the odds of success. If the student fails, it’s no skin off the advisor’s nose (again, setting aside altruistic preferences).

And consider the flip side: if you’re asked to give advice, and you give a more negative assessment than the recipient has in mind, not only might your advice be rejected, but now the recipient might also dislike you. Even without any other outside incentive to be overoptimistic, this gives an advice-giver the incentive to shade their assessments up. If the person succeeds, they will thank you for it, and if they don’t succeed, they probably won’t hold your advice against you.

Further, being overoptimistic can sometimes be helpful in getting the necessary work done, because it can be motivating. Being realistic can be discouraging. So it is not only individuals who might be overly optimistic in their advice. Sometimes there can be institutional pressures towards optimism (e.g., competitions that want to encourage many high-quality applications though the odds of success are low).

I’m not saying we should always tell people our unfiltered assessment. People sometimes don’t want to hear that. And I’m sympathetic to the view that optimism can be helpful at the societal level. But I haven’t described the most egregious cases I’ve observed. And I do think some people are overly trusting of encouragement and don’t think through the incentives of those giving the encouragement. For example, one should be careful about accepting bad jobs in the hopes they might, with dubious odds, lead to a better job someday. And people are naturally attracted to hearing good news about themselves. It’s easy for people to update on the good more than the bad (e.g., Eil and Rao 2011). Be careful out there and always consider who is giving the encouragement and their (perhaps subconscious) incentives.

