
Incentivizing researchers to add their results

The biggest barrier to maintaining a constantly updated, comprehensive data set of different studies’ results (necessary for meta-analysis) is getting all those data.

Do you know anyone who could help build an app that encourages researchers to “See how your results compare” — so that if they enter in their data, they get some nice graphics about where their results fall in the distribution of all studies done to that point, perhaps disaggregated by region or other study characteristics?

Let’s leverage researchers’ curiosity about their own data. Navel-gazing for the public good.
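To make the idea a bit more concrete, here is a minimal sketch of what the core of such an app might do, written in Python with a hypothetical file name and column names (effect_sizes.csv, effect_size, region): take a researcher’s newly entered effect size and show where it falls in the distribution of results collected so far.

```python
# A minimal sketch of the "see how your results compare" idea.
# The file and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data set: one row per study, with a standardized effect size
# and a region tag for optional disaggregation.
studies = pd.read_csv("effect_sizes.csv")  # columns: effect_size, region

my_effect = 0.15  # the researcher's newly entered result (standardized)

# Percentile of the new result within all prior studies.
percentile = (studies["effect_size"] < my_effect).mean() * 100

fig, ax = plt.subplots()
ax.hist(studies["effect_size"], bins=30, color="lightgray", edgecolor="white")
ax.axvline(my_effect, color="crimson", linewidth=2,
           label=f"Your result ({percentile:.0f}th percentile)")
ax.set_xlabel("Standardized effect size")
ax.set_ylabel("Number of studies")
ax.legend()
plt.show()
```

Disaggregating by region or other study characteristics would simply mean filtering or grouping on the hypothetical region column before plotting.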


Ending the war

Development economics has relied increasingly on randomized controlled trials (RCTs), championed by the folks at J-PAL, IPA, CEGA, and many others. On the other hand, the strategy has its discontents, who fear that a lot of money is going into evaluations of limited practical value, since impact evaluations may not have much external validity.

I was worried that my paper “How Much Can We Generalize from Impact Evaluations?”, which draws upon a unique data set of roughly 600 impact evaluations across 20 different types of development programs, would stoke the flames and end up criticized by both sides. It didn’t, because we’re all economists and care a lot about data. At the end of the day, to what extent results generalize, and when, is an empirical question. I think this “war” is poorly named, because we can all agree that it is critically important to look carefully at the data.

I am very heartened by initiatives like the Berkeley Initiative for Transparency in the Social Sciences (BITSS), which emphasize getting to the right answer, not just getting an answer. I suspect that meta-analysis will continue to grow in use in economics, and that the question “how much do results generalize?” will continue to be put to the test.

For my part, I intend for AidGrade’s data to be constantly updated and publicly available, and to continue to allow people to conduct their own meta-analyses instantly online, by selecting papers they wish to include (more filters to be added). I will be applying for grants to develop online training modules to help crowdsource the data (moving to a Wiki style), which will enable this to keep going and expanding in perpetuity, becoming more and more useful as more studies are completed.
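For anyone curious what running your own meta-analysis over a selected set of papers boils down to mechanically, here is a generic inverse-variance (fixed-effect) pooling sketch in Python. It is illustrative only: the fixed_effect_meta helper, the effect estimates, and the standard errors are made up, and this is not AidGrade’s actual code.

```python
# Generic inverse-variance (fixed-effect) meta-analysis over a user-selected
# subset of studies. Illustrative only; all inputs below are made up.
import numpy as np

def fixed_effect_meta(effects, std_errors):
    """Pool study estimates using inverse-variance weights."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(std_errors, dtype=float) ** 2
    weights = 1.0 / variances
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Example: a user "selects" three papers reporting standardized effects.
selected_effects = [0.10, 0.25, 0.05]
selected_ses = [0.04, 0.08, 0.06]

estimate, se = fixed_effect_meta(selected_effects, selected_ses)
print(f"Pooled effect: {estimate:.3f} (SE {se:.3f})")
```

A random-effects version would add an estimate of the between-study variance to each study’s variance before weighting, which matters precisely when results do not generalize cleanly across contexts.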

We are in a new era. If I can borrow Chris Blattman’s conceptualization of “impact evaluations 1.0” (just run it) vs. “impact evaluations 2.0” (run it with an emphasis on mechanisms), I’d suggest a slightly modified “impact evaluations 3.0”: run it, with an emphasis on mechanisms, but then synthesize your results with those from other studies to build something bigger than the sum of its parts.


Silence

Apologies for the radio silence. Job market and all.


Paths to development and “The Second Machine Age”

Was fortunate enough to attend an event with Erik Brynjolfsson and Andrew McAfee yesterday discussing “The Second Machine Age” (coming out Jan. 20).

It’s in the same vein as Tyler Cowen’s “Average is Over”, discussing how technological advances might reshape society, with emphasis on “might”, as the authors see room for current decisions to affect future outcomes.

We’ve heard a lot about how, as machines get smarter and take over more cognitive jobs, the big question becomes what displaced workers will do. As much as this will affect workers in America, the question is frankly even more relevant for developing countries, which often compete on the basis of low-cost labour. What will happen when that path to development is cut off? While a small fraction of workers in the U.S. can be expected to benefit extraordinarily from the new system, we can expect even fewer in developing countries to reap such rewards.

Highly important topic for anyone interested in development.


“Slacktivism” appears to be a real phenomenon

Struck by several things in Mario Macis’ presentation today at ASSA. He reported results from an experiment in which half of participants were randomly given the option to broadcast their donation activity to friends on Facebook using an app, “HelpAttack!”.

First item of note: out of more than 6 million Facebook users reached through advertising, there were 2,000 likes and only 18 processed pledges. How sad!

Second: there were 0 additional pledges through network effects.

Additional results in the paper, but these stood out to me. It’s possible that one needs direct solicitation from one friend to the next in order to observe peer effects. Petrie and Smith, Jonathan Meer, and Catherine Carman have also apparently studied this question in alternate settings with more positive results.

I must admit to being disappointed at how, during the two crowdfunding campaigns AidGrade has run, people would routinely sign up to be notified when more data came out but at the same time would not be willing to contribute to the production of said data. It’s completely natural but a bit sad.