Reinhart-Rogoff and the problem with economics research

If you haven’t read about the Reinhart and Rogoff scandal, you can read about it here, here or here, among other places. In brief, a major paper was found to contain a number of errors, ranging from an Excel mistake to the questionable exclusion of several data points.

There was a lot of public outrage, but economists’ response was, in general, much more muted. Partly, I think this is because economics is a small world and everyone knows everyone. Partly, I think it’s because nobody’s particularly surprised; errors and even misrepresentations happen all the time.
As a discipline, we should be focusing on better correction mechanisms. But why is there such a big problem in economics in the first place, and what can we do about it?

First, regarding data. There is a big push to get people to share their data and their code, but the devil is in the details. It’s not enough to put data or code out there – someone needs to look at it to see whether or not it’s any good. Nobody wants to closely examine data or code unless the topic is really important and they are trying to replicate the results, and the incentives to replicate papers are very weak.
Solutions? Journals that require authors to share their data and code are already doing a good job of at least encouraging some sharing. What is needed is more pressure and attention to what exactly is shared, along with feedback mechanisms to correct any mistakes. AidGrade has feedback mechanisms explicitly built into its meta-analysis protocols. More radically, there is a sort of “GitHub for research” on the way that would allow all the usual features of forking along with automated posting of data and code; a sketch of what that might look like follows below. The nice thing about this is that it could take choice out of the picture, both eliminating the hurdle of manually posting data and serving as a commitment device for openness.
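
To make the “automated posting” idea concrete, here is a minimal sketch, written by me rather than drawn from any existing tool, of what such a commitment device might look like: a script that snapshots the data and code and pushes them to a public repository after every analysis run, with no manual step in between. The directory names are purely illustrative.

    # A hypothetical sketch, not an existing tool: after each analysis run,
    # snapshot the data and code and push them to a public git repository,
    # so that sharing is the default rather than a separate, skippable step.
    import subprocess

    def publish_run(message="automated snapshot of data and code"):
        # Stage the directories the analysis touched (paths are illustrative).
        subprocess.run(["git", "add", "data/", "analysis/"], check=True)
        # Only commit if something actually changed since the last snapshot.
        status = subprocess.run(["git", "status", "--porcelain"],
                                capture_output=True, text=True, check=True)
        if status.stdout.strip():
            subprocess.run(["git", "commit", "-m", message], check=True)
            # Push immediately, before any temptation to tidy up results
            # sets in; the public history is the commitment device.
            subprocess.run(["git", "push"], check=True)

    if __name__ == "__main__":
        publish_run()

A real service would need to handle large datasets and confidential data, but the core point stands: once posting is automatic, withholding becomes the action that requires effort.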

Second problem: people are biased, and this bias can permeate their methods and affect their results, even unconsciously. After running a regression, it is easy enough to think to run it on a subgroup or with different controls and, if you obtain a result that supports your priors, to decide that this specification is closer to capturing reality. Donald Green describes the problems associated with this well.
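
To see how quickly this kind of specification search misleads, consider a small simulation (my own illustration, not Green’s): generate data in which the “treatment” has no effect at all, then test it within each of twenty subgroups. Even though every true effect is zero, most simulated datasets produce at least one subgroup that looks significant.

    # A minimal simulation of specification search: there is no true effect,
    # yet searching across subgroups usually turns up a "significant" result.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, n_subgroups, n_sims = 2000, 20, 1000
    hits = 0

    for _ in range(n_sims):
        treat = rng.integers(0, 2, size=n)     # random treatment assignment
        outcome = rng.normal(size=n)           # outcome unrelated to treatment
        subgroup = rng.integers(0, n_subgroups, size=n)
        for g in range(n_subgroups):
            in_g = subgroup == g
            # Difference in means within the subgroup; a t-test stands in
            # for the regression with controls described above.
            _, p = stats.ttest_ind(outcome[in_g & (treat == 1)],
                                   outcome[in_g & (treat == 0)])
            if p < 0.05:
                hits += 1
                break

    print(f"Share of runs with a 'significant' subgroup: {hits / n_sims:.0%}")
    # With 20 independent subgroups, roughly 1 - 0.95**20, i.e. about 64%.

Committing in advance to which subgroups and controls will be examined is exactly what removes this degree of freedom, which brings us to pre-analysis plans.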

One thing that would help solve this problem is a pre-analysis registry where people can share their initial hypotheses and how they plan to test them. Then, if they deviate from those plans, at least we know and can consider the results in that light.

There is already an effort by the AEA/J-PAL to have such a pre-analysis registry, and while it is a fantastic endeavour, it does not go far enough. It only accepts randomized controlled trials, which make up a very small share of development economics research, and of economics research in general.

The other day I was looking for a place to post a pre-analysis plan for work I am doing. Since it was not an RCT, there was nowhere to post it. I tried signing up for the Open Science Framework but didn’t see a single public pre-analysis plan posted there. Though some may be hidden, that really is a shame, and it points to the fact that if people don’t have an incentive to do something, they won’t do it. So for now I am sharing my plans with friends, with the side benefit of getting feedback, but I would of course prefer for there to be a repository for these plans. Would anyone like to set one up with me? Let me know @evavivalt – this is the kind of work best done jointly, so spread the word.

