
Predictions of Social Science Research Results

There has been an explosion of studies collecting predictions about what they will find, especially in development economics. I am fully supportive of this, as you might guess from several past blog posts. I think it is in every applied researcher’s best interest to collect forecasts about what they will find. If nothing else, they help make null results more interesting and allow researchers to note which results were particularly unexpected.

Another point I have alluded to before and would like to highlight is that forecasts can also improve experimental design. I’m writing a book chapter on this now, so expect more on this topic soon.

Forecasts are also important in Bayesian analysis, as the “priors” one could use. And in the very, very long run, they have the potential to improve decision-making: there are never going to be enough studies to answer all the questions we want answered, so it would be very nice to be able to say something about when our predictions are likely to be pretty accurate even in the absence of a study. Which study outcomes are less predictable? Who should we listen to? How should we aggregate and interpret forecasts? In the absence of being able to run ten billion studies, anything we can do to even slightly increase the accuracy of our forecasts is important.
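
As a purely illustrative sketch of the “forecasts as priors” idea: here is one way a researcher might fold elicited forecasts into a Bayesian update, assuming a normal prior built from the forecasts and a normal likelihood from the study estimate. The numbers and the choice of prior are my own illustration, not something any platform prescribes.

```python
import numpy as np

def posterior_from_forecasts(forecasts, estimate, se):
    """Combine elicited forecasts (treated as a normal prior) with a study
    estimate (normal likelihood) via a conjugate normal-normal update."""
    forecasts = np.asarray(forecasts, dtype=float)

    # Prior centered on the mean forecast; the spread of the forecasts
    # stands in for prior uncertainty (one simple choice among many).
    prior_mean = forecasts.mean()
    prior_var = forecasts.var(ddof=1)

    # Conjugate update: precisions (inverse variances) add.
    prior_prec = 1.0 / prior_var
    data_prec = 1.0 / se ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)
    return post_mean, post_var ** 0.5

# Hypothetical numbers: ten forecasts of a treatment effect; the study
# then estimates 0.05 with a standard error of 0.04.
forecasts = [0.02, 0.10, 0.05, 0.00, 0.08, 0.12, 0.03, 0.06, 0.07, 0.04]
mean, sd = posterior_from_forecasts(forecasts, estimate=0.05, se=0.04)
print(f"posterior mean = {mean:.3f}, posterior sd = {sd:.3f}")
```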

Towards this end, I am collaborating with Stefano DellaVigna (of these awesome papers with Devin Pope) to build a website that researchers can use to collect forecasts for their studies. To be clear, many people are collecting forecasts on their own already for their respective research projects, but we are trying to develop a common framework that people can use so as to gather predictions more systematically. Predictions would be elicited from both the general public and from experts specified by researchers using the platform (subject to the constraint that no one person should receive a burdensome number of requests to submit predictions – providing these predictions should be thought of as providing a public good, like referee reports). There has been a lot of great work using prediction markets, but we are currently planning to elicit priors individually, so that each forecast is independent.
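
Since forecasts would be elicited individually rather than through a market, aggregation happens as a separate step. A minimal sketch of what that step could look like, assuming one simply pools the independent forecasts into a few summary statistics (the trimmed mean here is just one robust option, not a settled design choice):

```python
import numpy as np
from scipy import stats

def aggregate_forecasts(forecasts, trim=0.1):
    """Summarize independent forecasts of a single study outcome.

    With individual elicitation there is no market price to read off,
    so a summary of the individual forecasts plays that role."""
    f = np.asarray(forecasts, dtype=float)
    return {
        "mean": f.mean(),
        "median": float(np.median(f)),
        "trimmed_mean": stats.trim_mean(f, proportiontocut=trim),
        "n_forecasters": int(f.size),
    }

# Hypothetical forecasts of the same treatment effect from six forecasters.
print(aggregate_forecasts([0.02, 0.10, 0.05, 0.00, 0.08, 0.30]))
```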

The hope is that this proves to be a useful tool for researchers who want to collect forecasts for their own studies; that the coordination will ensure no one person is asked to give a burdensome number of forecasts (unless they want to); and that over time the platform will generate data that could be used for more systematic analyses of forecasts.

As part of this work, we are organizing a one-day workshop on forecasts at Berkeley on December 11, 2018, hosted by the Berkeley Initiative for Transparency in the Social Sciences. We are still finalizing the program but hope to bring together all the leading experts working in this area for good discussion. I’m excited about this agenda and will post a link to the program when it is finalized.

Edit: program can be found here.

