
Predictions of Social Science Research Results

There has been an explosion of studies that collect predictions of their own results before the data come in, especially in development economics. I am fully supportive of this, as you might guess from several past blog posts. I think it is in every applied researcher's best interest to collect forecasts about what they will find. If nothing else, forecasts help make null results more interesting and let researchers flag which results were particularly unexpected.

Another point I have alluded to before and would like to highlight, however, is that forecasts can also improve experimental design. I'm writing a book chapter on this now, so expect more on this topic soon.

Forecasts are also important in Bayesian analysis, where they can serve as priors. And in the very, very long run, they have the potential to improve decision-making: there will never be enough studies to answer all the questions we want answered, so it would be very nice to be able to say something about when our predictions are likely to be accurate even in the absence of a study. Which study outcomes are less predictable? Who should we listen to? How should we aggregate and interpret forecasts? In the absence of being able to run ten billion studies, anything we can do to even slightly increase the accuracy of our forecasts is important.
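To make the "forecasts as priors" idea concrete, here is a minimal sketch in Python, assuming a simple conjugate normal-normal model. The forecast numbers, the aggregation rule (mean forecast as prior mean, spread across forecasters as prior standard deviation), and the study estimate are all made up for illustration; nothing here is prescribed by the platform.

```python
import statistics

def normal_posterior(prior_mean, prior_sd, estimate, se):
    """Conjugate normal-normal update: combine a prior with a study
    estimate, weighting each by its precision (1 / variance)."""
    prior_prec = 1 / prior_sd**2
    data_prec = 1 / se**2
    post_var = 1 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)
    return post_mean, post_var**0.5

# Hypothetical expert forecasts of a treatment effect (in SD units).
forecasts = [0.10, 0.15, 0.05, 0.20, 0.12]

# One simple aggregation: mean forecast as the prior mean, the spread
# across forecasters as a rough prior standard deviation.
prior_mean = statistics.mean(forecasts)
prior_sd = statistics.stdev(forecasts)

# A hypothetical study result: point estimate 0.25, standard error 0.10.
post_mean, post_sd = normal_posterior(prior_mean, prior_sd, 0.25, 0.10)
print(f"prior:     {prior_mean:.3f} (sd {prior_sd:.3f})")
print(f"posterior: {post_mean:.3f} (sd {post_sd:.3f})")
```

The posterior lands between the aggregated forecast and the study estimate, pulled toward whichever is more precise; that is one simple way elicited forecasts could feed into interpreting a new result.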

Towards this end, I am collaborating with Stefano DellaVigna (of these awesome papers with Devin Pope) to build a website that researchers can use to collect forecasts for their studies. To be clear, many people are already collecting forecasts on their own for their respective research projects, but we are trying to develop a common framework that people can use so as to gather predictions more systematically. Predictions would be elicited both from the general public and from experts specified by researchers using the platform (subject to the constraint that no one person should receive a burdensome number of requests to submit predictions – providing these predictions should be thought of as providing a public good, like writing referee reports). There has been a lot of great work using prediction markets, but we are currently planning to elicit priors individually, so that each forecast is independent.
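A back-of-the-envelope illustration of why independence helps: averaging washes out forecasters' idiosyncratic errors but not errors they share (say, everyone anchoring on the same earlier estimate). The simulation below is a toy sketch with made-up noise parameters, not a claim about the platform's data.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 0.10                      # hypothetical true effect
n_forecasters, n_sims = 20, 10_000

# Independent forecasts: each forecaster's error is their own.
indep = truth + rng.normal(0, 0.15, size=(n_sims, n_forecasters))

# Correlated forecasts: a shared error component common to all
# forecasters, plus the same individual noise as above.
shared = rng.normal(0, 0.10, size=(n_sims, 1))
corr = truth + shared + rng.normal(0, 0.15, size=(n_sims, n_forecasters))

for label, f in [("independent", indep), ("correlated", corr)]:
    rmse = np.sqrt(np.mean((f.mean(axis=1) - truth) ** 2))
    print(f"{label}: RMSE of the mean forecast = {rmse:.4f}")
```

With independent errors, the mean of twenty forecasts is far more accurate than any single forecast; with a shared error component, no amount of averaging removes the common bias.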

The hope is that this proves to be a useful tool for researchers who want to collect forecasts for their own studies; that the coordination will ensure no one person is asked to give a burdensome number of forecasts (unless they want to); and that over time the platform will generate data that could be used for more systematic analyses of forecasts.

As part of this work, we are organizing a one-day workshop on forecasts at Berkeley on December 11, 2018, hosted by the Berkeley Initiative for Transparency in the Social Sciences. We are still finalizing the program but hope to bring together the leading experts working in this area for a day of good discussion. I'm excited about this agenda and will post a link to the program when it is finalized.

Edit: the program can be found here.


Workshop on Causal Inference and Extrapolation

I have been helping the Global Priorities Institute at the University of Oxford organize a workshop on new approaches in causal inference and extrapolation, coming up March 16-17. It is pretty small, with a lot of breaks to facilitate good discussion (designed with SITE in mind as a model). As it is just before the annual conference of the Centre for the Study of African Economies, we are hoping others can make it, especially to the keynote, which is on the evening before the CSAE conference begins and is open to all. I’m really excited about both the institute and the workshop and hope that this becomes an annual event in some way, shape or form.

If you will be in town and would like to attend but have not signed up, please let me know. The programme is here.

Edit: site appears to be down (too much traffic?), so I am uploading a copy of the programme here.


Opportunities

Aidan Coville and I have been running a behavioural experiment with policymakers, practitioners and researchers to see how they update based on new evidence from impact evaluations and what biases they may have in updating. We have done some analysis of the numerical data, but we also have transcripts of the audio from the one-on-one enumeration, in which people were asked to describe their thought processes and explain the answers they gave. We are looking for someone proficient in text analysis to collaborate on a separate paper. This would be a great opportunity for grad students, but others are welcome, too. Please pass this on to anyone you know who might be interested.

Second, please see this post for a description of a Research Assistant position. The deadline is Jan. 31.

Finally, I am helping to organize a workshop for the Global Priorities Institute at the University of Oxford on causal inference and extrapolation. The call for papers is closed but please get in touch if you are interested in attending, as space is limited.

