
Announcing the Launch of the Social Science Prediction Platform!

I have been highly supportive of collecting ex ante forecasts of research results for some time now. Today, I am happy to say that the Social Science Prediction Platform is finally ready for public consumption.

This project, joint with Stefano DellaVigna and with the essential assistance of Nicholas Otis, Henry Xu, and the BITSS team, aims to do several things. Personally, I hope it can:

1. Popularize the ex ante prediction of research results

As argued elsewhere (e.g. here, here and here, as well as in this Science piece with Stefano DellaVigna and Devin Pope), ex ante priors are extremely useful scientifically. Some people already routinely collect them, but to others this is still a relatively new idea.

Personally, I hope that the collection of forecasts of research results becomes somewhat like pre-analysis plans: widespread in certain fields. However, as with pre-analysis plans, it may take time for the practice to really take off. First, we need good examples, templates, and even workshops. The SSPP can be a useful hub to organize these activities. We already have an extensive Forecasting Survey Guide, based on earlier work, with tips on how to structure forecasting surveys. We have a template and some annotated examples. We’ve had a workshop on the topic and hope to hold more in the future. It is my hope that by gathering these resources together, we can provide a public good that enables others to launch research projects on a variety of topics, such as meta-science, forecasting, or belief updating, in addition to increasing the value of each individual forecasted study.

2. Solve the coordination problem inherent in gathering forecasts

What coordination problem? Well, if everyone gathered forecasts for all their projects, their private incentive would be to request as many forecasts as possible to maximize their sample size. However, the more requests you receive as a forecaster, the more likely you are to ignore those requests. The platform can help to solve this problem in several ways:

– It can nudge researchers towards sending their surveys to fewer individuals. Pilots suggest that a precise estimate often does not require many forecasts.

– The platform knows when an individual has already taken a lot of surveys and can direct further surveys to other people instead. For example, a forecaster can specify: I only want to forecast the results of 5 surveys per month. We then don’t send them more than that (see the sketch after this list). I only wish journals had a similar system to coordinate amongst themselves and not send anyone too many requests to referee.

– We can build in incentives and nudges for people to provide more forecasts. For example, we are planning an incentive scheme for graduate student forecasters, and, in the interest of fairness, we suggest that those who want forecasts collected for their own projects provide a certain number of forecasts themselves.
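
To make the second point concrete, here is a minimal sketch of how a cap-and-redirect rule like the one described above could work. This is purely illustrative: the names (Forecaster, monthly_cap, assign_forecasters) and the data structure are my own assumptions, not the platform's actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Forecaster:
        name: str
        monthly_cap: int          # e.g. "no more than 5 surveys per month"
        assigned_this_month: int = 0

    def assign_forecasters(forecasters, n_needed):
        """Route a new survey to forecasters still under their monthly cap,
        preferring those who have received the fewest requests so far."""
        eligible = [f for f in forecasters if f.assigned_this_month < f.monthly_cap]
        # Spread requests: ask the least-burdened forecasters first.
        eligible.sort(key=lambda f: f.assigned_this_month)
        chosen = eligible[:n_needed]
        for f in chosen:
            f.assigned_this_month += 1
        return chosen

    # Example: a survey needing only 2 forecasts is routed around someone at their cap.
    panel = [Forecaster("A", monthly_cap=5, assigned_this_month=5),
             Forecaster("B", monthly_cap=5, assigned_this_month=1),
             Forecaster("C", monthly_cap=3, assigned_this_month=0)]
    print([f.name for f in assign_forecasters(panel, n_needed=2)])  # ['C', 'B']

The real platform presumably keeps this information in a database of registered forecasters; the point is simply that a per-person cap plus least-burdened-first routing is enough to keep any one forecaster from being flooded with requests.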

3. Gather panel data and identify super-forecasters

An advantage the platform has over individual research teams gathering forecasts for their own independent studies is that we can begin to look across predictions and follow forecasters longitudinally. This could be immensely valuable in identifying super-forecasters, learning which types of studies or results are easier to forecast, and understanding to what extent forecasting ability depends on forecaster characteristics such as domain expertise.

The first few studies are now online and ready to be forecast! Over the coming weeks we will post other studies and gradually open the platform up, so please do get in touch if you have a survey you would like to run. We expect to keep learning as we go and would welcome further feedback.
