Research funding committees

I am glad to be part of the Social Science Meta-Analysis and Research Transparency (SSMART) review committee for its next round of projects. A total of $230,000 is available in grants of up to $30,000.

I am also excited to be on an oversight committee put together by ACE to assist their newly hired research officer Greg Boese in making decisions about which research to fund. TJ Mather and MaxMind have pledged $1,000,000 over the next three years to investigate the most effective ways to help animals, an area in which very little research exists. The people on the committee are very impressive and I look forward to working with them!

Research committees can be fun, because you get to stay apprised of all the new and exciting projects before they happen. They are also a great form of effective altruism; 80,000 Hours often recommends the somewhat similar work of a foundation grantmaker.

I am very excited to see what comes out of these initiatives.


Using machine learning for meta-analysis

AidGrade is starting to use machine learning to help extract data from academic papers for meta-analysis. This is a big deal – meta-analyses tend to go out of date quickly because data extraction is such a time-intensive process and new studies are constantly coming out at an ever-increasing rate.

AidGrade will use its existing database of impact evaluation results to help build and validate models. For each extracted piece of information, it will also generate a probability that the information is correct.
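To make the idea of a per-extraction confidence score concrete, here is a minimal sketch using a toy logistic model. The feature names and weights are purely illustrative assumptions, not AidGrade's actual system; in practice the weights would be learned from a labeled database of validated extractions.

```python
import math

def extraction_confidence(features, weights, bias=0.0):
    """Logistic score: probability that an extracted value is correct."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical binary features for a candidate "sample size" extraction:
# [found in a table, matches an integer pattern, adjacent to the keyword 'N=']
weights = [1.2, 0.8, 1.5]  # illustrative values, not learned from real data
bias = -1.0

strong_candidate = extraction_confidence([1, 1, 1], weights, bias)  # all cues present
weak_candidate = extraction_confidence([0, 1, 0], weights, bias)    # only one cue
```

A pipeline like this lets downstream meta-analyses weight or filter extracted data points by how likely they are to be correct, rather than treating all machine-extracted values as equally reliable.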

At a minimum, this will reduce the time it takes to identify key characteristics of studies, such as where they were done and which methods they used. It is also the only way to ensure that meta-analyses are perpetually updated as new studies come out. Since the methods should be scalable to much of economics, education, and health (think of a ScienceScape, update: now known as Meta, for meta-analysis: it has catalogued 25 million studies, a number one would certainly need machine learning to process), AidGrade will build the tool in a general way so that its results can be used to inform policy even in developed countries.

To support this, AidGrade has a new crowdfunding campaign. Please share and contribute.


How much does an impact evaluation improve policy decisions?

Thanks to excellent feedback, I’ve extended my generalizability paper to include discussion of how much an impact evaluation improves policy decisions.

Results, in a nutshell: the “typical” impact evaluation (of a program with a small effect size, compared to an outside option that also has a small effect size) might improve policy decisions by only about 0.1-0.3% (of a small amount). If the outside option is very different (say, an effect size of 0) and it is one of the earliest impact evaluations on a topic, this can rise to 4.6%.
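The underlying logic is a value-of-information calculation: an evaluation only improves a decision when it flips the policymaker's choice. The toy simulation below illustrates that mechanism (it is NOT the paper's model, and all numbers are made up). It compares choosing on the prior mean alone versus choosing after a hypothetical perfectly informative evaluation; because real evaluations are noisy and priors are often already informative, realistic gains are far smaller than this perfect-information upper bound.

```python
import random
import statistics

# Toy value-of-information sketch (illustrative, not the paper's model):
# a policymaker chooses between a program with an uncertain effect and an
# outside option with a known effect, both measured in effect-size units.
random.seed(0)

prior_mean, prior_sd = 0.1, 0.05  # assumed prior over the program's effect
outside_option = 0.1              # similar outside option, the "typical" case

draws = [random.gauss(prior_mean, prior_sd) for _ in range(100_000)]

# Without the evaluation: pick the option with the higher expected effect.
payoff_without = max(prior_mean, outside_option)

# With a perfectly informative evaluation: pick the better option draw by draw.
payoff_with = statistics.mean(max(theta, outside_option) for theta in draws)

value_of_info = payoff_with - payoff_without  # expected gain from evaluating
```

Even this perfect-information gain is modest in absolute terms, and it shrinks further once one accounts for noisy estimates, existing evidence, and the chance that the result never changes anyone's decision.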

There are a lot of caveats here, chief among them that an impact evaluation provides a public good and many people can use its results.

Nonetheless, personally, I find this sobering. I don’t think we’re usually in that best case scenario. These aren’t the results I want, but they are the results I get.