
Back to blogging

Back to semi-regular blogging, with too many updates to mention.

Along with David Broockman, I have been helping out with Y Combinator Research’s basic income study. This project, led by Elizabeth Rhodes, will provide unconditional cash transfers on a randomized basis and seems like it will be our best shot at answering a bunch of questions about a transfer scheme like this in the US. A description of the much smaller pilot is here; the full study details are not publicly available yet. I will sometimes be in San Francisco for this.

Another exciting thing is that data collection for a collaboration with the World Bank on how policymakers make decisions is just wrapping up. Expect updated papers (multiple!) here soon.

I’ve also gotten some results for a study on whether new technology affects ethical beliefs. The results surprised me: at least in the case I consider, the answer is no. I’ll try to post more about this in the future.

A lot of other interesting things are in the pipeline. I am setting blogging at least once a month as a hard goal for myself, so more soon!


Updates

It’s been a while since that last post and a lot has changed in the interim.

I am pleased to announce I have taken up my position at the Research School of Economics at the Australian National University, after visiting Stanford. The position also carries with it the title of Inaugural Wealth and Wellbeing Fellow, and there’s no teaching until 2018.

Unfortunately, I can't yet talk publicly about the three research projects I have been working on that I am most excited about. Hopefully, I will be able to resume blogging later in the fall or winter.

I recently participated in EAGxMelbourne and EA Global, as well as seminars at the University of Melbourne and UNSW.

As for future travel plans, I have some things scheduled in North America (Berkeley, NYC, Princeton, Chicago) and, later on, Hong Kong and Singapore. Let me know if you are near any of those places and interested in meeting up.


Research funding committees

I am glad to be part of the Social Science Meta-Analysis and Research Transparency (SSMART) review committee for its next round of projects. A total of $230,000 is available in grants of up to $30,000.

I am also excited to be on an oversight committee put together by ACE to assist their newly hired research officer Greg Boese in making decisions about which research to fund. TJ Mather and Maxmind have pledged $1,000,000 over the next three years to investigate the most effective ways to help animals, an area in which very little research exists. The people on the committee are very impressive and I look forward to working with them!

Research committees can be fun because you get to stay apprised of all the new and exciting projects before they happen. They are also a great form of effective altruism; 80,000 Hours often recommends the somewhat similar role of foundation grantmaker.

I am very excited to see what comes out of these initiatives.


Using machine learning for meta-analysis

AidGrade is starting to use machine learning to help extract data from academic papers for meta-analysis. This is a big deal: meta-analyses tend to go out of date quickly because data extraction is so time-intensive and new studies come out at an ever-increasing rate.

AidGrade will use its existing database of impact evaluation results to help build and validate models. For each extracted piece of information, it will also generate a probability that the information is correct.
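To make the idea concrete, here is a minimal sketch of what attaching a probability to an extracted piece of information could look like. This is not AidGrade's actual pipeline; the sentences, labels, and classifier choice are all illustrative assumptions, standing in for a model trained on a hand-coded database of impact evaluation results.

```python
# Minimal sketch (not AidGrade's actual pipeline): score candidate sentences
# for whether they report a study characteristic, and keep the predicted
# probability as a measure of confidence in the extraction.
# The training sentences and labels are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-coded examples: does this sentence state where the study was conducted?
train_sentences = [
    "The experiment was conducted in rural Kenya in 2011.",
    "We randomly assigned 120 schools in western Uganda.",
    "Standard errors are clustered at the village level.",
    "Table 3 reports intent-to-treat estimates.",
]
train_labels = [1, 1, 0, 0]  # 1 = reports study location, 0 = does not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_sentences, train_labels)

# For a new paper, score each candidate sentence and keep the probability,
# so downstream meta-analyses can weight or flag uncertain extractions.
new_sentences = [
    "The program was implemented across 80 villages in Malawi.",
    "Attrition was balanced across treatment arms.",
]
for sentence, prob in zip(new_sentences, model.predict_proba(new_sentences)[:, 1]):
    print(f"p(reports location) = {prob:.2f}  |  {sentence}")
```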

At a minimum, this will reduce the time it takes to identify key characteristics of studies, such as where they were done and which methods they used. It is also the only way to keep meta-analyses perpetually updated as new studies come out. Since the methods should scale to much of economics, education, and health (think of a ScienceScape, now known as Meta, for meta-analysis: they have catalogued 25 million studies, a number one would definitely need machine learning to process!), AidGrade will build the tool in a general way so that its results can be used to inform policy even in developed countries.

To support this, AidGrade has a new crowdfunding campaign. Please share and contribute.


How much does an impact evaluation improve policy decisions?

Thanks to excellent feedback, I’ve extended my generalizability paper to include discussion of how much an impact evaluation improves policy decisions.

Results, in a nutshell: the “typical” impact evaluation (of a program with a small effect size, compared to an outside option that also has a small effect size) might improve policy decisions by only about 0.1-0.3% (of a small amount). If the outside option is much different (say an effect size of 0) and it is one of the earliest impact evaluations on a topic, this can go up to 4.6%.
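For readers who want a feel for where numbers like these come from, below is a back-of-the-envelope illustration of a value-of-information calculation of this general style. It is my own sketch under assumed parameters, not the paper's framework or numbers: a policymaker chooses between a program with an uncertain effect and an outside option with a known effect, with and without a noisy impact evaluation of the program.

```python
# Minimal sketch (illustrative only, not the paper's model) of how much a
# noisy impact evaluation improves a binary policy choice. All parameter
# values below are assumptions chosen to mimic the "typical" case of a
# program and an outside option that both have small effect sizes.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000

mu0, tau0 = 0.1, 0.1      # prior mean and sd of the program's effect size
outside = 0.1             # known effect size of the outside option
sigma = 0.05              # sampling noise of the impact evaluation's estimate

theta = rng.normal(mu0, tau0, n_sims)          # true program effects
signal = theta + rng.normal(0, sigma, n_sims)  # what the evaluation reports

# Decision without the evaluation: rely on the prior mean alone.
effect_without = theta if mu0 > outside else np.full(n_sims, outside)

# Decision with the evaluation: shrink the estimate toward the prior,
# then adopt the program only if the posterior mean beats the outside option.
w = tau0**2 / (tau0**2 + sigma**2)
posterior_mean = w * signal + (1 - w) * mu0
effect_with = np.where(posterior_mean > outside, theta, outside)

gain = effect_with.mean() - effect_without.mean()
print(f"Average effect realized without the evaluation: {effect_without.mean():.4f}")
print(f"Average effect realized with the evaluation:    {effect_with.mean():.4f}")
print(f"Expected gain from running the evaluation:      {gain:.4f}")
```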

There are a lot of caveats here, chief among them that an impact evaluation provides a public good and many people can use its results.

Nonetheless, personally, I find this sobering. I don’t think we’re usually in that best case scenario. These aren’t the results I want, but they are the results I get.