
Updates

It’s been a while since my last post, and a lot has changed in the interim.

I am pleased to announce that, after visiting Stanford, I have taken up my position at the Research School of Economics at the Australian National University. The position also carries the title of Inaugural Wealth and Wellbeing Fellow, and there is no teaching until 2018.

Unfortunately, I can’t yet talk publicly about the three research projects I am most excited about. I hope to be able to resume blogging later in the fall or winter.

I recently participated in EAGxMelbourne and EA Global, as well as seminars at the University of Melbourne and UNSW.

As for future travel plans, I have some things scheduled in North America (Berkeley, NYC, Princeton, Chicago) and, later on, Hong Kong and Singapore. Let me know if you are near any of those places and interested in meeting up.


Coming up next

I’ve been busily working on the next set of papers. As a quick preview, here are some of the things I am most excited about:

Experiments with policymakers

In collaboration with the World Bank, I am testing how policymakers update their beliefs in response to new information from impact evaluations. For policymaking to be evidence-based, you need three things: evidence, correct updating based on that evidence, and an absence of other pressures on policymakers (e.g. political economy reasons to pursue other programs). Many studies contribute to building up the evidence base; this paper focuses on the second requirement. We look at both standard and novel biases, examine how policymakers interpret across-study variation, and test different ways of presenting information to overcome these biases.
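As a stylized benchmark for what “correct updating” would look like, here is a minimal normal-normal Bayesian update, assuming the policymaker holds a normal prior and receives a single noisy estimate. This is only a sketch with made-up numbers, not the design of the experiment:

```python
# A stylized benchmark for "correct updating": a normal-normal Bayesian
# update. All numbers here are hypothetical.
prior_mean, prior_sd = 0.20, 0.15  # policymaker's prior on the program's effect
estimate, se = 0.05, 0.08          # new impact-evaluation result and its standard error

# The weight placed on the new evidence rises as the estimate gets more precise.
w = prior_sd**2 / (prior_sd**2 + se**2)
post_mean = w * estimate + (1 - w) * prior_mean
post_sd = (prior_sd**2 * se**2 / (prior_sd**2 + se**2)) ** 0.5

print(f"posterior belief: {post_mean:.3f} (sd {post_sd:.3f})")
```

Deviations from this benchmark, such as overweighting the prior or ignoring the estimate’s precision, are the kinds of biases such an experiment can detect.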

Optimal goal-setting

This RCT is a very cool collaboration with a firm to test whether people set the right goals. I will blog more about it in the future.

How activists are born

I am collaborating with David Broockman of Stanford GSB and a large activist group to test whether building relationships with others invested in a cause can inspire people to take action.

Tablets impact evaluation

An excellent undergrad at Stanford University, James Taechajongjintana, told me about a program that provided hundreds of thousands of tablets to students. Remarkably, no one had looked at all the data, and we are collaborating to turn this into a paper.

Animal advocacy research

I am collaborating with Josh Tasoff of Claremont Graduate University and Emiliano Huet-Vaughn of Middlebury to test animal advocacy interventions and identify how they could be made more effective.


Research funding committees

I am glad to be part of the Social Science Meta-Analysis and Research Transparency (SSMART) review committee for its next round of projects. A total of $230,000 is available in grants of up to $30,000.

I am also excited to be on an oversight committee put together by ACE to assist their newly hired research officer, Greg Boese, in deciding which research to fund. TJ Mather and MaxMind have pledged $1,000,000 over the next three years to investigate the most effective ways to help animals, an area in which very little research exists. The people on the committee are very impressive, and I look forward to working with them!

Research committees can be fun because you get to stay apprised of new and exciting projects before they happen. They are also a great form of effective altruism; 80,000 Hours often recommends the somewhat similar work of a foundation grantmaker.

I am very excited to see what comes out of these initiatives.


Using machine learning for meta-analysis

AidGrade is starting to use machine learning to help extract data from academic papers for meta-analysis. This is a big deal: meta-analyses tend to go out of date quickly because data extraction is so time-intensive and new studies appear at an ever-increasing rate.

AidGrade will use its existing database of impact evaluation results to help build and validate models. For each extracted piece of information, it will also generate a probability that the information is correct.
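As a rough illustration of this kind of pipeline, one could train a text classifier on the hand-coded database and report its predicted probabilities as confidence scores. This is a minimal sketch under my own assumptions; the sentences, labels, and model choice below are all hypothetical, not AidGrade’s actual implementation:

```python
# Minimal sketch of probability-scored extraction: classify sentences from
# a paper as stating a study characteristic (here, the identification
# method) and attach a probability to each extracted value.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data, hand-labelled from an existing database of
# impact evaluation results.
sentences = [
    "We randomly assigned 120 villages to treatment and control.",
    "Identification relies on a regression discontinuity at the cutoff score.",
    "Table 2 reports summary statistics for the baseline sample.",
    "The experiment randomized subsidies at the household level.",
    "We thank seminar participants for helpful comments.",
]
labels = ["RCT", "RDD", "none", "RCT", "none"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# For a new sentence, extract the predicted characteristic together with
# the model's probability that the extraction is correct.
new = ["Households were randomly allocated to one of three treatment arms."]
pred = model.predict(new)[0]
prob = model.predict_proba(new)[0].max()
print(f"extracted method: {pred} (p = {prob:.2f})")
```

In a real pipeline, low-probability extractions could be routed to a human coder, which is where much of the time savings would come from.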

At a minimum, this will reduce the time it takes to identify key characteristics of studies, such as where they were done and which methods they used. It is also the only way to ensure that meta-analyses are perpetually updated as new studies come out. Since the methods should scale to much of economics, education, and health (think of a ScienceScape for meta-analysis: they have catalogued 25 million studies, a number one would definitely need machine learning to process!), AidGrade will build the tool in a general way so that its results can be used to inform policy even in developed countries.

To support this work, AidGrade has launched a new crowdfunding campaign. Please share and contribute.


How much does an impact evaluation improve policy decisions?

Thanks to excellent feedback, I’ve extended my generalizability paper to include a discussion of how much an impact evaluation improves policy decisions.

Results, in a nutshell: the “typical” impact evaluation (of a program with a small effect size, compared to an outside option that also has a small effect size) might improve policy decisions by only about 0.1-0.3% (of a small amount). If the outside option is quite different (say, an effect size of 0) and the study is one of the earliest impact evaluations on the topic, this figure can rise to 4.6%.
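To make the mechanics concrete, here is a stylized value-of-information calculation under a normal-normal model, evaluated by Monte Carlo: a policymaker chooses between a program and an outside option, first on the prior alone and then after seeing a noisy evaluation. The parameters below are illustrative assumptions of mine, not the paper’s calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative assumptions (not the paper's calibration):
mu0, tau = 0.10, 0.10  # prior mean and sd of the program's true effect
sigma = 0.05           # sampling noise of the impact evaluation
b = 0.0                # known effect of the outside option

theta = rng.normal(mu0, tau, n)           # true effects
signal = theta + rng.normal(0, sigma, n)  # evaluation estimates

# Without the evaluation: pick the program iff the prior mean beats b.
payoff_prior = np.where(mu0 > b, theta, b)

# With the evaluation: act on the posterior mean instead.
post_mean = (tau**2 * signal + sigma**2 * mu0) / (tau**2 + sigma**2)
payoff_post = np.where(post_mean > b, theta, b)

gain = payoff_post.mean() - payoff_prior.mean()
print(f"expected improvement from the evaluation: {gain:.4f} effect-size units")
```

The evaluation only adds value in the cases where it flips the decision, which is why a small gap between the program and the outside option yields a small expected gain.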

There are a lot of caveats here, chief among them that an impact evaluation provides a public good and many people can use its results.

Nonetheless, personally, I find this sobering. I don’t think we’re usually in that best case scenario. These aren’t the results I want, but they are the results I get.

