


Priors matter for optimal design of experiments

Banerjee, Chassang and Snowberg have an under-appreciated paper, “Decision-Theoretic Approaches to Experiment Design and External Validity”, that anyone who designs experiments should think about.

Some highlights:

1. Bayesians do not (if making policy decisions themselves) randomize

Suppose you were a Bayesian and trying to maximize expected utility. There exists some set of ways to assign individuals to the treatment group that would maximize your expected utility. Randomizing could sometimes deviate from that set of ways (e.g. if you are unlucky enough to have imbalance between the treatment and control group along some observable characteristics). Therefore, randomizing would not be optimal. This is in the same spirit as Kasy (2013).
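As a toy sketch of this logic (the setup and numbers are mine, not from the paper), a Bayesian who will act on the results can simply enumerate every possible assignment and pick the one maximizing expected utility, here proxied by covariate balance, rather than drawing an assignment at random:

```python
import itertools
import random

# Hypothetical setup: 8 units with one binary covariate (e.g. 1 = privileged).
covariates = [1, 1, 1, 1, 0, 0, 0, 0]
units = range(len(covariates))

def imbalance(treated):
    """Absolute difference in covariate means between arms, used here as a
    stand-in for (negative) expected utility under the Bayesian's prior."""
    control = [i for i in units if i not in treated]
    t_mean = sum(covariates[i] for i in treated) / len(treated)
    c_mean = sum(covariates[i] for i in control) / len(control)
    return abs(t_mean - c_mean)

# The Bayesian's choice: enumerate all 4-vs-4 assignments, pick the best one.
assignments = list(itertools.combinations(units, 4))
best = min(assignments, key=imbalance)

# Randomization draws from the same set, so it can do no better than the
# deterministic optimum, and an unlucky draw can be badly imbalanced.
random.seed(0)
draw = random.choice(assignments)
print("optimal imbalance:", imbalance(best))
print("random  imbalance:", imbalance(draw))
```

The deterministic optimum here achieves perfect balance; a random draw only sometimes does.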

2. Priors matter

Apart from sometimes failing to achieve balance, randomizing can also be suboptimal under certain priors. They give the example of a superintendent who believes that whether a student is from a poor or privileged background is the main determinant of educational outcomes, and that students who go to private schools do better because they tend to be from privileged backgrounds, but who is open to testing whether private schools are helpful in and of themselves. The superintendent has the chance to enroll a single student in a private school. Clearly, they would not learn much by enrolling a privileged student, who would likely do well either way; to learn the most, they should enroll a poor student.
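The superintendent's intuition can be made precise with a small information calculation. All the numbers below are invented for illustration: the privileged student succeeds with probability 0.95 if private schools help and 0.90 if they don't (good outcome almost regardless), while the poor student succeeds with probability 0.60 vs. 0.30 (outcome hinges on the hypothesis):

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(p_if_helps, p_if_not, prior=0.5):
    """Expected information (bits) the enrolled student's outcome carries
    about the hypothesis 'private schools help in and of themselves'."""
    p_success = prior * p_if_helps + (1 - prior) * p_if_not
    post_success = prior * p_if_helps / p_success
    post_failure = prior * (1 - p_if_helps) / (1 - p_success)
    return entropy(prior) - (p_success * entropy(post_success)
                             + (1 - p_success) * entropy(post_failure))

# Made-up probabilities: privileged student does well almost regardless;
# poor student's outcome depends heavily on whether private schools help.
print("privileged student:", info_gain(0.95, 0.90))
print("poor student:      ", info_gain(0.60, 0.30))
```

Under these assumed numbers, enrolling the poor student yields roughly ten times the expected information of enrolling the privileged one.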

3. The optimal experimental design depends on how the decisions are made

The experimenter may not be making the policy decision themselves. Rather, they may be trying to convince others (or trying to convince some small part of themselves that is uncertain, in the Knightian sense). Thus, when designing the experiment they need to place some weight on how much they want to convince themselves, given their own priors, vs. how much they want to convince someone else (or themselves, under ambiguity), given the other person’s priors. This depends on how the decisions are being made, e.g., whether it is a group of people with quite varied priors making a decision.

4. Randomizing is optimal when faced with an adversarial audience (given a sufficiently large sample size and assuming a maximin objective)

Suppose you care about the worst-case scenario: a decision-maker whose priors are such that, given the experimenter’s chosen design, they have a greater chance of picking the wrong policy than anyone else with different priors.

In this situation, a randomized experiment is best (so long as the sample size is sufficiently large). It is not targeted towards people with any particular priors, and precisely because of that, it leaves less room for error: optimizing for some priors generally makes decisions worse for people with other priors. The sample size must be sufficiently large because, in small samples, randomization loses more power relative to the optimal deterministic experiment (again, think of covariate balance).
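That small-sample qualifier can be checked with a quick Monte Carlo sketch (the data-generating process and all numbers are invented, not from the paper): when a covariate strongly drives outcomes, complete randomization inflates the mean-squared error of the difference-in-means estimate relative to a deterministically balanced design, because some random draws are badly imbalanced:

```python
import itertools
import random
import statistics

random.seed(1)

# Hypothetical small sample: 8 units, a binary covariate that strongly
# predicts the outcome, and a true treatment effect of 1.
covs = [1, 1, 1, 1, 0, 0, 0, 0]
n = len(covs)
TAU = 1.0

def outcome(i, treated, noise):
    return 3.0 * covs[i] + TAU * (i in treated) + noise[i]

def estimate(treated, noise):
    """Difference in mean outcomes between treatment and control."""
    t = [outcome(i, treated, noise) for i in treated]
    c = [outcome(i, treated, noise) for i in range(n) if i not in treated]
    return statistics.mean(t) - statistics.mean(c)

assignments = list(itertools.combinations(range(n), 4))
# Deterministic design: pick an assignment with 2 high-covariate units per arm.
balanced = min(assignments, key=lambda a: abs(sum(covs[i] for i in a) - 2))

rand_est, det_est = [], []
for _ in range(2000):
    noise = [random.gauss(0, 1) for _ in range(n)]
    rand_est.append(estimate(random.choice(assignments), noise))
    det_est.append(estimate(balanced, noise))

mse_rand = statistics.mean((e - TAU) ** 2 for e in rand_est)
mse_det = statistics.mean((e - TAU) ** 2 for e in det_est)
print("randomized MSE:   ", round(mse_rand, 3))
print("deterministic MSE:", round(mse_det, 3))
```

In this contrived setup the randomized design's MSE is several times larger; with larger samples the gap shrinks, which is the sense in which the large-sample caveat matters.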

This paper is nice because, among other things, it helps explain why academics who face skeptical audiences randomize while firms that do not face an adversarial audience but merely wish to learn for the sake of their own decision-making will experiment in a smaller and more targeted way, especially when the costs per participant are high. A key assumption in the current framework is the maximin objective, which may not always be what we care about.

Back to blogging

Back to semi-regular blogging, with too many updates to mention.

Along with David Broockman, I have been helping out with Y Combinator Research’s basic income study. This project, led by Elizabeth Rhodes, will provide unconditional cash transfers on a randomized basis and seems like it will be our best shot at answering a bunch of questions about a transfer scheme like this in the US. A description of the much smaller pilot is here; the full study details are not publicly available yet. I will sometimes be in San Francisco for this.

Another exciting thing is that data collection for a collaboration with the World Bank on how policymakers make decisions is just wrapping up. Expect updated papers (multiple!) here soon.

I’ve also gotten some results for a study on whether new technology affects ethical beliefs. The results surprised me: at least in the case I consider, the answer is no. I’ll try to post more about this in the future.

There are a lot of other interesting things in the pipeline. I am setting blogging at least once a month as a hard goal for myself, so more soon!


It’s been a while since that last post and a lot has changed in the interim.

I am pleased to announce I have taken up my position at the Research School of Economics at the Australian National University, after visiting Stanford. The position also carries with it the title of Inaugural Wealth and Wellbeing Fellow, and there’s no teaching until 2018.

Unfortunately, I can’t yet talk publicly about the three research projects I have been working on that I am most excited about. Hopefully, I will be able to resume blogging later in the fall or winter.

I recently participated in EAGxMelbourne and EA Global, as well as seminars at the University of Melbourne and UNSW.

As for future travel plans, I have some things scheduled in North America (Berkeley, NYC, Princeton, Chicago) and, later on, Hong Kong and Singapore. Let me know if you are near one of those places and interested in meeting up.

Research funding committees

I am glad to be part of the Social Science Meta-Analysis and Research Transparency (SSMART) review committee for its next round of projects. A total of $230,000 is available in grants of up to $30,000.

I am also excited to be on an oversight committee put together by ACE to assist their newly hired research officer Greg Boese in making decisions about which research to fund. TJ Mather and Maxmind have pledged $1,000,000 over the next three years to investigate the most effective ways to help animals, an area in which very little research exists. The people on the committee are very impressive and I look forward to working with them!

Research committees can be fun, because you get to stay apprised of all the new and exciting projects before they happen. They are also a great form of effective altruism; 80,000 Hours often recommends the somewhat similar work of a foundation grantmaker.

I am very excited to see what comes out of these initiatives.