My Favourite Development Stories — Part 1: The bottle of Jack Daniels used as the sole determinant of PPP.

I’m going to start a new feature on this blog, partially to get it going again.

I am not going to be able to properly attribute all the stories here, and it is possible that some urban legends will slip in. Nonetheless, I either experienced these stories myself first-hand or was told them by a reputable source.

This story is attributable to either Stefan Dercon or Christopher Adam, professors at Oxford.

The story is simple: during one of these professors’ lives, he got to see the inner workings of how PPP (purchasing power parity) was set for a small African country. The problem was that in order to calculate PPP, you need to be able to find a basket of goods which is comparable in both the country of interest and the base country (usually the U.S.).

Unfortunately, for Small African Country, the only comparable good for which they had good price data was a bottle of Jack Daniels. So the bottle of Jack Daniels, alone, set PPP for the entire country for several years.
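Mechanically, a single-good PPP reduces to a ratio of prices: local currency units per dollar, as implied by that one bottle. A minimal sketch, with all prices and incomes hypothetical:

```python
# PPP conversion factor implied by a single comparable good.
# All figures below are made up, purely for illustration.

price_local = 45_000.0   # price of the bottle in local currency units (LCU)
price_us = 25.0          # price of the same bottle in US dollars

# Implied PPP conversion factor: LCU per US dollar
ppp_rate = price_local / price_us          # 1800.0 LCU per USD

# Converting a local income into "PPP dollars" using that single rate:
income_local = 900_000.0
income_ppp_usd = income_local / ppp_rate   # 500.0 PPP dollars

print(ppp_rate, income_ppp_usd)
```

The obvious fragility: any quirk in the pricing of that one good (import duties on whisky, say) propagates directly into every PPP-adjusted statistic for the country.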

Looking back on the Kickstarter


• “Likes” were not highly correlated with pledges. Sometimes, you would see likes but no pledges; other times, you would see pledges but no likes. Of course, likes could have still been instrumental in getting pledges from others. But overall, Facebook was responsible for less than 10% of the total.
• Major blog coverage was crucial.
• Others helped out so very much. I had no idea how much people would advocate for it. In the end, only $1,440 of the $11,631.25 came from people I knew (even as acquaintances) before the Kickstarter. Strangers gave much more than friends both in absolute terms and on average per person, about which I am very glad because I don’t think the point of a Kickstarter is to bankrupt your friends.
• 5% came from people who found it through Kickstarter itself, most towards the end.

All in all, I would not recommend it as a fundraising tool so much as a social networking tool (!). There are probably easier ways of getting the money, given how very much time it takes. But the connections made were invaluable. Via the Kickstarter, I might have found something like a business partner, along with many collaborators and people interested in similar ventures.

The other aspect of the process that I didn’t anticipate and turned out to be one of the best benefits was that many of the pledges and people’s efforts were very touching and humbling. When you know someone hard up still supported it…. When you see total strangers (or former total strangers) working so very hard to promote it…. Those are the things I will take with me for life.

Thank you.

“Why hasn’t it been done already?”

Brett Keller asked a fantastic question regarding the Kickstarter: “If it would be this cheap, why hasn’t it been done already?”

I have a few hypotheses:

1) In academia, people usually either get paid for their labour or get co-authorship. I suspect most people doing meta-analyses use the pay-full-time-RAs model. That would be very expensive. We’re mostly using the participatory approach.

Our being able to use the participatory approach might be partially due to the state of the economy, or probably more relevantly to development being a sexy field for young people, and one in which there is no money in general.

I suspect there may be some stickiness in which the older guard assumes it needs to pay and the younger guard isn’t credentialed to do it by itself.

2) Perhaps more typically a meta-analysis comes about when a couple of researchers get really interested in a topic and realize the publication returns to doing a particular analysis would be high. Then they might be able to find volunteer labour for it, but they would still only be doing that one meta-analysis.

Here, the “one topic” necessarily involves many alternative aid programs. I personally have no qualms about doing unsexy research — and meta-analyses are usually not sexy — so doing multiple meta-analyses is not a death knell for me. Nor does it seem unfeasible to me from a coordination perspective.

I actually think there’s a strong individual interest component to this project happening now. The topic is interesting enough to me personally that I’m willing to coordinate, and the topic is interesting enough to others personally that they’re willing to collaborate. This is not a satisfying answer because it raises the question: why wasn’t it interesting enough to people before that they could resolve the coordination problem? But I do think it’s partially heterogeneous preferences. I don’t think coordination costs have changed much, except insofar as the idea that this is how things are done is perhaps more prevalent. Data have also improved substantially over the last few years.

And we’re also moving to a world in which we’re more networked and online, at least the young, university-educated folks. The whole project is a bit crowdsourced (though we can’t fully implement it on nothing yet). I wonder if you will see this kind of model more and more.

The short answer: we’re all Millennials (barely, in my case). Maybe you need Millennials old enough to have an education. What will Millennials with Bio PhDs do?

I could easily be missing some part of the story — would be interested in your thoughts!

Thanks again for the interesting question; it was a pleasure to contemplate. Hope that you can support (promote?) the cause! We don’t have much time left!

P.S. If anyone is offended, thinking it’s interesting enough to them personally and yet they’re not involved — great, get in touch!

The limits of meta-analysis

With all that I have been promoting the Kickstarter, you would think I wouldn’t highlight the limits of meta-analysis.

But progress depends on being honest about the strengths and weaknesses of any method. So let me tell you a story about when a meta-analysis didn’t really help much. This story is essentially the opposite of a meta-analysis, being simply an anecdote.

Young as I am (late 20s), I hurt my back. Badly. I hurt it 4 times in the past year. The most recent time, I was simply trying to walk down the street when it struck, bad enough for me to almost pass out on the street (losing most of your senses but retaining consciousness is a strange feeling).

Things got a lot better. But later on, about to go to Kenya for an impact evaluation planning workshop, I was a little nervous at the prospect of having to sit for a prolonged period of time, since that’s what caused my back to go out in the first place. So I returned to my physical therapist and she suggested I try “dry needling”.

I hadn’t heard of the term before, and to be honest I didn’t look into it too much. This particular physical therapist’s office had already proven its worth, and I had great insurance, so why not give it a shot?

It wasn’t done to treat pain but to stretch out my hamstrings a bit, because your hamstrings can pull your back out of place. Before having the dry needling done, my right hamstrings were always tighter than my left. Every single time I stretched, for months, there was a visible difference — I had over 100 observations.

The physical therapist only dry-needled my right hamstrings, not my left. And afterwards, my right hamstrings, for the first time, stretched farther than my left. I wasn’t anticipating that, and I was honestly surprised and had to take a moment to attribute the cause, so I don’t think it was a placebo effect.

It seems pretty darn causal to me!

Yet I later heard that no firm conclusions have been drawn about dry needling from meta-analysis.*

Without looking at the paper, I could think of two main hypotheses to explain this:

1) Perhaps it has mixed effects and I just got lucky;

2) Perhaps the studies that went into the meta-analysis were of poor quality.

It turns out the reason for the lack of firm findings is much more mundane: the authors simply could not find many good-quality studies to include (see, for instance, the number of studies cited in Figure 3 on page 25). Honestly, when they get down to using one paper at a time, it stretches the definition of “meta-analysis”.

Still, the episode highlights a major problem meta-analyses face — they are only as good as the studies that go into them.
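The pooling step itself is not where the difficulty lies. A minimal fixed-effect (inverse-variance) sketch, with made-up effect sizes and standard errors, shows how a handful of noisy studies leaves the pooled estimate with wide uncertainty:

```python
# Inverse-variance (fixed-effect) pooling: a minimal sketch of the
# standard meta-analysis estimator. All numbers are hypothetical.
import math

effects = [0.30, 0.10, 0.55]   # per-study effect estimates
ses = [0.20, 0.25, 0.40]       # per-study standard errors

# Each study is weighted by the inverse of its variance.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# With only three noisy studies, the 95% confidence interval
# straddles zero -- the synthesis is only as precise as its inputs.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 3), [round(x, 3) for x in ci])
```

No amount of clever pooling rescues the estimate when there are too few good studies to pool: the formula faithfully reports the uncertainty the inputs carry.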

They also seem to generally put undue emphasis on the mean, when we may be just as interested in other moments. Impact evaluations are often guilty of this as well, reminding me of this comic.

Despite this, you can be sure that if we ever do get a better sense of the general effectiveness of dry needling and the conditions under which it works, it will be because a meta-analysis based on more good studies will have synthesized the data.

* That paper looks at low back pain rather than hamstring flexibility, but there is even less evidence for hamstring flexibility.

Important note: Definition of “aid”

I use a very broad definition of aid.

Because we all know that when it comes down to it, “traditional” aid contributes little to development.

“Aid” for me is anything that improves well-being (well-being is also not the same thing as income). Could a political protest be aid? Yes. Could improving investment climate be aid? Yes.

Why not just call it “development”? Things can develop on their own, and I would like to imply agents (of the “do with”, not “do to” kind). I also think it’s a good idea to nudge those with a more traditional view of aid to consider a wider range of possibilities. It’s a reclaiming of the term.