

A charity just admitted that its program wasn’t working. That’s a big deal.

Evidence Action’s commitment to research and transparency should be a model for other nonprofits.

Dhaka, Bangladesh, June 15, 2018. Allison Joyce/Getty Images
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

Last week, a major international development charity did something remarkable: It admitted that one of its programs didn’t seem to work.

No Lean Season is an innovative program that was created to help poor families in rural Bangladesh during the period between planting and harvesting (typically September to November). During that period, there are no jobs and no income, and families go hungry. By some estimates, at least 300 million of the rural poor may be affected by seasonal poverty.

No Lean Season aimed to solve that by giving small subsidies to workers so they could migrate to urban areas, where there are job opportunities, for the months before the harvest. In small trials, it worked great. A $20 subsidy was enough to convince people to take the leap. They found jobs in the city, sent money home, returned for the harvest season, and made the trip again in subsequent years, even without another subsidy.

So Evidence Action, the nonprofit that funded the pilot programs of No Lean Season, invested big in scaling it up. In 2016, it ran the program in 82 villages; in 2017, it offered it in 699. No Lean Season made GiveWell’s list of top charities.

Evidence Action wanted more data to assess the program’s effectiveness, so it participated in a rigorous randomized controlled trial (RCT) — the gold standard for effectiveness research for interventions like these — of the program’s benefits at scale.

Last week, the results from the study finally came in — and they were disappointing. In a blog post, Evidence Action wrote: “An RCT-at-scale found that the [No Lean Season] program did not have the desired impact on inducing migration, and consequently did not increase income or consumption.” (The emphasis is in the original blog post.)

This admission was a big deal in development circles. Here’s why: It is exceptionally rare for a charity to participate in research, conclude that the research suggests its program as implemented doesn’t work, and publicize those results in a major announcement to donors.

It would have been easy, on multiple levels, for Evidence Action to do otherwise. It could have ignored or contested the results of the RCT; the research would still be published, but it would attract a lot less attention and publicity. Or it could have dismissed the failure as unrepresentative — there were unusual floods in Bangladesh in 2017, it could argue, which might have caused the program failures. Or it could have put a more positive spin on the results. After all, while the RCT was discouraging, it wasn’t devastating — there was, in fact, a small increase in migration.

Evidence Action did the opposite. “Consistent with our organizational values, we are putting ‘evidence first,’ and using the 2017 results to make significant program improvements and pivots,” the group wrote. “We are continuing to rigorously test to see if program improvements have generated the desired impacts, with results emerging in 2019. We have agreed with GiveWell that No Lean Season should not be a top charity in 2018. Until we assess these results, we will not be seeking additional funding for No Lean Season.”

Let that sink in: Evidence Action is saying it won’t seek new funds for its own program until it has figured out why it failed, because the evidence showed it doesn’t work. It’s a stunning development, and it’s worth digging into why it happened and why it matters.

How Evidence Action solved some of the big challenges of transparency and evidence-driven work

We’d benefit enormously from a nonprofit sector in which every charity was as careful and honest as Evidence Action. Charities have a unique understanding of their programs, and when they’re active participants in the process of figuring out whether those programs work, we can learn a lot more and do more good.

But while it’s important for charities to be transparent, it’s also immensely challenging. Most organizations really believe in the work they do, and they worry that transparency could scare off their donors and be destructive for their programs — eventually hurting the people they’re trying to help.

Research can sometimes be confusing instead of clarifying; for instance, early childhood education programs have been studied to death, but often the research generates more heat than light, serving as ammunition for partisans for and against the programs without telling us much about how to run them more effectively. And RCTs are expensive enough that many charities can’t afford them.

Making matters worse, many charities have just one project. If an RCT shows that the project doesn’t work, that’s tantamount to arguing that the charity is a waste of money and should be shut down, a conclusion painful enough that most charities will go to great lengths to avoid it, whether that means not collecting the evidence in the first place or, if the results prove dismal, not publicizing them.

So what made Evidence Action different? One thing, of course, is its founding commitment to doing research. Evidence Action’s mission is to “bridge the gap between proven interventions that work and scaling them up to produce measurable impact for millions of people.” That means it has had an unusual amount of in-house expertise in conducting research and analyzing the results from the beginning, and it was unusually aware of the challenges of scaling a project while maintaining its cost-effectiveness.

Another thing that likely helped Evidence Action was its organizational structure. Evidence Action exists to study, scale, and then run promising programs. That means there’s room for failure, unlike with a charity that would have to close its doors if it discovered that its sole program was ineffective.

Evidence Action expects some of the programs it incubates to fail, and will keep investing in the other programs with a better track record. Losing funding for No Lean Season won’t affect its other programs, which fundraise independently (many of them are still among GiveWell’s top recommended charities). Furthermore, admitting failure doesn’t mean No Lean Season is doomed — Evidence Action has the resources to keep working on it to try to correct the problems that interfered this time. All of that means there’s a lot more institutional room to admit failure.

Why this matters

If Evidence Action hadn’t checked whether No Lean Season kept working when scaled up, it still would have had a lot of evidence that made the program look promising. It might have wrongly concluded the program was doing a great deal of good and kept spending money on it. That money would have gone to migration subsidies that, for whatever reason, don’t make a difference when distributed in that way, instead of to other, better programs like Evidence Action’s Dispensers for Safe Water or Deworm the World. Evidence Action would have missed the chance to fix whatever went wrong and make No Lean Season more effective, as it’s currently working to do.

So it’s crucially important that charities run tests of their programs, including tests at scale. But tests like these are expensive and can be devastating for a charity, so realistically, most charities won’t conduct such research, or won’t publish it if they do.

If we want more careful, thorough, rigorously researched investigations of programs, we need to do more than pressure charities to do research; we need to create a culture in which they have genuine latitude to do so. That means more charities structured like Evidence Action, where learning that one program doesn’t work won’t undermine the whole organization, and it means more support for smaller charities to collect evidence and iterate on their programs, trying to find an implementation that does work.

The response on social media to Evidence Action’s blog post was almost uniformly positive, and I think that’s a good sign. Evidence Action’s transparency, good epistemics, and commitment to getting No Lean Season right are highly encouraging (disclosure: they motivated me to donate to the group this year). If those qualities similarly motivate other donors, maybe we can create a world in which charities expect that doing research will enable them to do more for their recipients, not less, even if the results are discouraging.

