Implementers of poverty-fighting programmes have high hopes of impact. They often want programmes to increase income, health, happiness, women’s decision-making power, and ultimately, result in better life outcomes for the next generation. But are those expectations realistic?
Programmes working with the very poor have a lot to overcome. The poorest households often eat only one meal a day, have few or no income-generating activities and face frequent health shocks. Children’s school enrolment rates are as low as 40 per cent. How much of this can one programme change?
Programme implementers and evaluators both look to a ‘theory of change’ for guidance. A theory of change is a set of hypotheses about how inputs lead to outputs, which in turn lead to impact. A reasonable theory of change keeps everybody honest and clear-headed by mapping out assumptions: What exactly leads to the desired impact? But just because one can draw tidy boxes and arrows between inputs and outputs doesn’t mean one knows a) the true path of impact; or b) the effectiveness of the intervention. Understanding the causal path gets especially tricky with holistic programmes.
Holistic programmes for the ultra poor could, for example, include both a livelihood development component (such as goats or petty trade) and direct food support. Picture three different scenarios from such a programme:
Scenario 1: holistic programme → increased food purchases → better child nutrition
Scenario 2: holistic programme → increased income → increased food purchases → better child nutrition
Scenario 3: holistic programme → increased income → money for school fees → child gets school-feeding programme → better child nutrition
These are very different paths to impact. We could observe improvements in child nutrition and have no real idea where they came from. Was it the consumption support that directly led to improved child nutrition, or did the programme help families build sustainable income, which in turn led to improved child nutrition? What would we recommend to implementers interested in achieving similar outcomes? Which path leads to the impact we want?
Data, design, or both can provide those answers.
Impact evaluations measure the causal effects of the programme by estimating how programme participants fared compared to how they would have fared in the absence of the programme. This is typically done by comparing participants to a comparison group of non-participants (ideally randomized so the two groups are alike to begin with, both observably and unobservably). However, we need to do more than merely observe if the first step leads to the last. We want to understand the underlying mechanism. This is often accomplished by fancier designs and collection of specific data.
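The core logic of the randomized comparison can be sketched in a few lines of code. This is a hypothetical illustration with invented numbers, not data from any of the programmes discussed here: we simulate households, randomly assign half to the programme, and estimate its effect as the difference in mean outcomes between the two groups. Because assignment is random, the groups are alike on average before the programme, so the difference afterwards recovers the true effect.

```python
import random

random.seed(0)

# Hypothetical illustration: all numbers below are made up.
N = 1000
TRUE_EFFECT = 0.5  # assumed programme effect on, say, meals per day

# Randomize half of the households into the programme.
households = list(range(N))
random.shuffle(households)
treated = set(households[: N // 2])

def outcome(i):
    baseline = random.gauss(1.5, 0.4)  # meals per day absent the programme
    return baseline + (TRUE_EFFECT if i in treated else 0.0)

outcomes = {i: outcome(i) for i in range(N)}
control = [i for i in range(N) if i not in treated]

treat_mean = sum(outcomes[i] for i in treated) / len(treated)
control_mean = sum(outcomes[i] for i in control) / len(control)

# Difference in means estimates the causal effect, because randomization
# made the two groups comparable to begin with.
estimated_effect = treat_mean - control_mean
print(round(estimated_effect, 2))  # close to TRUE_EFFECT
```

Without randomization, the same difference in means would mix the programme’s effect with whatever pre-existing differences led some households to participate.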
“If you want to encourage a particular behaviour, you may have to design for it.”
Concretely, to distinguish the first scenario above from the second, one could run multiple treatment arms with and without the consumption support, or the livelihood development, and observe the differential response in food purchases and child nutrition. To distinguish the second from the third, good data (and thinking ahead about the potential mechanisms) can help: Is there a school-feeding programme in the area, and does treatment lead to higher participation in it? Do food purchases increase? These data, compared to those from a control group of non-participants, allow one to speak not just to the efficacy of the full package, but to the underlying mechanism through which it works.
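The multi-arm logic can be made concrete with a small simulation. Everything below is hypothetical: we invent a data-generating process in which the livelihood component raises income (which raises food purchases indirectly) and the consumption support raises food purchases directly, then show how comparing arms separates the two channels.

```python
import random

random.seed(1)

# Hypothetical three-arm design: control, livelihood-only, and the full
# package (livelihood + consumption support). All effect sizes invented.
ARMS = ["control", "livelihood_only", "full_package"]
N_PER_ARM = 400

def simulate_food_purchases(arm):
    # Assumed mechanism: livelihood component adds 3 units of income;
    # each unit of income raises food purchases by 0.3; consumption
    # support adds 2 units of food purchases directly.
    income = random.gauss(10, 2) + (3 if arm != "control" else 0)
    direct = 2 if arm == "full_package" else 0
    return 0.3 * income + direct + random.gauss(0, 1)

means = {
    arm: sum(simulate_food_purchases(arm) for _ in range(N_PER_ARM)) / N_PER_ARM
    for arm in ARMS
}

# Livelihood-only vs control isolates the income channel;
# full package vs livelihood-only isolates the direct support channel.
income_channel = means["livelihood_only"] - means["control"]
direct_channel = means["full_package"] - means["livelihood_only"]
```

A single treatment-vs-control comparison would show only the combined effect; the extra arm is what lets the evaluation speak to mechanism.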
We’re in the middle of several large-scale evaluations of an ambitious programme to lift the ultra poor out of extreme poverty. The Graduation Model (i.e., to ‘graduate’ out of ultra poverty) combines consumption support, a new income-generating asset given to participants along with training and regular coaching, access to savings accounts, and typically additional services like basic health care. The innovation of the model is that it recognizes that the poor face many challenges (fewer assets, fewer skills, poorer health) and therefore attempts to address many of them at once, to ensure that vulnerable households don’t fall deeper into poverty while trying to build new livelihoods.
Innovations for Poverty Action researchers are evaluating graduation pilots in seven different countries, using randomized evaluations. Since they compare households that were alike before the programme, randomized evaluations assure us that any differences between the groups after the programme are more than likely caused by it. (This is how new medicines are tested. Randomized trials have been used with social programmes since the 1960s, and increasingly in developing countries since the 1990s.) We compare participating households to a control group of non-participants, before and after the programme, and follow up a year after programme completion to measure whether impacts are sustained over time.
We will soon have results from six of the seven pilots, in Ethiopia, Ghana, Honduras, India, Pakistan and Peru. And in Bangladesh, researchers have also conducted a long-term randomized evaluation. For now, results from India and Bangladesh tell a consistent story.
Lesson 1: Graduation programmes for the ultra poor can benefit families and their children.
Graduation programmes increase household consumption and reduce poverty. They improve children’s food security and increase expenditures on food – even when food is provided as part of the programme. Participants are also happier than non-participants.
Lesson 2: You get what you design for.
The pilots were designed foremost to create new economic opportunities, and they did that. We might have hoped to see cascades of impacts into other areas, like school attendance, but the evidence isn’t there. Why not? To some extent we can’t be sure, but returning to the theory of change, what were the expectations and assumptions? There wasn’t any part of the intervention expressly focused on getting kids into school. So it’s hard to be overly surprised when there’s no impact.
Moreover, if we want to create change in a particular area, we have to understand the constraints faced by the population. How far is the nearest school? How much are school fees? Can households afford them? Are school fees really the only barrier to school attendance? Are the children expected to work to help support the family? With this knowledge, programmes can design specific interventions to overcome barriers. An intervention could be as simple as providing information to parents to encourage them to send their daughters to school. (This has been very effective in other studies.)
We can see the power of design reflected in savings balances across sites. Preliminary data from our Ethiopia study show that households have dramatically more in the bank than in other sites – hundreds of dollars more. This is because Ethiopia was the only site to require that participants save the value of the asset transfer they received.
It’s no surprise then that conditionality has been a hot topic in cash transfer programmes lately. A study of unconditional cash transfers offered by the programme GiveDirectly in Kenya showed that households given large sums of money (US$300–US$1,000) with no strings attached don’t waste the money on frivolous expenditures or alcohol. They invest it and upgrade thatch roofs to tin. But a comparison of conditional and unconditional cash transfers in Malawi showed that while unconditional transfers improved schooling rates, programmes with strict conditions have much stronger effects, with over twice the improvement in enrolment. Lesson learned: if you want to encourage a particular behaviour, you may have to design for it.
Still, designing effective programmes can be extraordinarily difficult, a task made all the more challenging by variation across sites and contexts. Early indicators suggest quite strong results on household consumption across most of the sites, with only one real exception: Honduras. The qualitative feedback gathered along the way there is enlightening.
Why the difference in Honduras? The answer seems to have a lot to do with chickens. In the Honduras pilot most of the families chose chickens as their livelihood, and invested significant resources in feeding and caring for them. But the imported breed did not fare well in Honduras, and the chickens died off in large numbers, leaving many households worse off than they started. In Ethiopia, by contrast, modern beehives performed exceptionally well, bolstered by linkages to the national market for honey.
So, don’t give out chickens, is that the lesson? No, it isn’t that simple. The lessons are broader, and more nuanced. We learn that one must pay careful attention to the risks and rewards of the livelihood grant, and that ‘new’ livelihoods may pose particular risks, perhaps known risks, or perhaps unknown ones. Both the successes and the failures show the value of experimentation complemented by rigorous evaluation. New implementers of the graduation model have learned much from the early adopters and are incorporating lessons from the pilots in the design of their scaled-up programmes – in livelihood development, savings mobilization, and children’s long-term welfare. We plan to evaluate new graduation programmes to see whether impact can be maintained at scale, and indeed even improved upon.