Imagine a plan to increase voter participation that has worked repeatedly in the United States and is now being tried in a country with a recent history of political instability and contested elections. Democracy International ran just such a pilot project in Tunisia, with very encouraging results.[1] However, a similar experiment attempted by the United Nations in another country in the region had to be withdrawn because of political opposition. As behavioral scientists, we know that context drives people’s preferences and decisions: countless studies show that preferences are not fixed but respond to features of the situation. And yet we rarely pay as much attention to context when designing or measuring interventions.
What Needs to Be Localized?
Localization can occur at many levels, including intervention design, stimulus design, and measurement. Many years ago, one of us visited a project for a banking product in rural Andhra Pradesh in South India. Pregnant women (and their families) would make regular monthly deposits that translated into a loan for delivery. The insight behind this intervention was clear: many families did not plan for childbirth-related expenses despite having roughly nine months to do so.
What was the fatal flaw in an otherwise well-designed program? Traditionally, the first child is born at the mother’s natal home, leaving the in-laws less invested in childbirth expenses. Further, the natal home could be in a different district or state in which the bank may not have a local branch.
The stimulus presented to people is how behavioral theory becomes practice, and subtle design choices matter. We were reminded of this when we had to redo a set of visuals in a goal-setting tool. The visuals had been tested in Tanzania, and the team was seeking to implement the tool in Kenya. In Tanzania, the gender-neutral icons used to show people choosing among various options were interpreted as intended. In neighboring Kenya, however, women interpreted the visuals as applying only to men, who were seen as the ones who traditionally run businesses.
Localized Measurement
Localizing research also means localizing the measurement of responses. One of us was piloting basic behavioral experiments around mental accounting and asked respondents in Kenya what they thought was the localized version of a classic question:
“Let’s say you buy a cinema ticket, and when you arrive at the cinema, you find it has fallen out of your wallet. You have 20 KES in your wallet, which is the price of a ticket. Do you buy a new ticket?”
The research team had focused all their attention on what the “right” price for a cinema ticket was. That is not what people were focusing on at all. They had a more basic question: What is a wallet? Several respondents answered the question in the negative, simply because they didn’t understand the idea of money in a wallet.
Sometimes questions are simply unanswerable to the population. When a group of researchers from the World Bank asked people from Jakarta, Lima, and Nairobi about life expectancy, expecting to compare responses with the World Bank staff’s expectations, they found many non-responses and hard-to-interpret data.
The reason? Many respondents thought answering such a question was bad luck under some circumstances.[2]
Other questions require more piloting for precise interpretation. Measuring social norms requires assessing the reference group of the people responding. For a project about women’s workforce participation in Jordan, researchers wanted to know people’s best guesses of the percentage of Jordanian women who worked in their community. Because communities were so varied (rural to urban, migrants to natives), they had to try several versions of the reference group (“people around here,” “people where you live,” “people like you”) before settling on an acceptable one.
Perspective Taking vs. Perspective Getting
How can behavioral scientists get better at developing interventions, designing stimuli, and creating surveys that adequately capture local context? We are often told to “put ourselves in others’ shoes” in order to practice “perspective-taking.”[3] On the surface, this seems like good advice. Perspective-taking is hard, however, if you have never had the experience of trying to survive extreme poverty, living in a conflict-ridden area, or suffering from constant ill health. In fact, perspective-taking in such cases may even decrease accuracy.
A recent survey of World Bank professionals showed that even seasoned professionals harbored inaccurate beliefs about the poor, despite being deeply immersed in the contexts of poverty alleviation. They imagined that poor individuals’ self-perceptions differ from their own, even though responses from the bottom, middle, and top thirds of the income distribution turn out to be similar. Development professionals often assume that poor individuals are less autonomous, less responsible, less hopeful, and less knowledgeable than they in fact are.
A better way to get accurate information about others is “perspective-getting,” which occurs when we ask people directly. Perspective-getting is hard in its own way, and decades of research in behavioral science show us the pitfalls. People exhibit social desirability bias, particularly when there is a power differential between researcher and respondent. Respondents often cannot access their genuine reasons and instead answer an easier question than the one asked. Because of this, perspective-getting will always remain a work in progress.
Conclusion & Key Points
Localizing research is about making mistakes over and over. No intervention, measure, stimulus, or scientist is perfect from the start. Feedback, however painful (or funny in hindsight), is a key part of the process. We should all be sharing more stories of failure and mistakes so that we can keep learning from them.
The best test of an intervention or measure is when people think it has been locally developed. Many years ago, one of us interviewed for a job with Colgate. She had grown up in India using Colgate toothpaste and toothbrushes and had watched countless Colgate ads in Hindi. She was shocked to find out that Colgate was not an Indian company but an American multinational headquartered in New York!
We need our own Colgate moments as behavioral scientists. We need the people we serve to feel like the work we are doing originated in their own community, rather than from some far-off institution headquartered in a place the local population has likely never seen.
[1] See the study information at http://democracyinternational.com/media/DI_Tunisia_Study_One-Pager-English-v2.pdf ↩︎
[2] To review the full report, see https://www.worldbank.org/en/publication/wdr2015 ↩︎
[3] For more information about this topic, see https://behavioralscientist.org/be-mindwise-perspective-taking-vs-perspective-getting/ ↩︎