This dissertation contains three essays broadly related to evaluating program effects in developing countries and to survey methodology.
In Chapter 1, joint work with Naresh Kumar, we evaluate the impact of a multifaceted female empowerment program on reducing intimate partner violence (IPV) in urban Liberia. We ran a randomized controlled trial (RCT) in partnership with the Liberian Red Cross. The intervention combined intensive psychosocial therapy with vocational skills training delivered over a full year. About 12 months after program completion, we find that the program significantly reduced the proportion of women experiencing emotional, physical, and sexual IPV by 10-26 percentage points (from control bases of 24-62 percent). While there are multiple pathways through which IPV could be affected, one channel is that the business training was highly effective: labor supply increased by 37 percent and expenditure by 49 percent. One focus of the program is psychological empowerment, and we find positive but statistically insignificant effects on indices of distress and happiness. We also find improvements in social norms around IPV: the perceived justifiability of IPV fell by 0.3 standard deviations.
In Chapter 2, joint work with Shilpa Aggarwal, Dahyeon Jeong, Naresh Kumar, Jonathan Robinson and Alan Spearot, we turn to a related issue: the accurate measurement of IPV. Women may under-report IPV due to a range of social and psychological factors. We conducted a measurement experiment in rural Liberia and Malawi in which women were asked IPV questions via self-interviewing (SI) or face-to-face interviewing (FTFI). About a third of women incorrectly answer basic screening questions in SI, and SI generates placebo effects on innocuous questions even among those who "pass" screening. Because the probability of responding "yes" to any specific IPV question is below 50 percent, and because IPV is typically reported as an index (a "yes" to at least one question), such misunderstanding mechanically inflates IPV reporting. In Malawi, we find that SI increases the reported incidence of any type of IPV by 13 percentage points on a base of 20 percent; in Liberia, we find an insignificant increase of 4 percentage points on a base of 38 percent. Our results suggest that SI may spuriously increase reported IPV rates.
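To illustrate the mechanics with a stylized calculation of our own (this is not a result from the chapter, and it assumes for illustration that a confused respondent answers each question roughly at random): suppose the index aggregates $k$ questions with true prevalences $p_1, \dots, p_k$, each below 0.5. Accurate responses yield a reported IPV rate of
$$1 - \prod_{j=1}^{k}\left(1 - p_j\right),$$
whereas random responses yield $1 - 0.5^{k}$, which is strictly larger whenever every $p_j < 0.5$. With $k = 5$ and $p_j = 0.1$, for example, accurate responding gives roughly 41 percent while random responding gives roughly 97 percent, so even a modest share of confused respondents can noticeably inflate the index.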
In Chapter 3, joint work with Shilpa Aggarwal, Dahyeon Jeong, Naresh Kumar, Jonathan Robinson and Alan Spearot, we quantify survey fatigue by randomizing the order of questions in in-person surveys (lasting 2.5 hours on average) fielded as part of an evaluation of cash transfers in rural Liberia and Malawi. An additional hour of survey time increases the probability that a respondent skips a question by 10-64 percent. Because skips become more common, the total monetary value of aggregated categories such as assets or expenditures declines as the survey goes on, and the effect is sizeable for some categories: for example, an extra hour of survey time lowers reported food expenditures by 25 percent. Evidence from a similar experiment embedded in high-frequency phone surveys shows that the results are not driven by respondents deliberately skipping questions to hasten the end of the survey, suggesting that cognitive burden is the key driver of survey fatigue.