Sharing versus Believing (Mis)information on Social Media
- Narang, Jimmy
- Advisor(s): Miguel, Edward
Abstract
This dissertation explores how misinformation spreads on social networks through peer-to-peer messaging, i.e., when trusted friends or family members share news stories with one another. It identifies mechanisms through which false stories gain more credibility from being shared than true stories do, and demonstrates these mechanisms using lab-in-the-field experiments in India. In doing so, the dissertation addresses a key gap in the literature: namely, how platforms such as WhatsApp --- which have no centralized newsfeed, offer no recommendations, and provide no feedback on user engagement --- nonetheless become powerful vectors for transmitting misinformation online.
Chapter 1 presents the motivation behind this research and reviews the relevant literature. Chapter 2 develops a microeconomic model of how individuals (“sharers”) decide which stories to share with their friends, and how those friends (the “receivers”) update their beliefs about these stories in response. In the model, receivers take into account who shared the story, and guess why that person might have shared it, while forming their opinion; but they make mistakes in the process. For instance, they misread sharing as a sign of the sharer's belief in the story's veracity (discounting other reasons for sharing), or they overestimate how informative their friends' beliefs are about a story's truth. Moreover, receivers' errors disproportionately increase their beliefs in false stories relative to true ones.
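As a stylized sketch of this updating problem (the notation below is mine, not the dissertation's): a Bayesian receiver who learns that a friend shared a story should weigh the prior against how informative sharing really is, whereas the receiver described above acts as if sharing were a stronger signal than it is.

```latex
% Stylized sketch of the receiver's updating problem (illustrative notation only).
%   \pi     : receiver's prior belief that the story is true
%   L       : true likelihood ratio  P(shared | true) / P(shared | false)
%   \hat{L} : the likelihood ratio the receiver acts as if holds
\[
  \underbrace{P(\text{true} \mid \text{shared})}_{\text{correct posterior}}
  \;=\; \frac{L\,\pi}{L\,\pi + (1-\pi)},
  \qquad
  \underbrace{\hat{P}(\text{true} \mid \text{shared})}_{\text{receiver's posterior}}
  \;=\; \frac{\hat{L}\,\pi}{\hat{L}\,\pi + (1-\pi)} .
\]
% The errors described above amount to acting as if \hat{L} > L, so for any
% interior prior 0 < \pi < 1 the receiver's posterior exceeds the correct one.
```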
Chapter 2 also describes the experimental design I use to test whether receivers indeed make such errors, and to identify why they do so. I build a social-media-style platform for the study and invite over 800 pairs of real-life friends to share news stories with one another on it and to report their beliefs about these stories' veracity. For robustness, I vary how the stories are selected, the incentives participants face for reporting various answers, and the pace with (and setting in) which they complete these tasks. The study's findings are robust to these variations.
Chapter 3 presents empirical evidence on how sharers decide which stories to share, and how well they can predict those stories' veracity. I find that sharers' beliefs are strong predictors both of stories' veracity and of their likelihood of sharing those stories. Nevertheless, their sharing decisions are essentially uninformative about story veracity.

Chapter 4 presents empirical evidence on how receivers update their beliefs about these forwarded stories. I find that receivers significantly over-interpret sharing as a sign of a story's veracity, updating as if there were a 2-in-3 chance that a shared story is true (when, in fact, the chance is scarcely better than noise). Furthermore, receivers show no sign of accounting for _why_ the sharer may have shared the story. To isolate the sources of these errors, I present receivers with (i) their friends' _beliefs_ about the story, directly; and (ii) informative, computer-generated clues about the stories' veracity. I find that receivers overestimate how well their friends' beliefs predict a story's veracity; ignore heterogeneity in how those beliefs map into sharing decisions; and exhibit cognitive biases -- such as base-rate neglect and under-inference -- that cause them to update the most on the stories they originally believed the least. As a result, receivers' beliefs in shared stories increase irrespective of the sharers' own beliefs in them.
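To make the 2-in-3 figure concrete, reusing the illustrative notation from the sketch above: the 50-50 prior and the implied likelihood ratio of 2 below are assumptions chosen for the arithmetic, not estimates from the dissertation; only the 2-in-3 posterior and the near-noise informativeness of sharing come from the findings summarized here.

```latex
% Illustrative arithmetic only; the 50-50 prior is an assumption, not a study estimate.
% A receiver who starts from prior \pi = 0.5 and lands at 2/3 after seeing a share
% is acting as if the likelihood ratio of sharing were about 2:
\[
  \hat{P}(\text{true}\mid\text{shared})
  = \frac{\hat{L}\,\pi}{\hat{L}\,\pi + (1-\pi)}
  = \frac{2 \times 0.5}{2 \times 0.5 + 0.5}
  = \frac{2}{3} .
\]
% If sharing is in fact nearly uninformative (L \approx 1), the correct posterior
% barely moves from the prior:
\[
  P(\text{true}\mid\text{shared})
  \approx \frac{1 \times 0.5}{1 \times 0.5 + 0.5}
  = 0.5 .
\]
```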
Finally, in Chapter 5, I estimate what receivers' beliefs would look like under counterfactual signaling arrangements. I find that no single arrangement would simultaneously increase mean belief in true stories and reduce mean belief in false stories, but the best option would be to reveal sharers' beliefs to receivers, for both the stories they shared and those they did not. I conclude with a short discussion of the study's limitations and policy implications.