Open Access Publications from the University of California


UC Irvine Electronic Theses and Dissertations

Blinding Evaluations of Scientific Evidence Reveals and Reduces Partisan Biases

No data is associated with this publication.
Creative Commons 'BY' version 4.0 license

Do political partisans evaluate new information in a biased way? Despite decades of research, this question has been difficult for psychologists to resolve. Proponents of rationalist accounts claim that ostensible biases can be solely explained by impartial, accuracy-motivated processes; in contrast, proponents of motivated accounts contend that partisans’ evaluations are often influenced by biased, directionally motivated processes. Embracing the logic of blinding that underlies many scientific practices, I designed and deployed a novel experimental paradigm across four preregistered experiments (N = 4010) to critically test these two accounts. Participants were randomly assigned to evaluate the methodological quality of scientific evidence either before they knew its results (blinded evaluations) or after they knew its results (unblinded evaluations). The critical assumption underlying this design was that blinded participants provided purely impartial, accuracy-motivated evaluations. Unblinded participants, on the other hand, may or may not have been biased by their prior political beliefs when evaluating the new information. Indeed, in every study, unblinded participants were unduly influenced by their prior beliefs compared to their attitudinally similar counterparts who made blinded evaluations. Partisans’ feelings and expectations produced independent, yet highly intertwined, biasing effects on their evaluations. These biases were most evident in unblinded participants’ denigration of politically unfriendly information, rather than in their veneration of politically friendly information. Additionally, I found that blinding partisans’ quality evaluations influenced how credible they found politically unfriendly information and how they updated their beliefs in response to that information.
I discuss how these findings can be integrated into existing theoretical models—including Bayesian models of belief updating—to provide more accurate descriptions of political cognition. Ultimately, these results disconfirmed strong rationalist accounts of partisan evaluations and supported the existence of partisan biases.


This item is under embargo until June 22, 2024.