
What interventions can decrease or increase belief polarisation in a population of rational agents?

Abstract

In many situations where people communicate (e.g., Twitter, Facebook), people self-organise into ‘echo chambers’ of like-minded individuals, with different echo chambers espousing very different beliefs. Why does this occur? Previous work has demonstrated that such belief polarisation can emerge even when all agents are completely rational, as long as their initial beliefs are heterogeneous and they do not automatically know whom to trust. In this work, we used agent-based simulations to further investigate the mechanisms underlying belief polarisation. Our work extended previous work by using a more realistic scenario. In this scenario, we found that previously proposed methods for reducing belief polarisation did not work, but we were able to find a new method that did. However, this same method could be reversed by adversarial entities to increase belief polarisation. We discuss how this danger can best be mitigated and what theoretical conclusions can be drawn from our findings.
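
To make the general mechanism concrete, here is a minimal, hypothetical agent-based sketch (not the model used in this paper): agents hold heterogeneous beliefs about a binary question, weight peers' reports by a trust score, and raise their trust in peers whose reports agree with their own current belief. All parameter values and update rules below are illustrative assumptions; under rules of this kind, beliefs typically split into opposing clusters, i.e. echo chambers.

```python
# Illustrative sketch only: rational-seeming agents who must also learn whom to
# trust can end up polarised. Not the authors' model; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 100, 500
learning_rate, trust_rate = 0.05, 0.1

beliefs = rng.uniform(0.0, 1.0, n_agents)      # heterogeneous initial beliefs
trust = np.full((n_agents, n_agents), 0.5)     # agents start unsure whom to trust

for _ in range(n_steps):
    speakers = rng.integers(0, n_agents, n_agents)  # each agent hears one random peer
    for i, j in enumerate(speakers):
        if i == j:
            continue
        report = 1.0 if beliefs[j] > 0.5 else 0.0   # peer reports their current leaning
        agreement = 1.0 - abs(beliefs[i] - report)  # how well the report fits agent i's belief
        # belief moves toward the report in proportion to trust in the speaker
        beliefs[i] += learning_rate * trust[i, j] * (report - beliefs[i])
        # trust in the speaker moves toward the observed level of agreement
        trust[i, j] += trust_rate * (agreement - trust[i, j])

# A bimodal histogram of final beliefs indicates polarisation into opposing clusters.
print(np.histogram(beliefs, bins=10, range=(0.0, 1.0))[0])
```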
