In this work, we study a mathematical model for the spread of fake news on social networks. The model includes a large number of agents attempting to learn an underlying true state of the world iteratively. At each iteration, agents update their beliefs about the true state in a non-Bayesian fashion, using noisy observations of the true state and the beliefs of a subset of other agents. These subsets may include stubborn agents, who attempt to convince others of an erroneous true state (modeling users spreading fake news). This process continues for a finite number of iterations, which we call the learning horizon. In the first part of the paper, we characterize the learning outcome in terms of the learning horizon and a quantity describing the "density" of stubborn agents, assuming a certain generative model for the underlying social network. Among other conclusions, our analysis shows that the learning outcome exhibits a phase transition, wherein agents learn the true state on short horizons but suddenly forget it on slightly longer horizons, and that adversaries deploying stubborn agents experience diminishing returns. In the second part of the paper, we leverage our learning outcome analysis to devise optimal strategies for seeding stubborn agents so as to disrupt learning. While our proofs of optimality rely on the same generative graph model, we show empirically that these seeding strategies outperform intuitive heuristics on real social networks that do not conform to the generative model. Furthermore, the form of our proposed strategies is non-obvious and yields novel insights into vulnerabilities of non-Bayesian learning models.
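
To make the learning dynamics concrete, the following is a minimal simulation sketch. It assumes a DeGroot-style averaging update, Gaussian observation noise, and an Erdős–Rényi random graph as a stand-in for the paper's generative network model; the specific update rule, parameter names, and values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 200, 0.05            # number of agents, edge probability (Erdos-Renyi stand-in)
true_state, fake_state = 1.0, -1.0
noise_std = 0.5             # std of agents' noisy private observations
horizon = 50                # learning horizon (number of iterations)
n_stubborn = 10             # number of stubborn agents (proxy for their "density")

# Random undirected graph; the paper's actual generative model may differ.
A = rng.random((n, n)) < p
A = np.triu(A, 1)
A = A | A.T

is_stubborn = np.zeros(n, dtype=bool)
is_stubborn[rng.choice(n, size=n_stubborn, replace=False)] = True

beliefs = rng.normal(0.0, 1.0, size=n)
beliefs[is_stubborn] = fake_state          # stubborn agents hold the erroneous state

for t in range(horizon):
    signals = true_state + noise_std * rng.normal(size=n)   # noisy observations of the true state
    new_beliefs = beliefs.copy()
    for i in range(n):
        if is_stubborn[i]:
            continue                                          # stubborn agents never update
        nbrs = np.flatnonzero(A[i])
        nbr_mean = beliefs[nbrs].mean() if nbrs.size else beliefs[i]
        # Non-Bayesian update: average own noisy signal with neighbors' current beliefs.
        new_beliefs[i] = 0.5 * signals[i] + 0.5 * nbr_mean
    beliefs = new_beliefs

print("mean belief of regular agents:", beliefs[~is_stubborn].mean())
```

Varying `horizon` and `n_stubborn` in this sketch gives a rough sense of the interplay the paper analyzes formally: how long the agents interact and how densely stubborn agents are seeded jointly determine whether the regular agents' beliefs end up near the true state or are pulled toward the fake one.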