Slow Convergence of Interacting Kalman Filters in Word-of-Mouth Social
Learning
We consider word-of-mouth social learning involving m Kalman filter agents that operate sequentially. The first Kalman filter receives the raw observations, while each subsequent Kalman filter receives a noisy measurement of the conditional mean of the previous Kalman filter. The prior is updated by the m-th Kalman filter. When m = 2, and the observations are noisy measurements of a Gaussian random variable, the covariance goes to zero as O(N^{-1/3}) for N observations, instead of O(1/N) as in the standard Kalman filter. In this paper we prove that for m agents, the covariance decreases to zero as O(N^{-1/(2^m - 1)}), i.e., the learning slows down exponentially with the number of agents. We also show that by artificially weighting the prior at each time step, the learning rate can be made optimal, namely O(1/N). The implication is that in word-of-mouth social learning, artificially re-weighting the prior can yield the optimal learning rate.
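The sequential structure described above — each agent Kalman-filtering a noisy report of the previous agent's conditional mean — can be sketched in a few lines for two agents. This is a minimal illustration under assumed noise levels and priors, not the paper's exact architecture (in particular it omits the prior feedback through the last filter); all variable names are hypothetical. Agent 1 is a standard scalar Kalman filter for a static Gaussian random variable, whose posterior variance exhibits the baseline O(1/N) decay; agent 2 filters noisy reports of agent 1's conditional mean.

```python
import random

def kf_update(mean, var, z, noise_var):
    """One scalar Kalman-filter update for a static state x,
    given a measurement z = x + noise with variance noise_var."""
    gain = var / (var + noise_var)
    return mean + gain * (z - mean), (1.0 - gain) * var

random.seed(0)
x = random.gauss(0.0, 1.0)          # static Gaussian random variable to learn
sv, sw = 1.0, 1.0                   # observation / report noise std devs (assumed)

m1, p1 = 0.0, 1.0                   # agent 1: prior N(0, 1)
m2, p2 = 0.0, 1.0                   # agent 2: prior N(0, 1)
N = 10_000
for _ in range(N):
    y = x + random.gauss(0.0, sv)   # raw observation, seen only by agent 1
    m1, p1 = kf_update(m1, p1, y, sv**2)
    z = m1 + random.gauss(0.0, sw)  # agent 2 hears a noisy report of agent 1's mean
    # Agent 2 Kalman-filters the reports; note its measurements are *correlated*
    # over time (they all contain agent 1's estimation error), which is what
    # degrades the learning rate in the word-of-mouth setting.
    m2, p2 = kf_update(m2, p2, z, sw**2)

# Agent 1's posterior variance matches the standard Kalman rate:
print(p1)                           # approx sv**2 / N
```

After N updates the scalar information recursion gives 1/p1 = 1/p0 + N/sv**2, so p1 ≈ sv²/N, the standard rate that the cascade fails to attain.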