Distributed Averaging in Population Protocols
We consider a simple one-way averaging protocol on graphs. Initially, every node of the graph has a value. In each step, a node u is chosen uniformly at random, and u samples a neighbour v uniformly at random. Then u averages its value with v's as follows: x_u(t+1) = γ·x_u(t) + (1−γ)·x_v(t) for some γ ∈ (0,1), where x_w(t) is the value of node w at time t. Note that, in contrast to neighbourhood value balancing, only u changes its value. Hence, the sum (and the average) of the values of all nodes changes over time. Our results are twofold. First, we show a bound on the convergence time (the time it takes until all values are roughly the same) that is asymptotically tight for some initial assignments of values to the nodes. Our second set of results concerns the ability of this protocol to approximate the initial average of all values well: we bound the probability that the final outcome is far from the initial average. Interestingly, the variance of the outcome does not depend on the graph structure. The proof introduces an interesting generalisation of the duality between coalescing random walks and the voter model.
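The protocol described above can be sketched in a few lines. The following is an illustrative simulation only; the choice of the mixing parameter gamma, the graph, and the step budget are assumptions not fixed by the abstract.

```python
import random

def one_way_averaging(adj, values, gamma=0.5, steps=10000, seed=0):
    """Illustrative sketch of the one-way averaging protocol.

    adj:    dict mapping each node to a list of its neighbours
    values: dict mapping each node to its initial value
    gamma:  mixing parameter (an assumed value; the abstract only
            says "for some gamma")
    """
    rng = random.Random(seed)
    x = dict(values)
    nodes = list(adj)
    for _ in range(steps):
        u = rng.choice(nodes)        # node chosen uniformly at random
        v = rng.choice(adj[u])       # u samples a neighbour uniformly
        # One-way update: only u changes its value, so the global
        # sum (and average) can drift over time.
        x[u] = gamma * x[u] + (1 - gamma) * x[v]
    return x

# Example: a cycle on 4 nodes with a single node holding value 1.
adj = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
vals = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
out = one_way_averaging(adj, vals)
```

After many steps the values concentrate around a common outcome; because each update is a convex combination, every value stays within the range of the initial values, but the final common value need not equal the initial average.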