Learned Collusion
32 pages (main) + 6 pages (appendix) + 3 pages (bibliography); 21 figures, 16 tables
Abstract
Q-learning can be described as an all-purpose automaton that maintains estimates (Q-values) of the continuation values associated with each available action and follows the naive policy of almost always choosing the action with the highest Q-value. We consider a family of automata based on Q-values whose policy may systematically favor some actions over others, for example through a bias toward cooperation. We look for stable equilibrium biases that are easily learned under converging logit/best-response dynamics over biases and that require no tacit agreement. These biases strongly foster collusion or cooperation across a rich array of payoff and monitoring structures, independently of initial Q-values.
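One way to picture the biased automaton is as standard Q-learning whose action-selection step adds a fixed bonus to the favored action while the Q-value updates themselves stay unchanged. The sketch below is a minimal illustration, not the paper's exact specification: the prisoner's dilemma payoffs, the hyperparameters `ALPHA`, `GAMMA`, `EPSILON`, the bias value, and the `BiasedQLearner` class are all illustrative assumptions.

```python
import random

# Sketch (assumed setup, not the authors' specification): a Q-learning
# automaton for a repeated prisoner's dilemma whose policy adds a fixed
# bias to the score of one action (here, cooperation) before choosing.

ACTIONS = ["C", "D"]  # cooperate, defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.05  # assumed hyperparameters


class BiasedQLearner:
    def __init__(self, bias):
        self.bias = bias  # policy bias toward cooperation; 0 recovers naive Q-learning
        self.q = {a: 0.0 for a in ACTIONS}  # Q-values: continuation-value estimates

    def act(self):
        # Policy: almost always pick the action with the highest *biased* score.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        score = {a: self.q[a] + (self.bias if a == "C" else 0.0) for a in ACTIONS}
        return max(score, key=score.get)

    def update(self, action, reward):
        # Standard Q-learning update; the bias affects only action choice.
        target = reward + GAMMA * max(self.q.values())
        self.q[action] += ALPHA * (target - self.q[action])


# Two biased learners playing the repeated game against each other.
p1, p2 = BiasedQLearner(bias=1.0), BiasedQLearner(bias=1.0)
for _ in range(50_000):
    a1, a2 = p1.act(), p2.act()
    p1.update(a1, PAYOFF[(a1, a2)])
    p2.update(a2, PAYOFF[(a2, a1)])
print(p1.q, p2.q)
```

Under this toy setup, a sufficiently large bias keeps both automata choosing cooperation often enough that cooperative play is reinforced, regardless of how the Q-values are initialized.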
