
α^α-Rank: Practically Scaling α-Rank through Stochastic Optimisation

Abstract

Recently, α-Rank, a graph-based algorithm, was proposed as a solution for ranking joint policy profiles in large-scale multi-agent systems. α-Rank claimed tractability through a polynomial-time implementation with respect to the total number of pure strategy profiles. Here, we note that the inputs to the algorithm were not clearly specified in the original presentation; as such, we deem the complexity claims ungrounded, and conjecture that solving α-Rank is NP-hard. The authors of α-Rank suggested that the input to α-Rank can be an exponentially-sized payoff matrix, a claim promised to be clarified in subsequent manuscripts. Even though α-Rank admits a polynomial-time solution with respect to such an input, we identify further critical problems. We demonstrate that, due to the need to construct an exponentially large Markov chain, α-Rank is infeasible beyond a small finite number of agents. We ground these claims by adopting the dollar cost of computation as an irrefutable evaluation metric. Recognising this scalability issue, we present a stochastic implementation of α-Rank with a double-oracle mechanism that allows reductions in the joint strategy space. Our method, α^α-Rank, does not need to store the exponentially large transition matrix, and can terminate early once a required precision is reached. Although our method theoretically exhibits worst-case complexity guarantees similar to those of α-Rank, it allows us, for the first time, to practically conduct large-scale multi-agent evaluations. On 10^4 × 10^4 random matrices, we achieve a 1000× speedup. Furthermore, we also show successful results on large joint strategy profiles with a maximum size on the order of O(2^25) (≈ 33 million joint strategies), a setting that cannot be evaluated with α-Rank under a reasonable computational budget.
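To make the scalability issue concrete, the sketch below (an illustration, not the authors' implementation) shows why the joint strategy space explodes with the number of agents, and how a stationary distribution can be computed matrix-free, by applying the transition operator to a vector rather than storing the exponentially large transition matrix, with early termination under a required precision. The `stationary_distribution` helper and its parameters are hypothetical names introduced here for illustration.

```python
import numpy as np

# The α-Rank Markov chain is defined over joint strategy profiles. With
# n_agents agents and n_strategies pure strategies each, the state space
# grows exponentially: here 2^25, roughly 33 million joint strategies.
n_agents, n_strategies = 25, 2
n_profiles = n_strategies ** n_agents
# A dense transition matrix would need n_profiles^2 entries -- infeasible.

def stationary_distribution(apply_T, n, tol=1e-8, max_iter=10_000):
    """Matrix-free power iteration (sketch).

    apply_T: callable mapping a distribution v to v @ T, where T is a
             row-stochastic transition matrix that is never materialised.
    Terminates early once successive iterates differ by less than tol.
    """
    v = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        v_next = apply_T(v)
        if np.linalg.norm(v_next - v, 1) < tol:
            return v_next
        v = v_next
    return v
```

For an irreducible, aperiodic chain this converges to the stationary distribution used for ranking, while the memory footprint stays linear in the number of states rather than quadratic; the paper's stochastic optimisation and double-oracle reduction go further by also avoiding enumeration of the full state space.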
