Efficient Reinforcement Learning for Global Decision Making in the Presence of Local Agents at Scale

We study reinforcement learning for global decision-making in the presence of many local agents, where the global decision-maker makes decisions affecting all local agents, and the objective is to learn a policy that maximizes the rewards of both the global and the local agents. Such problems find many applications, e.g., demand response, EV charging, and queueing. In this setting, scalability has been a long-standing challenge due to the size of the state/action space, which can be exponential in the number of agents $n$. This work proposes an algorithm in which the global agent subsamples $k \le n$ local agents to compute an optimal policy in time that is only exponential in $k$, providing an exponential speedup over standard methods that are exponential in $n$. We show that the learned policy converges to the optimal policy on the order of $\tilde{O}(1/\sqrt{k} + \epsilon_k)$ as the number of sub-sampled agents $k$ increases, where $\epsilon_k$ is the Bellman noise, by proving a novel generalization of the Dvoretzky-Kiefer-Wolfowitz inequality to the regime of sampling without replacement. We also conduct numerical simulations in a demand-response setting and a queueing setting.
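The following is a minimal, hypothetical Python sketch of the subsampling idea described above, not the paper's actual algorithm: the global decision-maker draws $k$ of the $n$ local agents uniformly without replacement and picks its action from a lookup table indexed by the global state and an order-invariant summary of the sub-sampled local states. All names (subsampled_global_action, q_table, etc.) are illustrative assumptions.

```python
import random
from collections import Counter


def subsampled_global_action(global_state, local_states, k, q_table):
    """Toy illustration of subsampling-based global decision making.

    The global agent looks at only k of the n local agents (sampled without
    replacement) and acts greedily w.r.t. a Q-table indexed by the global
    state and an order-invariant summary of the sub-sampled local states,
    so the table size scales with k rather than with n.
    """
    sampled = random.sample(local_states, k)  # sampling without replacement
    summary = tuple(sorted(Counter(sampled).items()))  # order-invariant summary
    actions = q_table[(global_state, summary)]  # dict: action -> estimated value
    return max(actions, key=actions.get)


if __name__ == "__main__":
    # Tiny demo with n = 5 binary local states and k = 2 sub-sampled agents.
    local = [0, 1, 1, 0, 1]
    q = {
        ("idle", ((0, 2),)): {"on": 0.1, "off": 0.9},
        ("idle", ((1, 2),)): {"on": 0.8, "off": 0.3},
        ("idle", ((0, 1), (1, 1))): {"on": 1.0, "off": 0.2},
    }
    print(subsampled_global_action("idle", local, k=2, q_table=q))
```

Because the table is indexed by summaries of only $k$ local states, its size grows with the number of such summaries rather than with all $n$-agent configurations, which is the source of the exponential-in-$k$ (rather than exponential-in-$n$) complexity mentioned above.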