Stochastic Shortest Path with Sparse Adversarial Costs
- AAML

We study the adversarial Stochastic Shortest Path (SSP) problem with sparse costs under full-information feedback. In the known transition setting, existing bounds based on Online Mirror Descent (OMD) with negative-entropy regularization scale with $\sqrt{\log(SA)}$, where $SA$ is the size of the state-action space. While we show that this is optimal in the worst case, the bound fails to capture the benefits of sparsity when only a small number $M \ll SA$ of state-action pairs incur cost. In fact, we also show that the negative-entropy regularizer is inherently non-adaptive to sparsity: it provably incurs regret scaling with $\sqrt{\log(SA)}$ on sparse problems. Instead, we propose a family of $\ell_r$-norm regularizers ($r \in (1,2)$) that adapts to the sparsity and achieves regret scaling with $\sqrt{\log M}$ instead of $\sqrt{\log(SA)}$. We show this is optimal via a matching lower bound, highlighting that $M$ captures the effective dimension of the problem rather than $SA$. Finally, in the unknown transition setting, the benefits of sparsity are limited: we prove that, even on sparse problems, the minimax regret of any learner scales polynomially with $SA$.
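To illustrate the kind of update involved, below is a minimal sketch of an OMD step with a separable $\ell_r$-style regularizer $\psi(q) = \frac{1}{r-1}\sum_i q_i^r$, $r \in (1,2)$, over the probability simplex as a stand-in for the SSP occupancy-measure polytope. The function name `omd_step`, this specific choice of $\psi$, the step size, and the bisection-based Bregman projection are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def omd_step(q, cost, eta, r):
    """One OMD step on the probability simplex with the separable
    regularizer psi(q) = sum_i q_i^r / (r - 1), for r in (1, 2).
    The update is the Bregman projection of a dual-space gradient step,
    computed by bisecting over the simplex's Lagrange multiplier."""
    theta = (r / (r - 1)) * q ** (r - 1) - eta * cost  # grad psi(q) - eta * c

    def primal(lam):
        # Invert the mirror map coordinate-wise; the clip at zero is the
        # KKT condition for the non-negativity constraints.
        z = np.maximum(((r - 1) / r) * (theta + lam), 0.0)
        return z ** (1.0 / (r - 1))

    lo = -theta.max()                # primal(lo) sums to 0
    hi = -theta.min() + r / (r - 1)  # every coordinate of primal(hi) is >= 1
    for _ in range(100):             # primal(.).sum() is nondecreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if primal(mid).sum() < 1.0 else (lo, mid)
    q_next = primal(0.5 * (lo + hi))
    return q_next / q_next.sum()     # absorb residual bisection error

# Toy run: costs supported on only M of the n coordinates, mimicking
# sparse adversarial costs over a large state-action space.
rng = np.random.default_rng(0)
n, M = 1000, 5
support = rng.choice(n, size=M, replace=False)
q = np.full(n, 1.0 / n)
for _ in range(200):
    c = np.zeros(n)
    c[support] = rng.random(M)       # only M pairs ever incur cost
    q = omd_step(q, c, eta=0.1, r=1.5)
```

Varying $r$ between 1 and 2 interpolates between entropy-like and Euclidean geometries; a standard tuning in such analyses is $r = 1 + 1/\log M$, which yields $\sqrt{\log M}$-type factors, though whether the paper uses exactly this choice is an assumption here.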