Structured Reinforcement Learning for Combinatorial Decision-Making

Reinforcement learning (RL) is increasingly applied to real-world problems involving complex and structured decisions, such as routing, scheduling, and assortment planning. These settings challenge standard RL algorithms, which struggle to scale, generalize, and exploit structure in the presence of combinatorial action spaces. We propose Structured Reinforcement Learning (SRL), a novel actor-critic framework that embeds combinatorial optimization layers into the actor neural network. We enable end-to-end learning of the actor via Fenchel-Young losses and provide a geometric interpretation of SRL as a primal-dual algorithm in the dual of the moment polytope. Across six environments with exogenous and endogenous uncertainty, SRL matches or surpasses the performance of unstructured RL and imitation learning on static tasks and improves over these baselines by up to 92% on dynamic problems, with improved stability and convergence speed.
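To make the Fenchel-Young training idea concrete, below is a minimal, hypothetical PyTorch sketch of a perturbation-based Fenchel-Young loss wrapped around a combinatorial oracle, here a top-k (k-subset) layer as a stand-in for a generic combinatorial optimization layer. The function names, the choice of oracle, and the hyperparameters (n_samples, sigma) are illustrative assumptions, not the authors' implementation.

import torch

def top_k_oracle(theta: torch.Tensor, k: int) -> torch.Tensor:
    """Combinatorial oracle: 0/1 indicator of the top-k entries of theta,
    i.e. the vertex of the k-subset polytope maximizing <theta, y>."""
    y = torch.zeros_like(theta)
    idx = theta.topk(k, dim=-1).indices
    return y.scatter(-1, idx, 1.0)

def fenchel_young_loss(theta, y_target, k, n_samples=16, sigma=1.0):
    """Monte-Carlo surrogate for a perturbed Fenchel-Young loss
    L(theta; y) = F_sigma(theta) - <theta, y>, whose gradient is
    E[oracle(theta + sigma * Z)] - y  with Z standard Gaussian."""
    z = sigma * torch.randn((n_samples,) + theta.shape)
    y_hat = top_k_oracle(theta.unsqueeze(0) + z, k).mean(dim=0)
    # Linear surrogate: its gradient w.r.t. theta is exactly y_hat - y_target.
    return ((y_hat - y_target).detach() * theta).sum()

# Usage sketch: theta would be the actor network's score vector for a state,
# and y_target a reference solution (e.g. an expert or improved action).
theta = torch.randn(5, requires_grad=True)
y_star = top_k_oracle(torch.randn(5), k=2)
loss = fenchel_young_loss(theta, y_star, k=2)
loss.backward()  # theta.grad is a Monte-Carlo estimate of y_hat - y_star

In the actor-critic setting described above, the same loss would steer the actor's scores so that the combinatorial layer's output moves toward actions the critic rates highly; the top-k oracle can in principle be swapped for any linear-objective combinatorial solver (shortest path, matching, etc.).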
@article{hoppe2025_2505.19053,
  title   = {Structured Reinforcement Learning for Combinatorial Decision-Making},
  author  = {Heiko Hoppe and Léo Baty and Louis Bouvier and Axel Parmentier and Maximilian Schiffer},
  journal = {arXiv preprint arXiv:2505.19053},
  year    = {2025}
}