Reward Dimension Reduction for Scalable Multi-Objective Reinforcement Learning

28 February 2025
Giseung Park, Youngchul Sung
    OffRL
Abstract

In this paper, we introduce a simple yet effective reward dimension reduction method to tackle the scalability challenges of multi-objective reinforcement learning algorithms. While most existing approaches focus on optimizing two to four objectives, their ability to scale to environments with more objectives remains uncertain. Our method uses a dimension reduction approach to enhance learning efficiency and policy performance in multi-objective settings. While most traditional dimension reduction methods are designed for static datasets, our approach is tailored for online learning and preserves Pareto-optimality after transformation. We propose a new training and evaluation framework for reward dimension reduction in multi-objective reinforcement learning and demonstrate the superiority of our method in environments including one with sixteen objectives, significantly outperforming existing online dimension reduction methods.
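
As a rough illustration of the idea (not the paper's algorithm), the sketch below reduces a 16-dimensional reward vector to 4 dimensions with a strictly positive linear map updated online from streaming rewards. Strict positivity is the key assumption for Pareto preservation: if one return vector dominates another componentwise, the dominance carries over to the reduced vectors, so policies that are Pareto-optimal for the reduced rewards remain Pareto-optimal for the original ones. The class and method names (RewardReducer, fit_online, reduce) and the toy reconstruction objective are illustrative only.

# Hedged sketch of online reward dimension reduction (illustrative, not the
# authors' method). A d x k matrix A with strictly positive entries maps a
# k-dimensional reward r to z = A r; positivity guarantees that componentwise
# dominance in r implies dominance in z, so Pareto-optimality is preserved.
import numpy as np


class RewardReducer:
    def __init__(self, k, d, lr=1e-4, eps=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.uniform(0.5, 1.5, size=(d, k))  # strictly positive init
        self.lr = lr
        self.eps = eps  # floor that keeps every entry strictly positive

    def fit_online(self, r):
        # One streaming gradient step on a toy reconstruction loss
        # 0.5 * ||A^T A r - r||^2, followed by projection back to
        # strictly positive entries.
        r = np.asarray(r, dtype=float)
        z = self.A @ r                    # reduced reward
        e = self.A.T @ z - r              # reconstruction error
        grad = np.outer(self.A @ e, r) + np.outer(z, e)  # gradient w.r.t. A
        self.A = np.maximum(self.A - self.lr * grad, self.eps)
        return z

    def reduce(self, r):
        return self.A @ np.asarray(r, dtype=float)


# Usage: stream 16-dimensional rewards and hand the 4-dimensional reduction
# to any standard multi-objective RL learner in place of the raw reward.
reducer = RewardReducer(k=16, d=4)
for _ in range(1000):
    r = np.random.rand(16)          # stand-in for an environment reward vector
    z = reducer.fit_online(r)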

@article{park2025_2502.20957,
  title={Reward Dimension Reduction for Scalable Multi-Objective Reinforcement Learning},
  author={Giseung Park and Youngchul Sung},
  journal={arXiv preprint arXiv:2502.20957},
  year={2025}
}