Preference Optimization for Combinatorial Optimization Problems

13 May 2025
Mingjun Pan
Guanquan Lin
You-Wei Luo
Bin Zhu
Zhien Dai
Lijun Sun
Chun Yuan
Abstract

Reinforcement Learning (RL) has emerged as a powerful tool for neural combinatorial optimization, enabling models to learn heuristics that solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast combinatorial action spaces. In this paper, we propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling, emphasizing the superiority among sampled solutions. Methodologically, by reparameterizing the reward function in terms of the policy and utilizing preference models, we formulate an entropy-regularized RL objective that aligns the policy directly with preferences while avoiding intractable computations. Furthermore, we integrate local search techniques into fine-tuning rather than post-processing, generating high-quality preference pairs that help the policy escape local optima. Empirical results on benchmarks such as the Traveling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP), and the Flexible Flow Shop Problem (FFSP) demonstrate that our method significantly outperforms existing RL algorithms, achieving superior convergence efficiency and solution quality.
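The core training idea in the abstract, converting reward differences between sampled solutions into pairwise preferences and fitting the policy to them, can be illustrated with a short PyTorch sketch. This is a minimal illustration under our own assumptions, not the authors' implementation: the policy is assumed to expose per-solution log-probabilities, and the names tour_cost and preference_loss are hypothetical.

import torch
import torch.nn.functional as F

def tour_cost(tour: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
    # Total length of a closed TSP tour; `tour` holds node indices,
    # `coords` is an (n, 2) tensor of node coordinates.
    ordered = coords[tour]
    return (ordered - ordered.roll(-1, dims=0)).norm(dim=-1).sum()

def preference_loss(logp_better: torch.Tensor,
                    logp_worse: torch.Tensor,
                    beta: float = 1.0) -> torch.Tensor:
    # Bradley-Terry-style pairwise loss: treating beta * log pi(solution)
    # as an implicit reward (one reading of the reward-as-policy
    # reparameterization described in the abstract), maximize the
    # likelihood that the lower-cost solution is preferred.
    return -F.logsigmoid(beta * (logp_better - logp_worse)).mean()

In use, one would sample several tours per instance, rank them by tour_cost, and form (better, worse) pairs from the ranking; the local-search step described above would, on this reading, refine the best sampled tour before pairing so that the preferred side of each pair is of higher quality.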

View on arXiv
@article{pan2025_2505.08735,
  title={Preference Optimization for Combinatorial Optimization Problems},
  author={Mingjun Pan and Guanquan Lin and You-Wei Luo and Bin Zhu and Zhien Dai and Lijun Sun and Chun Yuan},
  journal={arXiv preprint arXiv:2505.08735},
  year={2025}
}