World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning

13 March 2025
Siyin Wang
Zhaoye Fei
Qinyuan Cheng
Shiduo Zhang
Panpan Cai
Jinlan Fu
Xipeng Qiu
Abstract

Recent advances in large vision-language models (LVLMs) have shown promise for embodied task planning, yet they struggle with fundamental challenges like dependency constraints and efficiency. Existing approaches either solely optimize action selection or leverage world models during inference, overlooking the benefits of learning to model the world as a way to enhance planning capabilities. We propose Dual Preference Optimization (D²PO), a new learning framework that jointly optimizes state prediction and action selection through preference learning, enabling LVLMs to understand environment dynamics for better planning. To automatically collect trajectories and stepwise preference data without human annotation, we introduce a tree search mechanism for extensive exploration via trial-and-error. Extensive experiments on VoTa-Bench demonstrate that our D²PO-based method significantly outperforms existing methods and GPT-4o when applied to Qwen2-VL (7B), LLaVA-1.6 (7B), and LLaMA-3.2 (11B), achieving superior task success rates with more efficient execution paths.
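The abstract describes D²PO as jointly optimizing state prediction and action selection through preference learning. The paper's exact objective is not given here; the sketch below assumes a standard DPO-style loss (negative log-sigmoid of the scaled implicit-reward margin between a preferred and a dispreferred response) applied to each of the two branches, with a hypothetical weight `lam` balancing them. All function names and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair.

    Inputs are log-probabilities of the chosen/rejected responses under
    the policy being trained and under a frozen reference model.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # -log(sigmoid(margin)): small when the policy prefers the chosen response
    # more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def dual_preference_loss(action_pair, state_pair, beta=0.1, lam=1.0):
    """Hypothetical dual objective: an action-selection preference term plus a
    state-prediction preference term, combined with an assumed weight `lam`.

    Each *_pair is (logp_chosen, logp_rejected, ref_chosen, ref_rejected).
    """
    return (dpo_loss(*action_pair, beta=beta)
            + lam * dpo_loss(*state_pair, beta=beta))

# Toy numbers: the policy already slightly prefers the chosen action/state.
action_pair = (-1.0, -2.0, -1.5, -1.5)   # chosen gained, rejected lost
state_pair = (-0.5, -1.0, -0.8, -0.8)
loss = dual_preference_loss(action_pair, state_pair)
```

When all four log-probabilities in a pair are equal, the margin is zero and each term reduces to log 2, which is a quick sanity check on the implementation.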

@article{wang2025_2503.10480,
  title={World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning},
  author={Siyin Wang and Zhaoye Fei and Qinyuan Cheng and Shiduo Zhang and Panpan Cai and Jinlan Fu and Xipeng Qiu},
  journal={arXiv preprint arXiv:2503.10480},
  year={2025}
}