DreamCS: Geometry-Aware Text-to-3D Generation with Unpaired 3D Reward Supervision

11 June 2025
Xiandong Zou
Ruihao Xia
Hongsong Wang
Pan Zhou
Abstract

While text-to-3D generation has attracted growing interest, existing methods often struggle to produce 3D assets that align well with human preferences. Current preference-alignment techniques for 3D content typically rely on hard-to-collect preference-paired multi-view 2D images to train 2D reward models, which then guide 3D generation, leading to geometric artifacts due to their inherent 2D bias. To address these limitations, we construct 3D-MeshPref, the first large-scale unpaired 3D preference dataset, featuring diverse 3D meshes annotated by a large language model and refined by human evaluators. We then develop RewardCS, the first reward model trained directly on unpaired 3D-MeshPref data using a novel Cauchy-Schwarz divergence objective, enabling effective learning of human-aligned 3D geometric preferences without requiring paired comparisons. Building on this, we propose DreamCS, a unified framework that integrates RewardCS into text-to-3D pipelines, enhancing both implicit and explicit 3D generation with human preference feedback. Extensive experiments show that DreamCS outperforms prior methods, producing 3D assets that are both geometrically faithful and human-preferred. Code and models will be released publicly.
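The key technical ingredient named in the abstract is the Cauchy-Schwarz divergence, which measures the discrepancy between two distributions from unpaired samples alone. Below is a minimal PyTorch sketch of a kernel-based empirical CS divergence between two unpaired sets of mesh embeddings; the Gaussian kernel, the bandwidth `sigma`, and the `preferred`/`rejected` tensors are illustrative assumptions for exposition, not the authors' released RewardCS implementation.

import torch

def gaussian_gram(x: torch.Tensor, y: torch.Tensor, sigma: float) -> torch.Tensor:
    # Pairwise Gaussian kernel values k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def cs_divergence(p: torch.Tensor, q: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Empirical Cauchy-Schwarz divergence:
    #   D_CS(p, q) = -log( (E k(p, q))^2 / (E k(p, p) * E k(q, q)) )
    # Nonnegative by the Cauchy-Schwarz inequality; near zero when the two
    # sample sets induce matching kernel density estimates.
    cross = gaussian_gram(p, q, sigma).mean()
    self_p = gaussian_gram(p, p, sigma).mean()
    self_q = gaussian_gram(q, q, sigma).mean()
    return -torch.log(cross.pow(2) / (self_p * self_q + 1e-12) + 1e-12)

# Toy usage with hypothetical embeddings of preferred vs. rejected meshes.
preferred = torch.randn(128, 32)
rejected = torch.randn(128, 32) + 0.5
print(cs_divergence(preferred, rejected))  # grows as the two sets diverge

Because the objective compares whole sample sets rather than individual pairs, a reward model can in principle be trained this way on unpaired preference data, which is the property the abstract attributes to RewardCS.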

@article{zou2025_2506.09814,
  title={DreamCS: Geometry-Aware Text-to-3D Generation with Unpaired 3D Reward Supervision},
  author={Xiandong Zou and Ruihao Xia and Hongsong Wang and Pan Zhou},
  journal={arXiv preprint arXiv:2506.09814},
  year={2025}
}
Main: 9 pages · Bibliography: 4 pages · Appendix: 6 pages · 11 figures · 4 tables