VRsketch2Gaussian: 3D VR Sketch Guided 3D Object Generation with Gaussian Splatting

16 March 2025
Songen Gu
Haoxuan Song
Binjie Liu
Qian Yu
Sanyi Zhang
Haiyong Jiang
Jin Huang
Feng Tian
    3DGS
    3DV
Abstract

We propose VRsketch2Gaussian, the first VR-sketch-guided, multi-modal, native 3D object generation framework built on a 3D Gaussian Splatting representation. As part of this work, we introduce VRSS, the first large-scale paired dataset of VR sketches, text, images, and 3DGS, bridging the gap in multi-modal VR-sketch-based generation. Our approach features three key innovations: 1) Sketch-CLIP feature alignment. We propose a two-stage alignment strategy that bridges the domain gap between sparse VR sketch embeddings and rich CLIP embeddings, facilitating both VR-sketch-based retrieval and generation. 2) Fine-grained multi-modal conditioning. We disentangle the 3D generation process by using explicit VR sketches for geometric conditioning and text descriptions for appearance control; to this end, we propose a generalizable VR sketch encoder that effectively aligns the different modalities. 3) Efficient, high-fidelity 3D-native generation. Our method leverages a 3D-native generation approach that enables fast, texture-rich 3D object synthesis. Experiments on our VRSS dataset demonstrate that our method achieves high-quality, multi-modal VR-sketch-based 3D generation. We believe the VRSS dataset and the VRsketch2Gaussian method will be beneficial to the 3D generation community.
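To make the Sketch-CLIP feature alignment idea concrete, the following is a minimal sketch of what aligning a sparse VR sketch encoder to a frozen CLIP embedding space could look like. The encoder architecture (SketchEncoder), the pooling choice, and the symmetric InfoNCE-style objective are illustrative assumptions, not the authors' released implementation.

# Hypothetical PyTorch sketch of aligning VR sketch embeddings to CLIP embeddings.
# All module names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchEncoder(nn.Module):
    """Maps a sparse VR sketch point cloud (B, N, 3) to a CLIP-sized embedding."""
    def __init__(self, clip_embed_dim: int = 768):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.proj = nn.Linear(256, clip_embed_dim)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        feats = self.point_mlp(points)       # (B, N, 256) per-point features
        pooled = feats.max(dim=1).values     # (B, 256) permutation-invariant pooling
        return F.normalize(self.proj(pooled), dim=-1)

def alignment_loss(sketch_emb: torch.Tensor, clip_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss pulling paired sketch/CLIP embeddings together."""
    logits = sketch_emb @ clip_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

The appeal of such an alignment, if this reading of the abstract is right, is that once sketch embeddings live in the CLIP space they can serve both retrieval (nearest-neighbor search against CLIP image/text embeddings) and generation (as a drop-in geometric condition alongside text-derived appearance features).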

@article{gu2025_2503.12383,
  title={VRsketch2Gaussian: 3D VR Sketch Guided 3D Object Generation with Gaussian Splatting},
  author={Songen Gu and Haoxuan Song and Binjie Liu and Qian Yu and Sanyi Zhang and Haiyong Jiang and Jin Huang and Feng Tian},
  journal={arXiv preprint arXiv:2503.12383},
  year={2025}
}