CrossGen: Learning and Generating Cross Fields for Quad Meshing

Cross fields play a critical role in many geometry processing tasks, especially quad mesh generation. Existing methods for cross field generation often struggle to balance computational efficiency with quality, typically relying on slow per-shape optimization. We introduce CrossGen, a novel framework that supports both feed-forward prediction and latent generative modeling of cross fields for quad meshing by unifying geometry and cross field representations within a joint latent space. Our method computes high-quality cross fields for general input shapes extremely quickly, typically within one second and without per-shape optimization. It takes a point-sampled surface (a point-cloud surface) as input, so a straightforward point-sampling step lets it accommodate a wide variety of surface representations. Using an auto-encoder architecture, we encode input point-cloud surfaces into a sparse voxel grid with fine-grained latent features, which are decoded into both SDF-based surface geometry and cross fields. We also contribute a dataset of models with high-quality signed distance field (SDF) representations and corresponding cross fields, and use it to train our network. Once trained, the network computes the cross field of an input surface in a feed-forward manner, ensuring high geometric fidelity, noise resilience, and rapid inference. Furthermore, leveraging the same unified latent representation, we incorporate a diffusion model to compute cross fields for new shapes generated from partial input, such as sketches. To demonstrate its practical utility, we validate CrossGen on quad mesh generation for a large variety of surface shapes. Experimental results...
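The abstract does not spell out how a cross field is represented, but cross fields for quad meshing are conventionally 4-rotationally-symmetric (4-RoSy) direction fields: at each surface point, a single angle in the tangent plane determines four tangent directions that are equivalent up to 90-degree rotation. The sketch below illustrates this standard representation; the function name and tangent-frame convention are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def cross_directions(t, b, theta):
    """Illustrative 4-RoSy cross at one surface point (not CrossGen's code).

    t, b  : orthonormal tangent and bitangent vectors, shape (3,)
    theta : representative angle of the cross in the tangent plane

    Returns the four unit tangent directions of the cross, shape (4, 3);
    the cross is invariant under rotating theta by any multiple of pi/2.
    """
    dirs = []
    for k in range(4):
        a = theta + k * np.pi / 2.0  # the four 90-degree-symmetric angles
        dirs.append(np.cos(a) * t + np.sin(a) * b)
    return np.stack(dirs)
```

For example, with `t = (1, 0, 0)`, `b = (0, 1, 0)`, and `theta = 0`, the cross consists of the four axis directions `(±1, 0, 0)` and `(0, ±1, 0)`; the four-fold symmetry is what lets a smooth cross field guide the edge directions of a quad mesh.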
@article{dong2025_2506.07020,
  title   = {CrossGen: Learning and Generating Cross Fields for Quad Meshing},
  author  = {Qiujie Dong and Jiepeng Wang and Rui Xu and Cheng Lin and Yuan Liu and Shiqing Xin and Zichun Zhong and Xin Li and Changhe Tu and Taku Komura and Leif Kobbelt and Scott Schaefer and Wenping Wang},
  journal = {arXiv preprint arXiv:2506.07020},
  year    = {2025}
}