DivCon-NeRF: Generating Augmented Rays with Diversity and Consistency for Few-shot View Synthesis

17 March 2025
Ingyun Lee
Jae Won Jang
Seunghyeon Seo
Nojun Kwak
arXiv · PDF · HTML
Abstract

Neural Radiance Field (NeRF) has shown remarkable performance in novel view synthesis but requires many multi-view images, making it impractical for few-shot scenarios. Ray augmentation was proposed to prevent overfitting on sparse training data by generating additional rays. However, existing methods, which generate augmented rays only near the original rays, produce severe floaters and appearance distortion due to limited viewpoints and inconsistent rays obstructed by nearby obstacles and complex surfaces. To address these problems, we propose DivCon-NeRF, which significantly enhances both diversity and consistency. It employs surface-sphere augmentation, which preserves the distance between the original camera and the predicted surface point. This allows the model to compare the ordering of high-probability surface points and easily filter out inconsistent rays without requiring exact depth. Inner-sphere augmentation additionally randomizes ray angles and distances, further increasing viewpoint diversity. Consequently, our method significantly reduces floaters and visual distortions, achieving state-of-the-art performance on the Blender, LLFF, and DTU datasets. Our code will be publicly available.

@article{lee2025_2503.12947,
  title={DivCon-NeRF: Generating Augmented Rays with Diversity and Consistency for Few-shot View Synthesis},
  author={Ingyun Lee and Jae Won Jang and Seunghyeon Seo and Nojun Kwak},
  journal={arXiv preprint arXiv:2503.12947},
  year={2025}
}