Normal-guided Detail-Preserving Neural Implicit Function for High-Fidelity 3D Surface Reconstruction

Neural implicit representations have emerged as a powerful paradigm for 3D reconstruction. Despite their success, however, existing methods struggle to capture fine geometric details and thin structures, especially when only sparse multi-view RGB images of the objects of interest are available. This paper shows that training neural representations with first-order differential properties (surface normals) leads to highly accurate 3D surface reconstruction, even with as few as two RGB images. From the input RGB images, we compute approximate ground-truth surface normals using depth maps produced by an off-the-shelf monocular depth estimator. During training, we directly locate surface points on the zero level set of the SDF network and supervise their normals with those estimated from the depth maps. Extensive experiments demonstrate that our method achieves state-of-the-art reconstruction accuracy from a minimal number of views, recovering intricate geometric details and thin structures that previous methods struggle to reconstruct.
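The normal-supervision idea described above can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' code: a toy MLP stands in for the SDF network, its analytic normal (the normalized gradient of the signed distance with respect to the query point) is computed via automatic differentiation, and a cosine loss pulls it toward a target normal such as one derived from a monocular depth map. The names `SDFNet`, `sdf_normals`, and `normal_loss` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDFNet(nn.Module):
    """Toy MLP standing in for the paper's neural implicit function."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def sdf_normals(model, pts):
    """Normals as the normalized SDF gradient, via automatic differentiation."""
    pts = pts.clone().requires_grad_(True)
    sdf = model(pts)
    grad, = torch.autograd.grad(sdf.sum(), pts, create_graph=True)
    return F.normalize(grad, dim=-1)

def normal_loss(pred_n, gt_n):
    """Cosine loss: 0 when normals agree, up to 2 when opposed."""
    return (1.0 - (pred_n * gt_n).sum(dim=-1)).mean()

# Usage with random stand-ins for the located surface points and the
# depth-derived ground-truth normals.
torch.manual_seed(0)
model = SDFNet()
surface_pts = torch.rand(128, 3)
gt_normals = F.normalize(torch.randn(128, 3), dim=-1)
loss = normal_loss(sdf_normals(model, surface_pts), gt_normals)
loss.backward()  # gradients flow back to the SDF weights
```

Because the normal is the SDF's own spatial gradient, `create_graph=True` is needed so that the supervision signal backpropagates through the gradient computation into the network weights.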
@article{patel2025_2406.04861,
  title   = {Normal-guided Detail-Preserving Neural Implicit Function for High-Fidelity 3D Surface Reconstruction},
  author  = {Aarya Patel and Hamid Laga and Ojaswa Sharma},
  journal = {arXiv preprint arXiv:2406.04861},
  year    = {2025}
}