SpNeRF: Memory Efficient Sparse Volumetric Neural Rendering Accelerator for Edge Devices

Neural rendering has gained prominence for its high-quality output, which is crucial for AR/VR applications. However, its large voxel grid data size and irregular access patterns challenge real-time processing on edge devices. While previous works have focused on improving data locality, they have not adequately addressed the issue of large voxel grid sizes, which necessitate frequent off-chip memory accesses and substantial on-chip memory. This paper introduces SpNeRF, a software-hardware co-design solution tailored for sparse volumetric neural rendering. We first identify memory-bound rendering inefficiencies and analyze the inherent sparsity in the voxel grid data of neural rendering. To enhance efficiency, we propose novel preprocessing and online decoding steps that reduce the memory size of the voxel grid. The preprocessing step employs hash mapping to support irregular data access while maintaining a minimal memory footprint. The online decoding step enables efficient on-chip sparse voxel grid processing and incorporates bitmap masking to mitigate the PSNR loss caused by hash collisions. To further optimize performance, we design a dedicated hardware architecture supporting our sparse voxel grid processing technique. Experimental results demonstrate that SpNeRF achieves an average 21.07× reduction in memory size while maintaining comparable PSNR levels. When benchmarked against the Jetson XNX, the Jetson ONX, and two prior neural rendering accelerator designs, our design achieves speedups of 95.1×, 63.5×, 1.5×, and 10.3×, and improves energy efficiency by 625.6×, 529.1×, 4×, and 4.4×, respectively.
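
To make the hash-mapping and bitmap-masking idea concrete, the sketch below shows one way such a scheme could work in NumPy. It is a conceptual illustration only: the grid resolution, hash function, table size, occupancy threshold, and the names preprocess, decode, and voxel_hash are assumptions made for this example and are not taken from SpNeRF's actual preprocessing pipeline or hardware decoder.

    import numpy as np

    GRID = 128                 # dense voxel grid resolution (assumed)
    TABLE_SIZE = 1 << 16       # hash table entries, far fewer than GRID**3 (assumed)

    def voxel_hash(ix, iy, iz):
        # Classic spatial-hash primes (Teschner et al.); illustrative, not SpNeRF's hash.
        return ((int(ix) * 73856093) ^ (int(iy) * 19349663) ^ (int(iz) * 83492791)) % TABLE_SIZE

    def preprocess(dense_grid, threshold=0.0):
        # Offline step: pack only occupied voxels into a small hash table and
        # build a 1-bit-per-voxel occupancy bitmap.
        channels = dense_grid.shape[-1]
        table = np.zeros((TABLE_SIZE, channels), dtype=dense_grid.dtype)
        bitmap = np.zeros(GRID * GRID * GRID // 8, dtype=np.uint8)
        occupied = np.argwhere(np.abs(dense_grid).max(axis=-1) > threshold)
        for ix, iy, iz in occupied:
            flat = (ix * GRID + iy) * GRID + iz
            bitmap[flat // 8] |= np.uint8(1 << (flat % 8))          # mark voxel occupied
            table[voxel_hash(ix, iy, iz)] = dense_grid[ix, iy, iz]  # collisions simply overwrite
        return table, bitmap

    def decode(table, bitmap, ix, iy, iz):
        # Online step: the bitmap check returns zeros for empty voxels, so a hash
        # collision cannot leak another voxel's features into empty space.
        flat = (ix * GRID + iy) * GRID + iz
        if not ((bitmap[flat // 8] >> (flat % 8)) & 1):
            return np.zeros(table.shape[-1], dtype=table.dtype)
        return table[voxel_hash(ix, iy, iz)]

In a scene whose grid is mostly empty, only the compact hash table and the one-bit-per-voxel bitmap would need to stay on chip, and the bitmap check is what prevents a query to an empty voxel from reading a collided entry, which is the source of the PSNR degradation the abstract refers to.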
@article{zhang2025_2505.08191,
  title   = {SpNeRF: Memory Efficient Sparse Volumetric Neural Rendering Accelerator for Edge Devices},
  author  = {Yipu Zhang and Jiawei Liang and Jian Peng and Jiang Xu and Wei Zhang},
  journal = {arXiv preprint arXiv:2505.08191},
  year    = {2025}
}