iVR-GS: Inverse Volume Rendering for Explorable Visualization via Editable 3D Gaussian Splatting

24 April 2025
Kaiyuan Tang
Siyuan Yao
Chaoli Wang
Abstract

In volume visualization, users can interactively explore three-dimensional data by specifying color and opacity mappings in the transfer function (TF) or adjusting lighting parameters, facilitating meaningful interpretation of the underlying structure. However, rendering large-scale volumes demands powerful GPUs and high-speed memory access for real-time performance. While existing novel view synthesis (NVS) methods offer faster rendering speeds with lower hardware requirements, the visible parts of a reconstructed scene are fixed and constrained by preset TF settings, significantly limiting user exploration. This paper introduces inverse volume rendering via Gaussian splatting (iVR-GS), an innovative NVS method that reduces the rendering cost while enabling scene editing for interactive volume exploration. Specifically, we compose multiple iVR-GS models associated with basic TFs covering disjoint visible parts to make the entire volumetric scene visible. Each basic model contains a collection of 3D editable Gaussians, where each Gaussian is a 3D spatial point that supports real-time scene rendering and editing. We demonstrate the superior reconstruction quality and composability of iVR-GS against other NVS solutions (Plenoxels, CCNeRF, and base 3DGS) on various volume datasets. The code is available at this https URL.
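The composition step described in the abstract lends itself to a short sketch. The Python snippet below is a minimal illustration only, not the authors' implementation: GaussianModel, compose, and recolor are hypothetical names, and a real 3DGS pipeline would store additional per-Gaussian attributes (e.g., spherical-harmonic color coefficients) and render through differentiable rasterization.

import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianModel:
    # Per-Gaussian parameters of one basic-TF model (hypothetical layout).
    means: np.ndarray      # (N, 3) 3D centers
    scales: np.ndarray     # (N, 3) anisotropic scales
    rotations: np.ndarray  # (N, 4) unit quaternions
    opacities: np.ndarray  # (N,)   per-Gaussian opacity
    colors: np.ndarray     # (N, 3) RGB, view-independent for simplicity

def compose(models):
    """Compose basic models covering disjoint visible parts into one
    scene by concatenating their Gaussian collections."""
    return GaussianModel(
        means=np.concatenate([m.means for m in models]),
        scales=np.concatenate([m.scales for m in models]),
        rotations=np.concatenate([m.rotations for m in models]),
        opacities=np.concatenate([m.opacities for m in models]),
        colors=np.concatenate([m.colors for m in models]),
    )

def recolor(model, mask, rgb):
    """A simple per-Gaussian edit: recolor a selected subset, e.g. the
    Gaussians belonging to one basic TF's visible structure."""
    edited = model.colors.copy()
    edited[mask] = rgb
    return GaussianModel(model.means, model.scales, model.rotations,
                         model.opacities, edited)

Under this reading, composition is plain concatenation because the basic TFs cover disjoint visible parts: no inter-model blending is needed beyond the usual depth-sorted alpha compositing performed at render time.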

@article{tang2025_2504.17954,
  title={iVR-GS: Inverse Volume Rendering for Explorable Visualization via Editable 3D Gaussian Splatting},
  author={Kaiyuan Tang and Siyuan Yao and Chaoli Wang},
  journal={arXiv preprint arXiv:2504.17954},
  year={2025}
}