Make the Fastest Faster: Importance Mask for Interactive Volume Visualization using Reconstruction Neural Networks

Abstract

Visualizing a large-scale volumetric dataset at high resolution is challenging due to high computational time and space complexity. Recent deep-learning-based image inpainting methods significantly reduce rendering latency by reconstructing, in constant time on a GPU, a high-resolution image from a partially rendered one in which only a small fraction of pixels pass through the expensive rendering pipeline. However, existing methods must render every pixel of a predefined regular sampling pattern. In this work, we present the Importance Mask Learning (IML) and Importance Mask Synthesis (IMS) networks, the first attempt to learn important regions of the sampling pattern and thereby further minimize the number of rendered pixels by jointly considering the dataset, the user's view parameters, and the downstream reconstruction neural network. Our solution is a unified framework that handles various image-inpainting-based visualization methods through the proposed differentiable compaction/decompaction layers. Experiments show that our method further improves the overall rendering latency of state-of-the-art volume visualization methods based on reconstruction neural networks, at no additional cost, when rendering scientific volumetric datasets. Our method can also directly optimize off-the-shelf pre-trained reconstruction neural networks without lengthy retraining.
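The compaction/decompaction idea from the abstract can be illustrated as a pair of gather/scatter operations: compaction collects only the mask-selected pixels into a dense buffer for the expensive renderer, and decompaction scatters the rendered values back into image space for the inpainting network. The sketch below (function names, shapes, and NumPy implementation are illustrative assumptions, not the paper's differentiable layers) shows the round trip:

```python
import numpy as np

def compact(image, mask):
    """Gather the pixels selected by the binary importance mask into a
    dense 1-D buffer (the only pixels the renderer must produce)."""
    return image[mask.astype(bool)]

def decompact(values, mask, fill=0.0):
    """Scatter rendered pixel values back to their image positions;
    unselected pixels stay at `fill` for the reconstruction network."""
    out = np.full(mask.shape + values.shape[1:], fill, dtype=values.dtype)
    out[mask.astype(bool)] = values
    return out

# Toy example: a 4x4 single-channel "image" with 5 pixels selected.
rng = np.random.default_rng(0)
img = rng.random((4, 4))
mask = np.zeros((4, 4), dtype=np.uint8)
mask[[0, 1, 2, 3, 3], [0, 2, 1, 0, 3]] = 1

sparse = compact(img, mask)         # dense buffer of 5 rendered pixels
restored = decompact(sparse, mask)  # selected pixels back in place, rest 0
```

In the paper's setting these layers are differentiable so the mask can be learned end-to-end with the downstream reconstruction network; here NumPy indexing only demonstrates the data movement.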

@article{sun2025_2502.06053,
  title={Make the Fastest Faster: Importance Mask for Interactive Volume Visualization using Reconstruction Neural Networks},
  author={Jianxin Sun and David Lenz and Hongfeng Yu and Tom Peterka},
  journal={arXiv preprint arXiv:2502.06053},
  year={2025}
}