VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow

Neural networks are widely adopted to solve complex and challenging tasks. Especially in high-stakes decision-making, understanding their reasoning process is crucial, yet this proves challenging for modern deep networks. Feature visualization (FV) is a powerful tool to decode what information neurons respond to and hence to better understand the reasoning behind such networks. In particular, in FV we generate human-understandable images that reflect the information detected by neurons of interest. However, current methods often yield unrecognizable visualizations, exhibiting repetitive patterns and visual artifacts that are hard for a human to interpret. To address these problems, we propose to guide FV through statistics of real image features combined with measures of relevant network flow to generate prototypical images. Our approach yields human-understandable visualizations that both qualitatively and quantitatively improve over state-of-the-art FVs across various architectures. As such, it can be used to decode what information the network uses, complementing mechanistic circuits that identify where it is encoded. Code is available at: this https URL
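The abstract describes guiding feature visualization with statistics of real image features. Below is a minimal PyTorch sketch of that general idea, not the paper's exact method: an input image is optimized to maximize a target activation while the per-channel mean and standard deviation of an intermediate layer are pulled toward reference statistics that would, in practice, be computed from real images. The layer choice, loss weighting, and the placeholder reference statistics are all illustrative assumptions.

```python
# Hedged sketch: activation maximization with a feature-statistics
# alignment term. Hyperparameters and layer choice are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()

# Capture activations of an intermediate layer via a forward hook.
feats = {}
def hook(module, inputs, output):
    feats["act"] = output
handle = model.layer3.register_forward_hook(hook)

# Reference statistics would come from real images in practice;
# zeros/ones here are placeholders (layer3 of ResNet-50 has 1024 channels).
ref_mean = torch.zeros(1024)
ref_std = torch.ones(1024)

target_class = 309                 # arbitrary example neuron/class
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(256):
    opt.zero_grad()
    logits = model(img)
    act = feats["act"]             # shape (1, C, H, W)
    mu = act.mean(dim=(0, 2, 3))   # per-channel mean
    sd = act.std(dim=(0, 2, 3))    # per-channel std
    # Maximize the target activation while matching feature statistics.
    loss = (-logits[0, target_class]
            + F.mse_loss(mu, ref_mean)
            + F.mse_loss(sd, ref_std))
    loss.backward()
    opt.step()

handle.remove()
```

The alignment terms act as a prior that keeps the optimized image's internal features close to the distribution of natural images, which is one plausible way such guidance can suppress the repetitive patterns and artifacts the abstract mentions.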
@article{gorgun2025_2503.22399,
  title   = {VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow},
  author  = {Ada Gorgun and Bernt Schiele and Jonas Fischer},
  journal = {arXiv preprint arXiv:2503.22399},
  year    = {2025}
}