Visual Explanation via Similar Feature Activation for Metric Learning

Visual explanation maps enhance the trustworthiness of decisions made by deep learning models and offer valuable guidance for developing new algorithms in image recognition tasks. Class activation maps (CAM) and their variants (e.g., Grad-CAM and Relevance-CAM) have been widely used to explore the interpretability of softmax-based convolutional neural networks, which rely on a fully connected layer as the classifier for decision-making. However, these methods cannot be applied directly to metric learning models, which have no fully connected layer acting as a classifier. To address this limitation, we propose a novel visual explanation method termed Similar Feature Activation Map (SFAM). SFAM introduces a channel-wise contribution importance score (CIS), derived from the similarity measure between two image embeddings, to quantify the importance of each feature channel. The explanation map is then constructed as a linear combination of these importance weights and the feature maps of the CNN model. Quantitative and qualitative experiments show that SFAM provides highly promising, interpretable visual explanations for CNN models that use Euclidean distance or cosine similarity as the similarity metric.
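To make the idea concrete, below is a minimal PyTorch-style sketch of how a CAM-like map can be derived from pairwise similarity rather than from a classifier. It assumes global-average-pooled embeddings and uses each channel's contribution to the cosine similarity as a stand-in for the paper's CIS weights; the function name sfam_sketch and the exact weighting are illustrative assumptions, not the paper's published formulation.

import torch
import torch.nn.functional as F

def sfam_sketch(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    # feat_a, feat_b: (C, H, W) last-conv feature maps for two images
    # (assumed pooling; the paper's embedding step may differ).
    emb_a = feat_a.mean(dim=(1, 2))  # (C,) global-average-pooled embedding
    emb_b = feat_b.mean(dim=(1, 2))  # (C,)
    # cos(a, b) = sum_c a_c * b_c / (||a|| ||b||), so channel c
    # contributes a_c * b_c / (||a|| ||b||) to the similarity;
    # this per-channel term serves here as the CIS-like weight.
    weights = (emb_a * emb_b) / (emb_a.norm() * emb_b.norm() + 1e-8)
    # Linearly combine the channel weights with image A's feature map,
    # keep positive evidence, and normalise to [0, 1] for visualisation.
    cam = F.relu((weights[:, None, None] * feat_a).sum(dim=0))  # (H, W)
    return cam / (cam.max() + 1e-8)

# Usage: heatmap = sfam_sketch(torch.randn(512, 7, 7), torch.randn(512, 7, 7));
# upsample the (7, 7) map to the input resolution to overlay on the image.

For a Euclidean-distance model, an analogous per-channel weight could be the negative squared difference -(a_c - b_c)^2, since the squared distance decomposes channel-wise as ||a - b||^2 = sum_c (a_c - b_c)^2.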
@article{liao2025_2506.01636,
  title={Visual Explanation via Similar Feature Activation for Metric Learning},
  author={Yi Liao and Ugochukwu Ejike Akpudo and Jue Zhang and Yongsheng Gao and Jun Zhou and Wenyi Zeng and Weichuan Zhang},
  journal={arXiv preprint arXiv:2506.01636},
  year={2025}
}