Explaining the Impact of Training on Vision Models via Activation Clustering

29 November 2024
Ahcène Boubekki
Samuel G. Fadel
Sebastian Mair
Abstract

This paper introduces Neuro-Activated Vision Explanations (NAVE), a method for extracting and visualizing the internal representations of vision model encoders. By clustering feature activations, NAVE provides insights into learned semantics without fine-tuning. Using object localization, we show that NAVE's concepts align with image semantics. Through extensive experiments, we analyze the impact of training strategies and architectures on encoder representation capabilities. Additionally, we apply NAVE to study training artifacts in vision transformers and reveal how weak training strategies and spurious correlations degrade model performance. Our findings establish NAVE as a valuable tool for post-hoc model inspection and improving transparency in vision models.
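The abstract describes NAVE as clustering the feature activations of a frozen vision encoder to surface visual concepts without fine-tuning. Below is a minimal illustrative sketch of that general idea, not the authors' implementation: the choice of a torchvision ResNet-50 backbone, the truncation point, the number of clusters k, and the file path example.jpg are all assumptions made for this example.

# Minimal sketch of activation clustering in the spirit of NAVE (not the
# paper's exact method). A frozen, pretrained encoder produces a spatial
# feature map; clustering its per-location activation vectors yields a
# coarse "concept" segmentation of the image.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.cluster import KMeans
from PIL import Image

weights = ResNet50_Weights.DEFAULT
encoder = resnet50(weights=weights).eval()

# Keep only the convolutional trunk so the output is a spatial feature map
# of shape (1, C, H, W) instead of a pooled class vector.
trunk = torch.nn.Sequential(*list(encoder.children())[:-2])

preprocess = weights.transforms()
image = Image.open("example.jpg").convert("RGB")  # placeholder image path
x = preprocess(image).unsqueeze(0)

with torch.no_grad():
    feats = trunk(x)                                   # (1, C, H, W)

_, C, H, W = feats.shape
vectors = feats.squeeze(0).permute(1, 2, 0).reshape(-1, C).numpy()  # (H*W, C)

# Cluster the activation vectors; each cluster is read as a visual concept.
k = 5  # assumed number of concepts
labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
concept_map = labels.reshape(H, W)                     # low-res concept map

print(concept_map)

Upsampling the concept map to the input resolution and overlaying it on the image gives the kind of visualization the abstract refers to; comparing such maps across encoders trained with different strategies is how the paper studies representation quality.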

@article{boubekki2025_2411.19700,
  title={Explaining the Impact of Training on Vision Models via Activation Clustering},
  author={Ahcène Boubekki and Samuel G. Fadel and Sebastian Mair},
  journal={arXiv preprint arXiv:2411.19700},
  year={2025}
}