Uncovering Branch specialization in InceptionV1 using k sparse autoencoders

14 April 2025
Matthew Bozoukov
Abstract

Sparse Autoencoders (SAEs) have been shown to recover interpretable features from the polysemantic neurons that superposition produces in neural networks. Previous work has shown that SAEs are an effective tool for extracting interpretable features from the early layers of InceptionV1. SAEs have improved considerably since then, but branch specialization remains an enigma in the later layers of InceptionV1. We show various examples of branch specialization occurring in each of the mixed4a-4e layers, in the 5x5 branch and in one 1x1 branch. We also provide evidence that branch specialization is consistent across layers: similar features across the model are localized in branches of the same convolution size in their respective layers.
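
As a concrete illustration of the k-sparse autoencoder setup the abstract refers to, the sketch below keeps only the k largest latent activations per example and reconstructs the input from them. This is a minimal sketch assuming PyTorch; the dimensions, the value of k, and the random stand-in for branch activations are illustrative assumptions, not details taken from the paper.

```python
# Minimal k-sparse (TopK) autoencoder sketch; sizes and k are illustrative.
import torch
import torch.nn as nn


class KSparseAutoencoder(nn.Module):
    def __init__(self, d_in: int, d_hidden: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x: torch.Tensor):
        # Encode, then keep only the k largest activations per example
        # and zero out the rest (the "k-sparse" constraint).
        pre = self.encoder(x)
        topk = torch.topk(pre, self.k, dim=-1)
        codes = torch.zeros_like(pre).scatter_(-1, topk.indices, topk.values)
        recon = self.decoder(codes)
        return recon, codes


# Hypothetical usage: random data standing in for activations collected
# from one InceptionV1 branch (e.g. a 5x5 branch of a mixed layer).
acts = torch.randn(1024, 512)
sae = KSparseAutoencoder(d_in=512, d_hidden=4096, k=32)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for _ in range(10):
    recon, codes = sae(acts)
    loss = torch.nn.functional.mse_loss(recon, acts)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the input would be activations gathered from a specific branch, so the learned dictionary of features can be inspected branch by branch.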

@article{bozoukov2025_2504.11489,
  title={Uncovering Branch specialization in InceptionV1 using k sparse autoencoders},
  author={Matthew Bozoukov},
  journal={arXiv preprint arXiv:2504.11489},
  year={2025}
}