ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Identifying Sub-networks in Neural Networks via Functionally Similar Representations

21 October 2024
Tian Gao
Amit Dhurandhar
Karthikeyan Natesan Ramamurthy
Dennis Wei
Abstract

Providing human-understandable insights into the inner workings of neural networks is an important step toward achieving more explainable and trustworthy AI. Existing approaches to such mechanistic interpretability typically require substantial prior knowledge and manual effort, with strategies tailored to specific tasks. In this work, we take a step toward automating the understanding of a network by investigating the existence of distinct sub-networks. Specifically, we explore a novel automated and task-agnostic approach based on the notion of functionally similar representations within neural networks to identify similar and dissimilar layers, revealing potential sub-networks. We achieve this by proposing, for the first time to our knowledge, the use of the Gromov-Wasserstein distance, which overcomes challenges posed by varying distributions and dimensionalities across intermediate representations, issues that complicate direct layer-to-layer comparisons. On algebraic, language, and vision tasks, we observe the emergence of sub-groups within neural network layers corresponding to functional abstractions. Through downstream applications of model compression and fine-tuning, we show that the proposed approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
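The key property the abstract leans on is that the Gromov-Wasserstein distance compares the *metric structure* of two point clouds, so activations from layers of different widths can be compared directly. The minimal NumPy sketch below illustrates this with a cheap stand-in: Mémoli's "first lower bound", which compares the distributions of intra-layer pairwise distances. The activation matrices `h1` and `h2` are hypothetical placeholders, and this proxy is an illustrative simplification, not the full GW solver the paper proposes.

```python
import numpy as np

def flb_gw_proxy(X, Y):
    """Cheap proxy for the Gromov-Wasserstein distance between point
    clouds X (n, d1) and Y (n, d2) of possibly different dimensionality:
    compare the sorted distributions of intra-cloud pairwise distances
    (Memoli's first lower bound). Illustrative only, not the paper's method."""
    def sorted_pdist(Z):
        # All pairwise Euclidean distances within one cloud, sorted.
        diff = Z[:, None, :] - Z[None, :, :]
        D = np.sqrt((diff ** 2).sum(-1))
        return np.sort(D[np.triu_indices_from(D, k=1)])
    dX, dY = sorted_pdist(X), sorted_pdist(Y)
    # 1-D Wasserstein distance between equal-size sorted samples
    # reduces to the mean absolute difference.
    return np.abs(dX - dY).mean()

rng = np.random.default_rng(0)
h1 = rng.normal(size=(64, 128))                       # hypothetical layer-1 activations
h2 = h1 @ rng.normal(size=(128, 32)) / np.sqrt(128)   # hypothetical layer-2, narrower width

print(flb_gw_proxy(h1, h1))  # 0.0: a layer is identical to itself
print(flb_gw_proxy(h1, h2))  # positive: differing metric structure
```

Note that `h1` has 128 features and `h2` only 32, yet the comparison is well defined because only within-layer distance matrices are compared, never the raw coordinates; this is exactly the dimensionality mismatch the abstract says GW overcomes.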

View on arXiv
@article{gao2025_2410.16484,
  title={Identifying Sub-networks in Neural Networks via Functionally Similar Representations},
  author={Tian Gao and Amit Dhurandhar and Karthikeyan Natesan Ramamurthy and Dennis Wei},
  journal={arXiv preprint arXiv:2410.16484},
  year={2025}
}