Interpreting Layered Neural Networks via Hierarchical Modular Representation

3 October 2018 · C. Watanabe · arXiv:1810.01588

Papers citing "Interpreting Layered Neural Networks via Hierarchical Modular Representation"

5 / 5 papers shown

1. Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models
   Philipp Mondorf, Sondre Wold, Barbara Plank. 02 Oct 2024

2. Quantifying Local Specialization in Deep Neural Networks
   Shlomi Hod, Daniel Filan, Stephen Casper, Andrew Critch, Stuart J. Russell. 13 Oct 2021

3. Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment
   Michael Chang, Sid Kaushik, Sergey Levine, Thomas L. Griffiths. 28 Jun 2021

4. Clusterability in Neural Networks [GNN]
   Daniel Filan, Stephen Casper, Shlomi Hod, Cody Wild, Andrew Critch, Stuart J. Russell. 04 Mar 2021

5. Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
   Róbert Csordás, Sjoerd van Steenkiste, Jürgen Schmidhuber. 05 Oct 2020