Understanding the Role of Individual Units in a Deep Neural Network
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2020
10 September 2020
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba

Papers citing "Understanding the Role of Individual Units in a Deep Neural Network"

33 of 233 citing papers shown (page 5 of 5)
  • AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive Pruning
    Guangmeng Zhou, Ke Xu, Qi Li, Yang Liu, Yi Zhao
    27 Jun 2021
  • Semantically Adversarial Scenario Generation with Explicit Knowledge Guidance
    Wenhao Ding, Hao-ming Lin, Yue Liu, Ding Zhao
    08 Jun 2021
  • Barbershop: GAN-based Image Compositing using Segmentation Masks
    ACM Transactions on Graphics (TOG), 2021
    Peihao Zhu, Rameen Abdal, John C. Femiani, Peter Wonka
    02 Jun 2021
  • Fine-grained Interpretation and Causation Analysis in Deep NLP Models
    North American Chapter of the Association for Computational Linguistics (NAACL), 2021
    Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani
    17 May 2021
  • Leveraging Sparse Linear Layers for Debuggable Deep Networks
    International Conference on Machine Learning (ICML), 2021
    Eric Wong, Shibani Santurkar, Aleksander Madry
    11 May 2021
  • PCE-PINNs: Physics-Informed Neural Networks for Uncertainty Propagation in Ocean Modeling
    Björn Lütjens, Catherine H. Crawford, Mark S. Veillette, Dava Newman
    05 May 2021
  • Interpreting intermediate convolutional layers of generative CNNs trained on waveforms
    IEEE/ACM Transactions on Audio Speech and Language Processing (TASLP), 2021
    Gašper Beguš, Alan Zhou
    19 Apr 2021
  • DeepEverest: Accelerating Declarative Top-K Queries for Deep Neural Network Interpretation
    Proceedings of the VLDB Endowment (PVLDB), 2021
    Dong He, Maureen Daum, Walter Cai, Magdalena Balazinska
    06 Apr 2021
  • Estimating the Generalization in Deep Neural Networks via Sparsity
    Yang Zhao, Hao Zhang
    02 Apr 2021
  • Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems
    Daniel Buschek, Lukas Mecke, Florian Lehmann, Hai Dang
    01 Apr 2021
  • Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
    Yi Sun, Abel N. Valente, Sijia Liu, Dakuo Wang
    25 Mar 2021
  • Quantitative Performance Assessment of CNN Units via Topological Entropy Calculation
    International Conference on Learning Representations (ICLR), 2021
    Yang Zhao, Hao Zhang
    17 Mar 2021
  • Neuron Coverage-Guided Domain Generalization
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
    Chris Xing Tian, Haoliang Li, Xiaofei Xie, Yang Liu, Shiqi Wang
    27 Feb 2021
  • A Mathematical Principle of Deep Learning: Learn the Geodesic Curve in the Wasserstein Space
    Kuo Gai, Shihua Zhang
    18 Feb 2021
  • Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
    International Conference on Intelligent User Interfaces (IUI), 2021
    Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
    17 Feb 2021
  • The Role of Edges in Line Drawing Perception
    Perception, 2021
    Aaron Hertzmann
    22 Jan 2021
  • InMoDeGAN: Interpretable Motion Decomposition Generative Adversarial Network for Video Generation
    Yaohui Wang, Francois Bremond, A. Dantcheva
    08 Jan 2021
  • Understanding Failures of Deep Networks via Robust Feature Extraction
    Computer Vision and Pattern Recognition (CVPR), 2020
    Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz
    03 Dec 2020
  • FACEGAN: Facial Attribute Controllable rEenactment GAN
    S. Tripathy, Arno Solin, Esa Rahtu
    09 Nov 2020
  • Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification
    Agus Sudjianto, William Knauth, Rahul Singh, Zebin Yang, Aijun Zhang
    08 Nov 2020
  • Role Taxonomy of Units in Deep Neural Networks
    Yang Zhao, Hao Zhang, Xiuyuan Hu
    02 Nov 2020
  • Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
    Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
    23 Oct 2020
  • Meta-trained agents implement Bayes-optimal agents
    Vladimir Mikulik, Grégoire Delétang, Tom McGrath, Tim Genewein, Miljan Martic, Shane Legg, Pedro A. Ortega
    21 Oct 2020
  • Linking average- and worst-case perturbation robustness via class selectivity and dimensionality
    Matthew L. Leavitt, Ari S. Morcos
    14 Oct 2020
  • Intrinsic Probing through Dimension Selection
    Lucas Torroba Hennigen, Adina Williams, Robert Bamler
    06 Oct 2020
  • Unsupervised Point Cloud Pre-Training via Occlusion Completion
    IEEE International Conference on Computer Vision (ICCV), 2020
    Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matt J. Kusner
    02 Oct 2020
  • Distributional Generalization: A New Kind of Generalization
    Preetum Nakkiran, Yamini Bansal
    17 Sep 2020
  • Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
    IEEE Transactions on Artificial Intelligence (IEEE TAI), 2020
    Erico Tjoa, Cuntai Guan
    07 Sep 2020
  • SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning
    Ignacio Serna, Aythami Morales, Julian Fierrez, Manuel Cebrian, Nick Obradovich, Iyad Rahwan
    22 Apr 2020
  • Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
    International Conference on Learning Representations (ICLR), 2020
    Matthew L. Leavitt, Ari S. Morcos
    03 Mar 2020
  • On Interpretability of Artificial Neural Networks: A Survey
    IEEE Transactions on Radiation and Plasma Medical Sciences (TRPMS), 2020
    Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
    08 Jan 2020
  • Frivolous Units: Wider Networks Are Not Really That Wide
    AAAI Conference on Artificial Intelligence (AAAI), 2019
    Stephen Casper, Xavier Boix, Vanessa D’Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman
    10 Dec 2019
  • Understanding Neural Networks and Individual Neuron Importance via Information-Ordered Cumulative Ablation
    Rana Ali Amjad, Kairen Liu, Bernhard C. Geiger
    18 Apr 2018