How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

23 June 2021
Roland S. Zimmermann
Judy Borowski
Robert Geirhos
Matthias Bethge
Thomas S. A. Wallis
Wieland Brendel

Papers citing "How Well do Feature Visualizations Support Causal Understanding of CNN Activations?"

25 papers shown
Probing the Probes: Methods and Metrics for Concept Alignment
Jacob Lysnæs-Larsen
Marte Eggen
Inga Strümke
06 Nov 2025
DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models
Simone Carnemolla
M. Pennisi
Sarinda Samarasinghe
Giovanni Bellitto
S. Palazzo
Daniela Giordano
M. Shah
C. Spampinato
16 Oct 2025
Concept-Centric Token Interpretation for Vector-Quantized Generative Models
Tianze Yang
Yucheng Shi
Mengnan Du
Xuansheng Wu
Qiaoyu Tan
Jin Sun
Ninghao Liu
31 May 2025
Decoding Vision Transformers: the Diffusion Steering Lens
Ryota Takatsuki
Sonia Joseph
Ippei Fujisawa
Ryota Kanai
18 Apr 2025
VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow
Ada Gorgun
Bernt Schiele
Jonas Fischer
28 Mar 2025
MoireDB: Formula-generated Interference-fringe Image Dataset
Yuto Matsuo
Ryo Hayamizu
Hirokatsu Kataoka
Akio Nakamura
03 Feb 2025
Local vs distributed representations: What is the right basis for interpretability?
Julien Colin
L. Goetschalckx
Thomas Fel
Victor Boutin
Jay Gopal
Thomas Serre
Nuria Oliver
06 Nov 2024
Understanding Inhibition Through Maximally Tense Images
Chris Hamblin
Srijani Saha
Talia Konkle
George Alvarez
08 Jun 2024
From Feature Visualization to Visual Circuits: Effect of Adversarial Model Manipulation
Géraldin Nanfack
Michael Eickenberg
Eugene Belilovsky
03 Jun 2024
Interpretability Needs a New Paradigm
Andreas Madsen
Himabindu Lakkaraju
Siva Reddy
Sarath Chandar
08 May 2024
Causality from Bottom to Top: A Survey
Abraham Itzhak Weinberg
Cristiano Premebida
Diego Resende Faria
17 Mar 2024
Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
Christopher Hamblin
Thomas Fel
Srijani Saha
Talia Konkle
George A. Alvarez
15 Feb 2024
Error Discovery by Clustering Influence Embeddings. Neural Information Processing Systems (NeurIPS), 2023.
Fulton Wang
Julius Adebayo
Sarah Tan
Diego Garcia-Olano
Narine Kokhlikyan
07 Dec 2023
Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
David A. Klindt
Sophia Sanborn
Francisco Acosta
Frédéric Poitevin
Nina Miolane
17 Oct 2023
BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity. International Conference on Learning Representations (ICLR), 2024.
Andrew F. Luo
Margaret M. Henderson
Michael J. Tarr
Brian Karrer
06 Oct 2023
COSE: A Consistency-Sensitivity Metric for Saliency on Image Classification
Rangel Daroya
Aaron Sun
Subhransu Maji
20 Sep 2023
The role of causality in explainable artificial intelligence
Gianluca Carloni
Andrea Berti
Sara Colantonio
18 Sep 2023
Scale Alone Does not Improve Mechanistic Interpretability in Vision Models. Neural Information Processing Systems (NeurIPS), 2023.
Roland S. Zimmermann
Thomas Klein
Wieland Brendel
11 Jul 2023
Adversarial Attacks on the Interpretation of Neuron Activation Maximization. AAAI Conference on Artificial Intelligence (AAAI), 2024.
Géraldin Nanfack
A. Fulleringer
Jonathan Marty
Michael Eickenberg
Eugene Belilovsky
12 Jun 2023
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization. Neural Information Processing Systems (NeurIPS), 2023.
Thomas Fel
Thibaut Boissin
Victor Boutin
Agustin Picard
Paul Novello
...
Drew Linsley
Tom Rousseau
Rémi Cadène
Laurent Gardes
Thomas Serre
11 Jun 2023
Don't trust your eyes: on the (un)reliability of feature visualizations. International Conference on Machine Learning (ICML), 2023.
Robert Geirhos
Roland S. Zimmermann
Blair Bilodeau
Wieland Brendel
Been Kim
07 Jun 2023
Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception? Annual Review of Vision Science (ARVS), 2023.
Felix Wichmann
Robert Geirhos
26 May 2023
Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods. Computer Vision and Pattern Recognition (CVPR), 2023.
Ming-Xiu Jiang
Saeed Khorram
Li Fuxin
13 Dec 2022
What do Vision Transformers Learn? A Visual Exploration
Amin Ghiasi
Hamid Kazemi
Eitan Borgnia
Steven Reich
Manli Shu
Micah Goldblum
A. Wilson
Tom Goldstein
13 Dec 2022
HIVE: Evaluating the Human Interpretability of Visual Explanations. European Conference on Computer Vision (ECCV), 2022.
Sunnie S. Y. Kim
Nicole Meister
V. V. Ramaswamy
Ruth C. Fong
Olga Russakovsky
06 Dec 2021