Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization

23 October 2020 · arXiv:2010.12606
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt

Papers citing "Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization"

8 citing papers shown

iGAiVA: Integrated Generative AI and Visual Analytics in a Machine Learning Workflow for Text Classification
Yuanzhe Jin, Adrian Carrasco-Revilla, Min Chen
VLM
24 Sep 2024

Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf
FAtt
25 Apr 2022

AI visualization in Nanoscale Microscopy
A. Rajagopal, V. Nirmala, J. Andrew · Karunya Institute of Technology
04 Jan 2022

Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments
M. Schuessler, Philipp Weiß, Leon Sixt
06 May 2021

Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers
Jacob Mitchell Springer, Melanie Mitchell, Garrett T. Kenyon
AAML
09 Feb 2021

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
AAML
19 Oct 2020

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML
24 Jun 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017