ResearchTrend.AI
arXiv: 2103.11257 (Cited By)
Robust Models Are More Interpretable Because Attributions Look Normal

20 March 2021
Zifan Wang, Matt Fredrikson, Anupam Datta
Tags: OOD, FAtt

Papers citing "Robust Models Are More Interpretable Because Attributions Look Normal"

5 / 5 papers shown
  1. SAIF: Sparse Adversarial and Imperceptible Attack Framework (14 Dec 2022)
     Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, M. Sznaier, Octavia Camps, Jennifer Dy
     Tags: AAML

  2. On the Relationship Between Explanation and Prediction: A Causal View (13 Dec 2022)
     Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
     Tags: FAtt, CML

  3. What Makes a Good Explanation?: A Harmonized View of Properties of Explanations (10 Nov 2022)
     Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
     Tags: XAI, FAtt

  4. On the Robustness of Explanations of Deep Neural Network Models: A Survey (09 Nov 2022)
     Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
     Tags: XAI, FAtt, AAML

  5. Globally-Robust Neural Networks (16 Feb 2021)
     Klas Leino, Zifan Wang, Matt Fredrikson
     Tags: AAML, OOD