When and How to Fool Explainable Models (and Humans) with Adversarial Examples

5 July 2021
Jon Vadillo, Roberto Santana, Jose A. Lozano
Topics: SILM, AAML

Papers citing "When and How to Fool Explainable Models (and Humans) with Adversarial Examples"

5 of 5 citing papers shown.
Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks
Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska
Topics: BDL, AAML · 60 / 0 / 0 · 17 Feb 2025

Robust deep learning-based semantic organ segmentation in hyperspectral images
Silvia Seidlitz, Jiansheng Fang, Jan Odenthal, Berkin Özdemir, H. Fu, ..., Jiang-Dong Liu, Martin Wagner, Felix Nickel, Beat P. Müller-Stich, Lena Maier-Hein
24 / 8 / 0 · 09 Nov 2021

Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data
F. Cartella, Orlando Anunciação, Yuki Funabiki, D. Yamaguchi, Toru Akishita, Olivier Elshocht
Topics: AAML · 48 / 71 / 0 · 20 Jan 2021

A Survey on Neural Network Interpretability
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang
Topics: FaML, XAI · 134 / 654 / 0 · 28 Dec 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML · 225 / 3,658 / 0 · 28 Feb 2017