The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI

20 January 2025
Christopher Burger
Charles Walter
Thai Le
    AAML
Papers citing "The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI"

11 papers
Explainability of Point Cloud Neural Networks Using SMILE: Statistical Model-Agnostic Interpretability with Local Explanations
Seyed Mohammad Ahmadi, Koorosh Aslansefat, Ruben Valcarce-Dineiro, Joshua Barnfather
20 Oct 2024
Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Christopher Burger, Lingwei Chen, Thai Le
FAtt, AAML
21 May 2023
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackBoxNLP), 2021
Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi
AAML, FAtt
11 Aug 2021
An Analysis of LIME for Text Data
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Dina Mardaoui, Damien Garreau
FAtt
23 Oct 2020
Multi-Dimensional Gender Bias Classification
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
FaML
01 May 2020
Explaining the Explainer: A First Theoretical Analysis of LIME
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Damien Garreau, U. V. Luxburg
FAtt
10 Jan 2020
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2019
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
FAtt, AAML, MLAU
06 Nov 2019
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf
02 Oct 2019
On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
21 Jun 2018
Interpretation of Neural Networks is Fragile
AAAI Conference on Artificial Intelligence (AAAI), 2017
Amirata Ghorbani, Abubakar Abid, James Zou
FAtt, AAML
29 Oct 2017
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML
16 Feb 2016