
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability

19 August 2022
Wei Huang, Xingyu Zhao, Gao Jin, Xiaowei Huang
AAML

Papers citing "SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability"

8 papers shown.

Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
Qi Huang, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, N. V. Stein
17 Jul 2024

What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems
Saddek Bensalem, Chih-Hong Cheng, Wei Huang, Xiaowei Huang, Changshun Wu, Xingyu Zhao
AAML
20 Jul 2023

A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa
ALM
19 May 2023

Unfooling Perturbation-Based Post Hoc Explainers
Zachariah Carmichael, Walter J. Scheirer
AAML
29 May 2022

Hierarchical Distribution-Aware Testing of Deep Learning
Wei Huang, Xingyu Zhao, Alec Banks, V. Cox, Xiaowei Huang
OOD, AAML
17 May 2022

Framework for Evaluating Faithfulness of Local Explanations
S. Dasgupta, Nave Frost, Michal Moshkovitz
FAtt
01 Feb 2022

Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
AAML
08 Nov 2021

A Survey on Neural Network Interpretability
Yu Zhang, Peter Tiňo, A. Leonardis, K. Tang
FaML, XAI
28 Dec 2020