
RES: A Robust Framework for Guiding Visual Explanation

Knowledge Discovery and Data Mining (KDD), 2022
27 June 2022
Yuyang Gao, Tong Sun, Guangji Bai, Siyi Gu, S. Hong, Bo Pan
FAtt, AAML, XAI
ArXiv (abs) · PDF · HTML · GitHub (32★)

Papers citing "RES: A Robust Framework for Guiding Visual Explanation"

From Attribution to Action: Jointly ALIGNing Predictions and Explanations
Dongsheng Hong, Chao Chen, Yanhui Chen, Shanshan Lin, Zhihao Chen, Xiangwen Liao
10 Nov 2025
AIM: Amending Inherent Interpretability via Self-Supervised Masking
Eyad Alshami, Shashank Agnihotri, Bernt Schiele, Margret Keuper
AAML
15 Aug 2025
On the Interplay of Human-AI Alignment, Fairness, and Performance Trade-offs in Medical Imaging
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2025
Haozhe Luo, Ziyu Zhou, Zixin Shu, Aurélie Pahud de Mortanges, Robert Berke, Mauricio Reyes
15 May 2025
Large Language Models as Attribution Regularizers for Efficient Model Training
Davor Vukadin, Marin Šilić, Goran Delač
27 Feb 2025
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Neural Information Processing Systems (NeurIPS), 2024
Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele
28 Jan 2025
Effective Guidance for Model Attention with Simple Yes-no Annotations
BigData Congress [Services Society] (BSS), 2024
Seongmin Lee, Ali Payani, Duen Horng Chau
FAtt
29 Oct 2024
A survey on Concept-based Approaches For Model Improvement
Avani Gupta, P. J. Narayanan
LRM
21 Mar 2024
DUE: Dynamic Uncertainty-Aware Explanation Supervision via 3D Imputation
Qilong Zhao, Yifei Zhang, Mengdan Zhu, Siyi Gu, Yuyang Gao, Xiaofeng Yang, Bo Pan
MedIm
16 Mar 2024
Closing the Knowledge Gap in Designing Data Annotation Interfaces for AI-powered Disaster Management Analytic Systems
Zinat Ara, Hossein Salemi, Sungsoo Ray Hong, Yasas Senarath, Steve Peterson, A. Hughes, Hemant Purohit
04 Mar 2024
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
European Conference on Computer Vision (ECCV), 2024
Amin Parchami-Araghi, Moritz Bohle, Sukrut Rao, Bernt Schiele
FAtt
05 Feb 2024
3DPFIX: Improving Remote Novices' 3D Printing Troubleshooting through Human-AI Collaboration
Nahyun Kwon, Tong Sun, Yuyang Gao, Bo Pan, Xu Wang, Jeeeun Kim, S. Hong
29 Jan 2024
Visual Attention Prompted Prediction and Learning
International Joint Conference on Artificial Intelligence (IJCAI), 2023
Yifei Zhang, Siyi Gu, Bo Pan, Guangji Bai, Meikang Qiu, Xiaofeng Yang
LRM, VLM
12 Oct 2023
Saliency-Bench: A Comprehensive Benchmark for Evaluating Visual Explanations
Yifei Zhang, Siyi Gu, James Song, Bo Pan, Guangji Bai, Liang Zhao
XAI
12 Oct 2023
Saliency-Guided Hidden Associative Replay for Continual Learning
Guangji Bai, Qilong Zhao, Xiaoyang Jiang, Yifei Zhang, Bo Pan
CLL
06 Oct 2023
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
Tong Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Bo Pan, Younghoon Kim, S. Hong
AAML, FAtt, HAI
08 Jul 2023
Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
ACM Computing Surveys (ACM CSUR), 2022
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Bo Pan
07 Dec 2022
Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations
AAAI Conference on Artificial Intelligence (AAAI), 2022
Yuying Zhao, Yu Wang, Hanyu Wang
FaML
07 Dec 2022
Saliency-Regularized Deep Multi-Task Learning
Knowledge Discovery and Data Mining (KDD), 2022
Guangji Bai, Bo Pan
03 Jul 2022
Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment
Yuyang Gao, Tong Sun, Bo Pan, Sungsoo Ray Hong
HAI
06 Feb 2022
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML
16 Feb 2016