ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
arXiv 2208.00780 · 26 July 2022
Giang Nguyen, Mohammad Reza Taesiri, Anh Totti Nguyen

Papers citing "Visual correspondence-based explanations improve AI robustness and human-AI team accuracy" (33 papers shown)

  1. Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations — Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi (FAtt, HAI) — 14 Apr 2025
  2. Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time — J. Donnelly, Zhicheng Guo, A. Barnett, Hayden McTavish, Chaofan Chen, Cynthia Rudin — 03 Mar 2025
  3. Distillation of Diffusion Features for Semantic Correspondence — Frank Fundel, Johannes Schusterbauer, Vincent Tao Hu, Bjorn Ommer (DiffM) — 04 Dec 2024
  4. Local vs distributed representations: What is the right basis for interpretability? — Julien Colin, L. Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver (HAI) — 06 Nov 2024
  5. Interpretable Image Classification with Adaptive Prototype-based Vision Transformers — Chiyu Ma, J. Donnelly, Wenjun Liu, Soroush Vosoughi, Cynthia Rudin, Chaofan Chen (ViT) — 28 Oct 2024
  6. Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI — Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh (TDI) — 25 Sep 2024
  7. CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models — Teodor Chiaburu, Frank Haußer, Felix Bießmann — 23 Apr 2024
  8. Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy — Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Nguyen (HAI, AAML, FAtt) — 08 Apr 2024
  9. PEEB: Part-based Image Classifiers with an Explainable and Editable Language Bottleneck — Thang M. Pham, Peijie Chen, Tin Nguyen, Seunghyun Yoon, Trung Bui, Anh Nguyen (VLM) — 08 Mar 2024
  10. Feature Accentuation: Revealing 'What' Features Respond to in Natural Images — Christopher Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez (FAtt) — 15 Feb 2024
  11. Leveraging Habitat Information for Fine-grained Bird Identification — Tin Nguyen, Anh Nguyen, Anh Nguyen (VLM) — 22 Dec 2023
  12. Instance Segmentation under Occlusions via Location-aware Copy-Paste Data Augmentation — Son Nguyen, Mikel Lainsa, Hung Dao, Daeyoung Kim, Giang Nguyen — 27 Oct 2023
  13. Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice — Max Schemmer, Andrea Bartos, Philipp Spitzer, Patrick Hemmer, Niklas Kühl, Jonas Liebschner, G. Satzger — 03 Oct 2023
  14. May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability — Tong Zhang, X. J. Yang, Boyang Albert Li — 25 Sep 2023
  15. Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head — Alan Q. Wang, Minh Nguyen, M. Sabuncu (CML, OOD) — 23 Sep 2023
  16. PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans — Giang Nguyen, Valerie Chen, Mohammad Reza Taesiri, Anh Totti Nguyen — 25 Aug 2023
  17. The Impact of Imperfect XAI on Human-AI Decision-Making — Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle C. Feng, Niklas Kühl, Adam Perer — 25 Jul 2023
  18. Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application — Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, A. Monroy-Hernández — 15 May 2023
  19. In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making — Raymond Fok, Daniel S. Weld — 12 May 2023
  20. Towards a Praxis for Intercultural Ethics in Explainable AI — Chinasa T. Okolo — 24 Apr 2023
  21. ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification — Mohammad Reza Taesiri, Giang Nguyen, Sarra Habchi, C. Bezemer, Anh Totti Nguyen (VLM) — 11 Apr 2023
  22. Why is plausibility surprisingly problematic as an XAI criterion? — Weina Jin, Xiaoxiao Li, Ghassan Hamarneh — 30 Mar 2023
  23. Learning Human-Compatible Representations for Case-Based Decision Support — Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan — 06 Mar 2023
  24. Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals — Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh — 10 Feb 2023
  25. Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations — Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger — 04 Feb 2023
  26. Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning — Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao — 07 Dec 2022
  27. A Flexible Nadaraya-Watson Head Can Offer Explainable and Calibrated Classification — Alan Q. Wang, M. Sabuncu — 07 Dec 2022
  28. Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations — Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci (ELM) — 20 Oct 2022
  29. "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction — Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, A. Monroy-Hernández — 02 Oct 2022
  30. Machine Explanations and Human Understanding — Chacha Chen, Shi Feng, Amit Sharma, Chenhao Tan — 08 Feb 2022
  31. What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods — Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre — 06 Dec 2021
  32. HIVE: Evaluating the Human Interpretability of Visual Explanations — Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky — 06 Dec 2021
  33. ImageNet Large Scale Visual Recognition Challenge — Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei (VLM, ObjD) — 01 Sep 2014