Concept Whitening for Interpretable Image Recognition
Zhi Chen, Yijie Bei, Cynthia Rudin
5 February 2020
arXiv:2002.01650
FAtt

Papers citing "Concept Whitening for Interpretable Image Recognition"

22 papers shown

If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
28 Apr 2025

Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models
Salma Abdel Magid, Weiwei Pan, Simon Warchol, Grace Guo, Junsik Kim, Mahia Rahman, Hanspeter Pfister
06 Oct 2024

CEIR: Concept-based Explainable Image Representation Learning
Yan Cui, Shuhong Liu, Liuzhuozheng Li, Zhiyuan Yuan
SSL, VLM
17 Dec 2023

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
ViT
14 Sep 2023

LR-XFL: Logical Reasoning-based Explainable Federated Learning
Yanci Zhang, Hanyou Yu
LRM
24 Aug 2023

Exploring XAI for the Arts: Explaining Latent Space in Generative Music
Nick Bryan-Kinns, Berker Banar, Corey Ford, Courtney N. Reed, Yixiao Zhang, S. Colton, Jack Armitage
10 Aug 2023

Uncovering Unique Concept Vectors through Latent Space Decomposition
Mara Graziani, Laura Mahony, An-phi Nguyen, Henning Muller, Vincent Andrearczyk
13 Jul 2023

Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis
Cristiano Patrício, João C. Neves, Luís F. Teixeira
MedIm, FAtt
10 Apr 2023

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski
CLL
14 Mar 2023

Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
AAML
18 Dec 2022

Joint localization and classification of breast tumors on ultrasound images using a novel auxiliary attention-based framework
Zong Fan, Ping Gong, Shanshan Tang, Christine U. Lee, Xiaohui Zhang, P. Song, Shigao Chen, Hua Li
11 Oct 2022

TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts
Zhihao Wang, Chuang Zhu
07 Oct 2022

Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
XAI
19 Aug 2022

When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci
28 Jun 2022

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
FAtt
07 Jun 2022

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Y. Zou
31 May 2022

Learnable Visual Words for Interpretable Image Recognition
Wenxi Xiao, Zhengming Ding, Hongfu Liu
VLM
22 May 2022

ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
Jinbin Huang, Aditi Mishra, Bum Chul Kwon, Chris Bryan
FAtt, HAI
04 Apr 2022

From Concept Drift to Model Degradation: An Overview on Performance-Aware Drift Detectors
Firas Bayram, Bestoun S. Ahmed, A. Kassler
21 Mar 2022

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
07 Feb 2022

Editing a classifier by rewriting its prediction rules
Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, A. Madry
KELM
02 Dec 2021

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt
17 Oct 2019