Do Concept Bottleneck Models Learn as Intended?
Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, M. Jamnik, Adrian Weller
10 May 2021 · arXiv:2105.04289 · SLR

Papers citing "Do Concept Bottleneck Models Learn as Intended?"
20 / 70 papers shown

Interpretable and intervenable ultrasonography-based machine learning models for pediatric appendicitis
Ricards Marcinkevics, Patricia Reis Wolfertstetter, Ugne Klimiene, Kieran Chin-Cheong, Alyssia Paschke, ..., David Niederberger, S. Wellmann, Ece Ozkan, C. Knorr, Julia E. Vogt
28 Feb 2023

A Closer Look at the Intervention Procedure of Concept Bottleneck Models
Sungbin Shin, Yohan Jo, Sungsoo Ahn, Namhoon Lee
28 Feb 2023

Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation
Jack Furby, Daniel Cunnington, Dave Braines, Alun D. Preece
07 Feb 2023

Towards Robust Metrics for Concept Representation Evaluation
M. Zarlenga, Pietro Barbiero, Z. Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, M. Jamnik
25 Jan 2023

Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
29 Nov 2022 · AAML

Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions
J. Lockhart, Daniele Magazzeni, Manuela Veloso
21 Nov 2022

Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar
21 Nov 2022

Towards learning to explain with concept bottleneck models: mitigating information leakage
J. Lockhart, Nicolas Marchesotti, Daniele Magazzeni, Manuela Veloso
07 Nov 2022
"Help Me Help the AI": Understanding How Explainability Can Support
  Human-AI Interaction
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
A. Monroy-Hernández
38
107
0
02 Oct 2022

Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
Jonathan Crabbé, M. Schaar
22 Sep 2022

Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly
29 Jul 2022 · XAI · FAtt · LRM

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
20 Jul 2022 · FAtt

GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
31 May 2022

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Y. Zou
31 May 2022

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
25 Feb 2022 · FAtt

Concept Bottleneck Model with Additional Unsupervised Concepts
Yoshihide Sawada, Keigo Nakamura
03 Feb 2022 · SSL

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021

Promises and Pitfalls of Black-Box Concept Learning Models
Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan
24 Jun 2021

Progressive Interpretation Synthesis: Interpreting Task Solving by Quantifying Previously Used and Unused Information
Zhengqi He, Taro Toyoizumi
08 Jan 2021