Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions
21 November 2022
J. Lockhart, Daniele Magazzeni, Manuela Veloso
arXiv:2211.11690

Papers citing "Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions" (4 of 4 shown):

Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency
Maor Dikter, Tsachi Blau, Chaim Baskin
13 Jun 2024

Learning to Intervene on Concept Bottlenecks
David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting
25 Aug 2023

Human Uncertainty in Concept-Based AI Systems
Katherine M. Collins, Matthew Barker, M. Zarlenga, Naveen Raman, Umang Bhatt, M. Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham
22 Mar 2023

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
Tags: UQCV, BDL
06 Jun 2015