
Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions

21 November 2022
J. Lockhart, Daniele Magazzeni, Manuela Veloso
arXiv: 2211.11690

Papers citing "Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions"

4 of 4 citing papers shown:

1. Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency
   Maor Dikter, Tsachi Blau, Chaim Baskin (13 Jun 2024)

2. Learning to Intervene on Concept Bottlenecks
   David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting (25 Aug 2023)

3. Human Uncertainty in Concept-Based AI Systems
   Katherine M. Collins, Matthew Barker, M. Zarlenga, Naveen Raman, Umang Bhatt, M. Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham (22 Mar 2023)

4. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
   Y. Gal, Zoubin Ghahramani (06 Jun 2015)