ResearchTrend.AI
Distilling BlackBox to Interpretable models for Efficient Transfer Learning
arXiv:2305.17303 · 26 May 2023
Shantanu Ghosh, K. Yu, Kayhan Batmanghelich

Papers citing "Distilling BlackBox to Interpretable models for Efficient Transfer Learning"

5 papers shown
Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
Xin-Chao Xu, Yi Qin, Lu Mi, Hao Wang, X. Li
03 Jan 2025
Distributionally robust self-supervised learning for tabular data
Shantanu Ghosh, Tiankang Xie, Mikhail Kuznetsov
11 Oct 2024
Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Lio', M. Jamnik
19 Sep 2022
Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Y. Zou
31 May 2022
Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
25 Aug 2016