On the Impact of Knowledge Distillation for Model Interpretability (arXiv:2305.15734)
25 May 2023 · Hyeongrok Han, Siwon Kim, Hyun-Soo Choi, Sungroh Yoon
Papers citing "On the Impact of Knowledge Distillation for Model Interpretability" (6 papers shown):
Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
  Shizhan Gong, Qi Dou, Farzan Farnia · FAtt · 40 / 2 / 0 · 06 Apr 2024

Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
  Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Yunqing Zhao, Ngai-man Cheung · 80 / 41 / 0 · 29 Jun 2022

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
  Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova · 115 / 75 / 0 · 14 Nov 2021

Model Interpretability through the Lens of Computational Complexity
  Pablo Barceló, Mikaël Monet, Jorge A. Pérez, Bernardo Subercaseaux · 114 / 94 / 0 · 23 Oct 2020

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
  Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam · 3DH · 950 / 20,561 / 0 · 17 Apr 2017

ImageNet Large Scale Visual Recognition Challenge
  Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei · VLM, ObjD · 296 / 39,194 / 0 · 01 Sep 2014

(Per-entry numbers are the site's listing metrics, reproduced as shown.)