Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Guangda Ji, Zhanxing Zhu
arXiv:2010.10090, 20 October 2020
Papers citing "Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher" (9 papers):
Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning. Dayong Ye, Tainqing Zhu, J. Li, Kun Gao, B. Liu, L. Zhang, Wanlei Zhou, Y. Zhang. 28 Jan 2025. [AAML, MU]

The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model. Kaito Takanami, Takashi Takahashi, Ayaka Sakata. 27 Jan 2025.

Provable Weak-to-Strong Generalization via Benign Overfitting. David X. Wu, A. Sahai. 06 Oct 2024.

Is On-Device AI Broken and Exploitable? Assessing the Trust and Ethics in Small Language Models. Kalyan Nakka, Jimmy Dani, Nitesh Saxena. 08 Jun 2024.

Frameless Graph Knowledge Distillation. Dai Shi, Zhiqi Shao, Yi Guo, Junbin Gao. 13 Jul 2023.

Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation. Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Jia-Ling Liu, Michael Bendersky, Marc Najork, Chao Zhang. 08 May 2023.

Supervision Complexity and its Role in Knowledge Distillation. Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Surinder Kumar. 28 Jan 2023.

Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity. Jianyi Yang, Shaolei Ren. 02 Jul 2022.

Knowledge Distillation: A Survey. Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. 09 Jun 2020. [VLM]