Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher

20 October 2020
Guangda Ji, Zhanxing Zhu

Papers citing "Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher"

9 papers shown
Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning (28 Jan 2025)
Dayong Ye, Tianqing Zhu, J. Li, Kun Gao, B. Liu, L. Zhang, Wanlei Zhou, Y. Zhang
Topics: AAML, MU

The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model (27 Jan 2025)
Kaito Takanami, Takashi Takahashi, Ayaka Sakata

Provable Weak-to-Strong Generalization via Benign Overfitting (06 Oct 2024)
David X. Wu, A. Sahai

Is On-Device AI Broken and Exploitable? Assessing the Trust and Ethics in Small Language Models (08 Jun 2024)
Kalyan Nakka, Jimmy Dani, Nitesh Saxena

Frameless Graph Knowledge Distillation (13 Jul 2023)
Dai Shi, Zhiqi Shao, Yi Guo, Junbin Gao

Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation (08 May 2023)
Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Jia-Ling Liu, Michael Bendersky, Marc Najork, Chao Zhang

Supervision Complexity and its Role in Knowledge Distillation (28 Jan 2023)
Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Surinder Kumar

Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity (02 Jul 2022)
Jianyi Yang, Shaolei Ren

Knowledge Distillation: A Survey (09 Jun 2020)
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
Topics: VLM