ResearchTrend.AI

Improving Knowledge Distillation via Regularizing Feature Norm and Direction (arXiv:2305.17007)
26 May 2023
Yuzhu Wang, Lechao Cheng, Manni Duan, Yongheng Wang, Zunlei Feng, Shu Kong

Papers citing "Improving Knowledge Distillation via Regularizing Feature Norm and Direction" (12 papers)
Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models
Junjie Yang, Junhao Song, Xudong Han, Ziqian Bi, Tianyang Wang, ..., Y. Zhang, Qian Niu, Benji Peng, Keyu Chen, Ming Liu
18 Apr 2025 · VLM
Neural Collapse Inspired Knowledge Distillation
Shuoxi Zhang, Zijian Song, Kun He
16 Dec 2024
Knowledge Migration Framework for Smart Contract Vulnerability Detection
Luqi Wang, Wenbao Jiang
15 Dec 2024
Dual-Head Knowledge Distillation: Enhancing Logits Utilization with an Auxiliary Head
Penghui Yang, Chen-Chen Zong, Sheng-Jun Huang, Lei Feng, Bo An
13 Nov 2024
VLM-KD: Knowledge Distillation from VLM for Long-Tail Visual Recognition
Zaiwei Zhang, Gregory P. Meyer, Zhichao Lu, Ashish Shrivastava, Avinash Ravichandran, Eric M. Wolff
29 Aug 2024 · VLM
Unveiling Incomplete Modality Brain Tumor Segmentation: Leveraging Masked Predicted Auto-Encoder and Divergence Learning
Zhongao Sun, Jiameng Li, Yuhan Wang, Jiarong Cheng, Qing Zhou, Chun Li
12 Jun 2024 · MedIm
FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation
Yuting Ma, Lechao Cheng, Yaxiong Wang, Zhun Zhong, Xiaohua Xu, Meng Wang
27 May 2024 · FedML
Knowledge Distillation Based on Transformed Teacher Matching
Kaixiang Zheng, En-Hui Yang
17 Feb 2024
Progressive Feature Self-reinforcement for Weakly Supervised Semantic Segmentation
Jingxuan He, Lechao Cheng, Chaowei Fang, Zunlei Feng, Tingting Mu, Min-Gyoo Song
14 Dec 2023
Ever Evolving Evaluator (EV3): Towards Flexible and Reliable Meta-Optimization for Knowledge Distillation
Li Ding, M. Zoghi, Guy Tennenholtz, Maryam Karimzadehgan
29 Oct 2023
Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
19 Apr 2021
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
01 Sep 2014 · VLM, ObjD