Logit Standardization in Knowledge Distillation
arXiv:2403.01427, 3 March 2024
Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao

Papers citing "Logit Standardization in Knowledge Distillation" (37 papers)

ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence
Guanghui Wang, Zhiyong Yang, Z. Wang, Shi Wang, Qianqian Xu, Q. Huang
07 May 2025

Head-Tail-Aware KL Divergence in Knowledge Distillation for Spiking Neural Networks
Tianqing Zhang, Zixin Zhu, Kairong Yu, Hongwei Wang
29 Apr 2025

Swapped Logit Distillation via Bi-level Teacher Alignment
Stephen Ekaputra Limantoro, Jhe-Hao Lin, Chih-Yu Wang, Yi-Lung Tsai, Hong-Han Shuai, Ching-Chun Huang, Wen-Huang Cheng
27 Apr 2025

Aerial Image Classification in Scarce and Unconstrained Environments via Conformal Prediction
Farhad Pourkamali-Anaraki
24 Apr 2025

Analytical Softmax Temperature Setting from Feature Dimensions for Model- and Domain-Robust Classification
Tatsuhito Hasegawa, Shunsuke Sakai
22 Apr 2025

Optimizing Multi-Gateway LoRaWAN via Cloud-Edge Collaboration and Knowledge Distillation
Hong Yang
13 Apr 2025

Sample-level Adaptive Knowledge Distillation for Action Recognition
Ping Li, Chenhao Ping, Wenxiao Wang, Mingli Song
01 Apr 2025

Adversarial Curriculum Graph-Free Knowledge Distillation for Graph Neural Networks
Yuang Jia, Xiaojuan Shan, Jun-Xiong Xia, Guancheng Wan, Y. Zhang, Wenke Huang, Mang Ye, Stan Z. Li
01 Apr 2025

Crossmodal Knowledge Distillation with WordNet-Relaxed Text Embeddings for Robust Image Classification
Chenqi Guo, Mengshuo Rong, Qianli Feng, Rongfan Feng, Yinglong Ma
31 Mar 2025

Adaptive Temperature Based on Logits Correlation in Knowledge Distillation
Kazuhiro Matsuyama, Usman Anjum, Satoko Matsuyama, Tetsuo Shoda, J. Zhan
12 Mar 2025

Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence
Zhaowei Chen, Borui Zhao, Yuchen Ge, Yuhao Chen, Renjie Song, Jiajun Liang
09 Mar 2025

VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma
28 Feb 2025

Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures
Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen
10 Feb 2025

Contrastive Representation Distillation via Multi-Scale Feature Decoupling
Cuipeng Wang, Tieyuan Chen, Haipeng Wang
09 Feb 2025

Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang
13 Jan 2025

ECG-guided individual identification via PPG
Riling Wei, Hanjie Chen, Kelu Yao, Chuanguang Yang, Jun Wang, Chao Li
30 Dec 2024

Central limit theorems for vector-valued composite functionals with smoothing and applications
Huhui Chen, Darinka Dentcheva, Yang Lin, Gregory J. Stock
26 Dec 2024

EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models
Shangquan Sun, Wenqi Ren, Z. Liu, Hyunhee Park, Rui Wang, Xiaochun Cao
30 Oct 2024

TAS: Distilling Arbitrary Teacher and Student via a Hybrid Assistant
Guopeng Li, Qiang Wang, K. Yan, Shouhong Ding, Yuan Gao, Gui-Song Xia
16 Oct 2024

Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher
Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen
05 Oct 2024

DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images
Chen Liu, Danqi Liao, Alejandro Parada-Mayorga, Alejandro Ribeiro, Marcello DiStasio, Smita Krishnaswamy
04 Oct 2024

TrojVLM: Backdoor Attack Against Vision Language Models
Weimin Lyu, Lu Pang, Tengfei Ma, Haibin Ling, Chao Chen
28 Sep 2024

Harmonizing knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang
27 Sep 2024

Kendall's τ Coefficient for Logits Distillation
Yuchen Guan, Runxi Cheng, Kang Liu, Chun Yuan
26 Sep 2024

Learn from Balance: Rectifying Knowledge Transfer for Long-Tailed Scenarios
Xinlei Huang, Jialiang Tang, Xubin Zheng, Jinjia Zhou, Wenxin Yu, Ning Jiang
12 Sep 2024

LoCa: Logit Calibration for Knowledge Distillation
Runming Yang, Taiqiang Wu, Yujiu Yang
07 Sep 2024

Optimizing Vision Transformers with Data-Free Knowledge Transfer
Gousia Habib, Damandeep Singh, I. Malik, Brejesh Lall
12 Aug 2024

DisCoM-KD: Cross-Modal Knowledge Distillation via Disentanglement Representation and Adversarial Learning
Dino Ienco, C. Dantas
05 Aug 2024

LLAVADI: What Matters For Multimodal Large Language Models Distillation
Shilin Xu, Xiangtai Li, Haobo Yuan, Lu Qi, Yunhai Tong, Ming-Hsuan Yang
28 Jul 2024

ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang
06 Jun 2024

Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures
Hongjun Wu, Li Xiao, Xingkuo Zhang, Yining Miao
28 May 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
22 Apr 2024

Domain Generalization for Crop Segmentation with Standardized Ensemble Knowledge Distillation
Simone Angarano, Mauro Martini, Alessandro Navone, Marcello Chiaberge
03 Apr 2023

Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Yunqing Zhao, Ngai-man Cheung
29 Jun 2022

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
19 Apr 2021

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
17 Apr 2017

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
01 Sep 2014