ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training

14 February 2024
Keitaro Sakamoto, Issei Sato

Papers citing "End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training"

9 papers shown
Scalable Model Merging with Progressive Layer-wise Distillation
  Jing Xu, Jiazheng Li, J. Zhang · MoMe, FedML · 85 / 0 / 0 · 18 Feb 2025

NeuLite: Memory-Efficient Federated Learning via Elastic Progressive Training
  Yebo Wu, Li Li, Chunlin Tian, Dubing Chen, Chengzhong Xu · FedML · 19 / 3 / 0 · 20 Aug 2024

Real Time American Sign Language Detection Using Yolo-v9
  Amna Imran, Meghana Shashishekhara Hulikal, Hamza A. A. Gardi · ObjD · 31 / 2 / 0 · 25 Jul 2024

DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning
  Zifeng Wang, Zheng Zhan, Yifan Gong, Yucai Shao, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy · CLL · 45 / 10 / 0 · 30 Apr 2023

Pruning Adversarially Robust Neural Networks without Adversarial Examples
  T. Jian, Zifeng Wang, Yanzhi Wang, Jennifer Dy, Stratis Ioannidis · AAML, VLM · 39 / 11 / 0 · 09 Oct 2022

SoftHebb: Bayesian Inference in Unsupervised Hebbian Soft Winner-Take-All Networks
  Timoleon Moraitis, Dmitry Toichkin, Adrien Journé, Yansong Chua, Qinghai Guo · AAML, BDL · 68 / 28 / 0 · 12 Jul 2021

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
  Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang · MQ · 124 / 671 / 0 · 24 Jan 2021

Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
  Shiyu Duan, José C. Príncipe · MQ · 20 / 3 / 0 · 09 Jan 2021

What is the State of Neural Network Pruning?
  Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 183 / 1,027 / 0 · 06 Mar 2020