Learning to Weight Samples for Dynamic Early-exiting Networks

17 September 2022
Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, S. Song, Junfeng Cao, Wenhui Huang, Chao Deng, Gao Huang

Papers citing "Learning to Weight Samples for Dynamic Early-exiting Networks"

11 / 11 papers shown
ORXE: Orchestrating Experts for Dynamically Configurable Efficiency
Qingyuan Wang, Guoxin Wang, B. Cardiff, Deepu John
07 May 2025

DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation
Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Hao Luo, Yibing Song, Gao Huang, Fan Wang, Yang You
09 Apr 2025

Fast and Accurate Gigapixel Pathological Image Classification with Hierarchical Distillation Multi-Instance Learning
Jiuyang Dong, Junjun Jiang, Kui Jiang, Jiahan Li, Yongbing Zhang
28 Feb 2025

Towards Unbiased and Robust Spatio-Temporal Scene Graph Generation and Anticipation
Rohith Peddi, Saurabh, Ayush Abhay Shrivastava, Parag Singla, Vibhav Gogate
20 Nov 2024

Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE
Florence Regol, Joud Chataoui, Bertrand Charpentier, Mark J. Coates, Pablo Piantanida, Stephan Günnemann
20 Jun 2024

Tiny Models are the Computational Saver for Large Models
Qingyuan Wang, B. Cardiff, Antoine Frappé, Benoît Larras, Deepu John
26 Mar 2024

Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang
08 Dec 2022

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
17 Apr 2017

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
25 Aug 2016

SMOTE: Synthetic Minority Over-sampling Technique
Nitesh V. Chawla, Kevin W. Bowyer, Lawrence Hall, W. Kegelmeyer
09 Jun 2011