ResearchTrend.AI
InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning

8 March 2023
Ziheng Qin, K. Wang, Zangwei Zheng, Jianyang Gu, Xiang Peng, Zhaopan Xu, Daquan Zhou, Lei Shang, Baigui Sun, Xuansong Xie, Yang You

Papers citing "InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning"

11 papers
KDSelector: A Knowledge-Enhanced and Data-Efficient Model Selector Learning Framework for Time Series Anomaly Detection
Zhiyu Liang, Dongrui Cai, C. Zhang, Zheng Liang, Chen Liang, Bo Zheng, Shi Qiu, Jin Wang, Hongzhi Wang
48 · 0 · 0 · 16 Mar 2025

FLOPS: Forward Learning with OPtimal Sampling
Tao Ren, Zishi Zhang, Jinyang Jiang, Guanghao Li, Zeliang Zhang, Mingqian Feng, Yijie Peng
21 · 1 · 0 · 08 Oct 2024

CHG Shapley: Efficient Data Valuation and Selection towards Trustworthy Machine Learning
Huaiguang Cai
FedML, TDI
35 · 1 · 0 · 17 Jun 2024

Asymptotic Unbiased Sample Sampling to Speed Up Sharpness-Aware Minimization
Jiaxin Deng, Junbiao Pang, Baochang Zhang
48 · 1 · 0 · 12 Jun 2024

A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
Kai Wang, Yukun Zhou, Mingjia Shi, Zhihang Yuan, Yuzhang Shang, Hanwang Zhang, Yang You
60 · 9 · 0 · 27 May 2024

Dataset Pruning: Reducing Training Data by Examining Generalization Influence
Shuo Yang, Zeke Xie, Hanyu Peng, Minjing Xu, Mingming Sun, P. Li
DD
127 · 70 · 0 · 19 May 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM
255 · 5,353 · 0 · 11 Nov 2021

ResNet strikes back: An improved training procedure in timm
Ross Wightman, Hugo Touvron, Hervé Jégou
AI4TS
194 · 403 · 0 · 01 Oct 2021

Core-set Sampling for Efficient Neural Architecture Search
Jaewoong Shim, Kyeongbo Kong, Suk-Ju Kang
92 · 20 · 0 · 08 Jul 2021

GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training
Krishnateja Killamsetty, D. Sivasubramanian, Ganesh Ramakrishnan, A. De, Rishabh K. Iyer
OOD
73 · 134 · 0 · 27 Feb 2021

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
259 · 10,183 · 0 · 12 Dec 2018