ResearchTrend.AI
arXiv: 2211.16667
Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off

30 November 2022
Shaoyi Huang
Bowen Lei
Dongkuan Xu
Hongwu Peng
Yue Sun
Mimi Xie
Caiwen Ding
Papers citing "Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off"

9 papers shown.

1. DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting. Hao Wu, Haomin Wen, Guibin Zhang, Yutong Xia, Kai Wang, Yuxuan Liang, Yu Zheng, Kun Wang. 17 Jan 2025.
2. Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World. Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick. [UQCV] 29 Mar 2024.
3. Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks. Xiaoru Xie, Hongwu Peng, Amit Hasan, Shaoyi Huang, Jiahui Zhao, Haowen Fang, Wei Zhang, Tong Geng, O. Khan, Caiwen Ding. [GNN] 22 Aug 2023.
4. AutoReP: Automatic ReLU Replacement for Fast Private Network Inference. Hongwu Peng, Shaoyi Huang, Tong Zhou, Yukui Luo, Chenghong Wang, ..., Tony Geng, Kaleel Mahmood, Wujie Wen, Xiaolin Xu, Caiwen Ding. [OffRL] 20 Aug 2023.
5. Training Large Language Models Efficiently with Sparsity and Dataflow. V. Srinivasan, Darshan Gandhi, Urmish Thakker, R. Prabhakar. [MoE] 11 Apr 2023.
6. Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency. Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie. 21 Mar 2023.
7. Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction. Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick. 09 Jan 2023.
8. Towards Sparsification of Graph Neural Networks. Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang, O. Khan, Caiwen Ding. [GNN] 11 Sep 2022.
9. Carbon Emissions and Large Neural Network Training. David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean. [AI4CE] 21 Apr 2021.