ResearchTrend.AI
Home › Papers › 2012.13113 › Cited By
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training


24 December 2020
Y. Fu, Haoran You, Yang Katie Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin
MQ

Papers citing "FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training"

20 / 20 papers shown
Empowering Edge Intelligence: A Comprehensive Survey on On-Device AI Models
Xubin Wang, Zhiqing Tang, Jianxiong Guo, Tianhui Meng, Chenhao Wang, Tian-sheng Wang, Weijia Jia
08 Mar 2025

Optimizing Edge AI: A Comprehensive Survey on Data, Model, and System Strategies
Xubin Wang, Weijia Jia
08 Jan 2025

CycleBNN: Cyclic Precision Training in Binary Neural Networks
Federico Fontana, Romeo Lanzino, Anxhelo Diko, G. Foresti, Luigi Cinque
MQ · 28 Sep 2024

A General and Efficient Training for Transformer via Token Expansion
Wenxuan Huang, Yunhang Shen, Jiao Xie, Baochang Zhang, Gaoqi He, Ke Li, Xing Sun, Shaohui Lin
31 Mar 2024

Dynamic Stashing Quantization for Efficient Transformer Training
Guofu Yang, Daniel Lo, Robert D. Mullins, Yiren Zhao
MQ · 09 Mar 2023

Accuracy Booster: Enabling 4-bit Fixed-point Arithmetic for DNN Training
Simla Burcu Harma, Canberk Sonmez, Nicholas Sperry, Babak Falsafi, Martin Jaggi, Yunho Oh
MQ · 19 Nov 2022

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
Y. Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
02 Jun 2022

LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference
Zhongzhi Yu, Y. Fu, Shang Wu, Mengquan Li, Haoran You, Yingyan Lin
15 Mar 2022

Auto-scaling Vision Transformers without Training
Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wang, Denny Zhou
ViT · 24 Feb 2022

F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
Qing Jin, Jian Ren, Richard Zhuang, Sumant Hanumante, Zhengang Li, Zhiyu Chen, Yanzhi Wang, Kai-Min Yang, Sergey Tulyakov
MQ · 10 Feb 2022

Overview frequency principle/spectral bias in deep learning
Z. Xu, Yaoyu Zhang, Tao Luo
FaML · 19 Jan 2022

MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation
Zhongzhi Yu, Y. Fu, Sicheng Li, Chaojian Li, Yingyan Lin
ViT · 21 Dec 2021

DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning
Robert Hönig, Yiren Zhao, Robert D. Mullins
FedML · 31 Oct 2021

Shift-BNN: Highly-Efficient Probabilistic Bayesian Neural Network Training via Memory-Friendly Pattern Retrieving
Qiyu Wan, Haojun Xia, Xingyao Zhang, Lening Wang, S. Song, Xin Fu
OOD · 07 Oct 2021

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency
Yonggan Fu, Yang Katie Zhao, Qixuan Yu, Chaojian Li, Yingyan Lin
AAML · 11 Sep 2021

InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks
Yonggan Fu, Zhongzhi Yu, Yongan Zhang, Yifan Jiang, Chaojian Li, Yongyuan Liang, Mingchao Jiang, Zhangyang Wang, Yingyan Lin
22 Apr 2021

"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen, Zhenyu (Allen) Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang
MQ · 16 Apr 2021

Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design
Cong Hao, Jordan Dotzel, Jinjun Xiong, Luca Benini, Zhiru Zhang, Deming Chen
25 Mar 2021

SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training
Xiaohan Chen, Yang Katie Zhao, Yue Wang, Pengfei Xu, Haoran You, Chaojian Li, Y. Fu, Yingyan Lin, Zhangyang Wang
04 Jan 2021

Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers
Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li
MQ · 05 Sep 2019