1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit

26 August 2024
Chang Gao, J. Chen, Kang Zhao, Jiaqi Wang, Liping Jing
MQ (Model Quantization)
arXiv:2408.14267

Papers citing "1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit"

5 / 5 papers shown

Pushing the Limits of Low-Bit Optimizers: A Focus on EMA Dynamics
Cong Xu, Wenbin Liang, Mo Yu, Anan Liu, K. Zhang, Lizhuang Ma, J. Wang, W. Zhang
MQ · 01 May 2025

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
04 May 2021

BiDet: An Efficient Binarized Object Detector
Ziwei Wang, Ziyi Wu, Jiwen Lu, Jie Zhou
MQ · 09 Mar 2020

Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers
Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li
MQ · 05 Sep 2019

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 10 Feb 2017