Backprop with Approximate Activations for Memory-efficient Network Training
Ayan Chakrabarti, Benjamin Moseley
arXiv:1901.07988, 23 January 2019
Papers citing "Backprop with Approximate Activations for Memory-efficient Network Training" (6 papers)
Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation
Yuchen Yang, Yingdong Shi, Cheems Wang, Xiantong Zhen, Yuxuan Shi, Jun Xu
24 Jun 2024
Do We Really Need a Large Number of Visual Prompts?
Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda
26 May 2023
DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training
Joya Chen, Kai Xu, Yuhui Wang, Yifei Cheng, Angela Yao
28 Feb 2022
Mesa: A Memory-saving Training Framework for Transformers
Zizheng Pan, Peng Chen, Haoyu He, Jing Liu, Jianfei Cai, Bohan Zhuang
22 Nov 2021
Enabling Binary Neural Network Training on the Edge
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, C. Coelho, S. Chatterjee, P. Cheung, G. Constantinides
08 Feb 2021
Faster Neural Network Training with Approximate Tensor Operations
Menachem Adelman, Kfir Y. Levy, Ido Hakimi, M. Silberstein
21 May 2018