ResearchTrend.AI
arXiv:1910.13349 · Cited By
E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings

29 October 2019
Yue Wang
Ziyu Jiang
Xiaohan Chen
Pengfei Xu
Yang Katie Zhao
Yingyan Lin
Zhangyang Wang
    MQ
Papers citing "E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings"

26 papers shown
AdaShadow: Responsive Test-time Model Adaptation in Non-stationary Mobile Environments
Cheng Fang
Sicong Liu
Zimu Zhou
Bin Guo
Jiaqi Tang
Ke Ma
Zhiwen Yu
TTA
31
1
0
10 Oct 2024
Cost-effective On-device Continual Learning over Memory Hierarchy with Miro
Xinyue Ma
Suyeon Jeong
Minjia Zhang
Di Wang
Jonghyun Choi
Myeongjae Jeon
CLL
16
13
0
11 Aug 2023
Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic
R. Waleffe
Vasilis Mageirakos
Konstantinos E. Nikolakakis
Amin Karbasi
Dionysis Kalogerias
Nezihe Merve Gürel
Theodoros Rekatsinas
DD
45
12
0
28 May 2023
Efficient On-device Training via Gradient Filtering
Yuedong Yang
Guihong Li
R. Marculescu
31
18
0
01 Jan 2023
Semantic Self-adaptation: Enhancing Generalization with a Single Sample
Sherwin Bahmani
Oliver Hahn
Eduard Zamfir
Nikita Araslanov
Daniel Cremers
Stefan Roth
OOD
TTA
VLM
32
6
0
10 Aug 2022
POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging
Shishir G. Patil
Paras Jain
P. Dutta
Ion Stoica
Joseph E. Gonzalez
12
35
0
15 Jul 2022
GhostNets on Heterogeneous Devices via Cheap Operations
Kai Han
Yunhe Wang
Chang Xu
Jianyuan Guo
Chunjing Xu
Enhua Wu
Qi Tian
19
102
0
10 Jan 2022
3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration
Yao Chen
Cole Hawkins
Kaiqi Zhang
Zheng-Wei Zhang
Cong Hao
18
8
0
11 May 2021
"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen
Zhenyu (Allen) Zhang
Xu Ouyang
Zechun Liu
Zhiqiang Shen
Zhangyang Wang
MQ
37
36
0
16 Apr 2021
No frame left behind: Full Video Action Recognition
X. Liu
S. Pintea
F. Karimi Nejadasl
O. Booij
J. C. V. Gemert
19
40
0
29 Mar 2021
Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design
Cong Hao
Jordan Dotzel
Jinjun Xiong
Luca Benini
Zhiru Zhang
Deming Chen
50
34
0
25 Mar 2021
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
Chaojian Li
Zhongzhi Yu
Yonggan Fu
Yongan Zhang
Yang Katie Zhao
Haoran You
Qixuan Yu
Yue Wang
Yingyan Lin
44
106
0
19 Mar 2021
GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training
Krishnateja Killamsetty
D. Sivasubramanian
Ganesh Ramakrishnan
A. De
Rishabh K. Iyer
OOD
88
188
0
27 Feb 2021
Adaptive Precision Training for Resource Constrained Devices
Tian Huang
Tao Luo
Joey Tianyi Zhou
34
5
0
23 Dec 2020
Bringing AI To Edge: From Deep Learning's Perspective
Di Liu
Hao Kong
Xiangzhong Luo
Weichen Liu
Ravi Subramaniam
52
116
0
25 Nov 2020
DNA: Differentiable Network-Accelerator Co-Search
Yongan Zhang
Y. Fu
Weiwen Jiang
Chaojian Li
Haoran You
Meng Li
Vikas Chandra
Yingyan Lin
23
17
0
28 Oct 2020
ShiftAddNet: A Hardware-Inspired Deep Network
Haoran You
Xiaohan Chen
Yongan Zhang
Chaojian Li
Sicheng Li
Zihao Liu
Zhangyang Wang
Yingyan Lin
OOD
MQ
73
76
0
24 Oct 2020
Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning
Yawen Wu
Zhepeng Wang
Yiyu Shi
J. Hu
16
44
0
07 Jul 2020
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
Yang Katie Zhao
Xiaohan Chen
Yue Wang
Chaojian Li
Haoran You
Y. Fu
Yuan Xie
Zhangyang Wang
Yingyan Lin
MQ
32
43
0
07 May 2020
TIMELY: Pushing Data Movements and Interfaces in PIM Accelerators Towards Local and in Time Domain
Weitao Li
Pengfei Xu
Yang Katie Zhao
Haitong Li
Yuan Xie
Yingyan Lin
9
68
0
03 May 2020
L$^2$-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
Yuning You
Tianlong Chen
Zhangyang Wang
Yang Shen
GNN
101
82
0
30 Mar 2020
DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures
Yang Katie Zhao
Chaojian Li
Yue Wang
Pengfei Xu
Yongan Zhang
Yingyan Lin
17
41
0
26 Feb 2020
AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs
Pengfei Xu
Xiaofan Zhang
Cong Hao
Yang Katie Zhao
Yongan Zhang
Yue Wang
Chaojian Li
Zetong Guan
Deming Chen
Yingyan Lin
23
88
0
06 Jan 2020
AdderNet: Do We Really Need Multiplications in Deep Learning?
Hanting Chen
Yunhe Wang
Chunjing Xu
Boxin Shi
Chao Xu
Qi Tian
Chang Xu
18
194
0
31 Dec 2019
GhostNet: More Features from Cheap Operations
Kai Han
Yunhe Wang
Qi Tian
Jianyuan Guo
Chunjing Xu
Chang Xu
20
2,579
0
27 Nov 2019
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar
Dheevatsa Mudigere
J. Nocedal
M. Smelyanskiy
P. T. P. Tang
ODL
281
2,889
0
15 Sep 2016