VecQ: Minimal Loss DNN Model Compression With Vectorized Weight Quantization
Cheng Gong, Yao Chen, Ye Lu, Tao Li, Cong Hao, Deming Chen
arXiv:2005.08501, 18 May 2020 (MQ)

Papers citing "VecQ: Minimal Loss DNN Model Compression With Vectorized Weight Quantization" (6 papers)

Hybrid-Parallel: Achieving High Performance and Energy Efficient Distributed Inference on Robots
Zekai Sun, Xiuxian Guan, Junming Wang, Haoze Song, Yuhao Qing, Tianxiang Shen, Dong Huang, Fangming Liu, Heming Cui
29 May 2024

AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks
Cheng Gong, Ye Lu, Surong Dai, Deng Qian, Chenkun Du, Tao Li
07 Apr 2023 (MQ)

Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks
Cheng Gong, Ye Lu, Kunpeng Xie, Zongming Jin, Tao Li, Yanzhi Wang
08 Sep 2021 (MQ)

3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration
Yao Chen, Cole Hawkins, Kaiqi Zhang, Zheng-Wei Zhang, Cong Hao
11 May 2021

Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design
Cong Hao, Jordan Dotzel, Jinjun Xiong, Luca Benini, Zhiru Zhang, Deming Chen
25 Mar 2021

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
10 Feb 2017 (MQ)