ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training

22 March 2023
Xinwei Ou, Zhangxin Chen, Ce Zhu, Yipeng Liu

Papers citing "Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training"

6 / 6 papers shown
1. Semi-tensor Product-based Tensor Decomposition for Neural Network Compression
   Hengling Zhao, Yipeng Liu, Xiaolin Huang, Ce Zhu (30 Sep 2021)

2. Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition
   Marawan Gamal Abdel Hameed, Marzieh S. Tahaei, A. Mosleh, V. Nia (29 Sep 2021)

3. What is the State of Neural Network Pruning?
   Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag (06 Mar 2020)

4. Universal Deep Neural Network Compression
   Yoojin Choi, Mostafa El-Khamy, Jungwon Lee (07 Feb 2018)

5. Neural Architecture Search with Reinforcement Learning
   Barret Zoph, Quoc V. Le (05 Nov 2016)

6. Improving neural networks by preventing co-adaptation of feature detectors
   Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov (03 Jul 2012)