Convergent Block Coordinate Descent for Training Tikhonov Regularized Deep Neural Networks

20 November 2017
Ziming Zhang, M. Brand

Papers citing "Convergent Block Coordinate Descent for Training Tikhonov Regularized Deep Neural Networks"

41 / 41 papers shown
A Primal-dual algorithm for image reconstruction with ICNNs
Hok Shing Wong, Matthias Joachim Ehrhardt, Subhadip Mukherjee
16 Oct 2024
Towards training digitally-tied analog blocks via hybrid gradient computation
Timothy Nest, M. Ernoult
05 Sep 2024
Differentially Private Neural Network Training under Hidden State Assumption
Ding Chen, Chen Liu
11 Jul 2024
Complexity of Block Coordinate Descent with Proximal Regularization and Applications to Wasserstein CP-dictionary Learning
Dohyun Kwon, Hanbaek Lyu
04 Jun 2023
On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen
13 Mar 2023
Offsite-Tuning: Transfer Learning without Full Model
Guangxuan Xiao, Ji Lin, Song Han
09 Feb 2023
Dual Propagation: Accelerating Contrastive Hebbian Learning with Dyadic Neurons
R. Høier, D. Staudt, Christopher Zach
02 Feb 2023
Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang
08 Dec 2022
Convergence Rates of Training Deep Neural Networks via Alternating Minimization Methods
Jintao Xu, Chenglong Bao, W. Xing
30 Aug 2022
0/1 Deep Neural Networks via Block Coordinate Descent
Hui Zhang, Shenglong Zhou, Geoffrey Ye Li, N. Xiu
19 Jun 2022
Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape
Devansh Bisla, Jing Wang, A. Choromańska
20 Jan 2022
Personalized On-Device E-health Analytics with Decentralized Block Coordinate Descent
Guanhua Ye, Hongzhi Yin, Tong Chen, Miao Xu, Quoc Viet Hung Nguyen, Jiangning Song
17 Dec 2021
Modeling Design and Control Problems Involving Neural Network Surrogates
Dominic Yang, Prasanna Balaprakash, S. Leyffer
20 Nov 2021
On Training Implicit Models
Zhengyang Geng, Xinyu Zhang, Shaojie Bai, Yisen Wang, Zhouchen Lin
09 Nov 2021
Transformer-Encoder-GRU (T-E-GRU) for Chinese Sentiment Analysis on Chinese Comment Text
Binlong Zhang, Wei Zhou
01 Aug 2021
LocoProp: Enhancing BackProp via Local Loss Optimization
Ehsan Amid, Rohan Anil, Manfred K. Warmuth
11 Jun 2021
Bilevel Programs Meet Deep Learning: A Unifying View on Inference Learning Methods
Christopher Zach
15 May 2021
Stochastic Block-ADMM for Training Deep Networks
Saeed Khorram, Xiao Fu, Mohamad H. Danesh, Zhongang Qi, Li Fuxin
01 May 2021
Training Deep Neural Networks via Branch-and-Bound
Yuanwei Wu, Ziming Zhang, Guanghui Wang
05 Apr 2021
Inertial Proximal Deep Learning Alternating Minimization for Efficient Neural Network Training
Linbo Qiao, Tao Sun, H. Pan, Dongsheng Li
30 Jan 2021
Learning DNN networks using un-rectifying ReLU with compressed sensing application
W. Hwang, Shih-Shuo Tung
18 Jan 2021
Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
Shiyu Duan, José C. Príncipe
09 Jan 2021
Lifted Regression/Reconstruction Networks
R. Høier, Christopher Zach
07 May 2020
Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, S. Shi, Wei Wang, Bo-wen Li, Xiaowen Chu
10 Mar 2020
Semi-Implicit Back Propagation
Ren Liu, Xiaoqun Zhang
10 Feb 2020
Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters
Ziming Zhang, Wenchi Ma, Yuanwei Wu, Guanghui Wang
05 Jan 2020
Effects of Depth, Width, and Initialization: A Convergence Analysis of Layer-wise Training for Deep Linear Neural Networks
Yeonjong Shin
14 Oct 2019
Towards Learning Affine-Invariant Representations via Data-Efficient CNNs
Wenju Xu, Guanghui Wang, Alan Sullivan, Ziming Zhang
31 Aug 2019
Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai
17 Aug 2019
Contrastive Learning for Lifted Networks
Christopher Zach, V. Estellers
07 May 2019
NIPS - Not Even Wrong? A Systematic Review of Empirically Complete Demonstrations of Algorithmic Effectiveness in the Machine Learning and Artificial Intelligence Literature
Franz J. Király, Bilal A. Mateen, R. Sonabend
18 Dec 2018
Fenchel Lifted Networks: A Lagrange Relaxation of Neural Network Training
Fangda Gu, Armin Askari, L. Ghaoui
20 Nov 2018
Lifted Proximal Operator Machines
Jia Li, Cong Fang, Zhouchen Lin
05 Nov 2018
A Block Coordinate Descent Proximal Method for Simultaneous Filtering and Parameter Estimation
Ramin Raziperchikolaei, Harish S. Bhat
16 Oct 2018
Beyond Backprop: Online Alternating Minimization with Auxiliary Variables
A. Choromańska, Benjamin Cowen, Sadhana Kumaravel, Ronny Luss, Mattia Rigotti, ..., Brian Kingsbury, Paolo Diachille, V. Gurev, Ravi Tejwani, Djallel Bouneffouf
24 Jun 2018
A Unified Framework for Training Neural Networks
H. Ghauch, H. S. Ghadikolaei, Carlo Fischione, Mikael Skoglund
23 May 2018
A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training
Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Y. Yao
24 Mar 2018
Global Convergence of Block Coordinate Descent in Deep Learning
Jinshan Zeng, Tim Tsz-Kit Lau, Shaobo Lin, Y. Yao
01 Mar 2018
Block-Cyclic Stochastic Coordinate Descent for Deep Neural Networks
Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong
20 Nov 2017
BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning
Ziming Zhang, Yuanwei Wu, Guanghui Wang
19 Nov 2017
The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014