ResearchTrend.AI

Stochastic Dual Coordinate Ascent with Adaptive Probabilities (arXiv:1502.08053)

27 February 2015
Dominik Csiba, Zheng Qu, Peter Richtárik
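The paper's central idea is to run dual coordinate ascent while sampling coordinates with probabilities adapted to the current state of the iterate, rather than uniformly. The sketch below illustrates that general idea on a plain least-squares problem, sampling each coordinate in proportion to its squared partial derivative; it is a minimal, hypothetical illustration of adaptive-probability coordinate selection, not the paper's exact AdaSDCA algorithm or its dual formulation.

```python
import numpy as np

def adaptive_coordinate_descent(A, b, num_iters=2000, seed=0):
    """Minimize 0.5 * ||Ax - b||^2 by randomized coordinate descent,
    choosing each coordinate with probability proportional to its squared
    partial derivative. Illustrative sketch only: the sampling rule and
    primal setting are simplifications, not the paper's AdaSDCA method."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    L = (A ** 2).sum(axis=0)        # per-coordinate Lipschitz constants
    for _ in range(num_iters):
        g = A.T @ (A @ x - b)       # full gradient (fine for a small demo)
        weights = g ** 2
        total = weights.sum()
        if total == 0.0:
            break                   # exact stationary point reached
        p = weights / total         # adaptive, non-uniform probabilities
        i = rng.choice(n, p=p)
        x[i] -= g[i] / L[i]         # exact minimization along coordinate i
    return x
```

On a small well-conditioned problem this converges to the least-squares solution; sampling coordinates with large current gradients concentrates work where progress is largest, which is the intuition behind adaptive probabilities.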

Papers citing "Stochastic Dual Coordinate Ascent with Adaptive Probabilities"

41 citing papers listed.

  • Towards a Better Theoretical Understanding of Independent Subnetwork Training. Egor Shulgin, Peter Richtárik. International Conference on Machine Learning (ICML), 2023. 28 Jun 2023.
  • Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models. Ethan Pickering, T. Sapsis. 27 Aug 2022.
  • Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems. Yunwen Lei. Annual Conference on Computational Learning Theory (COLT), 2022. 14 Jun 2022.
  • SGD with Coordinate Sampling: Theory and Practice. Rémi Leluc, François Portier. Journal of Machine Learning Research (JMLR), 2021. 25 May 2021.
  • Adam with Bandit Sampling for Deep Learning. Rui Liu, Tianyi Wu, Barzan Mozafari. Neural Information Processing Systems (NeurIPS), 2020. 24 Oct 2020.
  • Variance-Reduced Methods for Machine Learning. Robert Mansel Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik. Proceedings of the IEEE, 2020. 02 Oct 2020.
  • Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
  • Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks. Weilin Cong, R. Forsati, M. Kandemir, M. Mahdavi. Knowledge Discovery and Data Mining (KDD), 2020. 24 Jun 2020.
  • Stochastic batch size for adaptive regularization in deep network optimization. Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong. Pattern Recognition, 2020. 14 Apr 2020.
  • Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization. Sudeep Salgia, Qing Zhao, Sattar Vakili. International Conference on Machine Learning (ICML), 2020. 11 Mar 2020.
  • Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining. Zhouyuan Huo, Heng-Chiao Huang. 09 Oct 2019.
  • Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip. Nicolas Loizou. 26 Sep 2019.
  • Nearly Consistent Finite Particle Estimates in Streaming Importance Sampling. Alec Koppel, Amrit Singh Bedi, Brian M. Sadler, Victor Elvira. 23 Sep 2019.
  • ADASS: Adaptive Sample Selection for Training Acceleration. Shen-Yi Zhao, Hao Gao, Wu-Jun Li. 11 Jun 2019.
  • On Linear Learning with Manycore Processors. Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel. International Conference on High Performance Computing (HiPC), 2019. 02 May 2019.
  • Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise. A. Kulunchakov, Julien Mairal. 25 Jan 2019.
  • Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. D. Kovalev, Samuel Horváth, Peter Richtárik. 24 Jan 2019.
  • Double Adaptive Stochastic Gradient Optimization. Rajaditya Mukherjee, Jin Li, Shicheng Chu, Huamin Wang. 06 Nov 2018.
  • Accelerating Stochastic Gradient Descent Using Antithetic Sampling. Jingchang Liu, Linli Xu. 07 Oct 2018.
  • A Fast, Principled Working Set Algorithm for Exploiting Piecewise Linear Structure in Convex Problems. Tyler B. Johnson, Carlos Guestrin. 20 Jul 2018.
  • Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields. Rémi Le Priol, Alexandre Piché, Damien Scieur. Conference on Uncertainty in Artificial Intelligence (UAI), 2017. 22 Dec 2017.
  • Coordinate Descent with Bandit Sampling. Farnood Salehi, Patrick Thiran, L. E. Celis. 08 Dec 2017.
  • Safe Adaptive Importance Sampling. Sebastian U. Stich, Anant Raj, Martin Jaggi. 07 Nov 2017.
  • Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems. Celestine Mendler-Dünner, Thomas Parnell, Martin Jaggi. 17 Aug 2017.
  • Stochastic, Distributed and Federated Optimization for Machine Learning. Jakub Konečný. 04 Jul 2017.
  • Approximate Steepest Coordinate Descent. Sebastian U. Stich, Anant Raj, Martin Jaggi. 26 Jun 2017.
  • IS-ASGD: Accelerating Asynchronous SGD using Importance Sampling. Fei Wang, Jun Ye, Weichen Li, Guihai Chen. 26 Jun 2017.
  • Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb. 15 Jun 2017.
  • Stochastic Primal Dual Coordinate Method with Non-Uniform Sampling Based on Optimality Violations. Atsushi Shibagaki, Ichiro Takeuchi. 21 Mar 2017.
  • Faster Coordinate Descent via Adaptive Importance Sampling. Dmytro Perekrestenko, Volkan Cevher, Martin Jaggi. 07 Mar 2017.
  • Linear convergence of SDCA in statistical estimation. Chao Qu, Huan Xu. 26 Jan 2017.
  • A Primer on Coordinate Descent Algorithms. Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, W. Yin. 30 Sep 2016.
  • Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs. A. Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, P. Dokania, Damien Scieur. 30 May 2016.
  • Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing. Chenxin Ma, Martin Takáč. 16 Mar 2016.
  • Importance Sampling for Minibatches. Dominik Csiba, Peter Richtárik. 06 Feb 2016.
  • Reducing Runtime by Recycling Samples. Jialei Wang, Hai Wang, Nathan Srebro. 05 Feb 2016.
  • Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling. Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan. 30 Dec 2015.
  • Distributed Optimization with Arbitrary Local Solvers. Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Sai Li, Peter Richtárik, Martin Takáč. 13 Dec 2015.
  • Dual Free Adaptive Mini-batch SDCA for Empirical Risk Minimization. Xi He, Martin Takáč. 22 Oct 2015.
  • Doubly Stochastic Primal-Dual Coordinate Method for Bilinear Saddle-Point Problem. Adams Wei Yu, Qihang Lin, Tianbao Yang. 14 Aug 2015.
  • Primal Method for ERM with Flexible Mini-batching Schemes and Non-convex Losses. Dominik Csiba, Peter Richtárik. 07 Jun 2015.