Stochastic Dual Coordinate Ascent with Adaptive Probabilities

27 February 2015
Dominik Csiba, Zheng Qu, Peter Richtárik
arXiv:1502.08053
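The paper concerns stochastic dual coordinate ascent (SDCA) in which the coordinate to update is sampled with adaptive, non-uniform probabilities driven by the current dual residuals. As a minimal sketch of that idea, here is residual-proportional sampling applied to ridge regression; this is an illustration, not the paper's exact AdaSDCA algorithm, and the function name adaptive_sdca, the mixing weight theta, and the synthetic data are choices made only for this sketch.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm) of SDCA with adaptive,
# residual-proportional coordinate sampling, on ridge regression:
#   min_w (1/n) * sum_i 0.5 * (x_i @ w - y_i)**2 + (lam / 2) * ||w||^2

def adaptive_sdca(X, y, lam=0.1, epochs=20, theta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)              # dual variable per training example
    w = np.zeros(d)                  # primal iterate, w = X.T @ alpha / (lam * n)
    sq_norms = (X ** 2).sum(axis=1)  # ||x_i||^2, precomputed once
    for _ in range(epochs * n):
        # For squared loss, coordinate i is dual-optimal when
        # alpha_i = y_i - x_i @ w, so the dual residuals are:
        residuals = np.abs(X @ w + alpha - y)
        # Adaptive probabilities: proportional to the residuals, mixed
        # with the uniform distribution so no coordinate starves.
        p = theta * residuals / max(residuals.sum(), 1e-12) + (1.0 - theta) / n
        p /= p.sum()
        i = rng.choice(n, p=p)
        # Closed-form maximization of the dual objective in coordinate i.
        delta = (y[i] - alpha[i] - X[i] @ w) / (1.0 + sq_norms[i] / (lam * n))
        alpha[i] += delta
        w += delta * X[i] / (lam * n)  # keep w consistent with alpha
    return w

# Tiny usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))
    w_true = rng.standard_normal(10)
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    print("fit error:", np.linalg.norm(adaptive_sdca(X, y, lam=0.01) - w_true))
```

Mixing the residual-based distribution with the uniform one keeps every probability strictly positive, which is what makes such adaptive schemes safe. Recomputing all residuals, as done here, costs O(nd) per step; practical variants avoid this by updating the probabilities only occasionally or approximately.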

Papers citing "Stochastic Dual Coordinate Ascent with Adaptive Probabilities"

43 papers:
• Towards a Better Theoretical Understanding of Independent Subnetwork Training. Egor Shulgin, Peter Richtárik. 28 Jun 2023.
• Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models. Ethan Pickering, T. Sapsis. 27 Aug 2022.
• Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems. Yunwen Lei. 14 Jun 2022.
• SGD with Coordinate Sampling: Theory and Practice. Rémi Leluc, François Portier. 25 May 2021.
• Adam with Bandit Sampling for Deep Learning. Rui Liu, Tianyi Wu, Barzan Mozafari. 24 Oct 2020.
• Variance-Reduced Methods for Machine Learning. Robert Mansel Gower, Mark W. Schmidt, Francis R. Bach, Peter Richtárik. 02 Oct 2020.
• Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
• Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks. Weilin Cong, R. Forsati, M. Kandemir, M. Mahdavi. 24 Jun 2020.
• Stochastic batch size for adaptive regularization in deep network optimization. Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong. 14 Apr 2020.
• Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization. Sudeep Salgia, Qing Zhao, Sattar Vakili. 11 Mar 2020.
• Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining. Zhouyuan Huo, Heng-Chiao Huang. 09 Oct 2019.
• Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip. Nicolas Loizou. 26 Sep 2019.
• Nearly Consistent Finite Particle Estimates in Streaming Importance Sampling. Alec Koppel, Amrit Singh Bedi, Brian M. Sadler, Victor Elvira. 23 Sep 2019.
• ADASS: Adaptive Sample Selection for Training Acceleration. Shen-Yi Zhao, Hao Gao, Wu-Jun Li. 11 Jun 2019.
• On Linear Learning with Manycore Processors. Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel. 02 May 2019.
• Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise. A. Kulunchakov, Julien Mairal. 25 Jan 2019.
• Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. D. Kovalev, Samuel Horváth, Peter Richtárik. 24 Jan 2019.
• Double Adaptive Stochastic Gradient Optimization. Rajaditya Mukherjee, Jin Li, Shicheng Chu, Huamin Wang. 06 Nov 2018.
• Accelerating Stochastic Gradient Descent Using Antithetic Sampling. Jingchang Liu, Linli Xu. 07 Oct 2018.
• A Fast, Principled Working Set Algorithm for Exploiting Piecewise Linear Structure in Convex Problems. Tyler B. Johnson, Carlos Guestrin. 20 Jul 2018.
• Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields. Rémi Le Priol, Alexandre Piché, Simon Lacoste-Julien. 22 Dec 2017.
• Coordinate Descent with Bandit Sampling. Farnood Salehi, Patrick Thiran, L. E. Celis. 08 Dec 2017.
• Safe Adaptive Importance Sampling. Sebastian U. Stich, Anant Raj, Martin Jaggi. 07 Nov 2017.
• Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems. Celestine Mendler-Dünner, Thomas Parnell, Martin Jaggi. 17 Aug 2017.
• Stochastic, Distributed and Federated Optimization for Machine Learning. Jakub Konečný. 04 Jul 2017.
• Approximate Steepest Coordinate Descent. Sebastian U. Stich, Anant Raj, Martin Jaggi. 26 Jun 2017.
• IS-ASGD: Accelerating Asynchronous SGD using Importance Sampling. Fei Wang, Jun Ye, Weichen Li, Guihai Chen. 26 Jun 2017.
• Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb. 15 Jun 2017.
• Stochastic Primal Dual Coordinate Method with Non-Uniform Sampling Based on Optimality Violations. Atsushi Shibagaki, Ichiro Takeuchi. 21 Mar 2017.
• Faster Coordinate Descent via Adaptive Importance Sampling. Dmytro Perekrestenko, V. Cevher, Martin Jaggi. 07 Mar 2017.
• Linear convergence of SDCA in statistical estimation. C. Qu, Huan Xu. 26 Jan 2017.
• A Primer on Coordinate Descent Algorithms. Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, W. Yin. 30 Sep 2016.
• Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs. A. Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, P. Dokania, Simon Lacoste-Julien. 30 May 2016.
• Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing. Chenxin Ma, Martin Takáč. 16 Mar 2016.
• Importance Sampling for Minibatches. Dominik Csiba, Peter Richtárik. 06 Feb 2016.
• Reducing Runtime by Recycling Samples. Jialei Wang, Hai Wang, Nathan Srebro. 05 Feb 2016.
• Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling. Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan. 30 Dec 2015.
• Distributed Optimization with Arbitrary Local Solvers. Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takáč. 13 Dec 2015.
• Dual Free Adaptive Mini-batch SDCA for Empirical Risk Minimization. Xi He, Martin Takáč. 22 Oct 2015.
• Doubly Stochastic Primal-Dual Coordinate Method for Bilinear Saddle-Point Problem. Adams Wei Yu, Qihang Lin, Tianbao Yang. 14 Aug 2015.
• Primal Method for ERM with Flexible Mini-batching Schemes and Non-convex Losses. Dominik Csiba, Peter Richtárik. 07 Jun 2015.
• A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang. 19 Mar 2014.
• Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. Julien Mairal. 18 Feb 2014.