ResearchTrend.AI
First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization

2 September 2017
Aryan Mokhtari
Alejandro Ribeiro

Papers citing "First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization"

10 papers shown
Computational Complexity of Sub-Linear Convergent Algorithms
Hilal AlQuabeh
Farha AlBreiki
Dilshod Azizov
29 Sep 2022
Sampling Through the Lens of Sequential Decision Making
J. Dou
Alvin Pan
Runxue Bao
Haiyi Mao
Lei Luo
Zhi-Hong Mao
17 Aug 2022
Pairwise Learning via Stagewise Training in Proximal Setting
Hilal AlQuabeh
Aliakbar Abdurahimov
08 Aug 2022
Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
Neural Information Processing Systems (NeurIPS), 2021
Qiujiang Jin
Aryan Mokhtari
10 Jun 2021
Constrained and Composite Optimization via Adaptive Sampling Methods
IMA Journal of Numerical Analysis (IMA J. Numer. Anal.), 2020
Yuchen Xie
Raghu Bollapragada
R. Byrd
J. Nocedal
31 Dec 2020
Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
IEEE Journal on Selected Areas in Information Theory (JSAIT), 2020
Amirhossein Reisizadeh
Isidoros Tziotis
Hamed Hassani
Aryan Mokhtari
Ramtin Pedarsani
28 Dec 2020
Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization
International Conference on Machine Learning (ICML), 2020
Pan Zhou
Xiaotong Yuan
18 Sep 2020
NIPS - Not Even Wrong? A Systematic Review of Empirically Complete Demonstrations of Algorithmic Effectiveness in the Machine Learning and Artificial Intelligence Literature
Franz J. Király
Bilal A. Mateen
R. Sonabend
18 Dec 2018
Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy
Majid Jahani
Xi He
Chenxin Ma
Aryan Mokhtari
Dheevatsa Mudigere
Alejandro Ribeiro
Martin Takáč
26 Oct 2018
Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method
Mark Eisen
Aryan Mokhtari
Alejandro Ribeiro
22 May 2017