ResearchTrend.AI

Stochastic Optimization with Importance Sampling
arXiv:1401.2753 (v2, latest)
13 January 2014
P. Zhao, Tong Zhang

Papers citing "Stochastic Optimization with Importance Sampling" (50 of 183 shown)
MSTGD: A Memory Stochastic sTratified Gradient Descent Method with an Exponential Convergence Rate
Aixiang Chen, Chen, Jinting Zhang, Zanbo Zhang, Zhihong Li · 21 Feb 2022

L-SVRG and L-Katyusha with Adaptive Sampling
Boxin Zhao, Boxiang Lyu, Mladen Kolar · 31 Jan 2022

Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
Boxin Zhao, Lingxiao Wang, Mladen Kolar, Ziqi Liu, Qing Cui, Jun Zhou, Chaochao Chen · FedML · 28 Dec 2021

Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling
Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas · FedML · 21 Dec 2021

Adaptive Importance Sampling meets Mirror Descent: a Bias-variance tradeoff
Anna Korba, François Portier · 29 Oct 2021

Iterative Teaching by Label Synthesis
Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller · 27 Oct 2021
How Important is Importance Sampling for Deep Budgeted Training?
Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness · 27 Oct 2021

Meta-learning with an Adaptive Task Scheduler
Huaxiu Yao, Yu Wang, Ying Wei, P. Zhao, M. Mahdavi, Defu Lian, Chelsea Finn · OOD · 26 Oct 2021

Large Batch Experience Replay
Thibault Lahire, Matthieu Geist, Emmanuel Rachelson · OffRL · 04 Oct 2021

Large Scale Private Learning via Low-rank Reparametrization
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu · 17 Jun 2021

Efficient Lottery Ticket Finding: Less Data is More
Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang · 06 Jun 2021

Combining resampling and reweighting for faithful stochastic optimization
Jing An, Lexing Ying · 31 May 2021

SGD with Coordinate Sampling: Theory and Practice
Rémi Leluc, François Portier · 25 May 2021
One Backward from Ten Forward, Subsampling for Large-Scale Deep Learning
Chaosheng Dong, Xiaojie Jin, Weihao Gao, Yijia Wang, Hongyi Zhang, Xiang Wu, Jianchao Yang, Xiaobing Liu · 27 Apr 2021

Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders
Xinmeng Huang, Kun Yuan, Xianghui Mao, W. Yin · 25 Apr 2021

Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang · 12 Apr 2021

Stochastic Reweighted Gradient Descent
Ayoub El Hanchi, D. Stephens · 23 Mar 2021

Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes
Ayoub El Hanchi, D. Stephens · 23 Mar 2021

Statistical Measures For Defining Curriculum Scoring Function
Vinu Sankar Sadasivan, A. Dasgupta · 27 Feb 2021

Proximal and Federated Random Reshuffling
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik · FedML · 12 Feb 2021

A Comprehensive Study on Optimization Strategies for Gradient Descent In Deep Learning
K. Yadav · 07 Jan 2021
Quantizing data for distributed learning
Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi · FedML · 14 Dec 2020

Federated Learning under Importance Sampling
Elsa Rizk, Stefan Vlaski, Ali H. Sayed · FedML · 14 Dec 2020

Optimal Client Sampling for Federated Learning
Jiajun He, Samuel Horváth, Peter Richtárik · FedML · 26 Oct 2020

Optimal Importance Sampling for Federated Learning
Elsa Rizk, Stefan Vlaski, Ali H. Sayed · FedML · 26 Oct 2020

Adam with Bandit Sampling for Deep Learning
Rui Liu, Tianyi Wu, Barzan Mozafari · 24 Oct 2020

Efficient, Simple and Automated Negative Sampling for Knowledge Graph Embedding
Yongqi Zhang, Quanming Yao, Lei Chen · BDL · 24 Oct 2020

Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization
P. Subramani, Nicholas Vadivelu, Gautam Kamath · 18 Oct 2020

Oort: Efficient Federated Learning via Guided Participant Selection
Fan Lai, Xiangfeng Zhu, H. Madhyastha, Mosharaf Chowdhury · FedML, OODD · 12 Oct 2020
Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering
Jingtao Ding, Yuhan Quan, Quanming Yao, Yong Li, Depeng Jin · 07 Sep 2020

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely · 26 Aug 2020

Adaptive Task Sampling for Meta-Learning
Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Kun Zhang, Guosheng Lin · 17 Jul 2020

An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay
Scott Fujimoto, David Meger, Doina Precup · 12 Jul 2020

AdaScale SGD: A User-Friendly Algorithm for Distributed Training
Tyler B. Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin · ODL · 09 Jul 2020

Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks
Weilin Cong, R. Forsati, M. Kandemir, M. Mahdavi · 24 Jun 2020

Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert Mansel Gower, Peter Richtárik · 20 Jun 2020

Gradient Descent in RKHS with Importance Labeling
Tomoya Murata, Taiji Suzuki · 19 Jun 2020
Graph Learning with Loss-Guided Training
Eliav Buchnik, E. Cohen · 31 May 2020

Accelerated Convergence for Counterfactual Learning to Rank
R. Jagerman, Maarten de Rijke · BDL, OffRL · 21 May 2020

Scheduling for Cellular Federated Edge Learning with Importance and Channel Awareness
Jinke Ren, Yinghui He, Dingzhu Wen, Guanding Yu, Kaibin Huang, Dongning Guo · 01 Apr 2020

Weighting Is Worth the Wait: Bayesian Optimization with Importance Sampling
Setareh Ariafar, Zelda E. Mariet, Ehsan Elhamifar, Dana Brooks, Jennifer Dy, Jasper Snoek · 23 Feb 2020

Improving Sampling Accuracy of Stochastic Gradient MCMC Methods via Non-uniform Subsampling of Gradients
Ruilin Li, Xin Wang, H. Zha, Molei Tao · 20 Feb 2020

Adaptive Sampling Distributed Stochastic Variance Reduced Gradient for Heterogeneous Distributed Datasets
Ilqar Ramazanli, Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabás Póczós · 20 Feb 2020

Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson · 13 Feb 2020

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan · 13 Feb 2020
Federated Learning of a Mixture of Global and Local Models
Filip Hanzely, Peter Richtárik · FedML · 10 Feb 2020

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik · 09 Feb 2020

Faster Activity and Data Detection in Massive Random Access: A Multi-armed Bandit Approach
Jialin Dong, Jun Zhang, Yuanming Shi, Jessie Hui Wang · 28 Jan 2020

Choosing the Sample with Lowest Loss makes SGD Robust
Vatsal Shah, Xiaoxia Wu, Sujay Sanghavi · 10 Jan 2020

A Fast Sampling Gradient Tree Boosting Framework
D. Zhou, Zhongming Jin, Tong Zhang · 20 Nov 2019