Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate. Neural Information Processing Systems (NeurIPS), 2018.
VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
Guaranteed Sufficient Decrease for Stochastic Variance Reduced Gradient Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
Gradient Sparsification for Communication-Efficient Distributed Optimization. Neural Information Processing Systems (NeurIPS), 2017.
Perturbed Iterate Analysis for Asynchronous Stochastic Optimization. SIAM Journal on Optimization (SIAM J. Optim.), 2015.