Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization

arXiv 1805.10367 · v1, v2 (latest) · 25 May 2018
Sijia Liu, B. Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini
arXiv (abs) · PDF · HTML

Papers citing "Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization"

Showing 50 of 103 citing papers.

Zeroth-Order SciML: Non-intrusive Integration of Scientific Software with Deep Learning
Ioannis C. Tsaknakis, B. Kailkhura, Sijia Liu, Donald Loveland, James Diffenderfer, A. Hiszpanski, Min-Fong Hong
04 Jun 2022

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Min-Fong Hong, Shiyu Chang, Sijia Liu
AAML · 27 Mar 2022

Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm
Qingsong Zhang, Bin Gu, Zhiyuan Dang, Cheng Deng, Heng-Chiao Huang
FedML · 19 Mar 2022

Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen, Sijia Liu
AAML · 15 Feb 2022

Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning
Wenzhi Fang, Ziyi Yu, Yuning Jiang, Yuanming Shi, Colin N. Jones, Yong Zhou
FedML · 24 Jan 2022

Decentralized Multi-Task Stochastic Optimization With Compressed Communications
Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar
23 Dec 2021

On the Convergence Theory for Hessian-Free Bilevel Algorithms
Daouda Sow, Kaiyi Ji, Yingbin Liang
13 Oct 2021

ZARTS: On Zero-order Optimization for Neural Architecture Search
Xiaoxing Wang, Wenxuan Guo, Junchi Yan, Jianlin Su, Xiaokang Yang
10 Oct 2021

Curvature-Aware Derivative-Free Optimization
Bumsu Kim, HanQin Cai, Daniel McKenzie, W. Yin
ODL · 27 Sep 2021

Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization
Raghu Bollapragada, Stefan M. Wild
24 Sep 2021

An Accelerated Variance-Reduced Conditional Gradient Sliding Algorithm for First-order and Zeroth-order Optimization
Xiyuan Wei, Bin Gu, Heng-Chiao Huang
18 Sep 2021

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
AAML · 21 Aug 2021

Zeroth and First Order Stochastic Frank-Wolfe Algorithms for Constrained Optimization
Zeeshan Akhtar, K. Rajawat
14 Jul 2021

Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks
Wenjie Li, Mohamad Assaad
26 May 2021

Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang
12 Apr 2021

Learning Sampling Policy for Faster Derivative Free Optimization
Zhou Zhai, Bin Gu, Heng-Chiao Huang
09 Apr 2021

Convergence Analysis of Nonconvex Distributed Stochastic Zeroth-order Coordinate Method
Shengjun Zhang, Yunlong Dong, Dong Xie, Lisha Yao, Colleen P. Bailey, Shengli Fu
24 Mar 2021

Don't Forget to Sign the Gradients!
Omid Aramoon, Pin-Yu Chen, Gang Qu
05 Mar 2021

Statistical Inference for Polyak-Ruppert Averaged Zeroth-order Stochastic Gradient Algorithm
Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian
10 Feb 2021

Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework
Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, P. Varshney
21 Dec 2020

Efficient On-Chip Learning for Optical Neural Networks Through Power-Aware Sparse Zeroth-Order Optimization
Jiaqi Gu, Chenghao Feng, Zheng Zhao, Zhoufeng Ying, Ray T. Chen, David Z. Pan
21 Dec 2020

Regularization in network optimization via trimmed stochastic gradient descent with noisy label
Kensuke Nakamura, Bong-Soo Sohn, Kyoung-Jae Won, Byung-Woo Hong
NoLa · 21 Dec 2020

Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
11 Dec 2020

On the Convergence of SGD with Biased Gradients
Ahmad Ajalloeian, Sebastian U. Stich
31 Jul 2020

Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources
Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
AAML · MLAU · BDL · 17 Jul 2020

Accelerated Stochastic Gradient-free and Projection-free Methods
Feihu Huang, Lue Tao, Songcan Chen
16 Jul 2020

An Accelerated DFO Algorithm for Finite-sum Convex Functions
Yuwen Chen, Antonio Orvieto, Aurelien Lucchi
07 Jul 2020

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
Sijia Liu, Pin-Yu Chen, B. Kailkhura, Gaoyuan Zhang, A. Hero III, P. Varshney
11 Jun 2020

Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization
Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler
ODL · 02 Jun 2020

Adaptive First-and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems
Parvin Nazari, Davoud Ataee Tarzanagh, George Michailidis
ODL · 19 May 2020

Stochastic batch size for adaptive regularization in deep network optimization
Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong
ODL · 14 Apr 2020

A Hybrid-Order Distributed SGD Method for Non-Convex Optimization to Balance Communication Overhead, Computational Complexity, and Convergence Rate
Naeimeh Omidvar, M. Maddah-ali, Hamed Mahdavi
ODL · 27 Mar 2020

Boolean learning under noise-perturbations in hardware neural networks
Louis Andréoli, X. Porte, Stéphane Chrétien, M. Jacquot, L. Larger, Daniel Brunner
27 Mar 2020

Non-asymptotic bounds for stochastic optimization with biased noisy gradient oracles
Nirav Bhavsar, Prashanth L.A.
26 Feb 2020

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks
Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, B. Kailkhura, Xinyu Lin
OOD · AAML · 25 Feb 2020

Efficiently avoiding saddle points with zero order methods: No gradients required
Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
29 Oct 2019

Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization
Kaiyi Ji, Zhe Wang, Yi Zhou, Yingbin Liang
27 Oct 2019

Learning to Learn by Zeroth-Order Oracle
Yangjun Ruan, Yuanhao Xiong, Sashank J. Reddi, Sanjiv Kumar, Cho-Jui Hsieh
21 Oct 2019

ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization
Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox
ODL · 15 Oct 2019

Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models
Derui Wang, Chaoran Li, S. Wen, Surya Nepal, Yang Xiang
AAML · 14 Oct 2019

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly
30 Sep 2019

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Minhao Cheng, Simranjit Singh, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh
AAML · 24 Sep 2019

Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity
Feihu Huang, Shangqian Gao, J. Pei, Heng-Chiao Huang
30 Jul 2019

Adaptive Weight Decay for Deep Neural Networks
Kensuke Nakamura, Byung-Woo Hong
21 Jul 2019

Model Agnostic Contrastive Explanations for Structured Data
Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri
FAtt · 31 May 2019

Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization
Feihu Huang, Shangqian Gao, Songcan Chen, Heng-Chiao Huang
29 May 2019

An ADMM Based Framework for AutoML Pipeline Configuration
Sijia Liu, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, A. Conn, Alexander G. Gray
01 May 2019

HopSkipJumpAttack: A Query-Efficient Decision-Based Attack
Jianbo Chen, Michael I. Jordan, Martin J. Wainwright
AAML · 03 Apr 2019

Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization
Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng-Chiao Huang
16 Feb 2019

Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
Derui Wang, Chaoran Li, S. Wen, Qing-Long Han, Surya Nepal, Xiangyu Zhang, Yang Xiang
AAML · 06 Feb 2019