Near-Optimal Nonconvex-Strongly-Convex Bilevel Optimization with Fully First-Order Oracles

26 June 2023 · arXiv:2306.14853
Le‐Yu Chen, Yaohua Ma, J. Zhang

Papers citing "Near-Optimal Nonconvex-Strongly-Convex Bilevel Optimization with Fully First-Order Oracles"

12 citing papers:
Efficient Curvature-Aware Hypergradient Approximation for Bilevel Optimization
Youran Dong, Junfeng Yang, Wei-Ting Yao, Jin Zhang
04 May 2025

Adversarial Training Should Be Cast as a Non-Zero-Sum Game
Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, V. Cevher
AAML
19 Jun 2023

First-order penalty methods for bilevel optimization
Zhaosong Lu, Sanyou Mei
04 Jan 2023

On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis
Le‐Yu Chen, Jing Xu, J. Zhang
02 Jan 2023

BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
Mao Ye, B. Liu, S. Wright, Peter Stone, Qian Liu
19 Sep 2022

Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning
Jiahui Gao, Renjie Pi, Yong Lin, Hang Xu, Jiacheng Ye, Zhiyong Wu, Weizhong Zhang, Xiaodan Liang, Zhenguo Li, Lingpeng Kong
SyDa, VLM
25 May 2022

A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau
31 Jan 2022

Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(ε^{-7/4})$ Complexity
Huan Li, Zhouchen Lin
27 Jan 2022

Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Luca Franceschi, P. Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil
13 Jun 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD
09 Mar 2017

Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil
06 Mar 2017

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016