ResearchTrend.AI

Prodigy: An Expeditiously Adaptive Parameter-Free Learner
arXiv:2306.06101
9 June 2023
Konstantin Mishchenko
Aaron Defazio
    ODL

Papers citing "Prodigy: An Expeditiously Adaptive Parameter-Free Learner"

43 / 43 papers shown
FLUX-Text: A Simple and Advanced Diffusion Transformer Baseline for Scene Text Editing
Rui Lan
Y. Bai
Xu Duan
M. Li
Lei Sun
X. Chu
DiffM
79
0
0
06 May 2025
Deep Physics Prior for First Order Inverse Optimization
Haoyu Yang
Kamyar Azizzadenesheli
Haoxing Ren
PINN
AI4CE
75
0
0
28 Apr 2025
From Reflection to Perfection: Scaling Inference-Time Optimization for Text-to-Image Diffusion Models via Reflection Tuning
Le Zhuo
Liangbing Zhao
Sayak Paul
Yue Liao
Renrui Zhang
Yi Xin
Peng Gao
Mohamed Elhoseiny
H. Li
VLM
63
0
0
22 Apr 2025
Insert Anything: Image Insertion via In-Context Editing in DiT
Wensong Song
Hong Jiang
Zongxing Yang
Ruijie Quan
Yi Yang
DiffM
40
0
0
21 Apr 2025
DreamFuse: Adaptive Image Fusion with Diffusion Transformer
Junjia Huang
Pengxiang Yan
Jiyang Liu
Jie Wu
Zhao Wang
Yitong Wang
Liang Lin
G. Li
35
0
0
11 Apr 2025
Analysis of an Idealized Stochastic Polyak Method and its Application to Black-Box Model Distillation
Robert M. Gower
Guillaume Garrigos
Nicolas Loizou
Dimitris Oikonomou
Konstantin Mishchenko
Fabian Schaipp
31
0
0
02 Apr 2025
IntrinsiX: High-Quality PBR Generation using Image Priors
Peter Kocsis
Lukas Höllein
Matthias Nießner
33
0
0
01 Apr 2025
An Empirical Study of Validating Synthetic Data for Text-Based Person Retrieval
Min Cao
Ziyin Zeng
YuXin Lu
Mang Ye
Dong Yi
Jinqiao Wang
SyDa
52
0
0
28 Mar 2025
Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization
Amit Attia
Tomer Koren
56
1
0
13 Mar 2025
OminiControl2: Efficient Conditioning for Diffusion Transformers
Zhenxiong Tan
Qiaochu Xue
Xingyi Yang
Songhua Liu
Xinchao Wang
DiffM
42
0
0
11 Mar 2025
Towards hyperparameter-free optimization with differential privacy
Zhiqi Bu
Ruixuan Liu
24
1
0
02 Mar 2025
ART: Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation
Yifan Pu
Yiming Zhao
Zhicong Tang
Ruihong Yin
Haoxing Ye
...
Ji Li
Xiu Li
Z. Lian
Gao Huang
Baining Guo
DiffM
62
1
0
25 Feb 2025
A Hessian-informed hyperparameter optimization for differential learning rate
Shiyun Xu
Zhiqi Bu
Yiliang Zhang
Ian J. Barnett
39
1
0
12 Jan 2025
MC-VTON: Minimal Control Virtual Try-On Diffusion Transformer
Junsheng Luan
Guangyuan Li
Lei Zhao
Wei Xing
DiffM
35
1
0
07 Jan 2025
Temporal Context Consistency Above All: Enhancing Long-Term Anticipation by Learning and Enforcing Temporal Constraints
Alberto Maté
Mariella Dimiccoli
AI4TS
26
0
0
27 Dec 2024
MARINA-P: Superior Performance in Non-smooth Federated Optimization with Adaptive Stepsizes
Igor Sokolov
Peter Richtárik
72
1
0
22 Dec 2024
No More Adam: Learning Rate Scaling at Initialization is All You Need
Minghao Xu
Lichuan Xiang
Xu Cai
Hongkai Wen
73
2
0
16 Dec 2024
Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate
Zhiqi Bu
Xiaomeng Jin
Bhanukiran Vinzamuri
Anil Ramakrishna
Kai-Wei Chang
V. Cevher
Mingyi Hong
MU
83
6
0
29 Oct 2024
Tuning-free coreset Markov chain Monte Carlo
Naitong Chen
Jonathan H. Huggins
Trevor Campbell
22
0
0
24 Oct 2024
A second-order-like optimizer with adaptive gradient scaling for deep learning
Jérôme Bolte
Ryan Boustany
Edouard Pauwels
Andrei Purica
ODL
25
0
0
08 Oct 2024
Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning
Lequan Lin
Dai Shi
Andi Han
Zhiyong Wang
Junbin Gao
23
0
0
08 Oct 2024
Old Optimizer, New Norm: An Anthology
Jeremy Bernstein
Laker Newhouse
ODL
36
12
0
30 Sep 2024
Exploring Foundation Models for Synthetic Medical Imaging: A Study on Chest X-Rays and Fine-Tuning Techniques
Davide Clode da Silva
Marina Musse Bernardes
Nathalia Giacomini Ceretta
Gabriel Vaz de Souza
Gabriel Fonseca Silva
Rafael Heitor Bordini
S. Musse
MedIm
LM&MA
23
0
0
06 Sep 2024
Learning Rate-Free Reinforcement Learning: A Case for Model Selection with Non-Stationary Objectives
Aida Afshar
Aldo Pacchiano
29
0
0
07 Aug 2024
Stepping on the Edge: Curvature Aware Learning Rate Tuners
Vincent Roulet
Atish Agarwala
Jean-Bastien Grill
Grzegorz Swirszcz
Mathieu Blondel
Fabian Pedregosa
34
1
0
08 Jul 2024
An Adaptive Stochastic Gradient Method with Non-negative Gauss-Newton Stepsizes
Antonio Orvieto
Lin Xiao
32
2
0
05 Jul 2024
Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
Mohamed Amine Ketata
Nicholas Gao
Johanna Sommer
Tom Wollschlager
Stephan Günnemann
DiffM
31
1
0
15 Jun 2024
Fully Unconstrained Online Learning
Ashok Cutkosky
Zakaria Mhammedi
CLL
27
1
0
30 May 2024
Scalable Optimization in the Modular Norm
Tim Large
Yang Liu
Minyoung Huh
Hyojin Bahng
Phillip Isola
Jeremy Bernstein
33
12
0
23 May 2024
Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations
Nicholas Gao
Stephan Günnemann
33
4
0
23 May 2024
Unleash Graph Neural Networks from Heavy Tuning
Lequan Lin
Dai Shi
Andi Han
Zhiyong Wang
Junbin Gao
AI4CE
27
2
0
21 May 2024
Towards Stability of Parameter-free Optimization
Yijiang Pang
Shuyang Yu
Hoang Bao
Jiayu Zhou
21
1
0
07 May 2024
On Representing Electronic Wave Functions with Sign Equivariant Neural Networks
Nicholas Gao
Stephan Günnemann
29
2
0
08 Mar 2024
Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
Sayantan Choudhury
N. Tupitsa
Nicolas Loizou
Samuel Horváth
Martin Takáč
Eduard A. Gorbunov
25
1
0
05 Mar 2024
The Price of Adaptivity in Stochastic Convex Optimization
Y. Carmon
Oliver Hinder
15
6
0
16 Feb 2024
How Free is Parameter-Free Stochastic Optimization?
Amit Attia
Tomer Koren
ODL
33
4
0
05 Feb 2024
MetaOptimize: A Framework for Optimizing Step Sizes and Other Meta-parameters
Arsalan Sharifnassab
Saber Salehkaleybar
Richard Sutton
19
3
0
04 Feb 2024
Interpreting Adaptive Gradient Methods by Parameter Scaling for Learning-Rate-Free Optimization
Min-Kook Suh
Seung-Woo Seo
ODL
24
0
0
06 Jan 2024
SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms
Farshed Abdukhakimov
Chulu Xiang
Dmitry Kamzolov
Robert Mansel Gower
Martin Takáč
27
2
0
28 Dec 2023
Non-Uniform Smoothness for Gradient Descent
A. Berahas
Lindon Roberts
Fred Roosta
18
3
0
15 Nov 2023
ELRA: Exponential learning rate adaption gradient descent optimization method
Alexander Kleinsorge
Stefan Kupper
Alexander Fauck
Felix Rothe
ODL
19
2
0
12 Sep 2023
Estimating class separability of text embeddings with persistent homology
Kostis Gourgoulias
Najah F. Ghalyan
Maxime Labonne
Yash Satsangi
Sean J. Moran
Joseph Sabelja
25
0
0
24 May 2023
Neural Architecture Search with Reinforcement Learning
Barret Zoph
Quoc V. Le
264
5,319
0
05 Nov 2016