Characterizing the implicit bias via a primal-dual analysis
Ziwei Ji, Matus Telgarsky
arXiv:1906.04540 (v3), 11 June 2019
Papers citing "Characterizing the implicit bias via a primal-dual analysis" (14 papers)
1. Multiclass Loss Geometry Matters for Generalization of Gradient Descent in Separable Classification. Matan Schliserman, Tomer Koren. 28 May 2025.
2. The Implicit Bias of Heterogeneity towards Invariance: A Study of Multi-Environment Matrix Sensing. Yang Xu, Yihong Gu, Cong Fang. 03 Mar 2024.
3. Tight Risk Bounds for Gradient Descent on Separable Data. Matan Schliserman, Tomer Koren. Neural Information Processing Systems (NeurIPS), 2023. 02 Mar 2023.
4. Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond. Matan Schliserman, Tomer Koren. Conference on Learning Theory (COLT), 2022. 27 Feb 2022.
5. Understanding Deflation Process in Over-parametrized Tensor Decomposition. Rong Ge, Y. Ren, Xiang Wang, Mo Zhou. Neural Information Processing Systems (NeurIPS), 2021. 11 Jun 2021.
6. Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning. Zhiyuan Li, Yuping Luo, Kaifeng Lyu. International Conference on Learning Representations (ICLR), 2020. 17 Dec 2020.
7. Implicit bias of any algorithm: bounding bias via margin. Elvis Dohmatob. 12 Nov 2020.
8. Inductive Bias of Gradient Descent for Weight Normalized Smooth Homogeneous Neural Nets. Depen Morwani, H. G. Ramaswamy. International Conference on Algorithmic Learning Theory (ALT), 2020. 24 Oct 2020.
9. A Unifying View on Implicit Bias in Training Linear Neural Networks. Chulhee Yun, Shankar Krishnan, H. Mobahi. International Conference on Learning Representations (ICLR), 2020. 06 Oct 2020.
10. Understanding Implicit Regularization in Over-Parameterized Single Index Model. Jianqing Fan, Zhuoran Yang, Mengxin Yu. Journal of the American Statistical Association (JASA), 2020. 16 Jul 2020.
11. Implicitly Maximizing Margins with the Hinge Loss. Justin Lizama. 25 Jun 2020.
12. Directional convergence and alignment in deep learning. Ziwei Ji, Matus Telgarsky. Neural Information Processing Systems (NeurIPS), 2020. 11 Jun 2020.
13. Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss. Lénaïc Chizat, Francis R. Bach. Conference on Learning Theory (COLT), 2020. 11 Feb 2020.
14. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks. Kaifeng Lyu, Jian Li. International Conference on Learning Representations (ICLR), 2019. 13 Jun 2019.