Characterizing the implicit bias via a primal-dual analysis (arXiv:1906.04540)

11 June 2019
Ziwei Ji, Matus Telgarsky
ArXiv | PDF | HTML

Papers citing "Characterizing the implicit bias via a primal-dual analysis"

6 / 6 papers shown

Tight Risk Bounds for Gradient Descent on Separable Data
Matan Schliserman, Tomer Koren
02 Mar 2023

Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond
Matan Schliserman, Tomer Koren
27 Feb 2022

A Unifying View on Implicit Bias in Training Linear Neural Networks
Chulhee Yun, Shankar Krishnan, H. Mobahi
MLT
06 Oct 2020

Directional convergence and alignment in deep learning
Ziwei Ji, Matus Telgarsky
11 Jun 2020

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach
MLT
11 Feb 2020

Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Kaifeng Lyu, Jian Li
13 Jun 2019