On Convergence and Generalization of Dropout Training

23 October 2020 · arXiv:2010.12711
Poorya Mianjy, R. Arora
Abstract

We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations. Under mild overparametrization, and assuming that the limiting kernel can separate the data distribution with a positive margin, we show that dropout training with logistic loss achieves ϵ-suboptimality in test error in O(1/ϵ) iterations.
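
To make the setting concrete, below is a minimal NumPy sketch (not the authors' implementation) of dropout training on a two-layer ReLU network with logistic loss, optimized by stochastic gradient descent. The width m, dropout rate p, step size, and the simplification of keeping the output-layer weights fixed are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def train_dropout(X, y, m=512, p=0.5, lr=0.1, iters=1000):
    """X: (n, d) inputs; y: (n,) labels in {-1, +1}. Hyperparameters are illustrative."""
    n, d = X.shape
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))      # trainable first layer
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)        # fixed output layer (simplification)
    for _ in range(iters):
        i = rng.integers(n)
        x, label = X[i], y[i]
        mask = (rng.random(m) < 1.0 - p) / (1.0 - p)        # inverted dropout mask
        h = np.maximum(W @ x, 0.0)                          # ReLU hidden features
        out = a @ (mask * h)                                # prediction with dropped-out units
        # logistic loss l(z) = log(1 + exp(-y z)); derivative w.r.t. the output:
        g = -label / (1.0 + np.exp(label * out))
        # gradient flows only through surviving (masked) units with active ReLUs
        W -= lr * np.outer(g * a * mask * (h > 0.0), x)
    return W, a

# Toy usage: a linearly separable problem. At test time no dropout is applied;
# the inverted mask already rescaled activations during training.
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0])
W, a = train_dropout(X, y)
preds = np.sign(np.maximum(X @ W.T, 0.0) @ a)
print("train accuracy:", (preds == y).mean())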
