ResearchTrend.AI

A Two-Phase Perspective on Deep Learning Dynamics

17 April 2025
Robert de Mello Koch
Animik Ghosh
Abstract

We propose that learning in deep neural networks proceeds in two phases: a rapid curve-fitting phase followed by a slower compression, or coarse-graining, phase. This view is supported by the shared temporal structure of three phenomena: grokking, double descent, and the information bottleneck, all of which exhibit a delayed onset of generalization well after training error reaches zero. We empirically show that the associated timescales align in two rather different settings. Mutual information between hidden layers and input data emerges as a natural progress measure, complementing circuit-based metrics such as local complexity and the linear mapping number. We argue that the second phase is not actively optimized by standard training algorithms and may be unnecessarily prolonged. Drawing on an analogy with the renormalization group, we suggest that this compression phase reflects a principled form of forgetting, critical for generalization.
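The mutual-information progress measure mentioned above is commonly estimated, in information-bottleneck-style analyses, by discretizing activations into bins and computing MI from the resulting joint histogram. The sketch below illustrates that binning approach for scalar summaries of inputs and hidden activations; the function name, bin count, and use of 1-D summaries are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def binned_mutual_information(x, t, n_bins=30):
    """Estimate I(X; T) between an input summary x and a hidden-layer
    summary t by discretizing both into n_bins bins.

    A minimal histogram-based sketch (as in information-bottleneck
    analyses), not the estimator used in the paper.
    """
    # Joint histogram over (x, t), normalized to a joint distribution.
    joint, _, _ = np.histogram2d(x, t, bins=n_bins)
    p_xt = joint / joint.sum()
    p_x = p_xt.sum(axis=1, keepdims=True)   # marginal p(x), shape (n_bins, 1)
    p_t = p_xt.sum(axis=0, keepdims=True)   # marginal p(t), shape (1, n_bins)
    nz = p_xt > 0                           # skip empty cells to avoid log(0)
    # I(X;T) = sum p(x,t) log2[ p(x,t) / (p(x) p(t)) ], in bits.
    return float(np.sum(p_xt[nz] * np.log2(p_xt[nz] / (p_x @ p_t)[nz])))
```

Tracking this quantity per layer across training would trace the compression curve the abstract describes: I(X; T) rises during fitting and then slowly falls during the second phase. Note that binned MI estimates are biased upward for small samples, so the bin count should be chosen relative to the dataset size.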

@article{koch2025_2504.12700,
  title={A Two-Phase Perspective on Deep Learning Dynamics},
  author={Robert de Mello Koch and Animik Ghosh},
  journal={arXiv preprint arXiv:2504.12700},
  year={2025}
}