
Reevaluating Theoretical Analysis Methods for Optimization in Deep Learning

Main: 10 pages
Figures: 20
Bibliography: 6 pages
Tables: 1
Appendix: 6 pages
Abstract

There is a significant gap between our theoretical understanding of optimization algorithms used in deep learning and their practical performance. Theoretical development usually focuses on proving convergence guarantees under a variety of assumptions, which are themselves often chosen based on a rough combination of intuitive match to practice and analytical convenience. In this paper, we carefully measure the degree to which the standard optimization analyses are capable of explaining modern algorithms. To do this, we develop new empirical metrics that compare real optimization behavior with analytically predicted behavior. Our investigation is notable for its tight integration with modern optimization analysis: rather than simply checking high-level assumptions made in the analysis (e.g., smoothness), we also verify key low-level identities used by the analysis to explain optimization behavior that might hold even if the high-level motivating assumptions do not. Notably, we find that smoothness-based analyses fail in practice in most scenarios, but the key identities commonly used in convex-optimization analyses often hold in practice despite the objective's global non-convexity.
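As a concrete illustration of the kind of check the abstract describes (a minimal sketch, not the paper's actual metrics), the code below tracks two quantities along a gradient-descent trajectory on a toy non-convex objective: a local smoothness estimate between consecutive iterates, and the convexity lower bound f(y) >= f(x) + <grad f(x), y - x>, which can hold along the trajectory even when the objective is globally non-convex. The objective, step size, and metric definitions here are all illustrative assumptions.

```python
# Hypothetical sketch, not the paper's metrics: along a gradient-descent
# trajectory, measure (1) a local smoothness estimate between consecutive
# iterates and (2) whether the convexity lower bound
#     f(y) >= f(x) + <grad f(x), y - x>
# holds between consecutive iterates, even for a non-convex objective.
import numpy as np

def f(x):
    # A simple globally non-convex test objective (assumed for illustration).
    return np.sum(x**2) + 0.5 * np.sum(np.sin(3.0 * x))

def grad_f(x):
    # Gradient of the objective above.
    return 2.0 * x + 1.5 * np.cos(3.0 * x)

rng = np.random.default_rng(0)
x = rng.normal(size=10)
lr = 0.05

for t in range(200):
    g = grad_f(x)
    x_next = x - lr * g
    step = x_next - x

    # (1) Local smoothness estimate: ||grad f(y) - grad f(x)|| / ||y - x||.
    local_L = np.linalg.norm(grad_f(x_next) - g) / (np.linalg.norm(step) + 1e-12)

    # (2) Convexity-style identity between consecutive iterates:
    #     gap >= 0 exactly when f(x_next) >= f(x) + <g, step>.
    gap = f(x_next) - (f(x) + g @ step)

    if t % 50 == 0:
        print(f"t={t:3d}  local_L={local_L:8.3f}  convexity_gap={gap:+.3e}")
    x = x_next
```

In this spirit, a large measured local_L would indicate that a fixed global smoothness constant is a poor description of the trajectory, while a consistently non-negative convexity_gap would indicate that the convex-analysis identity holds along the path traveled, regardless of global non-convexity.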
