
Deep Learning as a Convex Paradigm of Computation: Minimizing Circuit Size with ResNets

Main: 25 pages
1 figure
Bibliography: 4 pages
Abstract

This paper argues that DNNs implement a computational Occam's razor -- finding the `simplest' algorithm that fits the data -- and that this could explain their incredible and wide-ranging success over more traditional statistical methods. We start with the discovery that the set of real-valued functions $f$ that can be $\epsilon$-approximated by a binary circuit of size at most $c\epsilon^{-\gamma}$ becomes convex in the `Harder than Monte Carlo' (HTMC) regime, when $\gamma > 2$, allowing for the definition of an HTMC norm on functions. In parallel, one can define a complexity measure on the parameters of a ResNet (a weighted $\ell_1$ norm of the parameters), which induces a `ResNet norm' on functions. The HTMC and ResNet norms can then be related by an almost matching sandwich bound. Minimizing this ResNet norm is therefore equivalent to finding a circuit that fits the data with an almost minimal number of nodes (within a power of 2 of being optimal). ResNets thus appear as an alternative model of computation for real functions, better adapted to the HTMC regime and its convexity.
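To make the "weighted $\ell_1$ norm of the parameters" concrete, here is a minimal sketch, not the paper's construction: a toy ResNet whose complexity is measured by a per-block weighted $\ell_1$ sum of its parameters. The `ToyResNet` architecture and the `layer_weight` scheme are illustrative assumptions; the paper's specific weighting is what induces the actual ResNet norm.

```python
# Minimal sketch (assumed, not from the paper): a weighted l1 complexity
# measure over the parameters of a toy ResNet.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.fc = nn.Linear(width, width)

    def forward(self, x):
        # Standard residual update: identity plus a learned correction.
        return x + torch.relu(self.fc(x))


class ToyResNet(nn.Module):
    def __init__(self, width: int = 16, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(width) for _ in range(depth))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x


def weighted_l1_complexity(model: ToyResNet, layer_weight=lambda k: 1.0) -> float:
    """Per-block l1 norm of the parameters, scaled by a per-layer weight.

    The uniform default weight is a placeholder; the paper's choice of
    weighting is what relates this quantity to the HTMC norm.
    """
    total = 0.0
    for k, block in enumerate(model.blocks):
        block_l1 = sum(p.abs().sum().item() for p in block.parameters())
        total += layer_weight(k) * block_l1
    return total


model = ToyResNet()
print(weighted_l1_complexity(model))
```

Such a measure could be added as a regularizer to the training loss, so that minimizing it trades data fit against (in the paper's framing) the size of the circuit implicitly implemented by the network.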
