Theoretical Analysis of Inductive Biases in Deep Convolutional Networks

15 May 2023
Zihao Wang
Lei Wu
arXiv:2305.08404
Abstract

In this paper, we provide a theoretical analysis of the inductive biases in convolutional neural networks (CNNs). We start by examining the universality of CNNs, i.e., their ability to approximate any continuous function. We prove that a depth of $\mathcal{O}(\log d)$ suffices for deep CNNs to achieve this universality, where $d$ is the input dimension. Additionally, we establish that learning sparse functions with CNNs requires only $\widetilde{\mathcal{O}}(\log^2 d)$ samples, indicating that deep CNNs can efficiently capture long-range sparse correlations. These results are made possible through a novel combination of multichanneling and downsampling when increasing the network depth. We also delve into the distinct roles of weight sharing and locality in CNNs. To this end, we compare the performance of CNNs, locally-connected networks (LCNs), and fully-connected networks (FCNs) on a simple regression task, where LCNs can be viewed as CNNs without weight sharing. On the one hand, we prove that LCNs require $\Omega(d)$ samples while CNNs need only $\widetilde{\mathcal{O}}(\log^2 d)$ samples, highlighting the critical role of weight sharing. On the other hand, we prove that FCNs require $\Omega(d^2)$ samples, whereas LCNs need only $\widetilde{\mathcal{O}}(d)$ samples, underscoring the importance of locality. These provable separations quantify the difference between the two biases, and the major observation behind our proofs is that weight sharing and locality break different symmetries in the learning process.
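
To make the architectural comparison concrete, here is a minimal sketch (assuming PyTorch; the input dimension, kernel size, and channel width are illustrative choices, not values from the paper) of one layer of each architecture. It shows how locality and weight sharing shape the parameter count: the CNN layer's size is independent of $d$, the LCN layer's grows linearly in $d$, and the FCN layer's grows quadratically, loosely mirroring the $\widetilde{\mathcal{O}}(\log^2 d)$ vs. $\Omega(d)$ vs. $\Omega(d^2)$ sample-complexity separations above.

```python
import math

import torch
import torch.nn as nn

d, k, c = 64, 3, 8  # input dimension, kernel size, channels (hypothetical values)

# CNN layer: local AND weight-shared -> parameter count independent of d.
conv = nn.Conv1d(in_channels=1, out_channels=c, kernel_size=k)

# LCN layer: local but NOT weight-shared. PyTorch has no built-in 1D
# locally-connected layer, so we emulate one with a separate kernel per
# output position; parameters therefore grow linearly with d.
class LocallyConnected1d(nn.Module):
    def __init__(self, in_ch, out_ch, in_len, kernel_size):
        super().__init__()
        self.kernel_size = kernel_size
        out_len = in_len - kernel_size + 1
        # one (out_ch, in_ch * kernel_size) kernel for EACH output position
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_len, out_ch, in_ch * kernel_size))

    def forward(self, x):  # x: (batch, in_ch, in_len)
        patches = x.unfold(2, self.kernel_size, 1)        # (B, in_ch, out_len, k)
        patches = patches.permute(0, 2, 1, 3).flatten(2)  # (B, out_len, in_ch*k)
        # each position l is multiplied by its own kernel weight[l]
        return torch.einsum("blf,lof->bol", patches, self.weight)

lcn = LocallyConnected1d(1, c, d, k)
assert lcn(torch.randn(2, 1, d)).shape == (2, c, d - k + 1)

# FCN layer: neither local nor weight-shared -> parameters scale like d^2.
fcn = nn.Linear(d, c * (d - k + 1))

for name, layer in [("CNN", conv), ("LCN", lcn), ("FCN", fcn)]:
    print(name, sum(p.numel() for p in layer.parameters()), "parameters")

# The O(log d) depth claim, schematically: stride-2 downsampling halves the
# spatial length each layer, so a single position is reached after ~log2(d)
# layers (the downsampling half of the multichanneling + downsampling idea).
depth, length = 0, d
while length > 1:
    length //= 2
    depth += 1
print("depth:", depth, "~ log2(d) =", math.ceil(math.log2(d)))
```

Parameter counts are of course only a heuristic proxy here; the paper's separations concern sample complexity. But the intuition carries over: weight sharing removes the per-position degrees of freedom, and locality removes the all-to-all ones, which is the symmetry-breaking observation the abstract points to.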
