On the Maximum Mutual Information Capacity of Neural Architectures

10 June 2020
Brandon Foggo
Nan Yu
Abstract

We derive the closed-form expression of the maximum mutual information, i.e., the maximum value of I(X;Z) obtainable via training, for a broad family of neural network architectures. This quantity is essential to several branches of machine learning theory and practice. Quantitatively, we show that the maximum mutual information for each of these families stems from generalizations of a single catch-all formula. Qualitatively, we show that the maximum mutual information of an architecture is most strongly influenced by the width of the smallest layer of the network (the "information bottleneck" in a different sense of the phrase) and by any statistical invariances captured by the architecture.
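A toy, hedged illustration of the bottleneck-width intuition (this is not the paper's closed-form result): for a deterministic encoder whose narrowest layer consists of k binary units, Z can take at most 2^k distinct values, so I(X;Z) = H(Z) ≤ k bits regardless of how the encoder is trained. The modular "encoder" below is a purely hypothetical stand-in for a trained network.

```python
import numpy as np
from collections import Counter

def mutual_information(x, z):
    """Empirical I(X;Z) in bits for paired discrete samples."""
    n = len(x)
    pxz = Counter(zip(x, z))
    px, pz = Counter(x), Counter(z)
    mi = 0.0
    for (xi, zi), c in pxz.items():
        p_joint = c / n
        mi += p_joint * np.log2(p_joint / ((px[xi] / n) * (pz[zi] / n)))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 1024, size=200_000)   # X: roughly 10 bits of entropy

for k in (2, 4, 8):                       # width of the smallest layer, in binary units
    z = x % (2 ** k)                      # hypothetical stand-in for a trained encoder
    print(f"bottleneck width k={k}: I(X;Z) ~= {mutual_information(x, z):.2f} bits (cap {k})")
```

Because Z is a deterministic function of X here, I(X;Z) equals H(Z), which the bottleneck width caps at k bits; the printed estimates saturate that cap, matching the qualitative claim that the smallest layer's width dominates the achievable mutual information.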
