Layer-wise Learning of Stochastic Neural Networks with Information Bottleneck

4 December 2017
Thanh T. Nguyen
Jaesik Choi
arXiv: 1712.01272
Abstract

In this paper, we present layer-wise learning of stochastic neural networks (SNNs) from an information-theoretic perspective. In each layer of an SNN, compression and relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all parameters in an SNN to better exploit the network's representation. The Information Bottleneck (IB) framework ([28]) extracts the information in an input that is relevant to a target variable. Here, we propose the Parametric Information Bottleneck (PIB) for a neural network, which explicitly uses (only) the model parameters to approximate the compression and the relevance. We show that the PIB framework can be regarded as an extension of the maximum likelihood estimation (MLE) principle to every layer. We also show that, compared to MLE, PIB (i) improves the generalization of neural networks in classification tasks, and (ii) exploits a neural network's representation more efficiently, pushing it closer to the optimal information-theoretic representation more quickly.
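
As a point of reference, here is a minimal sketch of the classical IB objective (Tishby et al., presumably reference [28]) that the per-layer compression and relevance terms above instantiate; the layer index $\ell$ and the trade-off parameter $\beta$ follow the standard IB formulation rather than the paper's PIB-specific notation:

$$\min_{p(z_\ell \mid x)} \; I(X; Z_\ell) \;-\; \beta\, I(Z_\ell; Y),$$

where $I(X; Z_\ell)$ is the compression of layer $\ell$'s stochastic representation $Z_\ell$ (smaller means more compressed) and $I(Z_\ell; Y)$ is its relevance to the target $Y$.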
