Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression

International Conference on Learning Representations (ICLR), 2023
13 May 2023
I. Butakov, Alexander Tolmachev, S. Malanchuk, A. Neopryatnaya, Alexey Frolov, K. Andreev

Papers citing "Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression"

11 papers
Multimodal Datasets with Controllable Mutual Information
Raheem Karim Hashmani, G. W. Merz, Helen Qu, Mariel Pettee, K. Cranmer
24 Oct 2025

Neural Mutual Information Estimation with Vector Copulas
Yanzhi Chen, Zijing Ou, Adrian Weller, Michael U. Gutmann
23 Oct 2025

Explaining Grokking and Information Bottleneck through Neural Collapse Emergence
Keitaro Sakamoto, Issei Sato
25 Sep 2025

InfoQ: Mixed-Precision Quantization via Global Information Flow
Mehmet Emre Akbulut, Hazem Hesham Yousef Shalby, Fabrizio Pittorino, Manuel Roveri
06 Aug 2025

Curse of Slicing: Why Sliced Mutual Information is a Deceptive Measure of Statistical Dependence
Alexander Semenenko, I. Butakov, Alexey Frolov, Ivan Oseledets
04 Jun 2025

Binarized Neural Networks Converge Toward Algorithmic Simplicity: Empirical Support for the Learning-as-Compression Hypothesis
Eduardo Y. Sakabe, Felipe S. Abrahão, A. S. Simões, Esther L. Colombini, P. Costa, Ricardo Ribeiro Gudwin, Hector Zenil
27 May 2025

InfoBridge: Mutual Information estimation via Bridge Matching
Sergei Kholkin, Ivan Butakov, Evgeny Burnaev, Nikita Gushchin, Alexander Korotin
03 Feb 2025

Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax
I. Butakov, Alexander Semenenko, Alexander Tolmachev, Andrey Gladkov, Marina Munkhoeva, Alexey Frolov
09 Oct 2024

Real Time American Sign Language Detection Using Yolo-v9
Amna Imran, Meghana Shashishekhara Hulikal, Hamza A. A. Gardi
25 Jul 2024

Mutual Information Estimation via Normalizing Flows
I. Butakov, Alexander Tolmachev, S. Malanchuk, A. Neopryatnaya, Alexey Frolov
04 Mar 2024

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 Feb 2016