Explaining a black-box using Deep Variational Information Bottleneck Approach (arXiv:1902.06918)
19 February 2019
Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing
Tags: XAI, FAtt
Papers citing "Explaining a black-box using Deep Variational Information Bottleneck Approach" (19 papers shown)
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, Kangsoo Jung, Ernest Valveny, Dimosthenis Karatzas
12 May 2025
CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning
Weiqi Wang, Chenhan Zhang, Zhiyi Tian, Shushu Liu, Shui Yu
Tags: MU
27 Feb 2025
Task-Augmented Cross-View Imputation Network for Partial Multi-View Incomplete Multi-Label Classification
Xiaohuan Lu, Lian Zhao, Wai Keung Wong, Jie Wen, Jiang Long, Wulin Xie
12 Sep 2024
ICST-DNET: An Interpretable Causal Spatio-Temporal Diffusion Network for Traffic Speed Prediction
Yi Rong, Yingchi Mao, Yinqiu Liu, Ling Chen, Xiaoming He, Dusit Niyato
Tags: DiffM
22 Apr 2024
BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic, Albert Bifet, Fabian M. Suchanek
Tags: FAtt
18 May 2023
Posthoc Interpretation via Quantization
Francesco Paissan, Cem Subakan, Mirco Ravanelli
Tags: MQ
22 Mar 2023
Interpretability with full complexity by constraining feature information
Kieran A. Murphy, Danielle Bassett
Tags: FAtt
30 Nov 2022
Revisiting Attention Weights as Explanations from an Information Theoretic Perspective
Bingyang Wen, K. P. Subbalakshmi, Fan Yang
Tags: FAtt
31 Oct 2022
Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang
Tags: CML
22 Jun 2022
Variational Distillation for Multi-View Learning
Xudong Tian, Zhizhong Zhang, Cong Wang, Wensheng Zhang, Yanyun Qu, Lizhuang Ma, Zongze Wu, Yuan Xie, Dacheng Tao
20 Jun 2022
Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, G. Richard
23 Feb 2022
The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Peter Hase, Harry Xie, Joey Tianyi Zhou
Tags: OODD, LRM, FAtt
01 Jun 2021
Progressive Interpretation Synthesis: Interpreting Task Solving by Quantifying Previously Used and Unused Information
Zhengqi He, Taro Toyoizumi
08 Jan 2021
Inserting Information Bottlenecks for Attribution in Transformers
Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy J. Lin
27 Dec 2020
Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji
Tags: AAML, VLM
01 Oct 2020
Generative causal explanations of black-box classifiers
Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
Tags: CML
24 Jun 2020
On the Maximum Mutual Information Capacity of Neural Architectures
Brandon Foggo, Nan Yu
Tags: TPM
10 Jun 2020
Why Attentions May Not Be Interpretable?
Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Fei Wang
Tags: FAtt
10 Jun 2020
Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei Wang
Tags: AAML
09 Jun 2020