
Optimizing for Interpretability in Deep Neural Networks with Tree Regularization
Journal of Artificial Intelligence Research (JAIR), 2019
14 August 2019
Mike Wu, S. Parbhoo, M. C. Hughes, Volker Roth, Finale Doshi-Velez

Papers citing "Optimizing for Interpretability in Deep Neural Networks with Tree Regularization" (15 papers)
NIMO: a Nonlinear Interpretable MOdel
Shijian Xu, M. Negri, Volker Roth. 05 Jun 2025.

Smooth InfoMax -- Towards Easier Post-Hoc Interpretability
Fabian Denoodt, Bart de Boer, José Oramas. 23 Aug 2024.

A priori Estimates for Deep Residual Network in Continuous-time Reinforcement Learning
Shuyu Yin, Qixuan Zhou, Fei Wen, Tao Luo. 24 Feb 2024.

3VL: Using Trees to Improve Vision-Language Models' Interpretability
IEEE Transactions on Image Processing (IEEE TIP), 2023
Nir Yellinek, Leonid Karlinsky, Raja Giryes. 28 Dec 2023.

Variational Information Pursuit for Interpretable Predictions
International Conference on Learning Representations (ICLR), 2023
Aditya Chattopadhyay, Kwan Ho Ryan Chan, B. Haeffele, D. Geman, René Vidal. 06 Feb 2023.

SpArX: Sparse Argumentative Explanations for Neural Networks [Technical Report]
European Conference on Artificial Intelligence (ECAI), 2023
Hamed Ayoobi, Nico Potyka, Francesca Toni. 23 Jan 2023.

Interpreting Neural Policies with Disentangled Tree Representations
Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus. 13 Oct 2022.

Interpretable Deep Tracking
Benjamin Thérien, Krzysztof Czarnecki. 03 Oct 2022.

A Survey of Neural Trees
Haoling Li, Mingli Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng, Weilong Dai. 07 Sep 2022.

Generating Synthetic Clinical Data that Capture Class Imbalanced Distributions with Generative Adversarial Networks: Example using Antiretroviral Therapy for HIV
Journal of Biomedical Informatics (JBI), 2022
N. Kuo, Federico Garcia, Anders Sönnerborg, Maurizio Zazzi, Michael Böhm, Rolf Kaiser, Mark Polizzotto, Louisa R Jorm, S. Barbieri. 18 Aug 2022.

Interpretable by Design: Learning Predictors by Composing Interpretable Queries
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Aditya Chattopadhyay, Stewart Slocum, B. Haeffele, René Vidal, D. Geman. 03 Jul 2022.

A Survey on Interpretable Reinforcement Learning
Machine-mediated learning (ML), 2021
Claire Glanois, Paul Weng, Matthieu Zimmer, Dong Li, Zhenxing Ge, Jianye Hao, Wulong Liu. 24 Dec 2021.

On Explaining Decision Trees
Yacine Izza, Alexey Ignatiev, Sasha Rubin. 21 Oct 2020.

SIDU: Similarity Difference and Uniqueness Method for Explainable AI
Satya M. Muddamsetty, M. N. Jahromi, T. Moeslund. 04 Jun 2020.

Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Benjamin J. Lengerich, S. Tan, C. Chang, Giles Hooker, R. Caruana. 12 Nov 2019.