Improving Feature Attribution through Input-specific Network Pruning
25 November 2019 · arXiv: 1911.11081
Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, S. T. Kim, Nassir Navab
FAtt

Papers citing "Improving Feature Attribution through Input-specific Network Pruning"

9 citing papers
Smaller is Better: Enhancing Transparency in Vehicle AI Systems via Pruning
Sanish Suwal, Shaurya Garg, Dipkamal Bhusal, Michael Clifford, Nidhi Rastogi
AAML · 24 Sep 2025

SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation
Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli
03 Nov 2024

sMRI-PatchNet: A novel explainable patch-based deep learning network for Alzheimer's disease diagnosis and discriminative atrophy localisation with Structural MRI
IEEE Access, 2023
Xin Zhang, Liangxiu Han, Lianghao Han, Haoming Chen, Darren Dancey, Daoqiang Zhang
MedIm · 17 Feb 2023

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
FAtt · 17 Feb 2023

Explainable Model-Agnostic Similarity and Confidence in Face Verification
Martin Knoche, Torben Teepe, S. Hörmann, Gerhard Rigoll
AAML, CVBM · 24 Nov 2022

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Neural Information Processing Systems (NeurIPS), 2022
Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora
AAML, FAtt, XAI · 05 Nov 2022

Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models
Ashkan Khakzar, Yawei Li, Yang Zhang, Mirac Sanisoglu, Seong Tae Kim, Mina Rezaei, B. Bischl, Nassir Navab
04 Apr 2022

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Haiwei Yang
AAML · 09 Jun 2020

Restricting the Flow: Information Bottlenecks for Attribution
International Conference on Learning Representations (ICLR), 2020
Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf
FAtt · 02 Jan 2020