arXiv:2202.11479
Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF

23 February 2022
Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, Gaël Richard

Papers citing "Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF"

19 / 19 papers shown

1. Transformation of audio embeddings into interpretable, concept-based representations
   Alice Zhang, Edison Thomaz, Lie Lu (18 Apr 2025)
2. Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
   Zhen Tan, Song Wang, Yifan Li, Yu Kong, Jundong Li, Tianlong Chen, Huan Liu (11 Apr 2025) [FAtt]
3. From Vision to Sound: Advancing Audio Anomaly Detection with Vision-Based Algorithms
   Manuel Barusco, Francesco Borsatti, Davide Dalle Pezze, Francesco Paissan, Elisabetta Farella, Gian Antonio Susto (25 Feb 2025)
4. Investigating the Effectiveness of Explainability Methods in Parkinson's Detection from Speech
   Eleonora Mancini, Francesco Paissan, Paolo Torroni, Mirco Ravanelli, Cem Subakan (12 Nov 2024)
5. Audio Explanation Synthesis with Generative Foundation Models
   Alican Akman, Qiyang Sun, Björn W. Schuller (10 Oct 2024)
6. One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
   Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh (02 Oct 2024) [AAML, FAtt]
7. LMAC-TD: Producing Time Domain Explanations for Audio Classifiers
   Eleonora Mancini, Francesco Paissan, Mirco Ravanelli, Cem Subakan (13 Sep 2024)
8. Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
   Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc (01 Jul 2024) [SSL]
9. Open-Source Conversational AI with SpeechBrain 1.0
   Mirco Ravanelli, Titouan Parcollet, Adel Moumen, Sylvain de Langen, Cem Subakan, ..., Salima Mdhaffar, G. Laperriere, Mickael Rouvier, Renato De Mori, Yannick Esteve (29 Jun 2024) [VLM]
10. Phoneme Discretized Saliency Maps for Explainable Detection of AI-Generated Voice
    Shubham Gupta, Mirco Ravanelli, Pascal Germain, Cem Subakan (14 Jun 2024) [FAtt]
11. Listenable Maps for Zero-Shot Audio Classifiers
    Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan (27 May 2024)
12. Listenable Maps for Audio Classifiers
    Francesco Paissan, Mirco Ravanelli, Cem Subakan (19 Mar 2024)
13. Focal Modulation Networks for Interpretable Sound Classification
    Luca Della Libera, Cem Subakan, Mirco Ravanelli (05 Feb 2024)
14. A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
    Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre (11 Jun 2023) [FAtt]
15. Tackling Interpretability in Audio Classification Networks with Non-negative Matrix Factorization
    Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Gaël Richard, Florence d'Alché-Buc (11 May 2023)
16. Posthoc Interpretation via Quantization
    Francesco Paissan, Cem Subakan, Mirco Ravanelli (22 Mar 2023) [MQ]
17. Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
    Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer (26 Aug 2022)
18. AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark
    Sören Becker, Johanna Vielhaben, M. Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, Wojciech Samek (09 Jul 2018) [XAI]
19. Methods for Interpreting and Understanding Deep Neural Networks
    G. Montavon, Wojciech Samek, K. Müller (24 Jun 2017) [FaML]